From iwienand at redhat.com Thu Mar 1 01:00:11 2018 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 1 Mar 2018 12:00:11 +1100 Subject: [openstack-dev] Zanata upgrade to version 4 In-Reply-To: References: Message-ID: <0a8866b6-106e-777d-67ec-9a2db8c67701@redhat.com> On 02/27/2018 09:32 PM, Frank Kloeker wrote: > We will take the chance now to upgrade our translation platform to a > new version. This has been completed and translate.o.o is now running 4.3.3. For any issues reply, or catch any infra-root in #openstack-infra Thanks -i From sam47priya at gmail.com Thu Mar 1 02:43:04 2018 From: sam47priya at gmail.com (Sam P) Date: Thu, 1 Mar 2018 11:43:04 +0900 Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? In-Reply-To: <20180228230300.pgajhg5u5rjv3nyb@arabian.linksys.moosehall> References: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> <20180228230300.pgajhg5u5rjv3nyb@arabian.linksys.moosehall> Message-ID: Hi All, Really sorry, I couldn't make it to PTG. However, as Dinesh said some team members are at PTG. I can connect on Zoom or Skype or Hangout... etc. Let me know the time if you are meeting up.. Thanks.. --- Regards, Sampath On Thu, Mar 1, 2018 at 8:03 AM, Adam Spiers wrote: > My claim to being a masakari person is pretty weak, but still I'd like > to say hello too :-) Please ping me (aspiers on IRC) if you guys are > meeting up! > > > Bhor, Dinesh wrote: > >> Hi Greg, >> >> >> We below are present: >> >> >> Tushar Patil(tpatil) >> >> Yukinori Sagara(sagara) >> >> Abhishek Kekane(abhishekk) >> >> Dinesh Bhor(Dinesh_Bhor) >> >> >> Thank you, >> >> Dinesh Bhor >> >> >> ________________________________ >> From: Waines, Greg >> Sent: 28 February 2018 19:22:26 >> To: OpenStack Development Mailing List (not for usage questions) >> Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this >> week ? >> >> >> Any masakari folks at the PTG this week ? >> >> >> >> Would be interested in meeting up and chatting, >> >> let me know, >> >> Greg. >> >> ______________________________________________________________________ >> Disclaimer: This email and any attachments are sent in strictest >> confidence >> for the sole use of the addressee and may contain legally privileged, >> confidential, and proprietary data. If you are not the intended recipient, >> please advise the sender by replying promptly to this email and then >> delete >> and destroy this email and any attachments without any further use, >> copying >> or forwarding. >> > > __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eumel at arcor.de Thu Mar 1 09:57:59 2018 From: eumel at arcor.de (Frank Kloeker) Date: Thu, 01 Mar 2018 10:57:59 +0100 Subject: [openstack-dev] Zanata upgrade to version 4 In-Reply-To: <0a8866b6-106e-777d-67ec-9a2db8c67701@redhat.com> References: <0a8866b6-106e-777d-67ec-9a2db8c67701@redhat.com> Message-ID: <779829df51eac68c50770f00c6b6446a@arcor.de> Am 2018-03-01 02:00, schrieb Ian Wienand: > On 02/27/2018 09:32 PM, Frank Kloeker wrote: >> We will take the chance now to upgrade our translation platform to a >> new version. > > This has been completed and translate.o.o is now running 4.3.3. For > any issues reply, or catch any infra-root in #openstack-infra Good morning from the PTG, thanks, Ian, for the very professional work. It is great progress to have Zanata 4 online now. From what I have tested and seen so far there are no issues, only a large number of Zanata imports. So please go on and merge the last changes into the project repos. many thanks Frank From rbowen at redhat.com Thu Mar 1 10:07:30 2018 From: rbowen at redhat.com (Rich Bowen) Date: Thu, 1 Mar 2018 10:07:30 +0000 Subject: [openstack-dev] [RDO] Queens packages available Message-ID: <92e43e47-1e31-f585-72e4-a2c8f2deb237@redhat.com> The RDO community is pleased to announce the general availability of the RDO build for OpenStack Queens for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Queens is the 17th release from the OpenStack project, which is the work of more than 1600 contributors from around the world (source - http://stackalytics.com/ ). The release is making its way out to the CentOS mirror network, and should be on your favorite mirror site momentarily. The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premise, public or hybrid clouds. All work on RDO, and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first. New and Improved Interesting things in the Queens release include: * Ironic now supports Neutron routed networks with flat networking and introduces support for Nova traits when scheduling * RDO now includes rsdclient, an OpenStack client plugin for Rack Scale Design architecture * Support for octaviaclient and the Octavia Horizon plugin has been added to improve Octavia service deployments. * The Tap-as-a-Service (TaaS) network extension to the OpenStack network service (Neutron) has been included. * The multi-vendor Modular Layer 2 (ML2) driver networking-generic-switch is now available for operators deploying RDO Queens. Other improvements include: * Most of the bundled in-tree Tempest plugins have been moved to their own repositories during the Queens cycle. RDO has adapted its plugin packages to this new model. * In an effort to improve quality and reduce delivery time for our users, RDO keeps refining and automating all the processes needed to build, test and publish the packages included in the RDO distribution. Note that packages for OpenStack projects with cycle-trailing release models[*] will be created after a release is delivered according to the OpenStack Queens schedule. 
[*] https://releases.openstack.org/reference/release_models.html#cycle-trailing Contributors During the Queens cycle, we saw the following new contributors: Aditya Ramteke Jatan Malde Ade Lee James Slagle Alex Schultz Artom Lifshitz Mathieu Bultel Petr Viktorin Radomir Dopieralski Mark Hamzy Sagar Ippalpalli Martin Kopec Victoria Martinez de la Cruz Harald Jensas Kashyap Chamarthy dparalen Thiago da Silva chenxing Johan Guldmyr David J Peacock Sagi Shnaidman Jose Luis Franco Arza Welcome to all of you, and thank you so much for participating! But, we wouldn’t want to overlook anyone. Thank you to all 76 contributors who participated in producing this release. This list includes commits to the rdo-packages and rdo-infra repositories, and is provided in no particular order: Yatin Karel Aditya Ramteke Javier Pena Alfredo Moralejo Christopher Brown Jon Schlueter Chandan Kumar Haikel Guemar Emilien Macchi Jatan Malde Pradeep Kilambi Luigi Toscano Alan Pevec Eric Harney Ben Nemec Matthias Runge Ade Lee Jakub Libosvar Thierry Vignaud Alex Schultz Juan Antonio Osorio Robles Mohammed Naser James Slagle Jason Joyce Artom Lifshitz Lon Hohberger rabi Dmitry Tantsur Oliver Walsh Mathieu Bultel Steve Baker Daniel Mellado Terry Wilson Tom Barron Jiri Stransky Ricardo Noriega Petr Viktorin Juan Antonio Osorio Robles Eduardo Gonzalez Radomir Dopieralski Mark Hamzy Sagar Ippalpalli Martin Kopec Ihar Hrachyshka Tristan Cacqueray Victoria Martinez de la Cruz Bernard Cafarelli Harald Jensas Assaf Muller Kashyap Chamarthy Jeremy Liu Daniel Alvarez Mehdi Abaakouk dparalen Thiago da Silva Brad P. Crochet chenxing Johan Guldmyr Antoni Segura Puimedon David J Peacock Sagi Shnaidman Jose Luis Franco Arza Julie Pichon David Moreau-Simard Wes Hayutin Attila Darazs Gabriele Cerami John Trowbridge Gonéri Le Bouder Ronelle Landy Matt Young Arx Cruz Joe H. Rahme marios Sofer Athlan-Guyot Paul Belanger Getting Started There are three ways to get started with RDO. To spin up a proof-of-concept cloud quickly on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works. For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order. Finally, if you want to try out OpenStack, but don’t have the time or hardware to run it yourself, visit TryStack, where you can use a free public OpenStack instance, running RDO packages, to experiment with the OpenStack management interface and API, launch instances, configure networks, and generally familiarize yourself with OpenStack. (TryStack is not, at this time, running Queens, although it is running RDO.) Getting Help The RDO Project participates in a Q&A service at ask.openstack.org. We also have our users at lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev at lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org You can also find extensive documentation on the RDO docs site. The #rdo channel on Freenode IRC is also an excellent place to find help and give help. We also welcome comments and requests on the CentOS mailing lists and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience in the RDO venues. 
Getting Involved To get involved in the OpenStack RPM packaging effort, see the RDO community pages and the CentOS Cloud SIG page. See also the RDO packaging documentation. Join us in #rdo on the Freenode IRC network, and follow us at @RDOCommunity on Twitter. If you prefer Facebook, we’re there too, and also Google+. -- Rich Bowen: Community Architect rbowen at redhat.com @rbowen // @RDOCommunity // @CentOSProject 1 859 351 9166 From shakhat at gmail.com Thu Mar 1 10:44:28 2018 From: shakhat at gmail.com (Ilya Shakhat) Date: Thu, 1 Mar 2018 11:44:28 +0100 Subject: [openstack-dev] [DriverLog] DriverLog future Message-ID: Hi! For those who do not know, DriverLog is a community registry of 3rd-party drivers for OpenStack, hosted together with Stackalytics [1]. The project started 4 years ago and by now contains information about 220 drivers. The data from DriverLog is also consumed by the official Marketplace [2]. Here I would like to discuss directions for DriverLog and the 3rd-party driver registry in general. 1) Being a single community-wide registry was good initially: it allowed us to quickly collect descriptions for most drivers in a single place. But in the long term this approach stopped working - not many projects remember to update information stored in some random place, right? Mike already pointed to this problem a year ago [3], and the idea was to move the driver lists to the projects (and thus move responsibility to them too) and have an aggregated list of drivers produced by infra. Do we have any progress in this direction? Is it time to start deprecating DriverLog and consider the transition during the Rocky release? 2) As a project with a 4-year history, DriverLog's list has only grown over time, with quite few removals. It still has drivers whose latest supported version is Liberty, or drivers for non-maintained projects (e.g. Fuel). While it may make sense to keep all of them for operators who run older versions, it produces a feeling that the majority of drivers are old. One solution for this is to show by default only drivers for active releases (Pike and onward) - a sketch follows at the end of this mail. If done, this will apply to both DriverLog and Marketplace. Any other ideas or suggestions? Thanks, I [1] http://stackalytics.com/report/driverlog [2] https://www.openstack.org/marketplace/drivers/ [3] http://lists.openstack.org/pipermail/openstack-dev/2017-January/110151.html
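P.S. To make the filtering idea in 2) concrete, it could be as simple as the sketch below. This is only an illustration - the field names ('drivers', 'releases') are an assumption on my side, and the real DriverLog schema may differ:

import json

ACTIVE_RELEASES = {'pike', 'queens'}  # the active releases at the time of writing

def active_drivers(path='default_data.json'):
    # keep a driver only if it claims support for at least one active release
    with open(path) as f:
        data = json.load(f)
    return [driver for driver in data['drivers']
            if ACTIVE_RELEASES & {r.lower() for r in driver.get('releases', [])}]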
-------------- next part -------------- An HTML attachment was scrubbed... URL: From gang.sungjin at gmail.com Thu Mar 1 10:50:12 2018 From: gang.sungjin at gmail.com (SungJin Kang) Date: Thu, 1 Mar 2018 19:50:12 +0900 Subject: Re: [openstack-dev] [OpenStack-I18n] Zanata upgrade to version 4 In-Reply-To: <779829df51eac68c50770f00c6b6446a@arcor.de> References: <0a8866b6-106e-777d-67ec-9a2db8c67701@redhat.com> <779829df51eac68c50770f00c6b6446a@arcor.de> Message-ID: OH~~~~ COOLLLLLL 2018-03-01 18:57 GMT+09:00 Frank Kloeker : > Am 2018-03-01 02:00, schrieb Ian Wienand: > >> On 02/27/2018 09:32 PM, Frank Kloeker wrote: >> >>> We will take the chance now to upgrade our translation platform to a >>> new version. >>> >> >> This has been completed and translate.o.o is now running 4.3.3. For >> any issues reply, or catch any infra-root in #openstack-infra >> > > Good morning from the PTG, > > thanks Ian, for the very professional work. This is a great progress to > got Zanata 4 now online. > As I tested and seen so far there are no issues, only a big count of > Zanata imports. So please go on and merge the last changes into the project > repos. > > many thanks > Frank > > > > _______________________________________________ > OpenStack-I18n mailing list > OpenStack-I18n at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Thu Mar 1 11:20:11 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Thu, 1 Mar 2018 11:20:11 +0000 Subject: [openstack-dev] [DriverLog] DriverLog future In-Reply-To: References: Message-ID: <85c671e347c44579b75fc188ea14b014@AUSX13MPS308.AMER.DELL.COM> Having 3rd-party CI report results automatically will be helpful. While it is possible for PTLs to report, per release, which drivers should be listed in the marketplace, PTLs currently haven't signed up for this extra work. Driver owners submitting DriverLog updates per release is not a big deal; the extra work falls on Ilya. I think we can define a rule for removal: if a driver entry was not updated for 2(?) releases, then remove it. We can run a questionnaire on what the right number is. Thanks, Arkady From: Ilya Shakhat [mailto:shakhat at gmail.com] Sent: Thursday, March 1, 2018 4:44 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [DriverLog] DriverLog future Hi! For those who do not know, DriverLog is a community registry of 3rd-party drivers for OpenStack hosted together with Stackalytics [1]. The project started 4 years ago and by now contains information about 220 drivers. The data from DriverLog is also consumed by official Marketplace [2]. Here I would like to discuss directions for DriverLog and 3rd-party driver registry as general. 1) Being a single community-wide registry was good initially, it allowed to quickly collect description for most of drivers in a single place. But in a long term this approach stopped working - not many projects remember to update the information stored in some random place, right? Mike already pointed to this problem a year ago [3] and the idea was to move driver list to projects (and thus move responsibility to them too) and have an aggregated list of drivers produced by infra. Do we have any progress in this direction? Is it a time to start deprecation of DriverLog and consider transition during Rocky release? 2) As a project with 4 years history DriverLog's list only increased over the time with quite few removals. Now it still has drivers with the latest version Liberty or drivers for non-maintained projects (e.g. Fuel). While it maybe makes sense to keep all of them for operators who run older versions, it may produce a feeling that the majority of drivers are old. One of solutions for this is to show by default drivers for active releases only (Pike and ahead). If done this will apply to both DriverLog and Marketplace. Any other ideas or suggestions? Thanks, I [1] http://stackalytics.com/report/driverlog [2] https://www.openstack.org/marketplace/drivers/ [3] http://lists.openstack.org/pipermail/openstack-dev/2017-January/110151.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From melwittt at gmail.com Thu Mar 1 11:24:03 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 1 Mar 2018 11:24:03 +0000 Subject: [openstack-dev] [nova][ptg] room change after lunch to the lunch room (Hogan Suite) Message-ID: <42B9F4D9-4256-403F-874F-1EFD439B301C@gmail.com> Hey everyone, Because it’s uncomfortably cold in Davin Suite, I’ve booked Hogan Suite (the lunch room) for us to use after lunch for the rest of today. So please be sure to go to Hogan Suite this afternoon for the Nova/Ironic and Nova/Neutron sessions. Cheers, -melanie From ekuvaja at redhat.com Thu Mar 1 12:09:54 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Thu, 1 Mar 2018 12:09:54 +0000 Subject: [openstack-dev] [Glance] Changes to Glance core team Message-ID: Hi all, The start of the cycle is a good time to have a look at the Glance reviewers, and based on the discussions among the group during and before the PTG I'd like to propose the following changes: 1) Adding Sean McGinnis to Glance core. The current active core team has been very positive about including Sean, and he feels comfortable taking on the +2 responsibility. It might take some more time for him to get fully familiar with the Glance code base, so we will leave him room to approve changes to the parts of Glance he feels ready for while he grows his expertise across the rest. 2) Removing Flavio Percoco from Glance core. Flavio requested to be removed a couple of cycles ago already, and we begged him to stick around to help with the Interoperable Image Import, which he has been an integral part of designing since the very beginning, and because of his knowledge of the internals of the Glance tasks. The majority of this work is finished and we would like to thank Flavio for his help and hard work for the Glance community. 3) Removing Mike Fedosin from Glance core. Mike rejoined Glance core when we desperately needed help reviewing changes, and we are definitely grateful for his efforts to help us out when needed. By the looks of it, Mike has moved on to different responsibilities. As usual, if circumstances change and Flavio or Mike find the time and interest to serve our community again, we would be more than happy to fast-track them back to the core team. I'd like to take the opportunity to give big thanks to all of them for their help and contributions to Glance, and I do hope to see them all around for the cycles to come. I'll wait until next week before making these changes, in case I have missed something that has changed in the situation recently. best, Erno jokke_ Kuvaja From rosmaita.fossdev at gmail.com Thu Mar 1 12:15:58 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 1 Mar 2018 07:15:58 -0500 Subject: Re: [openstack-dev] [Glance] Changes to Glance core team In-Reply-To: References: Message-ID: +1 to all the above. Thanks to Flavio and Mike for all they've done for Glance, and looking forward to working more with Sean in the future. cheers, brian On Thu, Mar 1, 2018 at 7:09 AM, Erno Kuvaja wrote: > Hi all, > > At the start of the cycle is good time to have a look of the Glance > reviewers and based on the discussions amongs the group during and > before the PTG I'd like to propose following changes: > > 1) Adding Sean McGinnis to Glance core. The current active core team > has been very positive about including Sean and he feels like he is > comfortable to take the +2 responsibility on. 
It might take some more > time for him to get fully familiar with Glance code base so we will > leave him room to approve changes to the parts of Glance he feels to > be ready and grow his expertise across. > > 2) Removing Flavio Percoco from Glance core. Flavio requested to be > removed already couple of cycles ago and we did beg him to stick > around to help with the Interoperable Image Import which of he has > been integral part of designing since the very beginning and due to > his knowledge of the internals of the Glance tasks. The majority of > this work is finished and we would like to thank Flavio for his help > and hard work for Glance community. > > 3) removing Mike Fedosin from glance core. Mike joined back to glance > ocre when we did desperately need help reviewing changes and we are > definitely grateful for his efforts to help us out when needed. By the > looks of it, Mike has moved on to different responsibilities. > > As usual, if the circumstances changes and Flavio or Mike will find > time and interest to serve our community again, we would be more than > happy to fast-track them back to the core team. > > I'd like to take the opportunity to give big thanks for all of them > for their help and contributions to Glance and I do hope seeing them > all around for the cycles to come. > > I'll leave until next week before doing these changes in case I have > missed something that has changed in the situation recently. > > best, > Erno jokke_ Kuvaja > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu Mar 1 12:19:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 1 Mar 2018 12:19:03 +0000 Subject: [openstack-dev] [DriverLog] DriverLog future In-Reply-To: References: Message-ID: <5e624c84-41f0-7e12-fb66-2a514655f2a0@gmail.com> On 3/1/2018 10:44 AM, Ilya Shakhat wrote: > > For those who do not know, DriverLog is a community registry of > 3rd-party drivers for OpenStack hosted together with Stackalytics [1]. > The project started 4 years ago and by now contains information about > 220 drivers. The data from DriverLog is also consumed by official > Marketplace [2]. > > Here I would like to discuss directions for DriverLog and 3rd-party > driver registry as general. > > 1) Being a single community-wide registry was good initially, it allowed > to quickly collect description for most of drivers in a single place. > But in a long term this approach stopped working - not many projects > remember to update the information stored in some random place, right? > > Mike already pointed to this problem a year ago [3] and the idea was to > move driver list to projects (and thus move responsibility to them too) > and have an aggregated list of drivers produced by infra. Do we have any > progress in this direction? Is it a time to start deprecation of > DriverLog and consider transition during Rocky release? > > 2) As a project with 4 years history DriverLog's list only increased > over the time with quite few removals. Now it still has drivers with the > latest version Liberty or drivers for non-maintained projects (e.g. > Fuel). While it maybe makes sense to keep all of them for operators who > run older versions, it may produce a feeling that the majority of > drivers are old. 
One of solutions for this is to show by default drivers > for active releases only (Pike and ahead). If done this will apply to > both DriverLog and Marketplace. > > Any other ideas or suggestions? Having recently gone through that repo to update some of the nova driver maintainers, I noted the very old status of several of them. I agree this information should live in the per-project repo documentation, not in a centralized location. Nova does a decent job of keeping the virt driver feature support matrix up to date, but definitely not when it's a separate repo. This is a similar problem to the centralized docs issue addressed as a community in Pike. The OSIC team tried working on a feature classification effort [1] for a few releases which was similar to the driver log, specifically for showing which drivers and features had CI coverage. That work is *very* incomplete and no longer maintained, and I've actually been suggesting lately that we drop it since misinformation is almost worse than no information. I suggested to Mike the other day that at the very least, the driver log docs should put a big red warning, like in [1], that the information may be old. [1] https://docs.openstack.org/nova/latest/user/feature-classification.html -- Thanks, Matt From ton.kazakov at gmail.com Thu Mar 1 12:20:08 2018 From: ton.kazakov at gmail.com (Anton Kazakov) Date: Thu, 1 Mar 2018 16:20:08 +0400 Subject: Re: [openstack-dev] [oslo.db] oslo_db "max_retries" option In-Reply-To: <876451519797304@web34o.yandex.ru> References: <876451519797304@web34o.yandex.ru> Message-ID: Hi all, Matt wrote: > Wouldn't it be a good idea to check for more general DBError? > > So like catching Exception? How are you going to distinguish from > IntegrityErrors which shouldn't be retried, which are also DBErrors? > Are IntegrityErrors really possible when testing a new engine's connection? Usually, test queries are simple, like SELECT 1;
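For reference, the check at the linked line boils down to roughly the following (a simplified sketch from memory, not the literal engines.py code):

import itertools
import time

from oslo_db import exception

def _test_connection(engine, max_retries, retry_interval):
    # max_retries == -1 means keep retrying forever
    if max_retries == -1:
        attempts = itertools.count()
    else:
        attempts = range(max_retries)
    last_error = None
    for _ in attempts:
        try:
            # a failed connect() surfaces as DBConnectionError once
            # oslo.db's exception filters have translated the driver error
            return engine.connect()
        except exception.DBConnectionError as de:
            last_error = de
            time.sleep(retry_interval)
    if last_error is not None:
        raise last_error

Catching the broader DBError there would also swallow things like IntegrityError, which a retry cannot fix.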
2018-02-28 9:55 GMT+04:00 Vitalii Solodilov : > Hi folks! > > I have a question about oslo_db "max_retries" option. > https://github.com/openstack/oslo.db/blob/master/oslo_db/ > sqlalchemy/engines.py#L381 > Why only DBConnectionError is considered as a reason for reconnecting here? > Wouldn't it be a good idea to check for more general DBError? > For example, DB host is down at the time of engine creation, but will > become running some time later. > > -- > Best regards, > > Vitalii Solodilov > > -- Best regards, Anton Kazakov -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Mar 1 12:22:54 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 1 Mar 2018 17:52:54 +0530 Subject: Re: [openstack-dev] [Glance] Changes to Glance core team In-Reply-To: References: Message-ID: Big +1 to Sean, he has been doing a wonderful job for us. Thank you, Flavio and Mike, hope to see you back. Cheers, Abhishek On 01-Mar-2018 17:40, "Erno Kuvaja" wrote: > Hi all, > > At the start of the cycle is good time to have a look of the Glance > reviewers and based on the discussions amongs the group during and > before the PTG I'd like to propose following changes: > > 1) Adding Sean McGinnis to Glance core. The current active core team > has been very positive about including Sean and he feels like he is > comfortable to take the +2 responsibility on. It might take some more > time for him to get fully familiar with Glance code base so we will > leave him room to approve changes to the parts of Glance he feels to > be ready and grow his expertise across. > > 2) Removing Flavio Percoco from Glance core. Flavio requested to be > removed already couple of cycles ago and we did beg him to stick > around to help with the Interoperable Image Import which of he has > been integral part of designing since the very beginning and due to > his knowledge of the internals of the Glance tasks. The majority of > this work is finished and we would like to thank Flavio for his help > and hard work for Glance community. > > 3) removing Mike Fedosin from glance core. Mike joined back to glance > ocre when we did desperately need help reviewing changes and we are > definitely grateful for his efforts to help us out when needed. By the > looks of it, Mike has moved on to different responsibilities. > > As usual, if the circumstances changes and Flavio or Mike will find > time and interest to serve our community again, we would be more than > happy to fast-track them back to the core team. > > I'd like to take the opportunity to give big thanks for all of them > for their help and contributions to Glance and I do hope seeing them > all around for the cycles to come. > > I'll leave until next week before doing these changes in case I have > missed something that has changed in the situation recently. > > best, > Erno jokke_ Kuvaja > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Mar 1 12:43:39 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 01 Mar 2018 12:43:39 +0000 Subject: Re: [openstack-dev] [nova][ptg] room change after lunch to the lunch room (Hogan Suite) In-Reply-To: <42B9F4D9-4256-403F-874F-1EFD439B301C@gmail.com> References: <42B9F4D9-4256-403F-874F-1EFD439B301C@gmail.com> Message-ID: On Mar 1, 2018, at 11:24, melanie witt wrote: Because it’s uncomfortably cold in Davin Suite, I’ve booked Hogan Suite (the lunch room) for us to use after lunch for the rest of today. So please be sure to go to Hogan Suite this afternoon for the Nova/Ironic and Nova/Neutron sessions. Update: the Ironic team will be heading straight to the Croke Park Hotel right after lunch, so we could use the time 13:30 - 14:00 to cover the other NUMA spec in the Hogan Suite (lunch room) before we head out of the venue to the Croke Park Hotel, if that is cool with everyone. Then, we’ll have to figure out where we can meet for Nova/Ironic and Nova/Neutron sessions at Croke Park Hotel afterward. -melanie -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Mar 1 12:50:23 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Mar 2018 12:50:23 -0600 Subject: Re: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ... In-Reply-To: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> References: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> Message-ID: <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> > On Feb 28, 2018, at 16:58, Jay S Bryant wrote: > > Team, > > Just a reminder that we will be having our team photo at 9 am tomorrow before the Cinder/Nova cross project session. Please be at the registration desk before 9 to be in the photo. > > We will then have the Cross Project session in the Nova room as it sounds like it is somewhat larger. 
I will have sound clips in hand to make sure things don't get too serious. > > Finally, an update on dinner for tomorrow night. I have moved dinner to a closer venue: > > Fagan's Bar and Restaurant: 146 Drumcondra Rd Lower, Drumcondra, Dublin 9 > > I have reservations for 7:30 pm. It isn't too difficult a walk from Croke Park (even in a blizzard) and it is a great pub. > > Thanks for a great day today! > > See you all tomorrow! Let's make it a great one! ;-) > Jay > Any plan now that there is a 4pm curfew? From sean.mcginnis at gmx.com Thu Mar 1 12:53:55 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 1 Mar 2018 06:53:55 -0600 Subject: [openstack-dev] [Glance] Changes to Glance core team In-Reply-To: References: Message-ID: <199416BA-D3BF-40A4-9C3B-9A7AB28228ED@gmx.com> Thanks team - glad to be able to help out and I am hoping I can dig in more and be able to contribute where I can! Sean > On Mar 1, 2018, at 06:22, Abhishek Kekane wrote: > > Big +1 to Sean, he has been doing wonderful job for us. > Thank you, Flavio and Mike, hope to see you back. > > Cheers, > > Abhishek > > On 01-Mar-2018 17:40, "Erno Kuvaja" > wrote: > Hi all, > > At the start of the cycle is good time to have a look of the Glance > reviewers and based on the discussions amongs the group during and > before the PTG I'd like to propose following changes: > > 1) Adding Sean McGinnis to Glance core. The current active core team > has been very positive about including Sean and he feels like he is > comfortable to take the +2 responsibility on. It might take some more > time for him to get fully familiar with Glance code base so we will > leave him room to approve changes to the parts of Glance he feels to > be ready and grow his expertise across. > > 2) Removing Flavio Percoco from Glance core. Flavio requested to be > removed already couple of cycles ago and we did beg him to stick > around to help with the Interoperable Image Import which of he has > been integral part of designing since the very beginning and due to > his knowledge of the internals of the Glance tasks. The majority of > this work is finished and we would like to thank Flavio for his help > and hard work for Glance community. > > 3) removing Mike Fedosin from glance core. Mike joined back to glance > ocre when we did desperately need help reviewing changes and we are > definitely grateful for his efforts to help us out when needed. By the > looks of it, Mike has moved on to different responsibilities. > > As usual, if the circumstances changes and Flavio or Mike will find > time and interest to serve our community again, we would be more than > happy to fast-track them back to the core team. > > I'd like to take the opportunity to give big thanks for all of them > for their help and contributions to Glance and I do hope seeing them > all around for the cycles to come. > > I'll leave until next week before doing these changes in case I have > missed something that has changed in the situation recently. 
> > best, > Erno jokke_ Kuvaja > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Mar 1 13:24:22 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 1 Mar 2018 13:24:22 +0000 Subject: [openstack-dev] [nova][ptg] room change after lunch to the lunch room (Hogan Suite) In-Reply-To: <42B9F4D9-4256-403F-874F-1EFD439B301C@gmail.com> References: <42B9F4D9-4256-403F-874F-1EFD439B301C@gmail.com> Message-ID: <8E31816C-0C70-4DF8-96A3-1A6FEAC4B61E@gmail.com> > On Mar 1, 2018, at 11:24, melanie witt wrote: > > Because it’s uncomfortably cold in Davin Suite, I’ve booked Hogan Suite (the lunch room) for us to use after lunch for the rest of today. > > So please be sure to go to Hogan Suite this afternoon for the Nova/Ironic and Nova/Neutron sessions. Update: we have access to a meeting area on the 4th floor of Croke Park Hotel so please go there to continue the Nova session. The Ironic team won’t be available until 3:00 PM so we’ll figure out what we can cover in the meantime. Thanks, -melanie From jungleboyj at gmail.com Thu Mar 1 14:12:30 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 1 Mar 2018 08:12:30 -0600 Subject: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ... In-Reply-To: <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> References: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> Message-ID: On 3/1/2018 6:50 AM, Sean McGinnis wrote: >> On Feb 28, 2018, at 16:58, Jay S Bryant wrote: >> >> Team, >> >> Just a reminder that we will be having our team photo at 9 am tomorrow before the Cinder/Nova cross project session. Please be at the registration desk before 9 to be in the photo. >> >> We will then have the Cross Project session in the Nova room as it sounds like it is somewhat larger. I will have sound clips in hand to make sure things don't get too serious. >> >> Finally, an update on dinner for tomorrow night. I have moved dinner to a closer venue: >> >> Fagan's Bar and Restaurant: 146 Drumcondra Rd Lower, Drumcondra, Dublin 9 >> >> I have reservations for 7:30 pm. It isn't too difficult a walk from Croke Park (even in a blizzard) and it is a great pub. >> >> Thanks for a great day today! >> >> See you all tomorrow! Let's make it a great one! ;-) >> Jay >> > Any plan now that there is a 4pm curfew? > Don't think this impacts the plan.  The curfew is just a recommendation and not a requirement. I will ensure that the restaurant is still open but they were planning to be last night. I will be there for anyone who wants to join. 
Jay From melwittt at gmail.com Thu Mar 1 14:24:44 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 1 Mar 2018 14:24:44 +0000 Subject: [openstack-dev] [nova][ptg] nova/neutron and nova/ironic cross-project sessions today Message-ID: Hi everyone, Just wanted to send a fresh email to say that the nova/neutron and nova/ironic cross-project sessions are going on today at the Croke Park Hotel. Nova/neutron is in progress now at the breakfast area of the restaurant and nova/ironic will start at 4:00 PM, probably also in the breakfast area. Please check http://ptg.openstack.org/ptg.html for the latest info, we’re keeping it updated. Sorry for all the confusion. -melanie From jimmy at openstack.org Thu Mar 1 14:40:56 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 01 Mar 2018 14:40:56 +0000 Subject: Re: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ... In-Reply-To: References: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> Message-ID: <5A9810F8.7090202@openstack.org> You might give them a call just to check. Everyone seems to be closing... > Jay S Bryant > March 1, 2018 at 2:12 PM > On 3/1/2018 6:50 AM, Sean McGinnis wrote: > > Don't think this impacts the plan. The curfew is just a > recommendation and not a requirement. > > I will ensure that the restaurant is still open but they were planning > to be last night. > > I will be there for anyone who wants to join. > > Jay > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Thu Mar 1 14:44:36 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Thu, 1 Mar 2018 14:44:36 +0000 Subject: Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects In-Reply-To: References: <2119f48f-c2b6-0087-3ad0-0bd77b210cd5@gmail.com> Message-ID: Hello, After discussion with the keystone team at the above session, keystone will not provide a way to transfer trusts or application credentials, since that doesn't address the above problem (a member who leaves the team can still authenticate with keystone if they hold the trust/app-creds). In magnum we need a way for admins and the cluster owner to rotate the trust or app-creds and certificates. We can leverage the existing rotate_ca api for rotating the ca and, at the same time, the trust. Since that api is designed only to rotate the ca, we can add a cluster action to transfer ownership of the cluster. This action should be allowed to be executed only by the admin or the current owner of a given cluster. At the same time, the trust created by heat for every stack suffers from the same problem; we should check with the heat team what their plan is.
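For illustration, a transfer would boil down to issuing a new trust owned by the new user and discarding the old one - a rough sketch with python-keystoneclient (every value below is a placeholder, not actual magnum code):

from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

# authenticate as the new cluster owner (placeholder credentials)
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='new-owner', password='secret',
                   project_name='cluster-project',
                   user_domain_id='default', project_domain_id='default')
keystone = client.Client(session=session.Session(auth=auth))

# the new owner delegates the needed roles to the magnum trustee user
new_trust = keystone.trusts.create(trustor_user='new-owner-user-id',
                                   trustee_user='magnum-trustee-user-id',
                                   project='cluster-project-id',
                                   role_names=['member'],
                                   impersonation=True)

# drop the trust that belonged to the previous owner
keystone.trusts.delete('old-trust-id')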
Cheers, Spyros On 27 February 2018 at 20:53, Ricardo Rocha wrote: > Hi Lance. > > On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad > wrote: > > > > > > On 02/26/2018 10:17 AM, Ricardo Rocha wrote: > >> Hi. > >> > >> We have an issue on the way Magnum uses keystone trusts. > >> > >> Magnum clusters are created in a given project using HEAT, and require > >> a trust token to communicate back with OpenStack services - there is > >> also integration with Kubernetes via a cloud provider. > >> > >> This trust belongs to a given user, not the project, so whenever we > >> disable the user's account - for example when a user leaves the > >> organization - the cluster becomes unhealthy as the trust is no longer > >> valid. Given the token is available in the cluster nodes, accessible > >> by users, a trust linked to a service account is also not a viable > >> solution. > >> > >> Is there an existing alternative for this kind of use case? I guess > >> what we might need is a trust that is linked to the project. > > This was proposed in the original application credential specification > > [0] [1]. The problem is that you're sharing an authentication mechanism > > with multiple people when you associate it to the life cycle of a > > project. When a user is deleted or removed from the project, nothing > > would stop them from accessing OpenStack APIs if the application > > credential or trust isn't rotated out. Even if the credential or trust > > were scoped to the project's life cycle, it would need to be rotated out > > and replaced when users come and go for the same reason. So it would > > still be associated to the user life cycle, just indirectly. Otherwise > > you're allowing unauthorized access to something that should be > protected. > > > > If you're at the PTG - we will be having a session on application > > credentials tomorrow (Tuesday) afternoon [2] in the identity-integration > > room [3]. > > Thanks for the reply, i now understand the issue. > > I'm not at the PTG. Had a look at the etherpad but it seems app > credentials will have a similar lifecycle so not suitable for the use > case above - for the same reasons you mention. > > I wonder what's the alternative to achieve what we need in Magnum? > > Cheers, > Ricardo > > > [0] https://review.openstack.org/#/c/450415/ > > [1] https://review.openstack.org/#/c/512505/ > > [2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg > > [3] http://ptg.openstack.org/ptg.html > >> > >> I believe the same issue would be there using application credentials, > >> as the ownership is similar. > >> > >> Cheers, > >> Ricardo > >> > >> ____________________________________________________________ ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > ____________________________________________________________ ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Thu Mar 1 15:00:33 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 1 Mar 2018 09:00:33 -0600 Subject: Re: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ... 
In-Reply-To: <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> References: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> Message-ID: On 3/1/2018 6:50 AM, Sean McGinnis wrote: >> On Feb 28, 2018, at 16:58, Jay S Bryant wrote: >> >> Team, >> >> Just a reminder that we will be having our team photo at 9 am tomorrow before the Cinder/Nova cross project session. Please be at the registration desk before 9 to be in the photo. >> >> We will then have the Cross Project session in the Nova room as it sounds like it is somewhat larger. I will have sound clips in hand to make sure things don't get too serious. >> >> Finally, an update on dinner for tomorrow night. I have moved dinner to a closer venue: >> >> Fagan's Bar and Restaurant: 146 Drumcondra Rd Lower, Drumcondra, Dublin 9 >> >> I have reservations for 7:30 pm. It isn't too difficult a walk from Croke Park (even in a blizzard) and it is a great pub. >> >> Thanks for a great day today! >> >> See you all tomorrow! Let's make it a great one! ;-) >> Jay >> > Any plan now that there is a 4pm curfew? > Dinner has been rescheduled for Friday night 3/2 or 2/3 depending on your country of origin.  6:30 at Fagans. I will update the etherpad. Jay From dougal at redhat.com Thu Mar 1 15:43:27 2018 From: dougal at redhat.com (Dougal Matthews) Date: Thu, 1 Mar 2018 15:43:27 +0000 Subject: [openstack-dev] [mistral] What's new in latest CloudFlow? In-Reply-To: References: Message-ID: Hey Guy, Thanks for sharing this update. I need to find time to try it out. The biggest issue for me is the lack of keystone support. I wonder if any of the code in tripleo-ui could be used to help with KeyStone support. It is a front-end JavaScript GUI. https://github.com/openstack/tripleo-ui Cheers, Dougal On 26 February 2018 at 09:10, Shaanan, Guy (Nokia - IL/Kfar Sava) < guy.shaanan at nokia.com> wrote: > CloudFlow [1] is an open-source web-based GUI tool that helps visualize > and debug Mistral workflows. > > > > With the latest release [2] of CloudFlow (v0.5.0) you can: > > * Visualize the flow of workflow executions > > * Identify the execution path of a single task in huge workflows > > * Search Mistral by any entity ID > > * Identify long-running tasks at a glance > > * Easily distinguish between simple task (an action) and a sub workflow > execution > > * Follow tasks with a `retry` and/or `with-items` > > * 1-click to copy task's input/output/publish/params values > > * See complete workflow definition and per task definition YAML > > * And more... > > > > CloudFlow is easy to install and run (and even easier to upgrade), and we > appreciate any feedback and contribution. > > > > CloudFlow currently supports unauthenticated Mistral or authentication > with KeyCloak (openid-connect implementation). A support for Keystone will > be added in the near future. > > > > You can try CloudFlow now on your Mistral Pike/Queens, or try it on the > online demo [3]. 
> > > > [1] https://github.com/nokia/CloudFlow > > [2] https://github.com/nokia/CloudFlow/releases/latest > > [3] http://yaqluator.com:8000 > > > > > > Thanks, > > *-----------------------------------------------------* > > *Guy Shaanan* > > Full Stack Web Developer, CI & Internal Tools > > CloudBand @ Nokia Software, Nokia, ISRAEL > > Guy.Shaanan at nokia.com > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flavio at redhat.com Thu Mar 1 15:57:31 2018 From: flavio at redhat.com (Flavio Percoco) Date: Thu, 1 Mar 2018 16:57:31 +0100 Subject: [openstack-dev] [Glance] Changes to Glance core team In-Reply-To: References: Message-ID: <20180301155729.rdmqhe76b673xn35@redhat.com> On 01/03/18 12:09 +0000, Erno Kuvaja wrote: >2) Removing Flavio Percoco from Glance core. Flavio requested to be >removed already couple of cycles ago and we did beg him to stick >around to help with the Interoperable Image Import which of he has >been integral part of designing since the very beginning and due to >his knowledge of the internals of the Glance tasks. The majority of >this work is finished and we would like to thank Flavio for his help >and hard work for Glance community. Makes sense to me! +1 Thanks for all the fish. :) Flavio -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 862 bytes Desc: not available URL: From guy.shaanan at nokia.com Thu Mar 1 16:37:09 2018 From: guy.shaanan at nokia.com (Shaanan, Guy (Nokia - IL/Kfar Sava)) Date: Thu, 1 Mar 2018 16:37:09 +0000 Subject: [openstack-dev] [mistral] What's new in latest CloudFlow? In-Reply-To: References: Message-ID: Hi Dougal, Yes, it probably does help. I haven’t found any proper Keystone JavaScript library (if anyone knows about one let me know). From: Dougal Matthews [mailto:dougal at redhat.com] Sent: Thursday, March 1, 2018 17:43 To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [mistral] What's new in latest CloudFlow? Hey Guy, Thanks for sharing this update. I need to find time to try it out. The biggest issue for me is the lack of keystone support. I wonder if any of the code in tripleo-ui could be used to help with KeyStone support. It is a front-end JavaScript GUI. https://github.com/openstack/tripleo-ui Cheers, Dougal On 26 February 2018 at 09:10, Shaanan, Guy (Nokia - IL/Kfar Sava) > wrote: CloudFlow [1] is an open-source web-based GUI tool that helps visualize and debug Mistral workflows. With the latest release [2] of CloudFlow (v0.5.0) you can: * Visualize the flow of workflow executions * Identify the execution path of a single task in huge workflows * Search Mistral by any entity ID * Identify long-running tasks at a glance * Easily distinguish between simple task (an action) and a sub workflow execution * Follow tasks with a `retry` and/or `with-items` * 1-click to copy task's input/output/publish/params values * See complete workflow definition and per task definition YAML * And more... CloudFlow is easy to install and run (and even easier to upgrade), and we appreciate any feedback and contribution. 
CloudFlow currently supports unauthenticated Mistral or authentication with KeyCloak (openid-connect implementation). A support for Keystone will be added in the near future. You can try CloudFlow now on your Mistral Pike/Queens, or try it on the online demo [3]. [1] https://github.com/nokia/CloudFlow [2] https://github.com/nokia/CloudFlow/releases/latest [3] http://yaqluator.com:8000 Thanks, ----------------------------------------------------- Guy Shaanan Full Stack Web Developer, CI & Internal Tools CloudBand @ Nokia Software, Nokia, ISRAEL Guy.Shaanan at nokia.com __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tushar.Patil at nttdata.com Thu Mar 1 16:41:26 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Thu, 1 Mar 2018 16:41:26 +0000 Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? In-Reply-To: References: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> <20180228230300.pgajhg5u5rjv3nyb@arabian.linksys.moosehall>, Message-ID: Hi All, Can we meet tomorrow morning between 9:00 and 10:00 A.M? Regards, Tushar Patil ________________________________ From: Sam P Sent: Thursday, March 1, 2018 11:43 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Hi All, Really sorry, I couldn't make it to PTG. However, as Dinesh said some team members are at PTG. I can connect on Zoom or Skype or Hangout... etc. Let me know the time if you are meeting up.. Thanks.. --- Regards, Sampath On Thu, Mar 1, 2018 at 8:03 AM, Adam Spiers > wrote: My claim to being a masakari person is pretty weak, but still I'd like to say hello too :-) Please ping me (aspiers on IRC) if you guys are meeting up! Bhor, Dinesh > wrote: Hi Greg, We below are present: Tushar Patil(tpatil) Yukinori Sagara(sagara) Abhishek Kekane(abhishekk) Dinesh Bhor(Dinesh_Bhor) Thank you, Dinesh Bhor ________________________________ From: Waines, Greg > Sent: 28 February 2018 19:22:26 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Any masakari folks at the PTG this week ? Would be interested in meeting up and chatting, let me know, Greg. ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu Mar 1 16:45:37 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 1 Mar 2018 16:45:37 +0000 Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? In-Reply-To: References: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> <20180228230300.pgajhg5u5rjv3nyb@arabian.linksys.moosehall> Message-ID: I’m available ... where will we meet up ? Greg. From: "Patil, Tushar" Reply-To: "openstack-dev at lists.openstack.org" Date: Thursday, March 1, 2018 at 4:41 PM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Hi All, Can we meet tomorrow morning between 9:00 and 10:00 A.M? Regards, Tushar Patil ________________________________ From: Sam P Sent: Thursday, March 1, 2018 11:43 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Hi All, Really sorry, I couldn't make it to PTG. However, as Dinesh said some team members are at PTG. I can connect on Zoom or Skype or Hangout... etc. Let me know the time if you are meeting up.. Thanks.. --- Regards, Sampath On Thu, Mar 1, 2018 at 8:03 AM, Adam Spiers > wrote: My claim to being a masakari person is pretty weak, but still I'd like to say hello too :-) Please ping me (aspiers on IRC) if you guys are meeting up! Bhor, Dinesh > wrote: Hi Greg, We below are present: Tushar Patil(tpatil) Yukinori Sagara(sagara) Abhishek Kekane(abhishekk) Dinesh Bhor(Dinesh_Bhor) Thank you, Dinesh Bhor ________________________________ From: Waines, Greg > Sent: 28 February 2018 19:22:26 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Any masakari folks at the PTG this week ? Would be interested in meeting up and chatting, let me know, Greg. ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. 
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From no-reply at openstack.org Thu Mar 1 17:08:08 2018
From: no-reply at openstack.org (no-reply at openstack.org)
Date: Thu, 01 Mar 2018 17:08:08 -0000
Subject: [openstack-dev] [kolla] kolla-ansible 6.0.0.0rc1 (queens)
Message-ID:

Hello everyone,

A new release candidate for kolla-ansible for the end of the Queens cycle is available! You can find the source code tarball at:

    https://tarballs.openstack.org/kolla-ansible/

Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/queens release branch at:

    http://git.openstack.org/cgit/openstack/kolla-ansible/log/?h=stable/queens

Release notes for kolla-ansible can be found at:

    http://docs.openstack.org/releasenotes/kolla-ansible/

From no-reply at openstack.org Thu Mar 1 17:11:41 2018
From: no-reply at openstack.org (no-reply at openstack.org)
Date: Thu, 01 Mar 2018 17:11:41 -0000
Subject: [openstack-dev] [kolla] kolla 6.0.0.0rc1 (queens)
Message-ID:

Hello everyone,

A new release candidate for kolla for the end of the Queens cycle is available! You can find the source code tarball at:

    https://tarballs.openstack.org/kolla/

Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball!
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/kolla/log/?h=stable/queens Release notes for kolla can be found at: http://docs.openstack.org/releasenotes/kolla/ From liujiong at gohighsec.com Thu Mar 1 17:50:40 2018 From: liujiong at gohighsec.com (Jiong Liu) Date: Thu, 1 Mar 2018 09:50:40 -0800 Subject: [openstack-dev] [barbican][castellan] Stepping down from core Message-ID: <000e01d3b185$d1b827b0$75287710$@gohighsec.com> Kaitlin, thank you for all your contribution over the past years! Wish you all the best in your new career! > Hi Barbicaneers, > I will be moving on to other projects at work and will not have time to contribute to OpenStack anymore. I am stepping down as core reviewer as I will not be able to maintain my responsibilities. It's been a great 4.5 years working on OpenStack and a > fulfilling 3 years as a Barbican core reviewer. > The recent growing interest in Castellan and Barbican for key management to support new security features is encouraging. The rest of the Barbican team will do a great job managing Barbican, Castellan, and Castellan-UI. > If you have any pressing concerns or questions, you can still reach me by email. > Thanks so much, > Kaitlin Farr From Tushar.Patil at nttdata.com Thu Mar 1 19:32:39 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Thu, 1 Mar 2018 19:32:39 +0000 Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? In-Reply-To: References: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> <20180228230300.pgajhg5u5rjv3nyb@arabian.linksys.moosehall> , Message-ID: Hi Greg, We can meet near croke park hotel reception at 9:00am. Is it ok? Regards Tushar ________________________________________ From: Waines, Greg Sent: Friday, March 2, 2018 1:45:37 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? I’m available ... where will we meet up ? Greg. From: "Patil, Tushar" Reply-To: "openstack-dev at lists.openstack.org" Date: Thursday, March 1, 2018 at 4:41 PM To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Hi All, Can we meet tomorrow morning between 9:00 and 10:00 A.M? Regards, Tushar Patil ________________________________ From: Sam P Sent: Thursday, March 1, 2018 11:43 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Hi All, Really sorry, I couldn't make it to PTG. However, as Dinesh said some team members are at PTG. I can connect on Zoom or Skype or Hangout... etc. Let me know the time if you are meeting up.. Thanks.. --- Regards, Sampath On Thu, Mar 1, 2018 at 8:03 AM, Adam Spiers > wrote: My claim to being a masakari person is pretty weak, but still I'd like to say hello too :-) Please ping me (aspiers on IRC) if you guys are meeting up! Bhor, Dinesh > wrote: Hi Greg, We below are present: Tushar Patil(tpatil) Yukinori Sagara(sagara) Abhishek Kekane(abhishekk) Dinesh Bhor(Dinesh_Bhor) Thank you, Dinesh Bhor ________________________________ From: Waines, Greg > Sent: 28 February 2018 19:22:26 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Any masakari folks at the PTG this week ? Would be interested in meeting up and chatting, let me know, Greg. 
______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Greg.Waines at windriver.com Thu Mar 1 19:40:55 2018
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Thu, 1 Mar 2018 19:40:55 +0000
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?
In-Reply-To: References: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> <20180228230300.pgajhg5u5rjv3nyb@arabian.linksys.moosehall> Message-ID: <879FF448-9BC4-4B14-A1CB-232C4F1A8FE8@windriver.com>

Sounds good ... see you there.
Greg.

From: "Patil, Tushar"
Reply-To: "openstack-dev at lists.openstack.org"
Date: Thursday, March 1, 2018 at 7:32 PM
To: "openstack-dev at lists.openstack.org"
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?

Hi Greg,
We can meet near the Croke Park hotel reception at 9:00am. Is it ok?
Regards
Tushar

________________________________________
From: Waines, Greg
Sent: Friday, March 2, 2018 1:45:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?

I’m available ... where will we meet up ?
Greg.
From: "Patil, Tushar" > Reply-To: "openstack-dev at lists.openstack.org" > Date: Thursday, March 1, 2018 at 4:41 PM To: "openstack-dev at lists.openstack.org" > Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Hi All, Can we meet tomorrow morning between 9:00 and 10:00 A.M? Regards, Tushar Patil ________________________________ From: Sam P > Sent: Thursday, March 1, 2018 11:43 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Hi All, Really sorry, I couldn't make it to PTG. However, as Dinesh said some team members are at PTG. I can connect on Zoom or Skype or Hangout... etc. Let me know the time if you are meeting up.. Thanks.. --- Regards, Sampath On Thu, Mar 1, 2018 at 8:03 AM, Adam Spiers > wrote: My claim to being a masakari person is pretty weak, but still I'd like to say hello too :-) Please ping me (aspiers on IRC) if you guys are meeting up! Bhor, Dinesh > wrote: Hi Greg, We below are present: Tushar Patil(tpatil) Yukinori Sagara(sagara) Abhishek Kekane(abhishekk) Dinesh Bhor(Dinesh_Bhor) Thank you, Dinesh Bhor ________________________________ From: Waines, Greg > Sent: 28 February 2018 19:22:26 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ? Any masakari folks at the PTG this week ? Would be interested in meeting up and chatting, let me know, Greg. ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev openstack-dev mailing list - lists.openstack.org Mailing Lists lists.openstack.org This list for the developers of OpenStack to discuss development issues and roadmap. It is focused on the next release of OpenStack: you should post on this list if ... __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev openstack-dev mailing list - lists.openstack.org Mailing Lists lists.openstack.org This list for the developers of OpenStack to discuss development issues and roadmap. It is focused on the next release of OpenStack: you should post on this list if ... ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. 
______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rocha.porto at gmail.com Thu Mar 1 19:48:46 2018
From: rocha.porto at gmail.com (Ricardo Rocha)
Date: Thu, 1 Mar 2018 20:48:46 +0100
Subject: Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects
In-Reply-To: References: <2119f48f-c2b6-0087-3ad0-0bd77b210cd5@gmail.com> Message-ID:

Hi.

I had added an item for this:
https://bugs.launchpad.net/magnum/+bug/1752433

after the last reply and a bit of searching around. It's not urgent, but we've already hit a couple of cases in our deployment.

Cheers,
Ricardo

On Thu, Mar 1, 2018 at 3:44 PM, Spyros Trigazis wrote:
> Hello,
>
> After discussion with the keystone team at the above session, keystone
> will not provide a way to transfer trusts or application credentials,
> since it doesn't address the above problem (the member that leaves the team
> can still auth with keystone if he has the trust/app-creds).
>
> In magnum we need a way for admins and the cluster owner to rotate the
> trust or app-creds and certificates.
>
> We can leverage the existing rotate_ca api for rotating the ca and, at the
> same time, the trust. Since this api is designed only to rotate the ca, we can
> add a cluster action to transfer ownership of the cluster. This action should be
> allowed to be executed by the admin or the current owner of a given cluster.
>
> At the same time, the trust created by heat for every stack suffers from the
> same problem; we should check with the heat team what their plan is.
>
> Cheers,
> Spyros
>
> On 27 February 2018 at 20:53, Ricardo Rocha wrote:
>>
>> Hi Lance.
>>
>> On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad
>> wrote:
>> >
>> >
>> > On 02/26/2018 10:17 AM, Ricardo Rocha wrote:
>> >> Hi.
>> >>
>> >> We have an issue on the way Magnum uses keystone trusts.
>> >>
>> >> Magnum clusters are created in a given project using HEAT, and require
>> >> a trust token to communicate back with OpenStack services - there is
>> >> also integration with Kubernetes via a cloud provider.
>> >>
>> >> This trust belongs to a given user, not the project, so whenever we
>> >> disable the user's account - for example when a user leaves the
>> >> organization - the cluster becomes unhealthy as the trust is no longer
>> >> valid. Given the token is available in the cluster nodes, accessible
>> >> by users, a trust linked to a service account is also not a viable
>> >> solution.
>> >>
>> >> Is there an existing alternative for this kind of use case? I guess
>> >> what we might need is a trust that is linked to the project.
>> > This was proposed in the original application credential specification
>> > [0] [1].
>> > The problem is that you're sharing an authentication mechanism
>> > with multiple people when you associate it to the life cycle of a
>> > project. When a user is deleted or removed from the project, nothing
>> > would stop them from accessing OpenStack APIs if the application
>> > credential or trust isn't rotated out. Even if the credential or trust
>> > were scoped to the project's life cycle, it would need to be rotated out
>> > and replaced when users come and go for the same reason. So it would
>> > still be associated to the user life cycle, just indirectly. Otherwise
>> > you're allowing unauthorized access to something that should be
>> > protected.
>> >
>> > If you're at the PTG - we will be having a session on application
>> > credentials tomorrow (Tuesday) afternoon [2] in the identity-integration
>> > room [3].
>>
>> Thanks for the reply, I now understand the issue.
>>
>> I'm not at the PTG. Had a look at the etherpad, but it seems app
>> credentials will have a similar lifecycle, so they are not suitable for the use
>> case above - for the same reasons you mention.
>>
>> I wonder what the alternative is to achieve what we need in Magnum?
>>
>> Cheers,
>> Ricardo
>>
>> > [0] https://review.openstack.org/#/c/450415/
>> > [1] https://review.openstack.org/#/c/512505/
>> > [2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg
>> > [3] http://ptg.openstack.org/ptg.html
>>
>> >> I believe the same issue would be there using application credentials,
>> >> as the ownership is similar.
>> >>
>> >> Cheers,
>> >> Ricardo
>> >>
>> >> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From s at cassiba.com Thu Mar 1 20:16:20 2018
From: s at cassiba.com (Samuel Cassiba)
Date: Thu, 1 Mar 2018 12:16:20 -0800
Subject: [openstack-dev] [chef] Pike cookbooks released
Message-ID:

Ohai!

The Chef OpenStack team is excited to announce that the 16.0 release of the cookbooks is fresh out of the oven! This corresponds with the Pike release of OpenStack.
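If you want to take the 16.0 release for a spin from a wrapper cookbook, a minimal Berksfile along these lines should work, pulling from Supermarket where the cookbooks are published (as noted below) - the version pins here are illustrative, adjust to taste:

    source 'https://supermarket.chef.io'

    cookbook 'openstack-common',   '~> 16.0'
    cookbook 'openstack-identity', '~> 16.0'

Then a plain 'berks install' resolves the release and its dependencies.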
The cookbooks have been published to Supermarket under the OpenStack namespace located at https://supermarket.chef.io/users/openstack The following cookbooks received updates with this release: - openstack-block-storage - openstack-common - openstack-compute - openstack-dashboard - openstack-identity - openstack-image - openstack-integration-test - openstack-network - openstack-ops-database - openstack-ops-messaging - openstack-orchestration - openstack-telemetry In this release, we also leverage the following external cookbooks that were updated in tandem: - openstackclient - openstack-dns (Designate) The main focus of the release has been cookbook stabilization and improvement of functional testing. Local testing has been overhauled in favor of Test Kitchen (https://kitchen.ci) and InSpec (https://www.inspec.io/), which provides a more consistent interface. The RDBMS flavor has also changed to MariaDB, dropping MySQL from the tested scenarios. This also marks the first release developed and tested on Chef 13, with Chef 12 now being unsupported in master. If you need to use an older release of OpenStack with Chef 13, this will give you a blueprint for what needs to be backported. Prost! Your humble cook, Samuel Cassiba (sc` / scas) From corey.bryant at canonical.com Thu Mar 1 21:27:27 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 1 Mar 2018 16:27:27 -0500 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 16.04 LTS Message-ID: Hi All, The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Queens on Ubuntu 16.04 LTS via the Ubuntu Cloud Archive. Details of the Queens release can be found at: https://www.openstack.org/software/queens To get access to the Ubuntu Queens packages: Ubuntu 16.04 LTS ------------------------ You can enable the Ubuntu Cloud Archive pocket for OpenStack Queens on Ubuntu 16.04 installations by running the following commands: sudo add-apt-repository cloud-archive:queens sudo apt update The Ubuntu Cloud Archive for Queens includes updates for: aodh, barbican, ceilometer, ceph (12.2.2), cinder, congress, designate, designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas, neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0), panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar. For a full list of packages and versions, please refer to [0]. 
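As a quick sanity check that the Queens pocket is enabled and preferred over the stock Xenial archive, something like the following should work - the package picked here is just an example:

    sudo add-apt-repository cloud-archive:queens
    sudo apt update
    apt policy python-openstackclient
    sudo apt install python-openstackclient

apt policy should show a candidate version coming from the queens pocket on ubuntu-cloud.archive.canonical.com.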
Branch Package Builds ------------------------------- If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPA’s: sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka sudo add-apt-repository ppa:openstack-ubuntu-testing/newton sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata sudo add-apt-repository ppa:openstack-ubuntu-testing/pike sudo add-apt-repository ppa:openstack-ubuntu-testing/queens Reporting bugs --------------------- If you have any issues please report bugs using the 'ubuntu-bug' tool to ensure that bugs get logged in the right place in Launchpad: sudo ubuntu-bug nova-conductor Thanks to everyone who has contributed to OpenStack Queens, both upstream and downstream! Have fun and see you in Rocky! Regards, Corey (on behalf of the Ubuntu OpenStack team) [0] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From xinni.ge1990 at gmail.com Thu Mar 1 21:30:40 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Thu, 1 Mar 2018 21:30:40 +0000 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env In-Reply-To: <664e765d-77ed-0255-625e-a56cc9322aac@debian.org> References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> <664e765d-77ed-0255-625e-a56cc9322aac@debian.org> Message-ID: Hi, there. This is a page of a similar unittest issue. https://bugs.launchpad.net/heat-dashboard/+bug/1752527 We merged the following patch to fix the issue, and hope it also fix the trouble described here. https://review.openstack.org/#/c/548924/ As for the minified javascript files, we are working on removing from the source code, and uploading as xstatic packages. Just need a little more time to finish it. Best regards, Xinni On Tue, Feb 27, 2018 at 10:48 AM, Thomas Goirand wrote: > On 02/23/2018 09:29 AM, Xinni Ge wrote: > > Hi there, > > > > We are aware of the javascript embedded issue, and working on it now, > > the patch will be summited later. > > > > As for the unittest failure, we are still investigating it. We will > > contant you as soon as we find out the cause. > > > > Sorry to bring troubles to you. We will be grateful if you could wait > > for a little longer. > > > > Best Regards, > > > > Xinni > > Hi, > > Thanks for this message. This lowers the frustration! :) > Let me know if there's any patch I could review. > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- 葛馨霓 Xinni Ge -------------- next part -------------- An HTML attachment was scrubbed... 
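For context on the xstatic route mentioned above: each XStatic package is a thin Python wrapper whose module carries metadata pointing at the bundled static files, so Horizon plugins can stop carrying minified copies in-tree. A rough sketch of the usual convention - the package name, versions and paths here are purely illustrative, not the actual heat-dashboard packages:

    # xstatic/pkg/angular_material/__init__.py  (illustrative sketch)
    from os import path

    NAME = 'angular_material'          # Python module name, lowercase
    DISPLAY_NAME = 'Angular-Material'  # human-readable name
    PACKAGE_NAME = 'XStatic-%s' % DISPLAY_NAME
    VERSION = '1.1.5'                  # upstream version (assumed)
    BUILD = '0'                        # packaging build number
    PACKAGE_VERSION = VERSION + '.' + BUILD
    # the actual JS/CSS files live under data/ inside the package
    BASE_DIR = path.join(path.dirname(path.abspath(__file__)), 'data')

Consumers then resolve files through the package metadata rather than hard-coded in-tree paths.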
From rochelle.grober at huawei.com Thu Mar 1 22:37:10 2018
From: rochelle.grober at huawei.com (Rochelle Grober)
Date: Thu, 1 Mar 2018 22:37:10 +0000
Subject: Re: [openstack-dev] [DriverLog] DriverLog future
In-Reply-To: <5e624c84-41f0-7e12-fb66-2a514655f2a0@gmail.com>
References: <5e624c84-41f0-7e12-fb66-2a514655f2a0@gmail.com>
Message-ID:

> -----Original Message-----
> From: Matt Riedemann [mailto:mriedemos at gmail.com]
> Sent: Thursday, March 01, 2018 4:19 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [DriverLog] DriverLog future
>
> On 3/1/2018 10:44 AM, Ilya Shakhat wrote:
> >
> > For those who do not know, DriverLog is a community registry of
> > 3rd-party drivers for OpenStack hosted together with Stackalytics [1].
> > The project started 4 years ago and by now contains information about
> > 220 drivers. The data from DriverLog is also consumed by the official
> > Marketplace [2].
> >
> > Here I would like to discuss directions for DriverLog and the 3rd-party
> > driver registry in general.
> >
> > 1) Being a single community-wide registry was good initially; it
> > allowed us to quickly collect descriptions for most drivers in a single place.
> > But in the long term this approach stopped working - not many projects
> > remember to update information stored in some random place, right?
> >
> > Mike already pointed to this problem a year ago [3] and the idea was
> > to move the driver list into the projects (and thus move responsibility to
> > them too) and have an aggregated list of drivers produced by infra. Do we
> > have any progress in this direction? Is it time to start deprecating
> > DriverLog and consider a transition during the Rocky release?
> >
> > 2) As a project with a 4-year history, DriverLog's list has only grown over
> > time, with quite few removals. It still contains drivers whose latest
> > supported version is Liberty, and drivers for unmaintained projects (e.g.
> > Fuel). While it may make sense to keep all of them for operators
> > who run older versions, it can give the impression that the majority of
> > drivers are old. One solution for this is to show by default only
> > drivers for active releases (Pike and onward). If done, this will
> > apply to both DriverLog and the Marketplace.

If you want to default to showing only drivers for active releases, you have to provide a method for users to find which drivers are available for *any* specific release, no matter how old (although Juno is likely the furthest back we would want to go). There are lots of people who haven't upgraded to "living" releases but still need to maintain their clouds, which might mean getting an as-yet-unacquired driver for their cloud's release. Remember, even Interop certification goes back three releases. You can unclutter the pages a bit by defaulting to displaying current drivers, but you must still provide the historical lists.

--Rocky

> >
> > Any other ideas or suggestions?
>
> Having recently gone through that repo to update some of the nova driver
> maintainers, I noted the very old status of several of them.
>
> I agree this information should live in the per-project repo documentation,
> not in a centralized location. Nova does a decent job of keeping the virt
> driver feature support matrix up to date, but definitely not when it's a
> separate repo. This is a similar problem to the centralized docs issue
> addressed as a community in Pike.
> The OSIC team tried working on a feature classification effort [1] for a few
> releases which was similar to the driver log, specifically for showing which
> drivers and features had CI coverage. That work is *very* incomplete and no
> longer maintained, and I've actually been suggesting lately that we drop it,
> since misinformation is almost worse than no information.
>
> I suggested to Mike the other day that, at the very least, the driver log docs
> should put a big red warning, like in [1], that the information may be old.
>
> [1] https://docs.openstack.org/nova/latest/user/feature-classification.html
>
> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From lijie at unitedstack.com Fri Mar 2 08:31:03 2018
From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=)
Date: Fri, 2 Mar 2018 16:31:03 +0800
Subject: [openstack-dev] [cinder] about clone the volume
Message-ID:

Hi all,

This is the spec [0] about cloning a volume. I found the clone function in cinder's driver.py, but I don't know why we don't provide a RESTful API for it. Can you tell me more about this? Thank you very much. The link is here:

[0] https://blueprints.launchpad.net/cinder/+spec/add-cloning-support-to-cinder

Best Regards
Lijie
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Tushar.Patil at nttdata.com Fri Mar 2 08:57:30 2018
From: Tushar.Patil at nttdata.com (Patil, Tushar)
Date: Fri, 2 Mar 2018 08:57:30 +0000
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?
In-Reply-To: <879FF448-9BC4-4B14-A1CB-232C4F1A8FE8@windriver.com>
References: <3AF6B015-3D76-4123-B2B0-B3B527EEEB8E@windriver.com> <20180228230300.pgajhg5u5rjv3nyb@arabian.linksys.moosehall> <879FF448-9BC4-4B14-A1CB-232C4F1A8FE8@windriver.com>
Message-ID:

Hi Greg,

We are waiting on the right side of the Croke Park hotel reception.

Regards
Tushar

________________________________________
From: Waines, Greg
Sent: Friday, March 2, 2018 4:40:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?

Sounds good ... see you there.
Greg.

From: "Patil, Tushar"
Reply-To: "openstack-dev at lists.openstack.org"
Date: Thursday, March 1, 2018 at 7:32 PM
To: "openstack-dev at lists.openstack.org"
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?

Hi Greg,
We can meet near the Croke Park hotel reception at 9:00am. Is it ok?
Regards
Tushar

________________________________________
From: Waines, Greg
Sent: Friday, March 2, 2018 1:45:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?

I’m available ... where will we meet up ?
Greg.

From: "Patil, Tushar"
Reply-To: "openstack-dev at lists.openstack.org"
Date: Thursday, March 1, 2018 at 4:41 PM
To: "openstack-dev at lists.openstack.org"
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?

Hi All,
Can we meet tomorrow morning between 9:00 and 10:00 A.M?
Regards,
Tushar Patil

________________________________
From: Sam P
Sent: Thursday, March 1, 2018 11:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?

Hi All,
Really sorry, I couldn't make it to the PTG. However, as Dinesh said, some team members are at the PTG. I can connect on Zoom or Skype or Hangouts, etc. Let me know the time if you are meeting up.
Thanks.
--- Regards, Sampath

On Thu, Mar 1, 2018 at 8:03 AM, Adam Spiers wrote:
My claim to being a masakari person is pretty weak, but still I'd like to say hello too :-) Please ping me (aspiers on IRC) if you guys are meeting up!

Bhor, Dinesh wrote:
Hi Greg,

We below are present:
Tushar Patil (tpatil)
Yukinori Sagara (sagara)
Abhishek Kekane (abhishekk)
Dinesh Bhor (Dinesh_Bhor)

Thank you,
Dinesh Bhor

________________________________
From: Waines, Greg
Sent: 28 February 2018 19:22:26
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [masakari] Any masakari folks at the PTG this week ?

Any masakari folks at the PTG this week ?

Would be interested in meeting up and chatting, let me know,
Greg.

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From emilien at redhat.com Fri Mar 2 09:24:59 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 2 Mar 2018 09:24:59 +0000
Subject: Re: [openstack-dev] [tripleo] storyboard evaluation
In-Reply-To: References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> <1516189284-sup-1775@fewbar.com> Message-ID:

A quick update:

- Discussed with Jiri Tomasek from the TripleO UI squad, and he agreed that his squad would start to use Storyboard and experiment with it.
- I told him I would take care of making sure all UI bugs created in Launchpad are moved to Storyboard.
- Talked with Kendall, and we agreed that we would move forward and migrate TripleO UI bugs to Storyboard.
- The TripleO UI squad would report feedback about Storyboard to the Storyboard team, with the help of other TripleO folks (me at least; I'm willing to help).

Hopefully this is progress and we can move forward.
More updates to come about the migration during the next days...

Thanks everyone involved in these productive discussions.

On Wed, Jan 17, 2018 at 12:33 PM, Thierry Carrez wrote:
> Clint Byrum wrote:
> [...]
> > That particular example board was built from tasks semi-automatically,
> > using a tag, by this script running on a cron job somewhere:
> >
> > https://git.openstack.org/cgit/openstack-infra/zuul/tree/tools/update-storyboard.py?h=feature/zuulv3
> >
> > We did this so that we could have a rule "any task that is open with
> > the zuulv3 tag must be on this board". Jim very astutely noticed that
> > I was not very good at being a robot that did this and thus created the
> > script to ease me into retirement from zuul project management.
> >
> > The script adds new things in New, and moves tasks automatically to
> > In Progress, and then removes them when they are completed. We would
> > periodically groom the "New" items into an appropriate lane with the hopes
> > of building what you might call a rolling-sprint in Todo, and calling
> > out blocked tasks in a regular meeting. Stories were added manually as
> > a way to say "look in here and add tasks", and manually removed when
> > the larger effort of the story was considered done.
> >
> > I rather like the semi-automatic nature of it, and would definitely
> > suggest that something like this be included in Storyboard if other
> > groups find the board building script useful. This made a cross-project
> > effort between Nodepool and Zuul go more smoothly as we had some more
> > casual contributors to both, and some more full-time.
> That's a great example that illustrates StoryBoard design: rather than
> do too much upfront feature design, focus on primitives and expose them
> fully through a strong API, then let real-world usage dictate patterns
> that might result in future features.
>
> The downside of this approach is of course getting enough usage on a
> product that appears a bit "raw" in terms of features. But I think we
> are closing in on getting that critical mass :)
>
> --
> Thierry Carrez (ttx)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From e0ne at e0ne.info Fri Mar 2 09:40:33 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Fri, 2 Mar 2018 09:40:33 +0000
Subject: Re: [openstack-dev] [cinder] about clone the volume
In-Reply-To: References: Message-ID:

Hi Lijie,

We use the Create Volume API for this, with the 'source_volid' parameter. It's available in the REST API and in cinderclient.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Fri, Mar 2, 2018 at 8:31 AM, 李杰 wrote:
> Hi all,
>
> This is the spec [0] about cloning a volume. I found the clone
> function in cinder's driver.py, but I don't know why we don't provide a
> RESTful API for it. Can you tell me more about this? Thank you very much.
> The link is here:
> [0] https://blueprints.launchpad.net/cinder/+spec/add-cloning-support-to-cinder
>
> Best Regards
> Lijie
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lijie at unitedstack.com Fri Mar 2 11:09:13 2018
From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=)
Date: Fri, 2 Mar 2018 19:09:13 +0800
Subject: Re: [openstack-dev] [cinder] about clone the volume
In-Reply-To: References: Message-ID:

Thank you

------------------ Original ------------------
From: "Ivan Kolodyazhny";
Date: Fri, Mar 2, 2018 05:40 PM
To: "OpenStack Development Mailing List";
Subject: Re: [openstack-dev] [cinder] about clone the volume

Hi Lijie,

We use the Create Volume API for this, with the 'source_volid' parameter. It's available in the REST API and in cinderclient.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Fri, Mar 2, 2018 at 8:31 AM, 李杰 wrote:
> Hi all,
>
> This is the spec [0] about cloning a volume. I found the clone
> function in cinder's driver.py, but I don't know why we don't provide a
> RESTful API for it. Can you tell me more about this? Thank you very much.
> The link is here:
> [0] https://blueprints.launchpad.net/cinder/+spec/add-cloning-support-to-cinder
>
> Best Regards
> Lijie

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
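To make the answer above concrete: cloning goes through the existing Create Volume call with 'source_volid', so no separate clone endpoint is needed. A minimal sketch - the names, sizes and IDs here are illustrative:

    # REST: POST /v3/{project_id}/volumes
    # body: {"volume": {"size": 10, "source_volid": "<uuid-of-source-volume>"}}

    cinder create --source-volid <uuid-of-source-volume> --name my-clone 10

    # or via the unified client:
    openstack volume create --source <source-volume> --size 10 my-clone

The new volume is created as a clone of the source, and the driver's clone function is what services it under the hood.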
From amotoki at gmail.com Fri Mar 2 13:29:00 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Fri, 2 Mar 2018 13:29:00 +0000
Subject: Re: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env
In-Reply-To: References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> <664e765d-77ed-0255-625e-a56cc9322aac@debian.org> Message-ID:

Hi Xinni,

I looked at your patch which drops the vendored files, but I still have a question.

The patch introduces some SCSS like:
- bootstrap.scss
- angular-notify.scss
- angular-material.scss

Aren't they another type of "vendored" files?
I don't understand why switching to SCSS solves the embedded "vendors" problem.

I would like to know the opinions of zigo and Corey.

Thanks,
Akihiro

2018-03-01 21:30 GMT+00:00 Xinni Ge :
> Hi, there.
>
> This is a page about a similar unittest issue:
> https://bugs.launchpad.net/heat-dashboard/+bug/1752527
>
> We merged the following patch to fix the issue, and hope it also fixes the
> trouble described here:
> https://review.openstack.org/#/c/548924/
>
> As for the minified javascript files, we are working on removing them from
> the source code and uploading them as xstatic packages.
> Just need a little more time to finish it.
>
> Best regards,
>
> Xinni
>
> On Tue, Feb 27, 2018 at 10:48 AM, Thomas Goirand wrote:
>> On 02/23/2018 09:29 AM, Xinni Ge wrote:
>> > Hi there,
>> >
>> > We are aware of the javascript embedding issue, and are working on it now;
>> > the patch will be submitted later.
>> >
>> > As for the unittest failure, we are still investigating it. We will
>> > contact you as soon as we find out the cause.
>> >
>> > Sorry to bring trouble to you. We will be grateful if you could wait
>> > a little longer.
>> >
>> > Best Regards,
>> >
>> > Xinni
>>
>> Hi,
>>
>> Thanks for this message. This lowers the frustration! :)
>> Let me know if there's any patch I could review.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> 葛馨霓 Xinni Ge
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From kennelson11 at gmail.com Fri Mar 2 13:33:59 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Fri, 02 Mar 2018 13:33:59 +0000
Subject: [openstack-dev] [PTG] Team Photo Links
Message-ID:

Hello Everyone!

I uploaded all the photos to Dropbox, and they are organized by team! Feel free to download them and do what you like with them.

Dropbox: https://www.dropbox.com/sh/dtei3ovfi7z74vo/AAB6nLiArw8fYBiO-X_vDGyna?dl=0

- Kendall (diablo_rojo)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bdobreli at redhat.com Fri Mar 2 13:47:25 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Fri, 2 Mar 2018 14:47:25 +0100
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
Message-ID:

Hello.

As the Zuul documentation [0] explains, the names "check", "gate", and "post" may be altered for more advanced pipelines.
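For illustration, the kind of staged check jobs I have in mind might be expressed roughly like this in a project's Zuul v3 configuration, chaining jobs with the 'dependencies' attribute - all job names here are made up, not existing jobs:

    - project:
        check:
          jobs:
            - tripleo-check-lint
            - tripleo-check-undercloud:
                dependencies:
                  - tripleo-check-lint
            - tripleo-check-overcloud:
                dependencies:
                  - tripleo-check-undercloud

This only expresses ordering (a later job is skipped when an earlier one fails); whether the later jobs could also reuse the environments built by the earlier ones is exactly the open question below.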
Is it doable to introduce, for particular OpenStack projects, multiple check stages/steps as check-1, check-2 and so on? And is it possible to have subsequent steps reuse the environments that the previous steps finished with?

Narrowing down to the tripleo CI scope, the problem I'd want us to solve with this "virtual RFE", using such multi-staged check pipelines, is reducing (ideally, de-duplicating) some of the common steps in existing CI jobs. For example, we may want to omit running all of the OVB or multinode (non-upgrade) jobs deploying overclouds when the *undercloud* fails to install. This case makes even more sense when the undercloud is deployed from the same heat templates (aka Containerized Undercloud) and uses the same packages and container images as the overcloud would! Or, maybe, just stop the world when tox fails at step 1 and do nothing more, as it makes very little sense to run anything else (IMO) if the patch can never be gated with a failed tox check anyway...

What I propose here is to think and discuss, and come up with an RFE, either for tripleo, or zuul, or both, covering the following scenarios (examples are tripleo/RDO CI specific, though you can think of other use cases, of course!):

case A. No deduplication, simple multi-staged check pipeline:
* check-1: syntax only, lint/tox
* check-2: undercloud install with heat and containers
* check-3: undercloud install with heat and containers, build overcloud images (if not a multinode job type), deploy overcloud... (repeats OVB jobs as is, basically)

case B. Full de-duplication scenario (subsequent steps re-use the previous steps' results, building "on top"):
* check-1: syntax only, lint/tox
* check-2: undercloud install, probably reuses nothing from step 1
* check-3: build overcloud images, if not a multinode job type; extends stage 2
* check-4: deploy overcloud; extends stages 2/3
* check-5: upgrade undercloud; extends stage 2
* check-6: upgrade overcloud; extends stage 4
(looking into the future...)
* check-7: deploy openshift/k8s on openstack and do e2e/conformance et al; extends either stage 4 or 6

I believe even the simplest 'case A' would reduce the zuul queues for tripleo CI dramatically. What do you think, folks?

See also the PTG tripleo CI notes [1].

[0] https://docs.openstack.org/infra/zuul/user/concepts.html
[1] https://etherpad.openstack.org/p/tripleo-ptg-ci

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From sundar.nadathur at intel.com Fri Mar 2 14:00:23 2018
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Fri, 2 Mar 2018 14:00:23 +0000
Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com>
References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com>
Message-ID: <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com>

Hello Nova team,

During the Cyborg discussion at the Rocky PTG, we proposed a flow for FPGAs wherein the request spec asks for a device type as a resource class, and optionally a function (such as encryption) in the extra specs. This does not seem to work well for the usage model that I'll describe below.

An FPGA device may implement more than one function. For example, it may implement both compression and encryption. Say a cluster has 10 devices of device type X, and each of them is programmed to offer 2 instances of function A and 4 instances of function B.
More specifically, the device may implement 6 PCI functions, with 2 of them tied to function A and the other 4 tied to function B. So we could have 6 separate instances accessing functions on the same device.

In the current flow, the device type X is modeled as a resource class, so Placement will count how many of them are in use. A flavor for 'RC device-type-X + function A' will consume one instance of the RC device-type-X. But this is not right, because it precludes other functions on the same device instance from getting used.

One way to solve this is to declare functions A and B as resource classes themselves and have the flavor request the function RC. Placement will then correctly count the function instances. However, there is still a problem: if the requested function A is not available, Placement will return an empty list of RPs, but we need some way to reprogram some device to create an instance of function A.

Regards,
Sundar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emilien at redhat.com Fri Mar 2 14:19:03 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 2 Mar 2018 14:19:03 +0000
Subject: [openstack-dev] [TripleO][CI][QA] Validating HA on upstream
In-Reply-To: <31269a1f-deee-071a-a247-f155404ce83a@redhat.com>
References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <37fb190d-693a-1749-5e4e-0cfba68466d2@redhat.com> <07c67333-272c-d652-65aa-d5bd6cd59809@redhat.com> <31269a1f-deee-071a-a247-f155404ce83a@redhat.com>
Message-ID:

Talking with clarkb during the PTG, we'll need to transform tripleo-quickstart-utils into a non-forked repo - or move the roles to an existing repo. But we can't continue to maintain this fork.
Raoul, let us know what you think is best (move the repo to OpenStack or move the modules to an existing upstream repo).
Thanks,

On Fri, Feb 16, 2018 at 3:12 PM, Raoul Scarazzini wrote:
> On 16/02/2018 15:41, Wesley Hayutin wrote:
> [...]
> > Using galaxy is an option however we would need to make sure that galaxy
> > is proxied across the upstream clouds.
> > Another option would be to follow the current established pattern of
> > adding it to the requirements file [1]
> > Thanks Bogdan, Raoul!
>
> [1] https://github.com/openstack/tripleo-quickstart/blob/master/quickstart-extras-requirements.txt
>
> This is how we're using it today in the internal pipelines, so once we
> will have the tripleo-ha-utils (or whatever it will be called) it will
> be just a matter of adding it into the file. In the end I think that
> once the project will be created, either way of using it will be fine.
>
> Thanks for your involvement on this, folks!
>
> --
> Raoul Scarazzini
> rasca at redhat.com

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From xinni.ge1990 at gmail.com Fri Mar 2 14:27:10 2018
From: xinni.ge1990 at gmail.com (Xinni Ge)
Date: Fri, 2 Mar 2018 14:27:10 +0000
Subject: Re: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env
In-Reply-To: References: Message-ID:

Hi, Akihiro,

The patch I submitted previously didn't solve the embedded problem.
I would like to fix the whole thing in several steps, because the current arrangement of the static files is quite messy.

I will upload the javascript and css style files as xstatic-* projects and remove them from the code later on.

I wanted to solve the unittest problem ASAP, and assumed it would take some time to create the xstatic repos and get approval from the infra team, so I just submitted this patch to let the unittest pass first.

Thanks for asking; I am happy to hear from you all about the js and static files issue.

Best regards,

Xinni

On Fri, Mar 2, 2018 at 1:29 PM, Akihiro Motoki wrote:
> Hi Xinni,
>
> I looked at your patch which drops the vendored files, but I still
> have a question.
>
> The patch introduces some SCSS like:
> - bootstrap.scss
> - angular-notify.scss
> - angular-material.scss
>
> Aren't they another type of "vendored" files?
> I don't understand why switching to SCSS solves the embedded "vendors" problem.
>
> I would like to know the opinions of zigo and Corey.
>
> Thanks,
> Akihiro
>
> 2018-03-01 21:30 GMT+00:00 Xinni Ge :
> > Hi, there.
> >
> > This is a page about a similar unittest issue:
> > https://bugs.launchpad.net/heat-dashboard/+bug/1752527
> >
> > We merged the following patch to fix the issue, and hope it also fixes the
> > trouble described here:
> > https://review.openstack.org/#/c/548924/
> >
> > As for the minified javascript files, we are working on removing them from
> > the source code and uploading them as xstatic packages.
> > Just need a little more time to finish it.
> >
> > Best regards,
> >
> > Xinni
> >
> > On Tue, Feb 27, 2018 at 10:48 AM, Thomas Goirand wrote:
> >> On 02/23/2018 09:29 AM, Xinni Ge wrote:
> >> > Hi there,
> >> >
> >> > We are aware of the javascript embedding issue, and are working on it now;
> >> > the patch will be submitted later.
> >> >
> >> > As for the unittest failure, we are still investigating it. We will
> >> > contact you as soon as we find out the cause.
> >> >
> >> > Sorry to bring trouble to you. We will be grateful if you could wait
> >> > a little longer.
> >> >
> >> > Best Regards,
> >> >
> >> > Xinni
> >>
> >> Hi,
> >>
> >> Thanks for this message. This lowers the frustration! :)
> >> Let me know if there's any patch I could review.
> >>
> >> Cheers,
> >>
> >> Thomas Goirand (zigo)
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> > 葛馨霓 Xinni Ge

--
葛馨霓 Xinni Ge
-------------- next part --------------
An HTML attachment was scrubbed...
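Returning to the [Nova] [Cyborg] thread above: as a concrete sketch of the "function as a resource class" option Sundar outlines, a flavor could request the function through nova's custom resource class extra specs - the resource class, flavor name and sizes here are illustrative only:

    openstack flavor create --vcpus 2 --ram 4096 --disk 20 fpga.encrypt
    openstack flavor set fpga.encrypt \
        --property resources:CUSTOM_FPGA_FUNCTION_A=1

Placement would then count allocations of CUSTOM_FPGA_FUNCTION_A per device rather than whole devices, which still leaves open the reprogramming question when no instance of the function is available - addressed in the reply that follows.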
From jaypipes at gmail.com Fri Mar 2 14:31:11 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Fri, 2 Mar 2018 14:31:11 +0000
Subject: Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com>
References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com>
Message-ID:

On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
> Hello Nova team,
>
> During the Cyborg discussion at the Rocky PTG, we proposed a flow for
> FPGAs wherein the request spec asks for a device type as a resource
> class, and optionally a function (such as encryption) in the extra
> specs. This does not seem to work well for the usage model that I'll
> describe below.
>
> An FPGA device may implement more than one function. For example, it may
> implement both compression and encryption. Say a cluster has 10 devices
> of device type X, and each of them is programmed to offer 2 instances of
> function A and 4 instances of function B. More specifically, the device
> may implement 6 PCI functions, with 2 of them tied to function A and
> the other 4 tied to function B. So we could have 6 separate instances
> accessing functions on the same device.
>
> In the current flow, the device type X is modeled as a resource class,
> so Placement will count how many of them are in use. A flavor for 'RC
> device-type-X + function A' will consume one instance of the RC
> device-type-X. But this is not right, because it precludes other
> functions on the same device instance from getting used.
>
> One way to solve this is to declare functions A and B as resource
> classes themselves and have the flavor request the function RC.
> Placement will then correctly count the function instances. However,
> there is still a problem: if the requested function A is not available,
> Placement will return an empty list of RPs, but we need some way to
> reprogram some device to create an instance of function A.

Clearly, nova is not going to be reprogramming devices with an instance of a particular function. Cyborg might need a separate agent that listens to the nova notifications queue; upon seeing an event that indicates a failed build due to lack of resources, Cyborg can try to reprogram a device and then retry the original build request.

Best,
-jay

From amotoki at gmail.com Fri Mar 2 14:37:13 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Fri, 2 Mar 2018 14:37:13 +0000
Subject: Re: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env
In-Reply-To: References: Message-ID:

Thanks for the clarification. I was a bit confused between the two issues, because the patch dropped some embedded files like font-awesome.
Let's address the "embedded" problem soon :)

Akihiro

2018-03-02 14:27 GMT+00:00 Xinni Ge :
> Hi, Akihiro,
>
> The patch I submitted previously didn't solve the embedded problem.
> I would like to fix the whole thing in several steps, because the
> current arrangement of the static files is quite messy.
>
> I will upload the javascript and css style files as xstatic-* projects and
> remove them from the code later on.
>
> I wanted to solve the unittest problem ASAP, and assumed it would take some
> time to create the xstatic repos and get approval from the infra team,
> so I just submitted this patch to let the unittest pass first.
> > Thanks for asking, I am happy to hear from your all about the js and static > files issue. > > Best regards, > > Xinni > > On Fri, Mar 2, 2018 at 1:29 PM, Akihiro Motoki wrote: >> >> Hi Xinni, >> >> I looked at your patch which drops the vendors stuffs, but I still >> have a question. >> >> The patch introduces some SCSS like: >> - bootstrap.scss >> - angular-notify.scss >> - angular-material.scss >> >> Aren't they another type of "vendors" stuffs? >> I don't understand why switching to SCSS solves the embedded "vendors" >> problem? >> >> I would like to know opinions from zigo and Corey. >> >> Thanks, >> Akihiro >> >> >> 2018-03-01 21:30 GMT+00:00 Xinni Ge : >> > Hi, there. >> > >> > This is a page of a similar unittest issue. >> > https://bugs.launchpad.net/heat-dashboard/+bug/1752527 >> > >> > We merged the following patch to fix the issue, and hope it also fix the >> > trouble described here. >> > https://review.openstack.org/#/c/548924/ >> > >> > As for the minified javascript files, we are working on removing from >> > the >> > source code, >> > and uploading as xstatic packages. >> > Just need a little more time to finish it. >> > >> > >> > Best regards, >> > >> > Xinni >> > >> > On Tue, Feb 27, 2018 at 10:48 AM, Thomas Goirand >> > wrote: >> >> >> >> On 02/23/2018 09:29 AM, Xinni Ge wrote: >> >> > Hi there, >> >> > >> >> > We are aware of the javascript embedded issue, and working on it now, >> >> > the patch will be summited later. >> >> > >> >> > As for the unittest failure, we are still investigating it. We will >> >> > contant you as soon as we find out the cause. >> >> > >> >> > Sorry to bring troubles to you. We will be grateful if you could wait >> >> > for a little longer. >> >> > >> >> > Best Regards, >> >> > >> >> > Xinni >> >> >> >> Hi, >> >> >> >> Thanks for this message. This lowers the frustration! :) >> >> Let me know if there's any patch I could review. 
>> >> >> >> Cheers, >> >> >> >> Thomas Goirand (zigo) >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > >> > -- >> > 葛馨霓 Xinni Ge >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > 葛馨霓 Xinni Ge > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rasca at redhat.com Fri Mar 2 14:38:54 2018 From: rasca at redhat.com (Raoul Scarazzini) Date: Fri, 2 Mar 2018 15:38:54 +0100 Subject: [openstack-dev] [TripleO][CI][QA] Validating HA on upstream In-Reply-To: References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <37fb190d-693a-1749-5e4e-0cfba68466d2@redhat.com> <07c67333-272c-d652-65aa-d5bd6cd59809@redhat.com> <31269a1f-deee-071a-a247-f155404ce83a@redhat.com> Message-ID: <3c74a379-5c5e-d899-da09-647b015cd81d@redhat.com> On 02/03/2018 15:19, Emilien Macchi wrote: > Talking with clarkb during PTG, we'll need to transform > tripleo-quickstart-utils into a non-forked repo - or move the roles to > an existing repo. But we can't continue to maintain this fork. > Raoul, let us know what you think is best (move repo to OpenStack or > move modules to an existing upstream repo). > Thanks, Hey Emilien, I prepared this [1] in which some folks started to have a look at, maybe it's what we need to move on on this. If you think something else needs to be done, let me know, I'll work on it. Thanks, [1] https://review.openstack.org/#/c/548874 -- Raoul Scarazzini rasca at redhat.com From emilien at redhat.com Fri Mar 2 14:51:19 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 2 Mar 2018 14:51:19 +0000 Subject: [openstack-dev] [tripleo] Team photo at PTG Message-ID: Please find the photo in attachment. Was good to see you folks! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: tripleo-ptg-rocky.jpeg Type: image/jpeg Size: 2111628 bytes Desc: not available URL: From corvus at inaugust.com Fri Mar 2 15:56:42 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 02 Mar 2018 07:56:42 -0800 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: (Bogdan Dobrelya's message of "Fri, 2 Mar 2018 14:47:25 +0100") References: Message-ID: <87lgfacvyd.fsf@meyer.lemoncheese.net> Bogdan Dobrelya writes: > Hello. 
> As Zuul documentation [0] explains, the names "check", "gate", and > "post" may be altered for more advanced pipelines. Is it doable to > introduce, for particular openstack projects, multiple check > stages/steps as check-1, check-2 and so on? And is it possible to make > the consequent steps reusing environments from the previous steps > finished with? > > Narrowing down to tripleo CI scope, the problem I'd want we to solve > with this "virtual RFE", and using such multi-staged check pipelines, > is reducing (ideally, de-duplicating) some of the common steps for > existing CI jobs. What you're describing sounds more like a job graph within a pipeline. See: https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies for how to configure a job to run only after another job has completed. There is also a facility to pass data between such jobs. > For example, we may want to omit running all of the OVB or multinode > (non-upgrade) jobs deploying overclouds, when the *undercloud* fails > to install. This case makes even more sense, when undercloud is > deployed from the same heat templates (aka Containerized Undercloud) > and uses the same packages and containers images, as overcloud would > do! Or, maybe, just stop the world, when tox failed at the step1 and > do nothing more, as it makes very little sense to run anything else > (IMO), if the patch can never be gated with a failed tox check > anyway... > > What I propose here, is to think and discuss, and come up with an RFE, > either for tripleo, or zuul, or both, of the following scenarios > (examples are tripleo/RDO CI specific, though you can think of other > use cases ofc!): > > case A. No deduplication, simple multi-staged check pipeline: > > * check-1: syntax only, lint/tox > * check-2 : undercloud install with heat and containers > * check-3 : undercloud install with heat and containers, build > overcloud images (if not multinode job type), deploy > overcloud... (repeats OVB jobs as is, basically) > > case B. Full de-duplication scenario (consequent steps re-use the > previous steps results, building "on-top"): > > * check-1: syntax only, lint/tox > * check-2 : undercloud unstall, reuses nothing from the step1 prolly > * check-3 : build overcloud images, if not multinode job type, extends > stage 2 > * check-4: deploy overcloud, extends stages 2/3 > * check-5: upgrade undercloud, extends stage 2 > * check-6: upgrade overcloud, extends stage 4 > (looking into future...) > * check-7: deploy openshift/k8s on openstack and do e2e/conformance et > al, extends either stage 4 or 6 > > I believe even the simplest 'case A' would reduce the zuul queues for > tripleo CI dramatically. What do you think folks? See also PTG tripleo > CI notes [1]. > > [0] https://docs.openstack.org/infra/zuul/user/concepts.html > [1] https://etherpad.openstack.org/p/tripleo-ptg-ci Creating a job graph to have one job use the results of the previous job can make sense in a lot of cases. It doesn't always save *time* however. It's worth noting that in OpenStack's Zuul, we have made an explicit choice not to have long-running integration jobs depend on shorter pep8 or tox jobs, and that's because we value developer time more than CPU time. We would rather run all of the tests and return all of the results so a developer can fix all of the errors as quickly as possible, rather than forcing an iterative workflow where they have to fix all the whitespace issues before the CI system will tell them which actual tests broke. 
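For reference, expressing something like Bogdan's "case A" as a job graph
is a matter of declaring dependencies in the project's pipeline definition.
A rough sketch (the job names here are invented for illustration):

  - project:
      check:
        jobs:
          - tripleo-tox-linters
          - tripleo-undercloud-install:
              dependencies:
                - tripleo-tox-linters
          - tripleo-overcloud-deploy:
              dependencies:
                - tripleo-undercloud-install

Each job still starts on fresh nodes by default; reusing the environment
from an earlier job, as in case B, is the part that needs the data-passing
facility mentioned above.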
-Jim From e0ne at e0ne.info Fri Mar 2 16:48:39 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Fri, 2 Mar 2018 16:48:39 +0000 Subject: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ... In-Reply-To: References: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> Message-ID: Hi Jay, Will Fagans serve us tonight? Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Thu, Mar 1, 2018 at 3:00 PM, Jay S Bryant wrote: > > > On 3/1/2018 6:50 AM, Sean McGinnis wrote: > >> On Feb 28, 2018, at 16:58, Jay S Bryant wrote: >>> >>> Team, >>> >>> Just a reminder that we will be having our team photo at 9 am tomorrow >>> before the Cinder/Nova cross project session. Please be at the >>> registration desk before 9 to be in the photo. >>> >>> We will then have the Cross Project session in the Nova room as it >>> sounds like it is somewhat larger. I will have sound clips in hand to make >>> sure things don't get too serious. >>> >>> Finally, an update on dinner for tomorrow night. I have moved dinner to >>> a closer venue: >>> >>> Fagan's Bar and Restaurant: 146 Drumcondra Rd Lower, Drumcondra, Dublin >>> 9 >>> >>> I have reservations for 7:30 pm. It isn't too difficult a walk from >>> Croke Park (even in a blizzard) and it is a great pub. >>> >>> Thanks for a great day today! >>> >>> See you all tomorrow! Let's make it a great one! ;-) >>> Jay >>> >>> Any plan now that there is a 4pm curfew? >> >> Dinner has been rescheduled for Friday night 3/2 or 2/3 depending on your > country of origin. 6:30 at Fagans. > > I will update the etherpad. > > Jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From muroi.masahito at lab.ntt.co.jp Fri Mar 2 17:08:58 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Sat, 3 Mar 2018 02:08:58 +0900 Subject: [openstack-dev] [Blazar] skip next weekly meeting Message-ID: <762f7ceb-38c5-8789-c365-3cde7528978f@lab.ntt.co.jp> Dear Blazar folks, We had a great discussion in the Dublin PTG. Unfortunately, most of members are still stuck in Dublin because of heavy snow and will not be able to back their home by the meeting. So let's skip the next weekly meeting on 6th March. best regards, Masahito From whayutin at redhat.com Fri Mar 2 17:24:08 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 2 Mar 2018 12:24:08 -0500 Subject: [openstack-dev] [tripleo] looking for feedback on running triple-quickstart libvirt as root Message-ID: Greetings, I've heard from a number of people in the TripleO community that they would prefer the default libvirt settings for quickstart be changed from user sessions to root system settings [1]. If you have any thoughts or opinions please share them in the launchpad bug https://bugs.launchpad.net/tripleo/+bug/1752909 Thanks! [1] https://wiki.libvirt.org/page/FAQ#What_is_the_difference_between_qemu:.2F.2F.2Fsystem_and_qemu:.2F.2F.2Fsession.3F_Which_one_should_I_use.3F -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Fri Mar 2 17:47:28 2018 From: jungleboyj at gmail.com (Jay S. 
Bryant) Date: Fri, 2 Mar 2018 11:47:28 -0600 Subject: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ... In-Reply-To: References: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> Message-ID: <2f7e2b0e-90b1-836f-0f1f-d80684ec3db8@gmail.com> Ivan, I sent another note but will also respond in this thread. Yes, they will serve us tonight.  It is a somewhat limited menu but I stopped to look at it and it still looked good. Sidewalks on the way to the restaurant were not in too bad of shape. Jay On 3/2/2018 10:48 AM, Ivan Kolodyazhny wrote: > Hi Jay, > > Will Fagans serve us tonight? > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Thu, Mar 1, 2018 at 3:00 PM, Jay S Bryant > wrote: > > > > On 3/1/2018 6:50 AM, Sean McGinnis wrote: > > On Feb 28, 2018, at 16:58, Jay S Bryant > > wrote: > > Team, > > Just a reminder that we will be having our team photo at 9 > am tomorrow before the Cinder/Nova cross project session.  > Please be at the registration desk before 9 to be in the > photo. > > We will then have the Cross Project session in the Nova > room as it sounds like it is somewhat larger.  I will have > sound clips in hand to make sure things don't get too serious. > > Finally, an update on dinner for tomorrow night.  I have > moved dinner to a closer venue: > > Fagan's Bar and Restaurant:  146 Drumcondra Rd Lower, > Drumcondra, Dublin 9 > > I have reservations for 7:30 pm.  It isn't too difficult a > walk from Croke Park (even in a blizzard) and it is a > great pub. > > Thanks for a great day today! > > See you all tomorrow!  Let's make it a great one!  ;-) > Jay > > Any plan now that there is a 4pm curfew? > > Dinner has been rescheduled for Friday night 3/2 or 2/3 depending > on your country of origin.  6:30 at Fagans. > > I will update the etherpad. > > Jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duncan.thomas at gmail.com Fri Mar 2 18:26:58 2018 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Fri, 2 Mar 2018 18:26:58 +0000 Subject: [openstack-dev] [cinder][ptg] Dinner Outing Update and Photo Reminder ... In-Reply-To: <2f7e2b0e-90b1-836f-0f1f-d80684ec3db8@gmail.com> References: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> <2f7e2b0e-90b1-836f-0f1f-d80684ec3db8@gmail.com> Message-ID: I'm in the pub now, and they are closing down On 2 Mar 2018 5:48 pm, "Jay S. Bryant" wrote: > Ivan, > > I sent another note but will also respond in this thread. > > Yes, they will serve us tonight. It is a somewhat limited menu but I > stopped to look at it and it still looked good. > > Sidewalks on the way to the restaurant were not in too bad of shape. > > Jay > > On 3/2/2018 10:48 AM, Ivan Kolodyazhny wrote: > > Hi Jay, > > Will Fagans serve us tonight? > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Thu, Mar 1, 2018 at 3:00 PM, Jay S Bryant wrote: > >> >> >> On 3/1/2018 6:50 AM, Sean McGinnis wrote: >> >>> On Feb 28, 2018, at 16:58, Jay S Bryant wrote: >>>> >>>> Team, >>>> >>>> Just a reminder that we will be having our team photo at 9 am tomorrow >>>> before the Cinder/Nova cross project session. 
Please be at the >>>> registration desk before 9 to be in the photo. >>>> >>>> We will then have the Cross Project session in the Nova room as it >>>> sounds like it is somewhat larger. I will have sound clips in hand to make >>>> sure things don't get too serious. >>>> >>>> Finally, an update on dinner for tomorrow night. I have moved dinner >>>> to a closer venue: >>>> >>>> Fagan's Bar and Restaurant: 146 Drumcondra Rd Lower, Drumcondra, >>>> Dublin 9 >>>> >>>> I have reservations for 7:30 pm. It isn't too difficult a walk from >>>> Croke Park (even in a blizzard) and it is a great pub. >>>> >>>> Thanks for a great day today! >>>> >>>> See you all tomorrow! Let's make it a great one! ;-) >>>> Jay >>>> >>>> Any plan now that there is a 4pm curfew? >>> >>> Dinner has been rescheduled for Friday night 3/2 or 2/3 depending on >> your country of origin. 6:30 at Fagans. >> >> I will update the etherpad. >> >> Jay >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ksnhr.tech at gmail.com Fri Mar 2 18:28:26 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Sat, 3 Mar 2018 03:28:26 +0900 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env In-Reply-To: References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> <664e765d-77ed-0255-625e-a56cc9322aac@debian.org> Message-ID: Hi Thomas(zigo), I found an issue which is included in https://review.openstack.org/#/c/548924/ (you did cherry pick last night) In short, this issue makes it impossible to install heat-dashboard.. I landed fix for this. https://review.openstack.org/#/c/549214/ Could you kindly pick up this for your package ? Sorry again for your inconvenience. Regards, Kaz(kazsh) 2018-03-02 23:37 GMT+09:00 Akihiro Motoki : > Thanks for clarification. > I was a bit confused with the two things because the patch dropped > some embedded stuffs like font-awesome. > Let's address the "embedded" problem soon :) > > Akihiro > > 2018-03-02 14:27 GMT+00:00 Xinni Ge : >> Hi, Akihiro, >> >> The patch I submitted previously didn't solve the embedded problem. >> I would like to fix out the whole thing in several steps, because the >> current status of static files arrangment is quite messy. >> >> I will upload the javascript and css style files as xstatic-* projects and >> remove them from the code later on. >> >> I wanted to solve the unittest problem ASAP and assumed it will take some >> time to create xstatic repos and get the approval from infra team, >> so I just submitted this patch to let unittest go well at first. >> >> Thanks for asking, I am happy to hear from your all about the js and static >> files issue. >> >> Best regards, >> >> Xinni >> >> On Fri, Mar 2, 2018 at 1:29 PM, Akihiro Motoki wrote: >>> >>> Hi Xinni, >>> >>> I looked at your patch which drops the vendors stuffs, but I still >>> have a question. 
>>> >>> The patch introduces some SCSS like: >>> - bootstrap.scss >>> - angular-notify.scss >>> - angular-material.scss >>> >>> Aren't they another type of "vendors" stuffs? >>> I don't understand why switching to SCSS solves the embedded "vendors" >>> problem? >>> >>> I would like to know opinions from zigo and Corey. >>> >>> Thanks, >>> Akihiro >>> >>> >>> 2018-03-01 21:30 GMT+00:00 Xinni Ge : >>> > Hi, there. >>> > >>> > This is a page of a similar unittest issue. >>> > https://bugs.launchpad.net/heat-dashboard/+bug/1752527 >>> > >>> > We merged the following patch to fix the issue, and hope it also fix the >>> > trouble described here. >>> > https://review.openstack.org/#/c/548924/ >>> > >>> > As for the minified javascript files, we are working on removing from >>> > the >>> > source code, >>> > and uploading as xstatic packages. >>> > Just need a little more time to finish it. >>> > >>> > >>> > Best regards, >>> > >>> > Xinni >>> > >>> > On Tue, Feb 27, 2018 at 10:48 AM, Thomas Goirand >>> > wrote: >>> >> >>> >> On 02/23/2018 09:29 AM, Xinni Ge wrote: >>> >> > Hi there, >>> >> > >>> >> > We are aware of the javascript embedded issue, and working on it now, >>> >> > the patch will be summited later. >>> >> > >>> >> > As for the unittest failure, we are still investigating it. We will >>> >> > contant you as soon as we find out the cause. >>> >> > >>> >> > Sorry to bring troubles to you. We will be grateful if you could wait >>> >> > for a little longer. >>> >> > >>> >> > Best Regards, >>> >> > >>> >> > Xinni >>> >> >>> >> Hi, >>> >> >>> >> Thanks for this message. This lowers the frustration! :) >>> >> Let me know if there's any patch I could review. >>> >> >>> >> Cheers, >>> >> >>> >> Thomas Goirand (zigo) >>> >> >>> >> >>> >> __________________________________________________________________________ >>> >> OpenStack Development Mailing List (not for usage questions) >>> >> Unsubscribe: >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> > >>> > >>> > >>> > -- >>> > 葛馨霓 Xinni Ge >>> > >>> > >>> > __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> -- >> 葛馨霓 Xinni Ge >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From duncan.thomas at gmail.com Fri Mar 2 18:34:12 2018 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Fri, 2 Mar 2018 18:34:12 +0000 Subject: [openstack-dev] [cinder][ptg] 
Dinner Outing Update and Photo Reminder ... In-Reply-To: References: <63f47b3a-5703-3095-d1c6-93fc45d7a19e@gmail.com> <7E6D7CD0-B168-4978-BB22-C881A186792E@gmx.com> <2f7e2b0e-90b1-836f-0f1f-d80684ec3db8@gmail.com> Message-ID: Met with Jay, looking at alternatives On 2 Mar 2018 6:26 pm, "Duncan Thomas" wrote: > I'm in the pub now, and they are closing down > > On 2 Mar 2018 5:48 pm, "Jay S. Bryant" wrote: > >> Ivan, >> >> I sent another note but will also respond in this thread. >> >> Yes, they will serve us tonight. It is a somewhat limited menu but I >> stopped to look at it and it still looked good. >> >> Sidewalks on the way to the restaurant were not in too bad of shape. >> >> Jay >> >> On 3/2/2018 10:48 AM, Ivan Kolodyazhny wrote: >> >> Hi Jay, >> >> Will Fagans serve us tonight? >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> On Thu, Mar 1, 2018 at 3:00 PM, Jay S Bryant >> wrote: >> >>> >>> >>> On 3/1/2018 6:50 AM, Sean McGinnis wrote: >>> >>>> On Feb 28, 2018, at 16:58, Jay S Bryant wrote: >>>>> >>>>> Team, >>>>> >>>>> Just a reminder that we will be having our team photo at 9 am tomorrow >>>>> before the Cinder/Nova cross project session. Please be at the >>>>> registration desk before 9 to be in the photo. >>>>> >>>>> We will then have the Cross Project session in the Nova room as it >>>>> sounds like it is somewhat larger. I will have sound clips in hand to make >>>>> sure things don't get too serious. >>>>> >>>>> Finally, an update on dinner for tomorrow night. I have moved dinner >>>>> to a closer venue: >>>>> >>>>> Fagan's Bar and Restaurant: 146 Drumcondra Rd Lower, Drumcondra, >>>>> Dublin 9 >>>>> >>>>> I have reservations for 7:30 pm. It isn't too difficult a walk from >>>>> Croke Park (even in a blizzard) and it is a great pub. >>>>> >>>>> Thanks for a great day today! >>>>> >>>>> See you all tomorrow! Let's make it a great one! ;-) >>>>> Jay >>>>> >>>>> Any plan now that there is a 4pm curfew? >>>> >>>> Dinner has been rescheduled for Friday night 3/2 or 2/3 depending on >>> your country of origin. 6:30 at Fagans. >>> >>> I will update the etherpad. >>> >>> Jay >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From apevec at redhat.com Fri Mar 2 22:52:09 2018 From: apevec at redhat.com (Alan Pevec) Date: Fri, 2 Mar 2018 22:52:09 +0000 Subject: [openstack-dev] [tripleo][ui] Dependency management In-Reply-To: References: <20180119184342.bz5yzdn7t35xkzqu@localhost.localdomain> Message-ID: On Mon, Jan 22, 2018 at 9:30 AM, Julie Pichon wrote: > On 19 January 2018 at 18:43, Honza Pokorny wrote: >> We've recently discovered an issue with the way we handle dependencies for >> tripleo-ui. This is an explanation of the problem, and a proposed solution. >> I'm looking for feedback. 
>> >> Before the upgrade to zuul v3 in TripleO CI, we had two types of jobs for >> tripleo-ui: >> >> * Native npm jobs >> * Undercloud integration jobs >> >> After the upgrade, the integration jobs went away. Our goal is to add them >> back. >> >> There is a difference in how these two types of jobs handle dependencies. >> Native npm jobs use the "npm install" command to acquire packages, and >> undercloud jobs use RPMs. The tripleo-ui project uses a separate RPM for >> dependencies called openstack-tripleo-ui-deps. >> >> Because of the requirement to use a separate RPM for dependencies, there is some >> extra work needed when a new dependency is introduced, or an existing one is >> upgraded. Once the patch that introduces the dependency is merged, we have to >> increment the version of the -deps package, and rebuild it. It then shows up in >> the yum repos used by the undercloud. >> >> To make matters worse, we recently upgraded our infrastructure to nodejs 8.9.4 >> and npm 5.6.0 (latest stable). This makes it so we can't write "purist" patches >> that simply introduce a new dependency to package.json, and nothing more. The >> code that uses the new dependency must be included. I tend to think that each >> commit should work on its own so this can be seen as a plus. >> >> This presents a problem: you can't get a patch that introduces a new dependency >> merged because it's not included in the RPM needed by the undercloud ci job. >> >> So, here is a proposal on how that might work: >> >> 1. Submit a patch for review that introduces the dependency, along with code >> changes to support it and validate its inclusion >> 2. Native npm jobs will pass >> 3. Undercloud gate job will fail because the dependency isn't in -deps RPM >> 4. We ask RDO to review for licensing >> 5. Once reviewed, new -deps package is built >> 6. Recheck >> 7. All jobs pass > > Perhaps there should be a step after 3 or 4 to have the patch normally > reviewed, and wait for it to have two +2s before building the new > package? Otherwise we may end up with wasted work to get a new package > ready for dependencies that were eventually dismissed. Thanks Julie for reminding me of this thread! I agree we can only build ui-deps package when the review is about to merge. I've proposed https://github.com/rdo-common/openstack-tripleo-ui-deps/pull/19 which allows us to build the package with the review and patchset numbers, before it's merged. Please review and we can try it on the next deps update! Cheers, Alan From dmsimard at redhat.com Sat Mar 3 04:33:15 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Fri, 2 Mar 2018 23:33:15 -0500 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> <1516189284-sup-1775@fewbar.com> Message-ID: I have nothing to add but I wanted to mention that it seems like a great exercise to have UI/UX minded folks test our Storyboard implementation. Storyboard will be what we make of it based on the feedback and contributions it gets. It's an interesting opportunity. I'd actually like to encourage Horizon and the different UI components developers to test Storyboard, provide feedback and contribute ! 
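For anyone who would rather poke at it programmatically before committing
to a workflow, the REST API is easy to explore. A small sketch (the
/api/v1/stories endpoint is real; the filter parameters are from memory,
so treat them as assumptions):

  # sketch: list a few stories from the public StoryBoard instance
  import requests

  resp = requests.get(
      "https://storyboard.openstack.org/api/v1/stories",
      params={"status": "active", "limit": 5},
  )
  for story in resp.json():
      print(story["id"], story["title"])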
David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Fri, Mar 2, 2018 at 4:24 AM, Emilien Macchi wrote: > A quick update: > > - Discussed with Jiri Tomasek from TripleO UI squad and he agreed that his > squad would start to use Storyboard, and experiment it. > - I told him I would take care of making sure all UI bugs created in > Launchpad would be moved to Storyboard. > - Talked with Kendall and we agreed that we would move forward and migrate > TripleO UI bugs to Storyboard. > - TripleO UI Squad would report feedback about storyboard to the storyboard > team with the help of other TripleO folks (me at least, I'm willing to > help). > > Hopefully this is progress and we can move forward. More updates to come > about migration during the next days... > > Thanks everyone involved in these productive discussions. > > On Wed, Jan 17, 2018 at 12:33 PM, Thierry Carrez > wrote: >> >> Clint Byrum wrote: >> > [...] >> > That particular example board was built from tasks semi-automatically, >> > using a tag, by this script running on a cron job somewhere: >> > >> > >> > https://git.openstack.org/cgit/openstack-infra/zuul/tree/tools/update-storyboard.py?h=feature/zuulv3 >> > >> > We did this so that we could have a rule "any task that is open with >> > the zuulv3 tag must be on this board". Jim very astutely noticed that >> > I was not very good at being a robot that did this and thus created the >> > script to ease me into retirement from zuul project management. >> > >> > The script adds new things in New, and moves tasks automatically to >> > In Progress, and then removes them when they are completed. We would >> > periodically groom the "New" items into an appropriate lane with the >> > hopes >> > of building what you might call a rolling-sprint in Todo, and calling >> > out blocked tasks in a regular meeting. Stories were added manually as >> > a way to say "look in here and add tasks", and manually removed when >> > the larger effort of the story was considered done. >> > >> > I rather like the semi-automatic nature of it, and would definitely >> > suggest that something like this be included in Storyboard if other >> > groups find the board building script useful. This made a cross-project >> > effort between Nodepool and Zuul go more smoothly as we had some more >> > casual contributors to both, and some more full-time. >> >> That's a great example that illustrates StoryBoard design: rather than >> do too much upfront feature design, focus on primitives and expose them >> fully through a strong API, then let real-world usage dictate patterns >> that might result in future features. >> >> The downside of this approach is of course getting enough usage on a >> product that appears a bit "raw" in terms of features. 
But I think we >> are closing on getting that critical mass :) >> >> -- >> Thierry Carrez (ttx) >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From no-reply at openstack.org Sat Mar 3 11:56:00 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Sat, 03 Mar 2018 11:56:00 -0000 Subject: [openstack-dev] [tripleo] tripleo-heat-templates 8.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for tripleo-heat-templates for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/tripleo-heat-templates/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/tripleo-heat-templates/log/?h=stable/queens Release notes for tripleo-heat-templates can be found at: http://docs.openstack.org/releasenotes/tripleo-heat-templates/ If you find an issue that could be considered release-critical, please file it at: https://bugs.launchpad.net/tripleo and tag it *queens-rc-potential* to bring it to the tripleo-heat-templates release crew's attention. From no-reply at openstack.org Sat Mar 3 11:56:37 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Sat, 03 Mar 2018 11:56:37 -0000 Subject: [openstack-dev] [tripleo] tripleo-puppet-elements 8.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for tripleo-puppet-elements for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/tripleo-puppet-elements/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/tripleo-puppet-elements/log/?h=stable/queens Release notes for tripleo-puppet-elements can be found at: http://docs.openstack.org/releasenotes/tripleo-puppet-elements/ From no-reply at openstack.org Sat Mar 3 11:58:07 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Sat, 03 Mar 2018 11:58:07 -0000 Subject: [openstack-dev] [tripleo] tripleo-image-elements 8.0.0.0rc1 (queens) Message-ID: Hello everyone, A new release candidate for tripleo-image-elements for the end of the Queens cycle is available! You can find the source code tarball at: https://tarballs.openstack.org/tripleo-image-elements/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! 
Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/tripleo-image-elements/log/?h=stable/queens Release notes for tripleo-image-elements can be found at: http://docs.openstack.org/releasenotes/tripleo-image-elements/ From mbooth at redhat.com Sat Mar 3 16:15:10 2018 From: mbooth at redhat.com (Matthew Booth) Date: Sat, 3 Mar 2018 16:15:10 +0000 Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> Message-ID: On 2 March 2018 at 14:31, Jay Pipes wrote: > On 03/02/2018 02:00 PM, Nadathur, Sundar wrote: > >> Hello Nova team, >> >> During the Cyborg discussion at Rocky PTG, we proposed a flow for >> FPGAs wherein the request spec asks for a device type as a resource class, >> and optionally a function (such as encryption) in the extra specs. This >> does not seem to work well for the usage model that I’ll describe below. >> >> An FPGA device may implement more than one function. For example, it may >> implement both compression and encryption. Say a cluster has 10 devices of >> device type X, and each of them is programmed to offer 2 instances of >> function A and 4 instances of function B. More specifically, the device may >> implement 6 PCI functions, with 2 of them tied to function A, and the other >> 4 tied to function B. So, we could have 6 separate instances accessing >> functions on the same device. >> > Does this imply that Cyborg can't reprogram the FPGA at all? > >> In the current flow, the device type X is modeled as a resource class, so >> Placement will count how many of them are in use. A flavor for ‘RC >> device-type-X + function A’ will consume one instance of the RC >> device-type-X. But this is not right because this precludes other >> functions on the same device instance from getting used. >> >> One way to solve this is to declare functions A and B as resource classes >> themselves and have the flavor request the function RC. Placement will then >> correctly count the function instances. However, there is still a problem: >> if the requested function A is not available, Placement will return an >> empty list of RPs, but we need some way to reprogram some device to create >> an instance of function A. >> > > Clearly, nova is not going to be reprogramming devices with an instance of > a particular function. > > Cyborg might need to have a separate agent that listens to the nova > notifications queue and upon seeing an event that indicates a failed build > due to lack of resources, then Cyborg can try and reprogram a device and > then try rebuilding the original request. > It was my understanding from that discussion that we intend to insert Cyborg into the spawn workflow for device configuration in the same way that we currently insert resources provided by Cinder and Neutron. So while Nova won't be reprogramming a device, it will be calling out to Cyborg to reprogram a device, and waiting while that happens. My understanding is (and I concede some areas are a little hazy): * The flavors says device type X with function Y * Placement tells us everywhere with device type X * A weigher orders these by devices which already have an available function Y (where is this metadata stored?) 
* Nova schedules to host Z * Nova host Z asks cyborg for a local function Y and blocks * Cyborg hopefully returns function Y which is already available * If not, Cyborg reprograms a function Y, then returns it Can anybody correct me/fill in the gaps? Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat Mar 3 17:44:59 2018 From: zigo at debian.org (Thomas Goirand) Date: Sat, 3 Mar 2018 18:44:59 +0100 Subject: [openstack-dev] [requirements] Let's switch from pyldap to python-ldap >= 3 Message-ID: <06aefad9-7fdd-d8c4-8f4b-f08c45de04da@debian.org> Hi, Pyldap started as a fork of python-ldap. But recently, its features were merged into the python-ldap python module >= 3, and pyldap is now deprecated. Let's switch to python-ldap >= 3, and remove pyldap from requirements. This has already been done in Debian. The python-ldap source package now carries a python-pyldap transition package, which installs python-ldap. What's the procedure? Should I first send a patch to the global-reqs repo first? Cheers, Thomas Goirand (zigo) From prometheanfire at gentoo.org Sat Mar 3 18:09:00 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Sat, 3 Mar 2018 12:09:00 -0600 Subject: [openstack-dev] [requirements] Let's switch from pyldap to python-ldap >= 3 In-Reply-To: <06aefad9-7fdd-d8c4-8f4b-f08c45de04da@debian.org> References: <06aefad9-7fdd-d8c4-8f4b-f08c45de04da@debian.org> Message-ID: <20180303180900.ur77cge2pn3qlcur@gentoo.org> On 18-03-03 18:44:59, Thomas Goirand wrote: > Hi, > > Pyldap started as a fork of python-ldap. But recently, its features were > merged into the python-ldap python module >= 3, and pyldap is now > deprecated. Let's switch to python-ldap >= 3, and remove pyldap from > requirements. > > This has already been done in Debian. The python-ldap source package now > carries a python-pyldap transition package, which installs python-ldap. > > What's the procedure? Should I first send a patch to the global-reqs > repo first? > I think we should do it like this. 1. add python-ldap-3.x to requirements (though it looks like that's still a beta 2. move existing pyldap to python-ldap 3. remove pyldap -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From giuseppe.decandia at gmail.com Sat Mar 3 18:38:07 2018 From: giuseppe.decandia at gmail.com (Pino de Candia) Date: Sat, 3 Mar 2018 12:38:07 -0600 Subject: [openstack-dev] [tatu] Integration with Uber's pam-ussh module (and Stripe's KRL) Message-ID: Hi Folks, I integrated Uber's pam-ussh module in Tatu. With this, if the user's SSH certificate is revoked while they're logged into the VM, they lose sudo access (btw, I don't know how to close their session, which would be even better). Here's the demo video: https://youtu.be/yjwWdYJRTM0 Here's my pull request to add KRL support (from https://github.com/stripe/krl) to pam-ussh: https://github.com/uber/pam-ussh/pull/10 And here's the Tatu code-review: https://review.openstack.org/#/c/549389/ cheers, Pino -------------- next part -------------- An HTML attachment was scrubbed... 
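For anyone curious what the integration looks like on the instance side,
pam-ussh is wired in as an ordinary PAM rule. A hypothetical
/etc/pam.d/sudo fragment (the module path and the ca_file option name are
assumptions on my part; check the pam-ussh README rather than copying this
verbatim):

  # deny sudo unless the SSH certificate in the user's forwarded agent
  # validates against the trusted CA (and is not in the KRL)
  auth  [success=done new_authtok_reqd=done default=die]  /lib/security/pam_ussh.so ca_file=/etc/ssh/trusted_user_ca

Because the check runs on every sudo invocation, revoking the certificate
takes effect even for sessions that are already logged in, which is exactly
the behaviour in the demo.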
URL: From mikal at stillhq.com Sun Mar 4 10:06:11 2018 From: mikal at stillhq.com (Michael Still) Date: Sun, 4 Mar 2018 21:06:11 +1100 Subject: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node Message-ID: Heya, I've been playing with bifrost to help me manage some lab machines. I must say the install process was well documented and smooth, so that was a pleasure. Thanks! That said, I am struggling to get a working node enrolment. I'm resisting using the JSON file / ansible playbook approach, because I'll want to add more machines later so I need a manual enrolment to work. The command line I am using is like this: ironic node-create -d agent_ipmitool \ -i ipmi_username=root \ -i ipmi_password=superuser \ -i ipmi_address=192.168.50.31 \ -i deploy_kernel=http://192.168.50.209:8080/ipa.vmlinuz \ -i deploy_ramdisk=http://192.168.50.209:8080/ipa.initramfs \ -p cpus=16 \ -p memory_mb=12288 \ -p local_gb=750 \ -p cpu_arch=x86_64 \ -p capabilities=boot_option:local Unfortunately, I get this error: No valid host was found. Reason: No conductor service registered which supports driver agent_ipmitool. (HTTP 400) I can't see anything helpful in the logs. What driver should I be using for bifrost? agent_ipmitool seems to be enabled in ironic.conf. Thanks heaps, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Mar 4 12:43:14 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 4 Mar 2018 21:43:14 +0900 Subject: [openstack-dev] [QA] Dublin PTG QA Team photos Message-ID: Hi All, Please find the QA team photos in Dublin PTG. Added our dinner photo too. -gmann -------------- next part -------------- A non-text attachment was scrubbed... Name: QA_PTG_Dublin.zip Type: application/zip Size: 20870562 bytes Desc: not available URL: From yroblamo at redhat.com Sun Mar 4 12:59:56 2018 From: yroblamo at redhat.com (Yolanda Robla Mota) Date: Sun, 4 Mar 2018 13:59:56 +0100 Subject: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node In-Reply-To: References: Message-ID: Hi Michael So what does ironic driver-list say? May it be pxe_ipmitool? It would also be useful if you provide the conductor logs. On Sun, Mar 4, 2018 at 11:06 AM, Michael Still wrote: > Heya, > > I've been playing with bifrost to help me manage some lab machines. I must > say the install process was well documented and smooth, so that was a > pleasure. Thanks! > > That said, I am struggling to get a working node enrolment. I'm resisting > using the JSON file / ansible playbook approach, because I'll want to add > more machines later so I need a manual enrolment to work. The command line > I am using is like this: > > ironic node-create -d agent_ipmitool \ > -i ipmi_username=root \ > -i ipmi_password=superuser \ > -i ipmi_address=192.168.50.31 \ > -i deploy_kernel=http://192.168.50.209:8080/ipa.vmlinuz \ > -i deploy_ramdisk=http://192.168.50.209:8080/ipa.initramfs \ > -p cpus=16 \ > -p memory_mb=12288 \ > -p local_gb=750 \ > -p cpu_arch=x86_64 \ > -p capabilities=boot_option:local > > Unfortunately, I get this error: > > No valid host was found. Reason: No conductor service registered which > supports driver agent_ipmitool. (HTTP 400) > > I can't see anything helpful in the logs. What driver should I be using > for bifrost? agent_ipmitool seems to be enabled in ironic.conf. 
>
> Thanks heaps,
> Michael
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Yolanda Robla Mota
Principal Software Engineer, RHCE
Red Hat
C/Avellana 213
Urb Portugal
yroblamo at redhat.com
M: +34605641639

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zigo at debian.org  Sun Mar  4 13:22:39 2018
From: zigo at debian.org (Thomas Goirand)
Date: Sun, 4 Mar 2018 14:22:39 +0100
Subject: Re: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env
In-Reply-To:
References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org>
	<664e765d-77ed-0255-625e-a56cc9322aac@debian.org>
Message-ID: <6a249305-b7d0-8b1f-b31b-1fa990e8d054@debian.org>

On 03/02/2018 07:28 PM, Kaz Shinohara wrote:
> Hi Thomas(zigo),
>
> I found an issue which is included in
> https://review.openstack.org/#/c/548924/ (you did cherry pick last
> night)
> In short, this issue makes it impossible to install heat-dashboard..
>
> I landed fix for this. https://review.openstack.org/#/c/549214/
>
> Could you kindly pick up this for your package ?
> Sorry again for your inconvenience.
>
> Regards,
> Kaz(kazsh)

Hi,

I've added the patch, thanks for it. So now, I'm embedding that patch
for fixing unit tests, plus:

https://review.openstack.org/#/c/547468/
https://review.openstack.org/#/c/549214/

Indeed, it'd be nice to have all of them officially backported to
Queens, as you suggested on IRC.

It'd be even better to completely remove the embedded stuff and use
xstatic packages. There's already an XStatic package for font-awesome
which can be used. I do believe it would be very much OK to add such a
requirement to heat-dashboard, since it is already one of the
requirements for Horizon.

Altogether, thanks a lot for your care; as always, the OpenStack
community turns out to be understanding, responsive and simply awesome! :)

Cheers,

Thomas Goirand (zigo)

From juliaashleykreger at gmail.com  Sun Mar  4 17:41:15 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Sun, 4 Mar 2018 09:41:15 -0800
Subject: [openstack-dev] [ironic] Re-adding Jim Rollenhagen to ironic-core
Message-ID:

As time goes on, it is natural for corporate interests to ebb and flow.
Sometimes this comes as a new side project which complements OpenStack.
Sometimes it is a negative hit, where several contributors are abruptly
in search of new jobs. Sadly, I think there is some normalcy to that.
What we also often experience is the reverse, where someone has left[0]
and later returns to the community.

Since Jim has resumed working on Ironic[1], I think it naturally makes
sense to add Jim back to ironic-core. This has come up for discussion
several times amongst the ironic-core members over the past two months
since Jim returned, including at the PTG. Every member of ironic-core
has expressed positive feedback about re-adding Jim.

In less than two months, Jim has resumed reviewing[2] and contributing
to ongoing discussions in the community. He has expressed a desire to
help, and there is no reason we should deny him the ability.

I will re-add Jim to ironic-core sometime this week if there are no
objections. If anyone objects, please do so promptly.
-Julia

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118036.html
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2017-12-06.log.html#t2017-12-06T17:24:39
[2] http://stackalytics.com/report/contribution/ironic/90

From juliaashleykreger at gmail.com  Sun Mar  4 17:50:14 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Sun, 4 Mar 2018 09:50:14 -0800
Subject: Re: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node
In-Reply-To:
References:
Message-ID:

> No valid host was found. Reason: No conductor service registered which
> supports driver agent_ipmitool. (HTTP 400)
>
> I can't see anything helpful in the logs. What driver should I be using for
> bifrost? agent_ipmitool seems to be enabled in ironic.conf.

Weird, I'm wondering what the error is in the conductor log. You can
try using "ipmi" for the hardware type that replaces
agent_ipmitool/pxe_ipmitool.

-Julia

From juliaashleykreger at gmail.com  Sun Mar  4 18:00:46 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Sun, 4 Mar 2018 10:00:46 -0800
Subject: [openstack-dev] [ironic] Polling for new meeting time?
Message-ID:

Greetings everyone!

As our community composition has shifted to be more global, the
question has arisen of whether we should consider shifting the meeting
to be more friendly to some of our contributors in the APAC time zones.
Alternatively, this may involve changing our processes to better plan
and communicate, but the first step is to understand our overlaps and
what might work well for everyone.

I have created a doodle poll, from which I would like to understand what
times would ideally work, and from there we can determine if there is
a better time to meet.

The poll can be found at: https://doodle.com/poll/6kuwixpkkhbwsibk

Please don't feel the need to select times that would be burdensome to
yourself. This is only to gather information as to the time of day
that would be ideal for everyone. All times are set as UTC on the
poll.

Once we have collected some data, we should expect to discuss during
our meeting on the 12th.

Thanks everyone!

-Julia

From mark at stackhpc.com  Sun Mar  4 18:45:18 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Sun, 4 Mar 2018 18:45:18 +0000
Subject: Re: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node
In-Reply-To:
References:
Message-ID:

Hi Michael,

If you're using the latest release of bifrost I suspect you're hitting
https://bugs.launchpad.net/bifrost/+bug/1752975. I've submitted a fix
for review.

For a workaround, modify /etc/ironic/ironic.conf, and set
enabled_hardware_types=ipmi.

Cheers,
Mark

On 4 Mar 2018 5:50 p.m., "Julia Kreger" wrote:

> > No valid host was found. Reason: No conductor service registered which
> > supports driver agent_ipmitool. (HTTP 400)
> >
> > I can't see anything helpful in the logs. What driver should I be using
> > for bifrost? agent_ipmitool seems to be enabled in ironic.conf.
>
> Weird, I'm wondering what the error is in the conductor log. You can
> try using "ipmi" for the hardware type that replaces
> agent_ipmitool/pxe_ipmitool.
>
> -Julia
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
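Concretely, the workaround amounts to a couple of lines in
/etc/ironic/ironic.conf followed by a conductor restart. A rough sketch
(the interface settings shown assume the ipmitool defaults, so adjust to
your environment):

  [DEFAULT]
  enabled_hardware_types = ipmi
  enabled_management_interfaces = ipmitool
  enabled_power_interfaces = ipmitool

  # then restart the conductor; the unit name assumes a systemd install:
  # sudo systemctl restart ironic-conductor

-------------- next part --------------
An HTML attachment was scrubbed...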
URL:

From mikal at stillhq.com  Sun Mar  4 19:58:49 2018
From: mikal at stillhq.com (Michael Still)
Date: Mon, 5 Mar 2018 06:58:49 +1100
Subject: Re: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node
In-Reply-To:
References:
Message-ID:

Replying to a single email because I am lazier than you.

I would have included logs, except /var/log/ironic on the bifrost machine
is empty. There are entries in syslog, but nothing that seems related (it's
all periodic-task kind of stuff).

However, Mark is right. I had an /etc/ironic/ironic.conf with "ucs" as a
hardware type. I've removed ucs entirely from that list and restarted the
conductor, but that didn't help. I suspect
https://review.openstack.org/#/c/549318/3 is more subtle than that. I will
patch in that change and see if I can get things to work after a redeploy.
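In case it helps the next person debugging this, here's roughly what I'm
checking (the service name assumes a systemd-based bifrost install, so
adjust as needed):

  # which hardware types did the conductor actually register?
  ironic driver-list

  # what did we ask it to enable?
  sudo grep '^enabled_' /etc/ironic/ironic.conf

  # anything interesting from the conductor itself?
  sudo journalctl -u ironic-conductor -n 100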
Michael

On Mon, Mar 5, 2018 at 5:45 AM, Mark Goddard wrote:

> Hi Michael,
>
> If you're using the latest release of bifrost I suspect you're hitting
> https://bugs.launchpad.net/bifrost/+bug/1752975. I've submitted a fix
> for review.
>
> For a workaround, modify /etc/ironic/ironic.conf, and set
> enabled_hardware_types=ipmi.
>
> Cheers,
> Mark
>
> On 4 Mar 2018 5:50 p.m., "Julia Kreger" wrote:
>
>> > No valid host was found. Reason: No conductor service registered which
>> > supports driver agent_ipmitool. (HTTP 400)
>> >
>> > I can't see anything helpful in the logs. What driver should I be using
>> > for bifrost? agent_ipmitool seems to be enabled in ironic.conf.
>>
>> Weird, I'm wondering what the error is in the conductor log. You can
>> try using "ipmi" for the hardware type that replaces
>> agent_ipmitool/pxe_ipmitool.
>>
>> -Julia
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mark at stackhpc.com  Sun Mar  4 20:04:11 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Sun, 4 Mar 2018 20:04:11 +0000
Subject: Re: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node
In-Reply-To:
References:
Message-ID:

The ILO hardware type was also not loading because the required management
and power interfaces were not enabled. The patch should address that, but
please let us know if there are further issues.

Mark

On 4 Mar 2018 7:59 p.m., "Michael Still" wrote:

Replying to a single email because I am lazier than you.

I would have included logs, except /var/log/ironic on the bifrost machine
is empty. There are entries in syslog, but nothing that seems related (it's
all periodic-task kind of stuff).

However, Mark is right. I had an /etc/ironic/ironic.conf with "ucs" as a
hardware type. I've removed ucs entirely from that list and restarted the
conductor, but that didn't help. I suspect
https://review.openstack.org/#/c/549318/3 is more subtle than that. I will
patch in that change and see if I can get things to work after a redeploy.
> > Michael > > > > On Mon, Mar 5, 2018 at 5:45 AM, Mark Goddard wrote: > >> Hi Michael, >> >> If you're using the latest release of biifrost I suspect you're hitting >> https://bugs.launchpad.net/bifrost/+bug/1752975. I've submitted anfox >> for review. >> >> For a workaround, modify /etc/ironic/ironic.conf, and set >> enabled_hardware_types=ipmi. >> >> Cheers, >> Mark >> >> On 4 Mar 2018 5:50 p.m., "Julia Kreger" >> wrote: >> >>> > No valid host was found. Reason: No conductor service registered which >>> > supports driver agent_ipmitool. (HTTP 400) >>> > >>> > I can't see anything helpful in the logs. What driver should I be >>> using for >>> > bifrost? agent_ipmitool seems to be enabled in ironic.conf. >>> >>> Weird, I'm wondering what the error is in the conductor log. You can >>> try using "ipmi" for the hardware type that replaces >>> agent_ipmitool/pxe_ipmitool. >>> >>> -Julia >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Sun Mar 4 20:46:39 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sun, 4 Mar 2018 20:46:39 +0000 Subject: [openstack-dev] [ironic] Polling for new meeting time? In-Reply-To: References: Message-ID: Thx Julia, Another option is instead of changing meeting time, you could establish a tick-tock meeting, for example odd weeks for US-Euro friendly times and even weeks for US-Asia friendly times. On Mar 4, 2018 7:01 PM, "Julia Kreger" wrote: > Greetings everyone! > > As our community composition has shifted to be more global, the > question has arisen if we should consider shifting the meeting to be > more friendly to some of our contributors in the APAC time zones. > Alternatively this may involve changing our processes to better plan > and communicate, but the first step is to understand our overlaps and > what might work well for everyone. > > I have created a doodle poll, from which I would like understand what > times would ideally work, and from there we can determine if there is > a better time to meet. > > The poll can be found at: https://doodle.com/poll/6kuwixpkkhbwsibk > > Please don't feel the need to select times that would be burdensome to > yourself. This is only to gather information as to the time of day > that would be ideal for everyone. All times are set as UTC on the > poll. 
> > Once we have collected some data, we should expect to discuss during > our meeting on the 12th. > > Thanks everyone! > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Sun Mar 4 21:29:58 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 4 Mar 2018 21:29:58 +0000 Subject: [openstack-dev] [tripleo] upgrading to a containerized undercloud Message-ID: The use case that I'm working on right now is the following: As an operator, I would like to upgrade my non-containerized undercloud running on Queens to a containerized undercloud running on Rocky. Also, I would like to maintain the exact same command to upgrade my undercloud, which is: openstack undercloud upgrade (with --use-heat to containerize it). The work has been tracked here: https://trello.com/c/nFbky9Uk/5-upgrade-support-from-instack-undercloud But here's an update and some open discussion before we continue to make progress. ## Workflow This is what I've found the easiest to implement and maintain: 1) Update python-tripleoclient-* and tripleo-heat-templates. 2) Run openstack overcloud container prepare. 3) Run openstack undercloud upgrade --use-heat, that underneath will: stop non-containerized services, upgrade all packages & dependencies and deploy a containerized undercloud. Note: the data isn't touched, so when the upgrade is done, the undercloud is just upgraded to Rocky, and containerized. ## Blockers encountered 1) Passwords were re-generated during the containerization, will be fixed by: https://review.openstack.org/#/c/549600/ 2) Neutron DB name was different in instack-undercloud. DB will be renamed by https://review.openstack.org/#/c/549609/ 3) Upgrade logic will live in tripleoclient: https://review.openstack.org/#/c/549624/ (note that it's small) ## Testing I'm using https://review.openstack.org/#/c/549611/ for testing but I'm also deploying in my local environment. I've been upgrading Pike to Queens successfully, when applying my patches. ## Roadmap I would like us to solve the containerized undercloud upgrade case by rocky-m1, and have by the end of m1 a CI job that actually test the operator workflow. I'll need some feedback, reviews on the proposal & reviews. Thanks in advance, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Sun Mar 4 21:42:04 2018 From: mark at stackhpc.com (Mark Goddard) Date: Sun, 4 Mar 2018 21:42:04 +0000 Subject: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node In-Reply-To: References: Message-ID: Try setting the ironic_log_dir variable to /var/log/ironic, or setting [default] log_dir to the same in ironic.conf. I'm surprised it's not logging to a file by default. Mark On 4 Mar 2018 8:33 p.m., "Michael Still" wrote: > Ok, so I applied your patch and redeployed. I now get a list of drivers in > "ironic driver-list", and I can now enroll a node. > > Interestingly, the node sits in the "enroll" provisioning state for ages > and doesn't appear to ever get a meaningful power state (ever being after a > five minute wait). There are still no logs in /var/log/ironic, and grepping > for the node's uuid in /var/log/syslog returns zero log items. 
> > Your thoughts? > > Michael > > > > On Mon, Mar 5, 2018 at 7:04 AM, Mark Goddard wrote: > >> The ILO hardware type was also not loading because the required >> management and power interfaces were not enabled. The patch should address >> that but please let us know if there are further issues. >> Mark >> >> >> On 4 Mar 2018 7:59 p.m., "Michael Still" wrote: >> >> Replying to a single email because I am lazier than you. >> >> I would have included logs, except /var/log/ironic on the bifrost machine >> is empty. There are entries in syslog, but nothing that seems related (its >> all periodic task kind of stuff). >> >> However, Mark is right. I had an /etc/ironic/ironic.conf with "ucs" as a >> hardware type. I've removed ucs entirely from that list and restarted >> conductor, but that didn't help. I suspect https://review.opensta >> ck.org/#/c/549318/3 is more subtle than that. I will patch in that >> change and see if I can get things to work after a redeploy. >> >> Michael >> >> >> >> On Mon, Mar 5, 2018 at 5:45 AM, Mark Goddard wrote: >> >>> Hi Michael, >>> >>> If you're using the latest release of biifrost I suspect you're hitting >>> https://bugs.launchpad.net/bifrost/+bug/1752975. I've submitted anfox >>> for review. >>> >>> For a workaround, modify /etc/ironic/ironic.conf, and set >>> enabled_hardware_types=ipmi. >>> >>> Cheers, >>> Mark >>> >>> On 4 Mar 2018 5:50 p.m., "Julia Kreger" >>> wrote: >>> >>>> > No valid host was found. Reason: No conductor service registered which >>>> > supports driver agent_ipmitool. (HTTP 400) >>>> > >>>> > I can't see anything helpful in the logs. What driver should I be >>>> using for >>>> > bifrost? agent_ipmitool seems to be enabled in ironic.conf. >>>> >>>> Weird, I'm wondering what the error is in the conductor log. You can >>>> try using "ipmi" for the hardware type that replaces >>>> agent_ipmitool/pxe_ipmitool. >>>> >>>> -Julia >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mikal at stillhq.com Sun Mar 4 21:49:13 2018 From: mikal at stillhq.com (Michael Still) Date: Mon, 5 Mar 2018 08:49:13 +1100 Subject: [openstack-dev] [nova] vendordata plugin for freeIPA host enrollment In-Reply-To: <58248BCD.1080006@redhat.com> References: <58248BCD.1080006@redhat.com> Message-ID: I was thinking about this the other day... How do you de-register instances from freeipa when the instance is deleted? Is there a missing feature in vendordata there that you need? Michael On Fri, Nov 11, 2016 at 2:01 AM, Rob Crittenden wrote: > Wanted to let you know I'm working on a nova metadata vendordata plugin > that will help automate instance enrollment into a freeIPA server. > > This will do a number of things for a user: > - provide centralized user identity, sudo and host-based access control > for the instances > - provide the instance an identity it can use for itself > - using this identity a host can obtain SSL certificates for itself from > your freeIPA CA > > If ipa_enroll is set to True in the instance metadata (or in the image > metadata) when a nova instance is spawned then a one-time password will > be created and IPA enrollment will occur during the cloud-init stage. > > Code is currently at https://github.com/rcritten/novajoin > > rob > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Sun Mar 4 22:05:46 2018 From: mark at stackhpc.com (Mark Goddard) Date: Sun, 4 Mar 2018 22:05:46 +0000 Subject: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node In-Reply-To: References: Message-ID: On the enroll state, you can move it to available via manageable by setting the provision state to manage, then provide. Try an ironic node-validate to diagnose the issue, and make sure the ipmi credentials given can be used to query the nodes power state using ipmitool. Mark On 4 Mar 2018 9:42 p.m., "Mark Goddard" wrote: > Try setting the ironic_log_dir variable to /var/log/ironic, or setting > [default] log_dir to the same in ironic.conf. > > I'm surprised it's not logging to a file by default. > > Mark > > On 4 Mar 2018 8:33 p.m., "Michael Still" wrote: > >> Ok, so I applied your patch and redeployed. I now get a list of drivers >> in "ironic driver-list", and I can now enroll a node. >> >> Interestingly, the node sits in the "enroll" provisioning state for ages >> and doesn't appear to ever get a meaningful power state (ever being after a >> five minute wait). There are still no logs in /var/log/ironic, and grepping >> for the node's uuid in /var/log/syslog returns zero log items. >> >> Your thoughts? >> >> Michael >> >> >> >> On Mon, Mar 5, 2018 at 7:04 AM, Mark Goddard wrote: >> >>> The ILO hardware type was also not loading because the required >>> management and power interfaces were not enabled. The patch should address >>> that but please let us know if there are further issues. >>> Mark >>> >>> >>> On 4 Mar 2018 7:59 p.m., "Michael Still" wrote: >>> >>> Replying to a single email because I am lazier than you. >>> >>> I would have included logs, except /var/log/ironic on the bifrost >>> machine is empty. There are entries in syslog, but nothing that seems >>> related (its all periodic task kind of stuff). 
>>> >>> However, Mark is right. I had an /etc/ironic/ironic.conf with "ucs" as a >>> hardware type. I've removed ucs entirely from that list and restarted >>> conductor, but that didn't help. I suspect https://review.opensta >>> ck.org/#/c/549318/3 is more subtle than that. I will patch in that >>> change and see if I can get things to work after a redeploy. >>> >>> Michael >>> >>> >>> >>> On Mon, Mar 5, 2018 at 5:45 AM, Mark Goddard wrote: >>> >>>> Hi Michael, >>>> >>>> If you're using the latest release of biifrost I suspect you're hitting >>>> https://bugs.launchpad.net/bifrost/+bug/1752975. I've submitted anfox >>>> for review. >>>> >>>> For a workaround, modify /etc/ironic/ironic.conf, and set >>>> enabled_hardware_types=ipmi. >>>> >>>> Cheers, >>>> Mark >>>> >>>> On 4 Mar 2018 5:50 p.m., "Julia Kreger" >>>> wrote: >>>> >>>>> > No valid host was found. Reason: No conductor service registered >>>>> which >>>>> > supports driver agent_ipmitool. (HTTP 400) >>>>> > >>>>> > I can't see anything helpful in the logs. What driver should I be >>>>> using for >>>>> > bifrost? agent_ipmitool seems to be enabled in ironic.conf. >>>>> >>>>> Weird, I'm wondering what the error is in the conductor log. You can >>>>> try using "ipmi" for the hardware type that replaces >>>>> agent_ipmitool/pxe_ipmitool. >>>>> >>>>> -Julia >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Sun Mar 4 23:18:47 2018 From: mikal at stillhq.com (Michael Still) Date: Mon, 5 Mar 2018 10:18:47 +1100 Subject: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node In-Reply-To: References: Message-ID: Ahhh yes. The default (null) value of ironic_log_dir doesn't do quite what the author thought it did. https://review.openstack.org/549650 is a patch to correct that. 
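Until that patch lands, a minimal sketch of the setting Mark suggests, either
as the bifrost ansible variable or directly in ironic.conf (the path is only
an example, and this assumes the standard oslo.log options):

# bifrost: set the ansible variable at deploy time
# ironic_log_dir: /var/log/ironic

# or directly in /etc/ironic/ironic.conf:
[DEFAULT]
# with log_dir unset, ironic logs only to stderr/syslog
log_dir = /var/log/ironic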
Michael On Mon, Mar 5, 2018 at 8:42 AM, Mark Goddard wrote: > Try setting the ironic_log_dir variable to /var/log/ironic, or setting > [default] log_dir to the same in ironic.conf. > > I'm surprised it's not logging to a file by default. > > Mark > > On 4 Mar 2018 8:33 p.m., "Michael Still" wrote: > >> Ok, so I applied your patch and redeployed. I now get a list of drivers >> in "ironic driver-list", and I can now enroll a node. >> >> Interestingly, the node sits in the "enroll" provisioning state for ages >> and doesn't appear to ever get a meaningful power state (ever being after a >> five minute wait). There are still no logs in /var/log/ironic, and grepping >> for the node's uuid in /var/log/syslog returns zero log items. >> >> Your thoughts? >> >> Michael >> >> >> >> On Mon, Mar 5, 2018 at 7:04 AM, Mark Goddard wrote: >> >>> The ILO hardware type was also not loading because the required >>> management and power interfaces were not enabled. The patch should address >>> that but please let us know if there are further issues. >>> Mark >>> >>> >>> On 4 Mar 2018 7:59 p.m., "Michael Still" wrote: >>> >>> Replying to a single email because I am lazier than you. >>> >>> I would have included logs, except /var/log/ironic on the bifrost >>> machine is empty. There are entries in syslog, but nothing that seems >>> related (its all periodic task kind of stuff). >>> >>> However, Mark is right. I had an /etc/ironic/ironic.conf with "ucs" as a >>> hardware type. I've removed ucs entirely from that list and restarted >>> conductor, but that didn't help. I suspect https://review.opensta >>> ck.org/#/c/549318/3 is more subtle than that. I will patch in that >>> change and see if I can get things to work after a redeploy. >>> >>> Michael >>> >>> >>> >>> On Mon, Mar 5, 2018 at 5:45 AM, Mark Goddard wrote: >>> >>>> Hi Michael, >>>> >>>> If you're using the latest release of biifrost I suspect you're hitting >>>> https://bugs.launchpad.net/bifrost/+bug/1752975. I've submitted anfox >>>> for review. >>>> >>>> For a workaround, modify /etc/ironic/ironic.conf, and set >>>> enabled_hardware_types=ipmi. >>>> >>>> Cheers, >>>> Mark >>>> >>>> On 4 Mar 2018 5:50 p.m., "Julia Kreger" >>>> wrote: >>>> >>>>> > No valid host was found. Reason: No conductor service registered >>>>> which >>>>> > supports driver agent_ipmitool. (HTTP 400) >>>>> > >>>>> > I can't see anything helpful in the logs. What driver should I be >>>>> using for >>>>> > bifrost? agent_ipmitool seems to be enabled in ironic.conf. >>>>> >>>>> Weird, I'm wondering what the error is in the conductor log. You can >>>>> try using "ipmi" for the hardware type that replaces >>>>> agent_ipmitool/pxe_ipmitool. 
>>>>> >>>>> -Julia >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Sun Mar 4 23:32:25 2018 From: mikal at stillhq.com (Michael Still) Date: Mon, 5 Mar 2018 10:32:25 +1100 Subject: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node In-Reply-To: References: Message-ID: I think one might be a bug in the deploy guide then. It states: "In order for nodes to be available for deploying workloads on them, nodes must be in the available provision state. To do this, nodes created with API version 1.11 and above must be moved from the enroll state to the manageable state and then to the available state. This section can be safely skipped, if API version 1.10 or earlier is used (which is the case by default)." Whereas I definitely had to move the node to the manage provision state manually to get the node to be managed. 
For reference, this is the set of command lines I ended up using to manually
enroll a node (in case it's of use to someone else):

ironic node-create -d agent_ipmitool \
    -i ipmi_username=root \
    -i ipmi_password=superuser \
    -i ipmi_address=192.168.50.31 \
    -i deploy_kernel=http://192.168.50.209:8080/ipa.vmlinuz \
    -i deploy_ramdisk=http://192.168.50.209:8080/ipa.initramfs \
    -p cpus=16 \
    -p memory_mb=12288 \
    -p local_gb=750 \
    -p cpu_arch=x86_64 \
    -p capabilities=boot_option:local \
    -n lab8
ironic port-create -n ${UUID} -a ${DHCP_MAC}
ironic node-validate lab8
ironic --ironic-api-version 1.11 node-set-provision-state lab8 manage

Michael

On Mon, Mar 5, 2018 at 9:05 AM, Mark Goddard wrote:
> On the enroll state, you can move it to available via manageable by
> setting the provision state to manage, then provide.
>
> Try an ironic node-validate to diagnose the issue, and make sure the ipmi
> credentials given can be used to query the node's power state using ipmitool.
>
> Mark
>
> On 4 Mar 2018 9:42 p.m., "Mark Goddard" wrote:
>
>> Try setting the ironic_log_dir variable to /var/log/ironic, or setting
>> [default] log_dir to the same in ironic.conf.
>>
>> I'm surprised it's not logging to a file by default.
>>
>> Mark
>>
>> On 4 Mar 2018 8:33 p.m., "Michael Still" wrote:
>>
>>> Ok, so I applied your patch and redeployed. I now get a list of drivers
>>> in "ironic driver-list", and I can now enroll a node.
>>>
>>> Interestingly, the node sits in the "enroll" provisioning state for ages
>>> and doesn't appear to ever get a meaningful power state (ever being after a
>>> five minute wait). There are still no logs in /var/log/ironic, and grepping
>>> for the node's uuid in /var/log/syslog returns zero log items.
>>>
>>> Your thoughts?
>>>
>>> Michael
>>>
>>> On Mon, Mar 5, 2018 at 7:04 AM, Mark Goddard wrote:
>>>
>>>> The ILO hardware type was also not loading because the required
>>>> management and power interfaces were not enabled. The patch should address
>>>> that but please let us know if there are further issues.
>>>> Mark
>>>>
>>>> On 4 Mar 2018 7:59 p.m., "Michael Still" wrote:
>>>>
>>>> Replying to a single email because I am lazier than you.
>>>>
>>>> I would have included logs, except /var/log/ironic on the bifrost
>>>> machine is empty. There are entries in syslog, but nothing that seems
>>>> related (it's all periodic task kind of stuff).
>>>>
>>>> However, Mark is right. I had an /etc/ironic/ironic.conf with "ucs" as a
>>>> hardware type. I've removed ucs entirely from that list and restarted
>>>> conductor, but that didn't help. I suspect
>>>> https://review.openstack.org/#/c/549318/3 is more subtle than that. I will
>>>> patch in that change and see if I can get things to work after a redeploy.
>>>>
>>>> Michael
>>>>
>>>> On Mon, Mar 5, 2018 at 5:45 AM, Mark Goddard wrote:
>>>>
>>>>> Hi Michael,
>>>>>
>>>>> If you're using the latest release of bifrost I suspect you're
>>>>> hitting https://bugs.launchpad.net/bifrost/+bug/1752975. I've
>>>>> submitted a fix for review.
>>>>>
>>>>> For a workaround, modify /etc/ironic/ironic.conf, and set
>>>>> enabled_hardware_types=ipmi.
>>>>>
>>>>> Cheers,
>>>>> Mark
>>>>>
>>>>> On 4 Mar 2018 5:50 p.m., "Julia Kreger" wrote:
>>>>>
>>>>>> > No valid host was found. Reason: No conductor service registered which
>>>>>> > supports driver agent_ipmitool. (HTTP 400)
>>>>>> >
>>>>>> > I can't see anything helpful in the logs. What driver should I be using for
>>>>>> > bifrost?
agent_ipmitool seems to be enabled in ironic.conf. >>>>>> >>>>>> Weird, I'm wondering what the error is in the conductor log. You can >>>>>> try using "ipmi" for the hardware type that replaces >>>>>> agent_ipmitool/pxe_ipmitool. >>>>>> >>>>>> -Julia >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Mon Mar 5 01:02:17 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 5 Mar 2018 12:02:17 +1100 Subject: [openstack-dev] [devstack] Jens Harbott added to core Message-ID: Hello, Jens Harbott (frickler) has agreed to take on core responsibilities in devstack, so feel free to bug him about reviews :) We have also added the members of qa-release in directly to devstack-core, just for visibility (they already had permissions via qa-release -> devstack-release -> devstack-core). We have also added devstack-core as grenade core to hopefully expand coverage there. --- Always feel free to give a gentle ping on reviews that don't seem have received sufficient attention. But please also take a few minutes to compose a commit message! I think sometimes devs have been deep in the weeds with their cool change and devstack requires just a few tweaks. It's easy to forget not all reviewers may have this same context. A couple of well-crafted sentences can avoid pulling projects and "git blame" archaeological digs, which gets everything going faster! 
Thanks, -i From atsumi.yoshihiko at po.ntt-tx.co.jp Mon Mar 5 05:31:36 2018 From: atsumi.yoshihiko at po.ntt-tx.co.jp (=?UTF-8?B?5ril576OIOaFtuW9pg==?=) Date: Mon, 5 Mar 2018 14:31:36 +0900 Subject: [openstack-dev] Question about API endpoints Message-ID: Hi all, I try to deploy multinode OpenStack by openstack-helm and want to access OpenStack API endpoints from out of k8s nodes. To avoid service failure by node down, I think I need one virtual IP for the endpoints.(like Pacemaker) Could you show me how to realize that if you have any information? A. Deploy OpenStack services for NodePort, and distribute the access to nodes using physical Load Balancer. B. Using Ingress?   I think Ingress is for L7 routing, so it can't be used to create VIP for the endpoints. C. Any other ideas? And when I try this on GCP/GKE, is there any difference from on-premises? best regards -- -------------------------------------------------------- Yoshihiko Atsumi E-mail:atsumi.yoshihiko at po.ntt-tx.co.jp -------------------------------------------------------- From atsumi.yoshihiko at po.ntt-tx.co.jp Mon Mar 5 05:34:31 2018 From: atsumi.yoshihiko at po.ntt-tx.co.jp (=?UTF-8?B?5ril576OIOaFtuW9pg==?=) Date: Mon, 5 Mar 2018 14:34:31 +0900 Subject: [openstack-dev] [openstack-helm] Question about API endpoints Message-ID: <20f7e795-8221-0ff8-ebd1-484c0764def8@po.ntt-tx.co.jp> Hi all, # Resend with openstack-helm tag I try to deploy multinode OpenStack by openstack-helm and want to access OpenStack API endpoints from out of k8s nodes. To avoid service failure by node down, I think I need one virtual IP for the endpoints.(like Pacemaker) Could you show me how to realize that if you have any information? A. Deploy OpenStack services for NodePort, and distribute the access to nodes using physical Load Balancer. B. Using Ingress?   I think Ingress is for L7 routing, so it can't be used to create VIP for the endpoints. C. Any other ideas? And when I try this on GCP/GKE, is there any difference from on-premises? best regards -- -------------------------------------------------------- Yoshihiko Atsumi E-mail:atsumi.yoshihiko at po.ntt-tx.co.jp -------------------------------------------------------- From hanishgogadahcu at gmail.com Mon Mar 5 06:09:22 2018 From: hanishgogadahcu at gmail.com (hanish gogada) Date: Mon, 5 Mar 2018 11:39:22 +0530 Subject: [openstack-dev] [Nova] [all] native ovsdb interface for vif_plug_ovs Message-ID: Hi Nova team, Currently nova-compute-agent uses vsctl for ovsdb interface and it does not support nova-to-ovs communication over tcp socket. We are trying to integrate Cavium Liquidio with openstack, it requires nova-to-ovs communication over a tcp connection (since ovs is offloaded to NIC). These two patches add the missing functionality. https://review.openstack.org/#/c/476612/ https://review.openstack.org/#/c/482226/ Adding native python ovsdb interface would improve the portability and performance of the nova-to-ovs communication. A similar mechanism is already present in Neutron. I added a video with a demo of these patches in action. Please have a look at it and review these patches. Thanks hanish gogada -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: liquidio-os-integration.m4v Type: video/mp4 Size: 9845116 bytes Desc: not available URL: From hyunsun.moon at gmail.com Mon Mar 5 07:42:57 2018 From: hyunsun.moon at gmail.com (Hyunsun Moon) Date: Mon, 5 Mar 2018 16:42:57 +0900 Subject: [openstack-dev] [openstack-helm] Question about API endpoints In-Reply-To: <20f7e795-8221-0ff8-ebd1-484c0764def8@po.ntt-tx.co.jp> References: <20f7e795-8221-0ff8-ebd1-484c0764def8@po.ntt-tx.co.jp> Message-ID: Hi Yoshihiko, If you have physical LB in your environment, you might want to make use of NodePort for distributing the access to multiple controller nodes. In that case, it is recommended to set Values.network.external_policy_local to true so that you could eliminate unnecessary hops. Ingress backed by nginx could be used of course, but as you pointed, IP address of the node where ingress pod resides will be the address you’re accessing, which might not be desirable in many use cases. If you plan to try it on GCP/GKE, where the ingress controller is backed by GCP’s load-balancer service, NodePort + ingress seems valid option for exposing your service to external. FYI, https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer Hope this helps. Hyunsun > On 5 Mar 2018, at 2:34 PM, 渥美 慶彦 wrote: > > Hi all, > # Resend with openstack-helm tag > > I try to deploy multinode OpenStack by openstack-helm > and want to access OpenStack API endpoints from out of k8s nodes. > To avoid service failure by node down, I think I need one virtual IP for the endpoints.(like Pacemaker) > Could you show me how to realize that if you have any information? > > A. Deploy OpenStack services for NodePort, and distribute the access to nodes using physical Load Balancer. > B. Using Ingress? > I think Ingress is for L7 routing, so it can't be used to create VIP for the endpoints. > C. Any other ideas? > > And when I try this on GCP/GKE, is there any difference from on-premises? > > best regards > > -- > -------------------------------------------------------- > Yoshihiko Atsumi > E-mail:atsumi.yoshihiko at po.ntt-tx.co.jp > -------------------------------------------------------- > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon Mar 5 08:17:09 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 5 Mar 2018 08:17:09 +0000 Subject: [openstack-dev] [tripleo] Queens RC1 was released! Message-ID: TripleO team is proud to announce that we released Queens RC1! Some numbers: 210 bugs fixed 7 features implemented In Pike RC1: 138 bug fixed 8 features implemented In Ocata RC1: 62 bug fixed 7 features implemented In Newton RC1: 51 bug fixed 11 features implemented Unless we find a need to do it, we won't release RC2, but we'll see how it works during the next days. We encourage people to backport their bugfixes to stable/queens. Also all work related to FFU & upgrades is moving to rocky-1 but we expect the patches to be backported into stable/queens. Reminder: backports to stable/queens should be done by patches authors to help PTL & TripleO stable maintainers. Thanks and nice work everyone! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scheuran at linux.vnet.ibm.com Mon Mar 5 08:35:30 2018
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Mon, 5 Mar 2018 09:35:30 +0100
Subject: [openstack-dev] [nova][thirdparty-ci] s390x third party CI broken
Message-ID: 

Hi all,
the s390x nova CI is currently broken due to the bump of the grpcio version.
I’m working on fixing it...

Regards,

---
Andreas Scheuring (andreas_s)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From dtantsur at redhat.com Mon Mar 5 09:47:16 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 5 Mar 2018 10:47:16 +0100
Subject: [openstack-dev] [ironic] Polling for new meeting time?
In-Reply-To: References: Message-ID: <372f3de5-5d8f-6281-2fed-a26b63014775@redhat.com>

On 03/04/2018 07:00 PM, Julia Kreger wrote:
> Greetings everyone!
>
> As our community composition has shifted to be more global, the
> question has arisen if we should consider shifting the meeting to be
> more friendly to some of our contributors in the APAC time zones.
> Alternatively this may involve changing our processes to better plan
> and communicate, but the first step is to understand our overlaps and
> what might work well for everyone.
>
> I have created a doodle poll, from which I would like understand what
> times would ideally work, and from there we can determine if there is
> a better time to meet.
>
> The poll can be found at: https://doodle.com/poll/6kuwixpkkhbwsibk
>
> Please don't feel the need to select times that would be burdensome to
> yourself. This is only to gather information as to the time of day
> that would be ideal for everyone. All times are set as UTC on the
> poll.

Are you sure? I'm asking because the last time I checked Doodle created all
polls in some specific time zone, defaulting to your local time. Then for
each participant it converts the times to their local time. E.g. for me the
time span is 1am to 12am Berlin time (UTC+1), is that what you expected?

>
> Once we have collected some data, we should expect to discuss during
> our meeting on the 12th.
>
> Thanks everyone!
>
> -Julia
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From dtantsur at redhat.com Mon Mar 5 09:47:56 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 5 Mar 2018 10:47:56 +0100
Subject: [openstack-dev] [ironic] Polling for new meeting time?
In-Reply-To: References: Message-ID: <6a718c23-8249-eb00-5d07-f1f7306e7c1c@redhat.com>

On 03/04/2018 09:46 PM, Zhipeng Huang wrote:
> Thx Julia,
>
> Another option is instead of changing meeting time, you could establish a
> tick-tock meeting, for example odd weeks for US-Euro friendly times and even
> weeks for US-Asia friendly times.

We tried that roughly two years ago, and it did not work because very few
people showed up on the APAC time. I think the goal of this poll is to figure
out how many people would show up now.

>
> On Mar 4, 2018 7:01 PM, "Julia Kreger" wrote:
>
> Greetings everyone!
>
> As our community composition has shifted to be more global, the
> question has arisen if we should consider shifting the meeting to be
> more friendly to some of our contributors in the APAC time zones.
> Alternatively this may involve changing our processes to better plan > and communicate, but the first step is to understand our overlaps and > what might work well for everyone. > > I have created a doodle poll, from which I would like understand what > times would ideally work, and from there we can determine if there is > a better time to meet. > > The poll can be found at: https://doodle.com/poll/6kuwixpkkhbwsibk > > > Please don't feel the need to select times that would be burdensome to > yourself. This is only to gather information as to the time of day > that would be ideal for everyone. All times are set as UTC on the > poll. > > Once we have collected some data, we should expect to discuss during > our meeting on the 12th. > > Thanks everyone! > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dtantsur at redhat.com Mon Mar 5 09:57:34 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 5 Mar 2018 10:57:34 +0100 Subject: [openstack-dev] [tripleo] Queens RC1 was released! In-Reply-To: References: Message-ID: <6f53733f-646b-9e28-7dd2-026b4fb079ec@redhat.com> Hi! A reminder that https://review.openstack.org/534842 is quite important, as it will enable upgrade to hardware types from classic drivers. The latter will be removed in one of the future releases. As it applies to online_data_migrations it will not be run after you upgrade to Queens without this patch. On 03/05/2018 09:17 AM, Emilien Macchi wrote: > TripleO team is proud to announce that we released Queens RC1! > > Some numbers: > 210 bugs fixed > 7 features implemented > > In Pike RC1: > 138 bug fixed > 8 features implemented > > In Ocata RC1: > 62 bug fixed > 7 features implemented > > In Newton RC1: > 51 bug fixed > 11 features implemented > > > Unless we find a need to do it, we won't release RC2, but we'll see how it works > during the next days. > We encourage people to backport their bugfixes to stable/queens. > Also all work related to FFU & upgrades is moving to rocky-1 but we expect the > patches to be backported into stable/queens. > > Reminder: backports to stable/queens should be done by patches authors to help > PTL & TripleO stable maintainers. > > Thanks and nice work everyone! 
> -- > Emilien Macchi > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From andrea.frittoli at gmail.com Mon Mar 5 10:11:40 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Mon, 05 Mar 2018 10:11:40 +0000 Subject: [openstack-dev] [devstack] Jens Harbott added to core In-Reply-To: References: Message-ID: On Mon, 5 Mar 2018, 1:02 am Ian Wienand, wrote: > Hello, > > Jens Harbott (frickler) has agreed to take on core responsibilities in > devstack, so feel free to bug him about reviews :) > Yay +1 > > We have also added the members of qa-release in directly to > devstack-core, just for visibility (they already had permissions via > qa-release -> devstack-release -> devstack-core). > > We have also added devstack-core as grenade core to hopefully expand > coverage there. > Thanks, this helps indeed. I started working on the zuulv3 native grenade jobs, hopefully this will help getting a bit more speed on that. > --- > > Always feel free to give a gentle ping on reviews that don't seem have > received sufficient attention. > > But please also take a few minutes to compose a commit message! I > think sometimes devs have been deep in the weeds with their cool > change and devstack requires just a few tweaks. It's easy to forget > not all reviewers may have this same context. A couple of > well-crafted sentences can avoid pulling projects and "git blame" > archaeological digs, which gets everything going faster! > +1000 Andrea Frittoli (andreaf) > > Thanks, > > -i > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ksnhr.tech at gmail.com Mon Mar 5 10:18:21 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Mon, 5 Mar 2018 19:18:21 +0900 Subject: [openstack-dev] [heat] heat-dashboard is non-free, with broken unit test env In-Reply-To: <6a249305-b7d0-8b1f-b31b-1fa990e8d054@debian.org> References: <4129015c-b120-786f-60e5-2d6a634f3999@debian.org> <664e765d-77ed-0255-625e-a56cc9322aac@debian.org> <6a249305-b7d0-8b1f-b31b-1fa990e8d054@debian.org> Message-ID: Hi, Thank you very much for your quick response & kind comment :) Yes, we are planning to backport some patches to stable/queens. Also in the last PTG, had discussion with horizon team about how to leverage xstatic repos to solve this issue. Current our plan is to create a new xstatic-*** repo to support js which are needed by only heat-dashboard. For any other js, heat-dashboard will take the existing xstatic-***. In the end, we will not have any embedded js in heat-dashboard repo, for sure. Now Dublin's snow is pretty cleared, it's time to fly back to my home. I don't like to see snow any more :P Regards, Kaz 2018-03-04 22:22 GMT+09:00 Thomas Goirand : > On 03/02/2018 07:28 PM, Kaz Shinohara wrote: >> Hi Thomas(zigo), >> >> I found an issue which is included in >> https://review.openstack.org/#/c/548924/ (you did cherry pick last >> night) >> In short, this issue makes it impossible to install heat-dashboard.. >> >> I landed fix for this. 
https://review.openstack.org/#/c/549214/ >> >> Could you kindly pick up this for your package ? >> Sorry again for your inconvenience. >> >> Regards, >> Kaz(kazsh) > > Hi, > > I've added the patch, thanks for it. > > So now, I'm embedding that patch for fixing unit tests, plus: > https://review.openstack.org/#/c/547468/ > https://review.openstack.org/#/c/549214/ > > Indeed, it'd be nice to have all of them officially backported to > Queens, as you suggested on IRC. It'd be even better to completely > remove embedded stuff, and use xstatic packages. There's already an > XStatic package for the font-awesome which can be used. I do believe it > would be very much OK to add such a requirement to heat-dashboard, since > it is already one of the requirements for Horizon. > > Altogether, thanks a lot for your care, as always, the OpenStack > community turns out to be comprehensive, reactive and simply awesome! :) > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mark at stackhpc.com Mon Mar 5 10:25:16 2018 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 5 Mar 2018 10:25:16 +0000 Subject: [openstack-dev] [Ironic][Bifrost] Manually enrolling a node In-Reply-To: References: Message-ID: I think the issue is that the default behaviour of the client was changed in the queens (2.0.0). Previously the default microversion was 1.9, but now it is latest [1]. Looks like the deploy guide needs some more conditional wording. I've raised a bug [2], feel free to comment if I've missed something. Mark [1] https://docs.openstack.org/releasenotes/python-ironicclient/queens.html [2] https://bugs.launchpad.net/ironic/+bug/1753435 On 4 March 2018 at 23:32, Michael Still wrote: > I think one might be a bug in the deploy guide then. It states: > > "In order for nodes to be available for deploying workloads on them, nodes > must be in the available provision state. To do this, nodes created with > API version 1.11 and above must be moved from the enroll state to the > manageable state and then to the available state. This section can be > safely skipped, if API version 1.10 or earlier is used (which is the case > by default)." > > Whereas I definitely had to move the node to the manage provision state > manually to get the node to be managed. For reference, this is the set of > command lines I ended up using to manually enroll a node (in case its of > use to someone else): > > ironic node-create -d agent_ipmitool \ > -i ipmi_username=root \ > -i ipmi_password=superuser \ > -i ipmi_address=192.168.50.31 \ > -i deploy_kernel=http://192.168.50.209:8080/ipa.vmlinuz \ > -i deploy_ramdisk=http://192.168.50.209:8080/ipa.initramfs \ > -p cpus=16 \ > -p memory_mb=12288 \ > -p local_gb=750 \ > -p cpu_arch=x86_64 \ > -p capabilities=boot_option:local \ > -n lab8 > ironic port-create -n ${UUID} -a ${DHCP_MAC} > ironic node-validate lab8 > ironic --ironic-api-version 1.11 node-set-provision-state lab8 manage > > Michael > > On Mon, Mar 5, 2018 at 9:05 AM, Mark Goddard wrote: > >> On the enroll state, you can move it to available via manageable by >> setting the provision state to manage, then provide. 
>> >> Try an ironic node-validate to diagnose the issue, and make sure the ipmi >> credentials given can be used to query the nodes power state using ipmitool. >> >> Mark >> >> On 4 Mar 2018 9:42 p.m., "Mark Goddard" wrote: >> >>> Try setting the ironic_log_dir variable to /var/log/ironic, or setting >>> [default] log_dir to the same in ironic.conf. >>> >>> I'm surprised it's not logging to a file by default. >>> >>> Mark >>> >>> On 4 Mar 2018 8:33 p.m., "Michael Still" wrote: >>> >>>> Ok, so I applied your patch and redeployed. I now get a list of drivers >>>> in "ironic driver-list", and I can now enroll a node. >>>> >>>> Interestingly, the node sits in the "enroll" provisioning state for >>>> ages and doesn't appear to ever get a meaningful power state (ever being >>>> after a five minute wait). There are still no logs in /var/log/ironic, and >>>> grepping for the node's uuid in /var/log/syslog returns zero log items. >>>> >>>> Your thoughts? >>>> >>>> Michael >>>> >>>> >>>> >>>> On Mon, Mar 5, 2018 at 7:04 AM, Mark Goddard wrote: >>>> >>>>> The ILO hardware type was also not loading because the required >>>>> management and power interfaces were not enabled. The patch should address >>>>> that but please let us know if there are further issues. >>>>> Mark >>>>> >>>>> >>>>> On 4 Mar 2018 7:59 p.m., "Michael Still" wrote: >>>>> >>>>> Replying to a single email because I am lazier than you. >>>>> >>>>> I would have included logs, except /var/log/ironic on the bifrost >>>>> machine is empty. There are entries in syslog, but nothing that seems >>>>> related (its all periodic task kind of stuff). >>>>> >>>>> However, Mark is right. I had an /etc/ironic/ironic.conf with "ucs" as >>>>> a hardware type. I've removed ucs entirely from that list and restarted >>>>> conductor, but that didn't help. I suspect https://review.opensta >>>>> ck.org/#/c/549318/3 is more subtle than that. I will patch in that >>>>> change and see if I can get things to work after a redeploy. >>>>> >>>>> Michael >>>>> >>>>> >>>>> >>>>> On Mon, Mar 5, 2018 at 5:45 AM, Mark Goddard >>>>> wrote: >>>>> >>>>>> Hi Michael, >>>>>> >>>>>> If you're using the latest release of biifrost I suspect you're >>>>>> hitting https://bugs.launchpad.net/bifrost/+bug/1752975. I've >>>>>> submitted anfox for review. >>>>>> >>>>>> For a workaround, modify /etc/ironic/ironic.conf, and set >>>>>> enabled_hardware_types=ipmi. >>>>>> >>>>>> Cheers, >>>>>> Mark >>>>>> >>>>>> On 4 Mar 2018 5:50 p.m., "Julia Kreger" >>>>>> wrote: >>>>>> >>>>>>> > No valid host was found. Reason: No conductor service registered >>>>>>> which >>>>>>> > supports driver agent_ipmitool. (HTTP 400) >>>>>>> > >>>>>>> > I can't see anything helpful in the logs. What driver should I be >>>>>>> using for >>>>>>> > bifrost? agent_ipmitool seems to be enabled in ironic.conf. >>>>>>> >>>>>>> Weird, I'm wondering what the error is in the conductor log. You can >>>>>>> try using "ipmi" for the hardware type that replaces >>>>>>> agent_ipmitool/pxe_ipmitool. 
>>>>>>> >>>>>>> -Julia >>>>>>> >>>>>>> ____________________________________________________________ >>>>>>> ______________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>> enstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Mar 5 11:06:38 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 5 Mar 2018 11:06:38 +0000 Subject: [openstack-dev] [nova][ptg] team photo Thursday at 11:10 AM In-Reply-To: <8AE0FD41-0C5B-4C48-A0C6-102987CB693D@gmail.com> References: <8AE0FD41-0C5B-4C48-A0C6-102987CB693D@gmail.com> Message-ID: <6CA81746-BC74-4843-8602-2DC77BEC39E6@gmail.com> Howdy everyone, Please find our team photos attached to this email. Cheers, -melanie -------------- next part -------------- A non-text attachment was scrubbed... Name: DSC_4374.NEF Type: application/octet-stream Size: 10133448 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: DSC_4375.NEF Type: application/octet-stream Size: 10111552 bytes Desc: not available URL: From andr.kurilin at gmail.com Mon Mar 5 12:26:25 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Mon, 5 Mar 2018 14:26:25 +0200 Subject: [openstack-dev] [nova][neutron][openstackclient] The way to assign floating IPs to VM Message-ID: Hi stackers! 
A year ago, Nova team decided to deprecate the addFixedIP, removeFixedIP, addFloatingIP, removeFloatingIP server action APIs and it was done[1]. It looks like not all the consumers paid attention to this change so after novaclient 10.0.0 release (which includes removal of these interfaces[2]) got in u-c file [3] I found broken code in my repo :( I tried to find an alternative way to associate a floating ip with an instance and I faced several new issues: - openstackclient * command `openstack server add floating ip` calls `add_floating_ip` method of novaclient's server object[4]. This method is missed in the latest novaclient (see [2]). It means that this command is broken now. - neutronclient * CLI is deprecated * the python binding accepts floating ip id (need to list all floating IPs to find the id) and port id to attach floating to (where should I take port id?!). So here we have 2 global issues: - openstackclient has a broken command (or I missed something?) - there is no easy way to associate a floating ip with a vm using CLI or via python. [1] https://github.com/openstack/python-novaclient/blob/master/releasenotes/notes/microversion-v2_44-d60c8834e436ad3d.yaml [2] https://github.com/openstack/python-novaclient/blob/master/releasenotes/notes/remove-virt-interfaces-add-rm-fixed-floating-398c905d9c91cca8.yaml [3] https://github.com/openstack/requirements/commit/c126685c2007c818e65c53cc9c32885fae16fd34 [4] https://github.com/openstack/python-openstackclient/blob/b10941ddf6f7f6894b7d87f25c173f227b111e4e/openstackclient/compute/v2/server.py#L266-L267 -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at est.fujitsu.com Mon Mar 5 12:57:45 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Mon, 5 Mar 2018 12:57:45 +0000 Subject: [openstack-dev] [monasca] PTG follow-up Message-ID: Hello everyone, I hope all of you attending the PTG in Dublin could meanwhile get safely home. For those not attending, a short update, because of Storm Emma red alert had been introduced last week in Ireland and the conference venue had to be closed on Thursday and Friday. We have decided to schedule an additional short session on Wednesday, 7th March, via GoToMeeting [1] to finish the discussion on the remaining topics and to do the tasks prioritization. Everyone is welcome to join. The topics are at the bottom of the etherpad [2]. The IRC team meeting is cancelled this week. The table for prioritization game [3] is filled with topics. Please review them and make your thoughts about which are important for you. Please also let me know if anything is missing. See you on Wednesday Witek P.S. Team photos are available here [4]. [1] https://global.gotomeeting.com/join/664699565 [2] https://etherpad.openstack.org/p/monasca-ptg-rocky [3] https://docs.google.com/spreadsheets/d/10yNylReIaIAINPBDb9IuASpGfuJg2f-hU9K2jpVxlyk/edit?usp=sharing [4] https://www.dropbox.com/sh/dtei3ovfi7z74vo/AAB6nLiArw8fYBiO-X_vDGyna?dl=0 From mriedemos at gmail.com Mon Mar 5 14:07:35 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Mar 2018 08:07:35 -0600 Subject: [openstack-dev] [nova][neutron][openstackclient] The way to assign floating IPs to VM In-Reply-To: References: Message-ID: On 3/5/2018 6:26 AM, Andrey Kurilin wrote: > A year ago, Nova team decided to deprecate the addFixedIP, > removeFixedIP, addFloatingIP, removeFloatingIP server action APIs and it > was done[1]. 
> It looks like not all consumers paid attention to this change, so after
> the novaclient 10.0.0 release (which includes the removal of these
> interfaces [2]) got into the u-c file [3], I found broken code in my
> repo :(
>
> I tried to find an alternative way to associate a floating IP with an
> instance and I faced several new issues:
>
> - openstackclient
>
> * the command `openstack server add floating ip` calls the
>   `add_floating_ip` method of novaclient's server object [4]. This method
>   is missing from the latest novaclient (see [2]). It means that this
>   command is broken now.
>
> - neutronclient
>
> * the CLI is deprecated
> * the python binding accepts a floating IP id (you need to list all
>   floating IPs to find the id) and a port id to attach the floating IP to
>   (where should I take the port id from?!).
>
> So here we have 2 global issues:
> - openstackclient has a broken command (or did I miss something?)
> - there is no easy way to associate a floating IP with a VM using the CLI
>   or via python.
>
> [1] https://github.com/openstack/python-novaclient/blob/master/releasenotes/notes/microversion-v2_44-d60c8834e436ad3d.yaml
> [2] https://github.com/openstack/python-novaclient/blob/master/releasenotes/notes/remove-virt-interfaces-add-rm-fixed-floating-398c905d9c91cca8.yaml
> [3] https://github.com/openstack/requirements/commit/c126685c2007c818e65c53cc9c32885fae16fd34
> [4] https://github.com/openstack/python-openstackclient/blob/b10941ddf6f7f6894b7d87f25c173f227b111e4e/openstackclient/compute/v2/server.py#L266-L267

I mentioned the related issue back in January:

http://lists.openstack.org/pipermail/openstack-dev/2018-January/126741.html

Adding a floating IP to an instance is possible using the OSC CLI; it's
essentially something like:

a) get the server id (openstack server show/list)
b) get the port id using the server id (openstack port list --device-id $server_id)
c) assign the floating IP to the port (openstack floating ip set --port $port_id)
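If you need to do the same via the Python bindings rather than the CLI,
something along these lines should work against Neutron. This is a rough,
untested sketch -- the auth options, server id and address below are
placeholders you would fill in with your own values:

    from keystoneauth1 import loading, session
    from neutronclient.v2_0 import client as neutron_client

    # Placeholder credentials -- substitute your own cloud's values.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_name='Default', project_domain_name='Default')
    neutron = neutron_client.Client(session=session.Session(auth=auth))

    server_id = 'REPLACE-ME'      # from `openstack server list`
    address = '203.0.113.10'      # the floating IP you want to attach

    # b) find the port attached to the server
    port = neutron.list_ports(device_id=server_id)['ports'][0]

    # c) point the floating IP at that port
    fip = neutron.list_floatingips(
        floating_ip_address=address)['floatingips'][0]
    neutron.update_floatingip(fip['id'],
                              {'floatingip': {'port_id': port['id']}})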
--
Thanks,

Matt

From bdobreli at redhat.com  Mon Mar  5 14:56:59 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Mon, 5 Mar 2018 15:56:59 +0100
Subject: [openstack-dev] [tripleo] upgrading to a containerized undercloud
In-Reply-To:
References:
Message-ID:

On 3/4/18 10:29 PM, Emilien Macchi wrote:
> The use case that I'm working on right now is the following:
>
>   As an operator, I would like to upgrade my non-containerized undercloud
>   running on Queens to a containerized undercloud running on Rocky.
>   Also, I would like to maintain the exact same command to upgrade my
>   undercloud, which is: openstack undercloud upgrade (with --use-heat to
>   containerize it).
>
> The work has been tracked here:
> https://trello.com/c/nFbky9Uk/5-upgrade-support-from-instack-undercloud
>
> But here's an update and some open discussion before we continue to make
> progress.
>
> ## Workflow
>
> This is what I've found the easiest to implement and maintain:
>
> 1) Update python-tripleoclient-* and tripleo-heat-templates.

Just a note that those need to be installed first, though you have this
covered: https://review.openstack.org/#/c/549624/7/tripleoclient/v1/undercloud.py at 100

> 2) Run openstack overcloud container prepare.
> 3) Run openstack undercloud upgrade --use-heat, that underneath will: stop
> non-containerized services, upgrade all packages & dependencies and deploy
> a containerized undercloud.

As we have discussed, and as you noted for
https://review.openstack.org/#/c/549609/, we're better off changing the
workflow to run the upgrade_tasks so we avoid code duplication.

> Note: the data isn't touched, so when the upgrade is done, the undercloud
> is just upgraded to Rocky, and containerized.
>
> ## Blockers encountered
>
> 1) Passwords were re-generated during the containerization, will be fixed
> by: https://review.openstack.org/#/c/549600/
> 2) The Neutron DB name was different in instack-undercloud. The DB will be
> renamed by https://review.openstack.org/#/c/549609/
> 3) Upgrade logic will live in tripleoclient:
> https://review.openstack.org/#/c/549624/ (note that it's small)
>
> ## Testing
>
> I'm using https://review.openstack.org/#/c/549611/ for testing but I'm
> also deploying in my local environment. I've been upgrading Pike to Queens
> successfully, when applying my patches.
>
> ## Roadmap
>
> I would like us to solve the containerized undercloud upgrade case by
> rocky-m1, and have by the end of m1 a CI job that actually tests the
> operator workflow.
>
> I'll need some feedback and reviews on the proposal.
> Thanks in advance,
> --
> Emilien Macchi

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From scheuran at linux.vnet.ibm.com  Mon Mar  5 15:02:00 2018
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Mon, 5 Mar 2018 16:02:00 +0100
Subject: [openstack-dev] [nova][thirdparty-ci] s390x third party CI broken
In-Reply-To:
References:
Message-ID: <1D5B1AB8-F798-4055-8E79-3B0B4C0E05B1@linux.vnet.ibm.com>

The CI seems to be running again, but with a reduced number of tests for now
(only nova.api; scenario tests are excluded). I will continue working on
getting all tests executing again.

---
Andreas Scheuring (andreas_s)

On 5. Mar 2018, at 09:35, Andreas Scheuring wrote:

Hi all, the s390x nova CI is currently broken due to the bump of the grpcio
version. I'm working on fixing it...

Regards,

---
Andreas Scheuring (andreas_s)

From dtroyer at gmail.com  Mon Mar  5 15:17:55 2018
From: dtroyer at gmail.com (Dean Troyer)
Date: Mon, 5 Mar 2018 09:17:55 -0600
Subject: [openstack-dev] [nova][neutron][openstackclient] The way to assign floating IPs to VM
In-Reply-To:
References:
Message-ID:

On Mon, Mar 5, 2018 at 8:07 AM, Matt Riedemann wrote:
> On 3/5/2018 6:26 AM, Andrey Kurilin wrote:
>> - openstackclient
>>
>> * the command `openstack server add floating ip` calls the
>>   `add_floating_ip` method of novaclient's server object [4]. This method
>>   is missing from the latest novaclient (see [2]). It means that this
>>   command is broken now.
>>
>> So here we have 2 global issues:
>> - openstackclient has a broken command (or did I miss something?)
>> - there is no easy way to associate a floating IP with a VM using the CLI
>>   or via python.
> I mentioned the related issue back in January:
>
> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126741.html
>
> Adding a floating IP to an instance is possible using the OSC CLI; it's
> essentially something like:
>
> a) get the server id (openstack server show/list)
> b) get the port id using the server id (openstack port list --device-id $server_id)
> c) assign the floating IP to the port (openstack floating ip set --port $port_id)

We keep removing Python API bindings from client libraries that are still in
use for old clouds that are still in much wider use than we would like. Why
do we not give a rat's ass about our users? Especially when some deployers
have multiple clouds lying about, requiring them to maintain multiple venvs
of CLIs just to be able to work on their clouds and on migrations to the cool
new stuff is stupid.

OSC is not done because I have about 3 hours a week left to work on it.
Continued shit like this isn't helping me want to keep going. Maybe my brain
is just snow-fried. And for the love of all the snow in Dublin, please,
NOBODY USE THE SDK IN A SERVICE. Keeping service assumptions out of
client-side stuff is the biggest reason OSC NEEDS to get changed over to the
SDK, like, 2 years ago. Then I'll not give a rat's ass about the legacy
python client libs.

dt

--
Dean Troyer
dtroyer at gmail.com

From juliaashleykreger at gmail.com  Mon Mar  5 15:22:28 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Mon, 05 Mar 2018 15:22:28 +0000
Subject: [openstack-dev] [ironic] Polling for new meeting time?
In-Reply-To: <372f3de5-5d8f-6281-2fed-a26b63014775@redhat.com>
References: <372f3de5-5d8f-6281-2fed-a26b63014775@redhat.com>
Message-ID:

On Mon, Mar 5, 2018 at 1:47 AM Dmitry Tantsur wrote:
>> Please don't feel the need to select times that would be burdensome to
>> yourself. This is only to gather information as to the time of day that
>> would be ideal for everyone. All times are set as UTC on the poll.
>
> Are you sure? I'm asking because the last time I checked Doodle created
> all polls in some specific time zone, defaulting to your local time. Then
> for each participant it converts the times to their local time. E.g. for
> me the time span is 1am to 12am Berlin time (UTC+1), is that what you
> expected?

If memory serves, it is user preference based upon prior usage, although I
created it in UTC. I guess I kind of forgot that they automatically do
timezone adjustment now. It should do the right thing, and people should set
their time zone and times accordingly.

From slawek at kaplonski.pl  Mon Mar  5 15:23:51 2018
From: slawek at kaplonski.pl (Sławomir Kapłoński)
Date: Mon, 5 Mar 2018 16:23:51 +0100
Subject: [openstack-dev] [neutron] Bug deputy report - 26.02-5.03.2018
Message-ID: <1BFEBFB9-E966-4E01-9EA5-44276A19D4DB@kaplonski.pl>

Hi,

I was on bug deputy duty during the last week. Below is a short summary of
the reported bugs.
* For the drivers team there are two RFE bugs:
  [RFE] Support stateless security groups - https://bugs.launchpad.net/neutron/+bug/1753466
  [RFE] (Operator-only) Add support 'snat' for loggable resource type - https://bugs.launchpad.net/neutron/+bug/1752290

* One critical bug reported: https://bugs.launchpad.net/neutron/+bug/1753507
  - it is an error in neutron-fwaas after an upgrade from Pike to Queens

* Two bugs marked as High:
  * https://bugs.launchpad.net/neutron/+bug/1743425 - it is about a crash of
    neutron-server during service restart if vlan_range was changed,
  * https://bugs.launchpad.net/neutron/+bug/1752006 - a crash of the neutron
    linuxbridge agent if fwaas_v2 is also enabled - I think the patch for
    that one is ready for review,

Other bugs are marked as medium or low:
* https://bugs.launchpad.net/neutron/+bug/1752274 - merge API references
  from the wiki and neutron-specs into neutron-lib,
* https://bugs.launchpad.net/neutron/+bug/1752903 - an issue with allocating
  an IPv6 address for a FIP if there is an IPv6 subnet in the network -
  someone familiar with IPAM and L3 should probably take a look at this one,
* https://bugs.launchpad.net/neutron/+bug/1753384 - an issue with updating
  FIP QoS; a patch is already proposed for this one,
* https://bugs.launchpad.net/neutron/+bug/1752275 - it's about writing a
  document on the API reference guidelines,

There is also one bug reported for LBaaS v2:
https://bugs.launchpad.net/neutron/+bug/1753380 and here is my question:
should we redirect it to the Octavia/LBaaS team's StoryBoard?

--
Best regards
Slawek Kaplonski
slawek at kaplonski.pl

From juliaashleykreger at gmail.com  Mon Mar  5 15:30:13 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Mon, 05 Mar 2018 15:30:13 +0000
Subject: [openstack-dev] [ironic] Polling for new meeting time?
In-Reply-To: <6a718c23-8249-eb00-5d07-f1f7306e7c1c@redhat.com>
References: <6a718c23-8249-eb00-5d07-f1f7306e7c1c@redhat.com>
Message-ID:

On Mon, Mar 5, 2018 at 1:48 AM Dmitry Tantsur wrote:
> On 03/04/2018 09:46 PM, Zhipeng Huang wrote:
> > Thx Julia,
> >
> > Another option is instead of changing the meeting time, you could
> > establish a tick-tock meeting, for example odd weeks for US-Euro
> > friendly times and even weeks for US-Asia friendly times.
>
> We tried that roughly two years ago, and it did not work because very few
> people showed up at the APAC time. I think the goal of this poll is to
> figure out how many people would show up now.

Exactly, the overlap is critical to understand. My preference is to try and
keep one meeting time. If there are no good times, then we might want to
consider something like office hours, but we will need to adapt planning and
coordination. I'm not sure I've had enough coffee yet to think about that
further. :)

From jungleboyj at gmail.com  Mon Mar  5 15:44:56 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Mon, 5 Mar 2018 09:44:56 -0600
Subject: [openstack-dev] [cinder][ptg] Team Photos ...
Message-ID: <193f4269-0e88-c28d-506b-de362f7f7641@gmail.com>

Team,

Attached are the team photos.

Thanks!
Jay
From mark at stackhpc.com  Mon Mar  5 15:52:35 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Mon, 5 Mar 2018 15:52:35 +0000
Subject: [openstack-dev] [kayobe] development process changes
Message-ID:

Hi Kayobe team,

As many of you are aware, we are making the changes necessary [1] to become
an OpenStack-related project. This will affect development and use of kayobe
in a number of ways.

Storyboard

We'll be moving from using Github as our issue & feature tracker to
Storyboard [2]. See the Storyboard documentation [3] for details. We'll need
to experiment a little to find a usage model that suits our needs. The
project's home page is at [4].

Git

The kayobe repository [5] is now hosted under the openstack namespace. We'll
use gerrit for submitting patches, and will adopt the standard OpenStack
workflow [6]. Changes submitted to the stackhpc Github repo after the
transition will be rejected.

CI

Currently we use TravisCI for continuous integration. After the transition
we'll start using the OpenStack CI infrastructure, including the shiny new
Zuul v3 [7]. There will be a bit of a learning curve here, and it may take a
little time to reach test coverage parity with TravisCI. All CI tests are
currently run using tox, so hopefully we can leverage existing Zuul job
templates here.

IRC

We've had the #openstack-kayobe channel for a few weeks now, but just a
reminder that it's there. Please share any problems during the transition
period in IRC and we can tackle them together.

Thanks to everyone who's helped to get kayobe to this point. Looking forward
to seeing where it goes next!

[1] https://docs.openstack.org/infra/manual/creators.html
[2] https://storyboard.openstack.org
[3] https://docs.openstack.org/infra/storyboard/
[4] https://storyboard.openstack.org/#!/project/928
[5] https://git.openstack.org/cgit/openstack/kayobe
[6] https://docs.openstack.org/infra/manual/developers.html
[7] https://docs.openstack.org/infra/zuul/

Cheers,
Mark

From thingee at gmail.com  Mon Mar  5 15:56:25 2018
From: thingee at gmail.com (Mike Perez)
Date: Mon, 5 Mar 2018 07:56:25 -0800
Subject: [openstack-dev] [DriverLog] DriverLog future
In-Reply-To:
References:
Message-ID: <20180305155625.GE32596@gmail.com>

On 11:44 Mar 01, Ilya Shakhat wrote:
> Hi!
>
> For those who do not know, DriverLog is a community registry of 3rd-party
> drivers for OpenStack hosted together with Stackalytics [1]. The project
> started 4 years ago and by now contains information about 220 drivers. The
> data from DriverLog is also consumed by the official Marketplace [2].
>
> Here I would like to discuss directions for DriverLog and the 3rd-party
> driver registry in general.
>
> 1) Being a single community-wide registry was good initially; it allowed
> us to quickly collect descriptions for most drivers in a single place. But
> in the long term this approach stopped working - not many projects
> remember to update information stored in some random place, right?
>
> Mike already pointed to this problem a year ago [3] and the idea was to
> move the driver list to projects (and thus move responsibility to them
> too) and have an aggregated list of drivers produced by infra. Do we have
> any progress in this direction? Is it time to start deprecating DriverLog
> and consider a transition during the Rocky release?
> 2) As a project with a 4-year history, DriverLog's list has only grown
> over time, with quite few removals. It still has drivers whose latest
> supported version is Liberty, and drivers for unmaintained projects (e.g.
> Fuel). While it may make sense to keep all of them for operators who run
> older versions, it can create the impression that the majority of drivers
> are old. One solution for this is to show by default only drivers for
> active releases (Pike and later). If done, this will apply to both
> DriverLog and the Marketplace.
>
> Any other ideas or suggestions?

Hey Ilya,

Yes there is progress. Thanks to others who have helped me, we have a project
called sphinx-feature-classification [0]. This allows a project to use a
Sphinx directive to generate a support matrix based on drivers recorded in an
INI file [1] which lives in the project's repository.
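If memory serves, wiring this up is just a Sphinx extension plus a directive
pointing at the INI file. Roughly, in a project's doc/source/conf.py (the
extension name here is from memory -- double-check it against the usage doc
in [1] before copying):

    # doc/source/conf.py -- illustrative sketch only
    extensions = [
        'sphinx_feature_classification.support_matrix',
    ]

An RST page then renders the matrix with the support_matrix directive aimed
at the INI file in the repository.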
>> >> >>> On Fri, Feb 9, 2018 at 5:51 PM, Pino de Candia < >>> giuseppe.decandia at gmail.com> wrote: >>> >>>> Hi Folks, >>>> >>>> here are the slides for the Tatu presentation: https://docs.goo >>>> gle.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM >>>> >>>> I meant to record the demo video as well but I haven't gotten around to >>>> editing all the bits. Please stay tuned. >>>> >>>> thanks, >>>> Pino >>>> >>>> >>>> On Tue, Feb 6, 2018 at 10:52 AM, Giuseppe de Candia < >>>> giuseppe.decandia at gmail.com> wrote: >>>> >>>>> Hi Luke, >>>>> >>>>> Fantastic! An hour would be great if the schedule allows - there are >>>>> lots of different aspects we can dive into and potential future directions >>>>> the project can take. >>>>> >>>>> thanks! >>>>> Pino >>>>> >>>>> >>>>> >>>>> On Tue, Feb 6, 2018 at 10:36 AM, Luke Hinds wrote: >>>>> >>>>>> >>>>>> >>>>>> On Tue, Feb 6, 2018 at 4:21 PM, Giuseppe de Candia < >>>>>> giuseppe.decandia at gmail.com> wrote: >>>>>> >>>>>>> Hi Folks, >>>>>>> >>>>>>> I know the request is very late, but I wasn't aware of this SIG >>>>>>> until recently. Would it be possible to present a new project to the >>>>>>> Security SIG at the PTG? I need about 30 minutes. I'm hoping to drum up >>>>>>> interest in the project, sign on users and contributors and get feedback. >>>>>>> >>>>>>> For the past few months I have been working on a new project - Tatu >>>>>>> [1]- to automate the management of SSH certificates (for both users and >>>>>>> hosts) in OpenStack. Tatu allows users to generate SSH certificates with >>>>>>> principals based on their Project role assignments, and VMs automatically >>>>>>> set up their SSH host certificate (and related config) via Nova vendor >>>>>>> data. The project also manages bastions and DNS entries so that users don't >>>>>>> have to assign Floating IPs for SSH nor remember IP addresses. >>>>>>> >>>>>>> I have a working demo (including Horizon panels [2] and OpenStack >>>>>>> CLI [3]), but am still working on the devstack script and patches [4] to >>>>>>> get Tatu's repositories into OpenStack's GitHub and Gerrit. I'll try to >>>>>>> post a demo video in the next few days. >>>>>>> >>>>>>> best regards, >>>>>>> Pino >>>>>>> >>>>>>> >>>>>>> References: >>>>>>> >>>>>>> 1. https://github.com/pinodeca/tatu (Please note this is still >>>>>>> very much a work in progress, lots of TODOs in the code, very little >>>>>>> testing and documentation doesn't reflect the latest design). >>>>>>> 2. https://github.com/pinodeca/tatu-dashboard >>>>>>> 3. https://github.com/pinodeca/python-tatuclient >>>>>>> 4. https://review.openstack.org/#/q/tatu >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> Hi Giuseppe, of course you can! I will add you to the agenda. We >>>>>> could get your an hour if it allows more time for presenting and post >>>>>> discussion? >>>>>> >>>>>> We will be meeting in an allocated room on Monday (details to follow). >>>>>> >>>>>> https://etherpad.openstack.org/p/security-ptg-rocky >>>>>> >>>>>> Luke >>>>>> >>>>>> >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Jan 31, 2018 at 12:03 PM, Luke Hinds >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> On Mon, Jan 29, 2018 at 2:29 PM, Adam Young >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Bug 968696 and System Roles. Needs to be addressed across the >>>>>>>>> Service catalog. >>>>>>>>> >>>>>>>> >>>>>>>> Thanks Adam, will add it to the list. I see it's been open since >>>>>>>> 2012! 
>>>>>>>>> On Mon, Jan 29, 2018 at 7:38 AM, Luke Hinds wrote:
>>>>>>>>>
>>>>>>>>>> Just a reminder, as we have not had much uptake yet..
>>>>>>>>>>
>>>>>>>>>> Are there any projects (new and old) that would like to make use
>>>>>>>>>> of the security SIG, either for gaining another perspective on
>>>>>>>>>> security challenges / blueprints etc or for help gaining some
>>>>>>>>>> cross project collaboration?
>>>>>>>>>>
>>>>>>>>>> On Thu, Jan 11, 2018 at 3:33 PM, Luke Hinds wrote:
>>>>>>>>>>
>>>>>>>>>>> Hello All,
>>>>>>>>>>>
>>>>>>>>>>> I am seeking topics for the PTG from all projects, as this will
>>>>>>>>>>> be where we try out our new form of being a SIG.
>>>>>>>>>>>
>>>>>>>>>>> For this PTG, we hope to facilitate more cross project
>>>>>>>>>>> collaboration topics now that we are a SIG, so if your project
>>>>>>>>>>> has a security need / problem / proposal then please do use the
>>>>>>>>>>> security SIG room where a larger audience may be present to
>>>>>>>>>>> help solve problems and gain x-project consensus.
>>>>>>>>>>>
>>>>>>>>>>> Please see our PTG planning pad [0] where I encourage you to
>>>>>>>>>>> add to the topics.
>>>>>>>>>>>
>>>>>>>>>>> [0] https://etherpad.openstack.org/p/security-ptg-rocky
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Luke Hinds
>>>>>>>>>>> Security Project PTL

--
Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat
e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483
From zhipengh512 at gmail.com  Mon Mar  5 16:32:12 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Tue, 6 Mar 2018 00:32:12 +0800
Subject: [openstack-dev] [ironic] Polling for new meeting time?
In-Reply-To:
References: <6a718c23-8249-eb00-5d07-f1f7306e7c1c@redhat.com>
Message-ID:

understood :)

On Mon, Mar 5, 2018 at 11:30 PM, Julia Kreger wrote:

> On Mon, Mar 5, 2018 at 1:48 AM Dmitry Tantsur wrote:
>
>> On 03/04/2018 09:46 PM, Zhipeng Huang wrote:
>> > Thx Julia,
>> >
>> > Another option is instead of changing the meeting time, you could
>> > establish a tick-tock meeting, for example odd weeks for US-Euro
>> > friendly times and even weeks for US-Asia friendly times.
>>
>> We tried that roughly two years ago, and it did not work because very
>> few people showed up at the APAC time. I think the goal of this poll is
>> to figure out how many people would show up now.
>
> Exactly, the overlap is critical to understand. My preference is to try
> and keep one meeting time. If there are no good times, then we might want
> to consider something like office hours, but we will need to adapt
> planning and coordination. I'm not sure I've had enough coffee yet to
> think about that further. :)

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From giuseppe.decandia at gmail.com  Mon Mar  5 16:35:00 2018
From: giuseppe.decandia at gmail.com (Pino de Candia)
Date: Mon, 5 Mar 2018 10:35:00 -0600
Subject: [openstack-dev] [docs] Permissions to upload a PNG for my new project page
Message-ID:

Hi Folks,

I'm creating a project page for Tatu (SSH as a Service). I created a link to
a logo in my page source like this:

[[File:Project_Tatu_Logo.png|right]]

But when I try to upload the file I get this error:

"You do not have permission to upload this file, for the following reason:
The action you have requested is limited to users in one of the groups:
Administrators, Autopatrolled users."

Any guidance/help is much appreciated!
Pino

From lhinds at redhat.com  Mon Mar  5 16:41:39 2018
From: lhinds at redhat.com (Luke Hinds)
Date: Mon, 5 Mar 2018 16:41:39 +0000
Subject: [openstack-dev] [security] Security SIG Meeting Time Change
Message-ID:

Hi All,

As agreed during the PTG, we will switch Thursday's meeting from 17:00 UTC to
15:00 UTC.

-- Luke

From andr.kurilin at gmail.com  Mon Mar  5 16:56:27 2018
From: andr.kurilin at gmail.com (Andrey Kurilin)
Date: Mon, 5 Mar 2018 18:56:27 +0200
Subject: [openstack-dev] [nova][neutron][openstackclient] The way to assign floating IPs to VM
In-Reply-To:
References:
Message-ID:

Matt, thanks! Unfortunately, I missed your thread.
Dean, I propose a patch (https://review.openstack.org/549820) which should
work for all environments where neutron is installed. Please check it when
you have free time.

2018-03-05 17:17 GMT+02:00 Dean Troyer:

> On Mon, Mar 5, 2018 at 8:07 AM, Matt Riedemann wrote:
> > On 3/5/2018 6:26 AM, Andrey Kurilin wrote:
> >> - openstackclient
> >>
> >> * the command `openstack server add floating ip` calls the
> >>   `add_floating_ip` method of novaclient's server object [4]. This
> >>   method is missing from the latest novaclient (see [2]). It means
> >>   that this command is broken now.
> >>
> >> So here we have 2 global issues:
> >> - openstackclient has a broken command (or did I miss something?)
> >> - there is no easy way to associate a floating IP with a VM using the
> >>   CLI or via python.
> >
> > I mentioned the related issue back in January:
> >
> > http://lists.openstack.org/pipermail/openstack-dev/2018-January/126741.html
> >
> > Adding a floating IP to an instance is possible using the OSC CLI; it's
> > essentially something like:
> >
> > a) get the server id (openstack server show/list)
> > b) get the port id using the server id (openstack port list --device-id $server_id)
> > c) assign the floating IP to the port (openstack floating ip set --port $port_id)
>
> We keep removing Python API bindings from client libraries that are still
> in use for old clouds that are still in much wider use than we would
> like. Why do we not give a rat's ass about our users? Especially when
> some deployers have multiple clouds lying about, requiring them to
> maintain multiple venvs of CLIs just to be able to work on their clouds
> and on migrations to the cool new stuff is stupid.
>
> OSC is not done because I have about 3 hours a week left to work on it.
> Continued shit like this isn't helping me want to keep going. Maybe my
> brain is just snow-fried. And for the love of all the snow in Dublin,
> please, NOBODY USE THE SDK IN A SERVICE. Keeping service assumptions out
> of client-side stuff is the biggest reason OSC NEEDS to get changed over
> to the SDK, like, 2 years ago. Then I'll not give a rat's ass about the
> legacy python client libs.
>
> dt
>
> --
> Dean Troyer
> dtroyer at gmail.com

--
Best regards,
Andrey Kurilin.

From juliaashleykreger at gmail.com  Mon Mar  5 16:59:08 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Mon, 05 Mar 2018 16:59:08 +0000
Subject: [openstack-dev] [ironic] Cancelling this week's ironic meeting - Next meeting March 12th
Message-ID:

Greetings everyone!

Given the weather impact upon travel out of Dublin at the end of last week
(and this week, at this point...), we're going to cancel today's meeting.

The rough list of priorities for this week:

* Rocky Priorities - Expect a patch to be proposed in the next 24 hours.
* Rescue testing patches
* Pick up graphical console specs and patches in order to get a list going

I want to thank everyone that attended the PTG and those that held the fort
down while many of us were busy last week. I'll ping contributors
individually regarding items that need more definition to proceed into this
cycle as the week goes on.
Otherwise, have a wonderful week everyone!

-Julia

From pkovar at redhat.com  Mon Mar  5 17:55:23 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Mon, 5 Mar 2018 18:55:23 +0100
Subject: [openstack-dev] [docs][i18n][ptg] Rocky PTG Summary
Message-ID: <20180305185523.c89550fdccbc28c476eb747b@redhat.com>

Hi all,

Just wanted to share a summary of docs- and i18n-related meetings and
discussions we had in Dublin last week during the Rocky Project Teams
Gathering.

I think we can say both our teams, docs and i18n, are now pretty stable
member-wise, following new processes and goals set up earlier in the Pike
cycle. At the PTG, this was reflected in a rather fluctuating attendance of
3 to 12 people during the first two days. The meetings related to
contributor docs and community onboarding proved the most popular and
attracted more attention than others.

As with previous PTGs, it is important to note that many of our cores
couldn't attend, sadly. Traveling to OpenStack events remains a challenge
for many, and this was again mentioned during the PTG feedback session.

The overall schedule for all our sessions, with additional comments, can be
found here:

https://etherpad.openstack.org/p/docs-i18n-ptg-rocky

Our team picture from the Croke Park stadium (with quite a few members
missing) can be found here (thanks Kendall!):

https://pmkovar.fedorapeople.org/ptg/docs-ptg-dublin-2018.jpg

To summarize what I found most important:

GOVERNANCE DOCS TAGS

We used to have the docs:follows-policy governance tag which in fact wasn't
implemented by the project teams, and we eventually retired it in Pike.
Going forward, we want to propose several governance tags for projects to
use. Each tag would identify conformance to a specific area of content
management and development, such as following a common glossary and
terminology, tested installation procedures, a common content structure,
etc. These would serve as signs of maturity for each project.

CONTENT REUSE AND CROSS-REFERENCING

This is relevant to use cases such as sharing glossary term definitions
across project team docs and reusing common content for installation guides
across multiple releases. There are several alternatives we can look at
further, such as automatically submitting content changes between repos with
a bot, using Sphinx extensions (sphinx.ext.intersphinx), etc. More guidance
will need to be provided.
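To give a flavor of the intersphinx option, the consuming project's
doc/source/conf.py would carry something like the following (a minimal
sketch only; the mapping name and URL are placeholders, not a decided
layout):

    # doc/source/conf.py of a project reusing shared content
    extensions = ['sphinx.ext.intersphinx']

    # Map a short name to published docs that define the shared labels.
    intersphinx_mapping = {
        'glossary': ('https://docs.openstack.org/example-shared-guide/', None),
    }

RST files could then cross-reference the shared content with roles like
:ref:`glossary:some-label` instead of copying the text around.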
COMMON ORGANIZATION OF CONTENT

Some of the feedback we received from developers at the PTG was centered
around offering more guidance on the common organization of content in the
most popular pages across project team docs. These certainly include landing
pages and front pages for each content category (installation, usage,
administration, reference, contribution, etc.).

SITE ANALYTICS

This is related to the previous point in that having access to some of the
site analytics data for docs.openstack.org would help the docs project and
project teams determine the most popular content, content gaps, the keywords
users use when searching for content, etc. We discussed this with a number
of people from the Foundation.

INSTALLATION GUIDE TESTING

During the last cycles, there's been little interest from the community in
testing installation procedures in an organized, high-level manner. For
Rocky, we will instead focus on providing more guidance on how to test
procedures on an individual level, using Gerrit dashboards to track changes,
etc.

CONTRIBUTOR GUIDE

We met with some of the contributor-guide team members to discuss content
restructuring and reuse, adding more content, and cleaning up existing
contributor docs, such as the project-team-guide. There's also a First
Contact Special Interest Group etherpad that provides more information on
the subject of onboarding:

https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG

TRANSLATIONS

Our i18n crew worked on enabling project team docs to be translatable,
starting with the openstack-ansible project as a pilot.

THAT'S IT?

Please add to the list if I missed anything important, particularly for
i18n.

Thank you to everybody who attended the sessions, and a special thanks goes
to all the PTG organizers and the local staff who handled the Beast from the
East combined with storm Emma in Dublin in a truly professional manner!

Hope to see more of you at the next PTG in Secret Name of Next PTG Location!

Cheers,
pk

From kennelson11 at gmail.com  Mon Mar  5 20:00:19 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Mon, 05 Mar 2018 20:00:19 +0000
Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions
Message-ID:

Hello Everyone :)

It was wonderful to see and talk with so many of you last week! For those
that couldn't attend our whole day of chats, or those that couldn't attend
at all, I thought I would put forth a summary of our discussions, which were
mostly noted in the etherpad [1].

#Contributor Guide#

- Walkthrough: We walked through every section of what exists and came up
with a variety of improvements to what is there. Most of these items have
been added to our StoryBoard project [2]. This came up again Tuesday in the
docs sessions and I have added those items to StoryBoard as well.

- Google Analytics: We discussed that we should do something about getting
the contributor portal [3] to appear higher in Google searches about
onboarding. Not sure what all this entails. NEEDS AN OWNER IF ANYONE WANTS
TO VOLUNTEER.

#Mission Statement#

We updated our mission statement [4]! It now states: To provide a place for
new contributors to come for information and advice. This group will also
analyze and document successful contribution models while seeking out and
providing information to new members of the community.

#Weekly Meeting#

We discussed beginning a weekly meeting - optimized for APAC/Europe - and
settled on 800 UTC in #openstack-meeting on Wednesdays. Proposed here [5].
For now I added a section to our wiki for agenda organization [6]. The two
main items we want to cover on a weekly basis are new contributor patches in
gerrit and whether anything has come up on ask.openstack.org about
contributing, so those will be standing agenda items.

#Forum Session#

We discussed proposing some forum sessions in order to get more involvement
from operators. Currently, our activities focus on development and we would
like to diversify. When this SIG was first proposed we wanted to have two
chairs - one to represent developers and one to represent operators. We will
propose a session or two when the call for forum proposals goes out (it
should be today).

#IRC Channels#

We want to get rid of #openstack-101 and begin using #openstack-dev instead.
The 101 channel isn't watched closely enough anymore and it makes more sense
to move onboarding activities (like in OpenStack Upstream Institute) to a
channel where there are people that can answer questions, rather than asking
them to move to a new channel.
For those concerned about noise, OUI is run the weekend before the Summit,
when most people are traveling to the Summit anyway.

#Ongoing Onboarding Efforts#

- GSOC: Unfortunately we didn't get accepted this year. We will try again
next year.

- Outreachy: Applications for the next round of interns are due March 22nd,
2018 [7]. Decisions will be made by April and then internships run May to
August.

- WoO Mentoring: The format of mentoring is changing from 1x1 to cohorts
focused on a single goal. If you are interested in helping out, please
contact me! I NEED HELP :)

- Contributor guide: Please see the above section.

- OpenStack Upstream Institute: It will be run, as usual, the weekend before
the Summit in Vancouver. Depending on how much progress is made on the
contributor guide, we will make use of it instead of slides as in previous
renditions. There have also been a number of OpenStack Days requesting we
run it there as well. More details of those to come.

#Project Liaisons#

The list is filling out nicely, but we still need more coverage. If you know
someone from a project not listed that might be willing to help, please
reach out to them and get them added to our list [8].

I thiiiiiink that is just about everything. Hopefully I at least covered
everything important :)

Thanks Everyone!

- Kendall Nelson (diablo_rojo)

[1] PTG Etherpad https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG
[2] StoryBoard Tracker https://storyboard.openstack.org/#!/project/913
[3] Contributor Portal https://www.openstack.org/community/
[4] Mission Statement Update https://review.openstack.org/#/c/548054/
[5] Meeting Slot Proposal https://review.openstack.org/#/c/549849/
[6] Meeting Agenda https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda
[7] Outreachy https://www.outreachy.org/apply/
[8] Project Liaisons https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons

From alee at redhat.com  Mon Mar  5 20:16:09 2018
From: alee at redhat.com (Ade Lee)
Date: Mon, 05 Mar 2018 15:16:09 -0500
Subject: [openstack-dev] [barbican] NEW weekly meeting time
In-Reply-To: <1518792130.19501.1.camel@redhat.com>
References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> <1518792130.19501.1.camel@redhat.com>
Message-ID: <1520280969.25743.54.camel@redhat.com>

Based on a few replies, we'll try moving the Barbican weekly meeting to

3 am UTC Tuesday == 10 pm EST Monday == 11 am CST (China) Tuesday

starting Tuesday March 12 2018 (next week).

See you then!
Ade

On Fri, 2018-02-16 at 09:42 -0500, Ade Lee wrote:
> Thanks Jiong,
>
> Preference noted. Anyone else want to make the meeting time switch?
> (Or prefer not to).
>
> Ade
>
> On Wed, 2018-02-14 at 14:13 +0800, Jiong Liu wrote:
> > Hi Ade,
> >
> > Thank you for proposing this change!
> > I'm in China, and the second time slot works better for me.
> >
> > Regards,
> > Jiong
> >
> > > Message: 35
> > > Date: Tue, 13 Feb 2018 10:17:59 -0500
> > > From: Ade Lee
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > Subject: [openstack-dev] [barbican] weekly meeting time
> > > Message-ID: <1518535079.22990.9.camel at redhat.com>
> > > Content-Type: text/plain; charset="UTF-8"
> > >
> > > Hi all,
> > >
> > > The Barbican weekly meeting has been fairly sparsely attended for a
> > > little while now, and the most active contributors these days appear
> > > to be in Asia.
> > > It's time to consider moving the weekly meeting to a time when more
> > > contributors can attend. I'm going to propose a couple of times below
> > > to start out.
> > >
> > > 2 am UTC Tuesday == 9 pm EST Monday == 10 am CST (China) Tuesday
> > > 3 am UTC Tuesday == 10 pm EST Monday == 11 am CST (China) Tuesday
> > >
> > > Feel free to propose other days/times.
> > >
> > > Thanks,
> > > Ade
> > >
> > > P.S. Until decided otherwise, the Barbican meeting remains on Mondays
> > > at 2000 UTC

From alee at redhat.com  Mon Mar  5 20:29:01 2018
From: alee at redhat.com (Ade Lee)
Date: Mon, 05 Mar 2018 15:29:01 -0500
Subject: [openstack-dev] [barbican] priorities/tracker for Rocky
Message-ID: <1520281741.25743.60.camel@redhat.com>

Hi all,

I have started a tracker etherpad with some of the features/bugs that we
might want to track for Rocky. Please take a look and see if there is
anything that you would like to add / comment on / volunteer for.

https://etherpad.openstack.org/p/barbican-tracker-rocky

Thanks,
Ade

From dtroyer at gmail.com  Mon Mar  5 20:37:36 2018
From: dtroyer at gmail.com (Dean Troyer)
Date: Mon, 5 Mar 2018 14:37:36 -0600
Subject: [openstack-dev] [nova][neutron][openstackclient] The way to assign floating IPs to VM
In-Reply-To:
References:
Message-ID:

On Mon, Mar 5, 2018 at 10:56 AM, Andrey Kurilin wrote:
> Dean, I propose a patch (https://review.openstack.org/549820) which should
> work for all environments where neutron is installed. Please check it when
> you have free time.

Thank you Andrey. Monty started one approach, I just pushed up
https://review.openstack.org/#/c/549864/ as an example that follows the
pattern we [0] established in the Network commands long ago to auto-detect
between nova-net and Neutron. This involves moving the commands from the
compute.v2.server classes to network.v2.floating_ip (etc) and subclassing
network.common.NetworkAndComputeCommand to leverage that work.
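From memory, the shape of that pattern is roughly the following (an
illustrative, untested sketch only -- not the actual patch; see the review
above and the existing network commands for the real details):

    # sketch of the NetworkAndComputeCommand auto-detect pattern
    from openstackclient.network import common

    class AddFloatingIP(common.NetworkAndComputeCommand):
        """Add a floating IP to a server."""

        def update_parser_common(self, parser):
            parser.add_argument('ip_address', help='Floating IP address')
            parser.add_argument('server', help='Server (name or ID)')
            return parser

        def take_action_network(self, client, parsed_args):
            # Called when the cloud has Neutron: look up the server's port
            # and point the floating IP at it.
            ...

        def take_action_compute(self, client, parsed_args):
            # Fallback path for nova-network clouds.
            ...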
dt

[0] I can't take credit for this; rtheis, amotoki and others worked this out
very nicely.

--
Dean Troyer
dtroyer at gmail.com

From emilien at redhat.com  Mon Mar  5 21:52:13 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 5 Mar 2018 21:52:13 +0000
Subject: [openstack-dev] [tripleo] upgrading to a containerized undercloud
In-Reply-To:
References:
Message-ID:

On Mon, Mar 5, 2018 at 2:56 PM, Bogdan Dobrelya wrote:

[...]

> Just a note that those need to be installed first, though you have this
> covered: https://review.openstack.org/#/c/549624/7/tripleoclient/v1/undercloud.py at 100

Yeah, right now in my testing I'm cheating a bit and running a yum update
before running the undercloud upgrade CLI, because the overcloud prepare
command fails. I'll try to make something cleaner when I've got something
actually working.

[...]

> > 3) Run openstack undercloud upgrade --use-heat, that underneath will:
> > stop non-containerized services, upgrade all packages & dependencies
> > and deploy a containerized undercloud.
>
> As we have discussed, and as you noted for
> https://review.openstack.org/#/c/549609/, we're better off changing the
> workflow to run the upgrade_tasks so we avoid code duplication.

So until now I was trying to avoid this situation and NOT running the
upgrade_tasks, because I think this is not needed in the case of Queens to
Rocky (to a containerized undercloud). But if we hit more situations similar
to this mysql thing, yes indeed we might need to consider it more seriously.

I really like the idea of just running deploy tasks... as the deployment is
supposed to be idempotent.

[...]

Thanks for the feedback so far,
--
Emilien Macchi

From pabelanger at redhat.com  Mon Mar  5 21:53:23 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Mon, 5 Mar 2018 16:53:23 -0500
Subject: [openstack-dev] Release Naming for S - time to suggest a name!
In-Reply-To: <20180221011959.GA30957@localhost.localdomain>
References: <20180221011959.GA30957@localhost.localdomain>
Message-ID: <20180305215323.GA14231@localhost.localdomain>

On Tue, Feb 20, 2018 at 08:19:59PM -0500, Paul Belanger wrote:
> Hey everybody,
>
> Once again, it is time for us to pick a name for our "S" release.
>
> Since the associated Summit will be in Berlin, the Geographic Location
> has been chosen as "Berlin" (State).
>
> Nominations are now open. Please add suitable names to
> https://wiki.openstack.org/wiki/Release_Naming/S_Proposals between now
> and 2018-03-05 23:59 UTC.
>
> In case you don't remember the rules:
>
> * Each release name must start with the letter of the ISO basic Latin
>   alphabet following the initial letter of the previous release, starting
>   with the initial release of "Austin". After "Z", the next name should
>   start with "A" again.
>
> * The name must be composed only of the 26 characters of the ISO basic
>   Latin alphabet. Names which can be transliterated into this character
>   set are also acceptable.
>
> * The name must refer to the physical or human geography of the region
>   encompassing the location of the OpenStack design summit for the
>   corresponding release. The exact boundaries of the geographic region
>   under consideration must be declared before the opening of nominations,
>   as part of the initiation of the selection process.
>
> * The name must be a single word with a maximum of 10 characters. Words
>   that describe the feature should not be included, so "Foo City" or
>   "Foo Peak" would both be eligible as "Foo".
>
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page and the TC may
> make an exception for one or more of them to be considered in the
> Condorcet poll. The naming official is responsible for presenting the
> list of exceptional names for consideration to the TC before the poll
> opens.
>
> Let the naming begin.
>
> Paul

Just a reminder, there are only a few hours left to get your suggestions in
for naming the next release.

Thanks,
Paul

From emilien at redhat.com  Mon Mar  5 22:05:23 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 5 Mar 2018 22:05:23 +0000
Subject: [openstack-dev] [tripleo] Queens RC1 was released!
In-Reply-To: <6f53733f-646b-9e28-7dd2-026b4fb079ec@redhat.com>
References: <6f53733f-646b-9e28-7dd2-026b4fb079ec@redhat.com>
Message-ID:

On Mon, Mar 5, 2018 at 9:57 AM, Dmitry Tantsur wrote:
> A reminder that https://review.openstack.org/534842 is quite important, as
> it will enable upgrade to hardware types from classic drivers. The latter
> will be removed in one of the future releases. As it applies to
> online_data_migrations, it will not be run after you upgrade to Queens
> without this patch.

Every bugfix or FFU/upgrade related patch can be backported to any stable
branch, so no big deal :)

--
Emilien Macchi

From thingee at gmail.com  Mon Mar  5 23:15:38 2018
From: thingee at gmail.com (Mike Perez)
Date: Mon, 5 Mar 2018 15:15:38 -0800
Subject: [openstack-dev] [forum] Brainstorming Topics for Vancouver 2018
Message-ID: <20180305231538.GF32596@gmail.com>

Hi all,

Welcome to the topic selection process for our Forum in Vancouver. Note that
this is not a classic conference track with speakers and presentations.
OpenStack community members (participants in development teams, SIGs,
working groups, and other interested individuals) discuss the topics they
want to cover and get alignment on, and we welcome your participation.

The Forum is for the entire community to come together, to create a neutral
space rather than having separate "ops" and "dev" days. Users should aim to
come with ideas for the next release, gather feedback on the past version,
and have strategic discussions that go beyond just one release cycle. We aim
to ensure the broadest coverage of topics that will allow for multiple parts
of the community getting together to discuss key areas within our
community/projects.

There are two stages to the brainstorming:

1. Starting today, set up an etherpad with your team and start discussing
   ideas you'd like to talk about at the Forum and work out which ones to
   submit - just like you did prior to the design summit.

2. Then, in a couple of weeks, we will open up a more formal web-based tool
   for you to submit abstracts for the most popular sessions that came out
   of your brainstorming.

Make an etherpad and add it to the list at:

https://wiki.openstack.org/wiki/Forum/Vancouver2018

One key thing we'd like to see (as always?) is cross-project collaboration,
and discussion between every area of the community. Try to see if there is
an interested working group on the user side to add to your ideas.

Examples of typical discussions that include multiple parts of the community
getting together:

* Strategic, whole-of-community discussions, to think about the big picture,
  including beyond just one release cycle and new technologies

  o eg Making OpenStack One Platform for containers/VMs/Bare Metal
    (Strategic session) -- the entire community congregates to share
    opinions on how to make OpenStack achieve its integration engine goal

* Cross-project sessions, in a similar vein to what has happened at past
  design summits, but with increased emphasis on issues that are relevant to
  all areas of the community

  o eg Rolling Upgrades at Scale (Cross-Project session) -- the Large
    Deployments Team collaborates with Nova, Cinder and Keystone to tackle
    issues that come up with rolling upgrades when there's a large number of
    machines.
* Project-specific sessions, where developers can ask users specific
  questions about their experience, users can provide feedback from the last
  release, and there can be cross-community collaboration on the priorities
  and 'blue sky' ideas for the next release.

  o eg Neutron Pain Points (Project-Specific session) -- co-organized by
    Neutron developers and users. Neutron developers bring some specific
    questions they want answered, Neutron users bring feedback from the
    latest release and ideas about the future.

Think about what kind of session ideas might end up as: project-specific,
cross-project or strategic/whole-of-community discussions. There'll be more
slots for the latter two, so do try and think outside the box! This part of
the process is where we gather broad community consensus - in theory the
second part is just about fitting as many of the good ideas into the
schedule as we can.

Further details about the forum can be found at:

https://wiki.openstack.org/wiki/Forum

--
Mike Perez (thingee)

From pabelanger at redhat.com  Mon Mar  5 23:45:13 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Mon, 5 Mar 2018 18:45:13 -0500
Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27
Message-ID: <20180305234513.GA26473@localhost.localdomain>

Greetings,

A quick search of git shows your projects are using fedora-26 nodes for
testing. Please take a moment to look at gerrit [1] and help land the
patches. We'd like to remove the fedora-26 nodes in the next week, and to
avoid broken jobs you'll need to approve these patches.

If your jobs are failing under fedora-27, please take the time to fix any
issues, or update said patches to make the jobs non-voting. We
(openstack-infra) aim to keep only the latest fedora image online, which
changes approximately every 6 months.

Thanks for your help and understanding,
Paul

[1] https://review.openstack.org/#/q/topic:fedora-27+status:open

From hjensas at redhat.com  Tue Mar  6 00:43:12 2018
From: hjensas at redhat.com (Harald Jensas)
Date: Tue, 6 Mar 2018 01:43:12 +0100
Subject: [openstack-dev] [tripleo] Queens RC1 was released!
In-Reply-To:
References:
Message-ID:

On 5 Mar 2018 9:18 a.m., "Emilien Macchi" wrote:

> The TripleO team is proud to announce that we released Queens RC1!
>
> Some numbers:
>
> 210 bugs fixed
> 7 features implemented
>
> In Pike RC1: 138 bugs fixed, 8 features implemented
> In Ocata RC1: 62 bugs fixed, 7 features implemented
> In Newton RC1: 51 bugs fixed, 11 features implemented
>
> Unless we find a need to do it, we won't release RC2, but we'll see how it
> works over the next days. We encourage people to backport their bugfixes
> to stable/queens.
>
> Also all work related to FFU & upgrades is moving to rocky-1, but we
> expect the patches to be backported into stable/queens.
>
> Reminder: backports to stable/queens should be done by patch authors to
> help the PTL & TripleO stable maintainers.
>
> Thanks and nice work everyone!

We need to land this, and backport it, to be able to deploy on a routed
ctlplane network: https://review.openstack.org/537830

Unfortunately we were unable to land it earlier due to package promotion
issues.

--
Harald
URL:

From gmann at ghanshyammann.com Tue Mar 6 03:43:54 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 6 Mar 2018 12:43:54 +0900
Subject: [openstack-dev] [Openstack-dev] [QA] [forum] [Vancouver] QA Brainstorming Topic ideas for Vancouver 2018
Message-ID:

Hi All,

As you all might have seen the mail from thingee to collect the brainstorming ideas for the coming Vancouver summit, I have created the below etherpad to collect the forum ideas for the QA team. Please write up your ideas with your irc name on the etherpad.

https://etherpad.openstack.org/p/YVR-qa-brainstorming

-gmann

From zhipengh512 at gmail.com Tue Mar 6 03:51:51 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Tue, 6 Mar 2018 11:51:51 +0800
Subject: [openstack-dev] [cyborg]No Meeting This Week
Message-ID:

Hi Team,

As most of us are still recuperating from the PTG and SnowpenStack last week, let's cancel the team meeting this week. In the meantime I have solicited the meeting summaries from the topic leads, and will send out a summary of the summaries later :)

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lijie at unitedstack.com Tue Mar 6 09:03:13 2018
From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=)
Date: Tue, 6 Mar 2018 17:03:13 +0800
Subject: [openstack-dev] [cinder] Cinder volume revert to snapshot with Ceph
Message-ID:

Hi all,

This is the patch [0] about volume revert to snapshot with Ceph. Is anyone working on this patchset, or has a new patchset been proposed to implement the RBD-specific functionality? Can you tell me more about this? Thank you very much. The link is here.

Re:https://review.openstack.org/#/c/481566/

Best Regards
Lijie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eumel at arcor.de Tue Mar 6 09:06:24 2018
From: eumel at arcor.de (Frank Kloeker)
Date: Tue, 06 Mar 2018 10:06:24 +0100
Subject: [openstack-dev] [I18n] Translation plan & priorities for Rocky
Message-ID:

Good morning,

welcome to the Rocky translation release cycle! Today we merged the stable-queens translations back to master on our translation platform and created a new translation dashboard on [1]. There are 19 projects identified for translation in Rocky. Please check your own project to see if it meets your expectations. Furthermore we have 5 documentation projects still on the list [2]. There is very good progress and only deltas remain to be translated.

Additionally, we have the translation for the Edge Computing Whitepaper still online [3]. If you are interested in having this in your language, please go ahead. New language versions will be accepted.

In early April we want to start with a new User Survey translation. There will be very few changes. Due to the upcoming Summit in Berlin we want to translate the results of the survey as well, at least to German.

Last but not least, as Petr mentioned yesterday evening on openstack-dev, our big goal for Rocky is project doc translation. We already talked to some project teams in Dublin and we're still in preparation before we can start.

If you have any questions or hints, let me know.
kind regards

Frank
PTL I18n

[1] https://translate.openstack.org/version-group/view/Rocky-dashboard-translation
[2] https://translate.openstack.org/version-group/view/doc-resources
[3] https://translate.openstack.org/iteration/view/edge-computing/master/documents

From tbechtold at suse.com Tue Mar 6 09:35:48 2018
From: tbechtold at suse.com (Thomas Bechtold)
Date: Tue, 6 Mar 2018 10:35:48 +0100
Subject: [openstack-dev] Queens packages for openSUSE and SLES available
Message-ID: <8303ee9f-922c-c50f-8ac2-88d6172519fe@suse.com>

Hi,

Queens packages for openSUSE and SLES are now available at:

http://download.opensuse.org/repositories/Cloud:/OpenStack:/Queens/

We maintain + test the packages for SLES 12SP3 and openSUSE Leap 42.3.

If you find issues, please do not hesitate to report them to opensuse-cloud at opensuse.org or to https://bugzilla.opensuse.org/

Thanks and have a lot of fun,

Tom

From atsumi.yoshihiko at po.ntt-tx.co.jp Tue Mar 6 10:40:15 2018
From: atsumi.yoshihiko at po.ntt-tx.co.jp (=?UTF-8?B?5ril576OIOaFtuW9pg==?=)
Date: Tue, 6 Mar 2018 19:40:15 +0900
Subject: [openstack-dev] [openstack-helm]Re: Question about API endpoints
In-Reply-To: References: <20f7e795-8221-0ff8-ebd1-484c0764def8@po.ntt-tx.co.jp>
Message-ID: <45ff2dd3-0a91-9e2f-5ca9-5ed2c9655068@po.ntt-tx.co.jp>

Hi Hyunsun,

Thank you for your help!! My questions have been solved.

On 2018/03/05 16:42, Hyunsun Moon wrote:
> Hi Yoshihiko,
>
> If you have a physical LB in your environment, you might want to make use of NodePort for distributing the access to multiple controller nodes. In that case, it is recommended to set Values.network.external_policy_local to true so that you could eliminate unnecessary hops.
> Ingress backed by nginx could be used of course, but as you pointed out, the IP address of the node where the ingress pod resides will be the address you're accessing, which might not be desirable in many use cases.
> If you plan to try it on GCP/GKE, where the ingress controller is backed by GCP's load-balancer service, NodePort + ingress seems a valid option for exposing your service externally.
> FYI, https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
>
> Hope this helps.
>
> Hyunsun
>
>
>> On 5 Mar 2018, at 2:34 PM, 渥美 慶彦 wrote:
>>
>> Hi all,
>> # Resend with openstack-helm tag
>>
>> I am trying to deploy multinode OpenStack with openstack-helm
>> and want to access the OpenStack API endpoints from outside the k8s nodes.
>> To avoid service failure when a node goes down, I think I need one virtual IP for the endpoints (like Pacemaker).
>> Could you show me how to realize that if you have any information?
>>
>> A. Deploy OpenStack services for NodePort, and distribute the access to nodes using a physical Load Balancer.
>> B. Using Ingress?
>> I think Ingress is for L7 routing, so it can't be used to create a VIP for the endpoints.
>> C. Any other ideas?
>>
>> And when I try this on GCP/GKE, is there any difference from on-premises?
>> >> best regards >> >> -- >> -------------------------------------------------------- >> Yoshihiko Atsumi >> E-mail:atsumi.yoshihiko at po.ntt-tx.co.jp >> -------------------------------------------------------- >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
--------------------------------------------------------
Yoshihiko Atsumi
E-mail:atsumi.yoshihiko at po.ntt-tx.co.jp
--------------------------------------------------------
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dtantsur at redhat.com Tue Mar 6 11:11:53 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 6 Mar 2018 12:11:53 +0100
Subject: [openstack-dev] [ironic] heads-up: classic drivers deprecation and future removal
Message-ID:

Hi all,

As you may already know, we have deprecated classic drivers in the Queens release. We don't have specific removal plans yet. But according to the deprecation policy we may remove them at any time after May 1st, which will be halfway to Rocky milestone 2. Personally, I'd like to do it around then.

The `online_data_migrations` script will handle migrating nodes if all required hardware interfaces and types are enabled before the upgrade to Queens. Otherwise, check the documentation [1] on how to update your nodes.

Dmitry

[1] https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html

From rnoriega at redhat.com Tue Mar 6 12:38:04 2018
From: rnoriega at redhat.com (Ricardo Noriega De Soto)
Date: Tue, 6 Mar 2018 13:38:04 +0100
Subject: [openstack-dev] [neutron][l2gw] Rocky goals for Networking L2GW
Message-ID:

Hi L2GWers,

Even though this is a project with few active contributors, these are our goals for the Rocky release:

- OpenStack client migration
- Out-of-sync DB common effort (together with networking-odl, networking-ovn, etc.)
- Broader CI job scenarios (adding a HWVTEP emulator)
- Add Grenade support for upgrade testing.
- Rocky community goals (removing mox and changing configuration without restarting services)

Please, if you have any other proposal, just let us know via the mailing list, or reply to this very same email.

Cheers

--
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology | Red Hat
irc: rnoriega @freenode
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jaosorior at gmail.com Tue Mar 6 12:40:53 2018
From: jaosorior at gmail.com (Juan Antonio Osorio)
Date: Tue, 6 Mar 2018 14:40:53 +0200
Subject: [openstack-dev] [tripleo] Proposing Security Squad
Message-ID:

Hello!

As mentioned at the PTG, I would like to start a Security Squad for TripleO, with the goal of working on the security aspects and challenges in a more public manner.

We'll have our first meeting tomorrow, with weekly meetings every Wednesday at 1pm UTC.
We started an etherpad already: https://etherpad.openstack.org/p/tripleo-security-squad

And here's the patch adding the Security Squad to the list: https://review.openstack.org/#/c/550001/

Feel free to join if you're interested.

BR
--
Juan Antonio Osorio R.
e-mail: jaosorior at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aspiers at suse.com Tue Mar 6 12:27:00 2018
From: aspiers at suse.com (Adam Spiers)
Date: Tue, 6 Mar 2018 12:27:00 +0000
Subject: [openstack-dev] [TripleO][CI][QA][HA] Validating HA on upstream
In-Reply-To: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com>
References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com>
Message-ID: <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall>

Hi Raoul and all,

Sorry for joining this discussion late!

Raoul Scarazzini wrote:
>TL;DR: we would like to change the way HA is tested upstream to avoid
>being hit by avoidable bugs that the CI process should discover.
>
>Long version:
>
>Today HA testing upstream consists only of verifying that a three
>controller setup comes up correctly and can spawn an instance. That's
>something, but it’s far from being enough since we continuously see "day
>two" bugs.
>We started covering this more than a year ago in internal CI and today
>also on rdocloud using a project named tripleo-quickstart-utils [1].
>Apart from its name, the project is not limited to tripleo-quickstart,
>it covers three principal roles:
>
>1 - stonith-config: a playbook that can be used to automate the creation
>of fencing devices in the overcloud;
>2 - instance-ha: a playbook that automates the seventeen manual steps
>needed to configure instance HA in the overcloud, test them via rally
>and verify that instance HA works;
>3 - validate-ha: a playbook that runs a series of disruptive actions in
>the overcloud and verifies it always behaves correctly by deploying a
>heat-template that involves all the overcloud components;

Yes, a more rigorous approach to HA testing obviously has huge value, not just for TripleO deployments, but also for any type of OpenStack deployment.

>To make this usable upstream, we need to understand where to put this
>code. Here are some choices:
[snipped]

I do not work on TripleO, but I'm part of the wider OpenStack sub-communities which focus on HA[0] and more recently, self-healing[1]. With that hat on, I'd like to suggest that maybe it's possible to collaborate on this in a manner which is agnostic to the deployment mechanism. There is an open spec on this:

    https://review.openstack.org/#/c/443504/

which was mentioned in the Denver PTG session on destructive testing which you referenced[2].

As mentioned in the self-healing SIG's session in Dublin[3], the OPNFV community has already put a lot of effort into testing HA scenarios, and it would be great if this work was shared across the whole OpenStack community. In particular they have a project called Yardstick:

    https://www.opnfv.org/community/projects/yardstick

which contains a bunch of HA test cases:

    http://docs.opnfv.org/en/latest/submodules/yardstick/docs/testing/user/userguide/15-list-of-tcs.html#h-a

Currently each sub-community and vendor seems to be reinventing HA testing by itself to some extent, which is easier to accomplish in the short-term, but obviously less efficient in the long-term. It would be awesome if we could break these silos down and join efforts!
:-)

Cheers,
Adam

[0] #openstack-ha on Freenode IRC
[1] https://wiki.openstack.org/wiki/Self-healing_SIG
[2] https://etherpad.openstack.org/p/qa-queens-ptg-destructive-testing
[3] https://etherpad.openstack.org/p/self-healing-ptg-rocky

From zhang.lei.fly at gmail.com Tue Mar 6 13:29:56 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Tue, 6 Mar 2018 21:29:56 +0800
Subject: [openstack-dev] [kolla] Ubuntu jobs failed on pike branch due to package dependency
In-Reply-To: References: Message-ID:

Here is what I tried [0].

- pin the ceph version in the ceph-* containers to Jewel.
- the client (nova/gnocchi/cinder) containers use ceph Luminous.

I made some tests locally with an env: nova + glance + gnocchi + ceph, and it seems to work.

[0] https://review.openstack.org/549466

On Tue, Feb 27, 2018 at 12:53 AM, Michał Jastrzębski wrote:
> I'm for option 1 definitely. accidental ceph upgrade during routine
> minor version upgrade is something we don't want. We will need a big
> warning about this version mismatch in release notes.
>
> On 26 February 2018 at 07:01, Eduardo Gonzalez wrote:
> > I prefer option 1, breaking stable policy is not good for users. They
> > will be forced to upgrade a major ceph version during a minor upgrade,
> > which is not good and not expected to be done ever.
> >
> > Regards
> >
> > 2018-02-26 9:51 GMT+01:00 Shake Chen :
> >>
> >> I prefer option 2.
> >>
> >> On Mon, Feb 26, 2018 at 4:39 PM, Jeffrey Zhang wrote:
> >>>
> >>> Recently, the Ubuntu jobs on the pike branch are red[0]. With some
> >>> debugging, I found it is caused by a package dependency.
> >>>
> >>> *Background*
> >>>
> >>> Since we had no time to upgrade ceph from Jewel to Luminous at the end
> >>> of the pike cycle, we pinned Ceph to Jewel on the pike branch. This
> >>> works on CentOS, because ceph jewel and ceph luminous are in
> >>> different repos.
> >>>
> >>> But in the Ubuntu Cloud Archive repo, it bumps ceph to Luminous. Ceph
> >>> luminous still exists on UCA, but since qemu 2.10 depends on ceph
> >>> luminous, we had to pin qemu to 2.5 to use ceph Jewel[1].
> >>> And this has worked since then.
> >>>
> >>> *Now Issue*
> >>>
> >>> But recently, UCA changed the libvirt-daemon package dependency, and
> >>> added the following,
> >>>
> >>> Package: libvirt-daemon
> >>> Version: 3.6.0-1ubuntu6.2~cloud0
> >>> ...
> >>> Breaks: qemu (<< 1:2.10+dfsg-0ubuntu3.4~), qemu-kvm (<<
> >>> 1:2.10+dfsg-0ubuntu3.4~)
> >>>
> >>> It requires qemu 2.10 now. So the dependency is broken and the
> >>> nova-libvirt container fails to build.
> >>>
> >>> *Possible Solution*
> >>>
> >>> I think there are two possible ways now, but neither of them is good.
> >>>
> >>> 1. install ceph Luminous in the nova-libvirt container and ceph Jewel
> >>> in the ceph-* containers
> >>> 2. Bump ceph from jewel to luminous. But this breaks the backport
> >>> policy, obviously.
> >>>
> >>> So any idea on this?
> >>> > >>> [0] https://review.openstack.org/534149 > >>> [1] https://review.openstack.org/#/c/526931/ > >>> > >>> -- > >>> Regards, > >>> Jeffrey Zhang > >>> Blog: http://xcodest.me > >>> > >>> > >>> ____________________________________________________________ > ______________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> > >> > >> -- > >> Shake Chen > >> > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Mar 6 13:37:43 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 6 Mar 2018 13:37:43 +0000 Subject: [openstack-dev] [tripleo] [security] Proposing Security Squad In-Reply-To: References: Message-ID: <20180306133742.55uieqsos2tgl2kg@yuggoth.org> On 2018-03-06 14:40:53 +0200 (+0200), Juan Antonio Osorio wrote: > As mentioned in the PTG, I would like to start a Security Squad > for TripleO, with the goal of working with the security aspects > and challenges in a more public manner. [...] I would also strongly encourage you all to get involved with the OpenStack Security SIG. We're always looking for help/input with regard to information security concerns and ideas. The weekly meeting has recently been rescheduled to 15:00 UTC on Thursdays if you're available, and we have a #openstack-security IRC channel on Freenode where many of us can be found. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From lhinds at redhat.com Tue Mar 6 13:42:35 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 6 Mar 2018 13:42:35 +0000 Subject: [openstack-dev] [tripleo] [security] Proposing Security Squad In-Reply-To: <20180306133742.55uieqsos2tgl2kg@yuggoth.org> References: <20180306133742.55uieqsos2tgl2kg@yuggoth.org> Message-ID: On Tue, Mar 6, 2018 at 1:37 PM, Jeremy Stanley wrote: > On 2018-03-06 14:40:53 +0200 (+0200), Juan Antonio Osorio wrote: > > As mentioned in the PTG, I would like to start a Security Squad > > for TripleO, with the goal of working with the security aspects > > and challenges in a more public manner. > [...] > > I would also strongly encourage you all to get involved with the > OpenStack Security SIG. We're always looking for help/input with > regard to information security concerns and ideas. 
The weekly
> meeting has recently been rescheduled to 15:00 UTC on Thursdays if
> you're available, and we have a #openstack-security IRC channel on
> Freenode where many of us can be found.
> --
> Jeremy Stanley
>

Hi Jeremy,

Good call - and I made a note in the squad planning pad ("General OpenStack security topics should go to the Security SIG.")

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org Tue Mar 6 14:21:50 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 6 Mar 2018 14:21:50 +0000
Subject: [openstack-dev] [docs] Permissions to upload a PNG for my new project page
In-Reply-To: References: Message-ID: <20180306142149.tvih7owzjcomrexc@yuggoth.org>

On 2018-03-05 10:35:00 -0600 (-0600), Pino de Candia wrote:
[...]
> But when I try to upload the file I get this error:
>
> "You do not have permission to upload this file, for the following
> reason:
>
> The action you have requested is limited to users in one of the
> groups: Administrators, Autopatrolled users."
[...]

I have a feeling PTG attendance and associated travel challenges have delayed the rate at which the volunteers patrolling recent edits in the wiki are verifying the validity of edits made by new users. As such, your account wasn't marked as verified by anyone until I did so just now. Please try again.

Unfortunately we've found that without careful control over operations like file uploading or page renames we quickly get overrun with spam, so we limit that exposure by vetting accounts first. You'll also hopefully find that you no longer get presented with a captcha challenge when making page edits now that you're verified.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From dougal at redhat.com Tue Mar 6 14:23:14 2018
From: dougal at redhat.com (Dougal Matthews)
Date: Tue, 6 Mar 2018 14:23:14 +0000
Subject: [openstack-dev] [mistral] Retiring the Mistral Wiki pages
Message-ID:

Hey folks,

Mistral has several Wiki pages that rank highly in Google searches. However, most of them have not been updated in months (or years in many cases). I am therefore starting to remove these and direct people to the Mistral documentation. Where possible I will link them to the relevant documentation pages.

I have taken the plunge and removed the main wiki [0] page. The old content is still accessible [1], just click on "Page" at the top left and then go to history.

Over the next week or so I am going to read through the old wiki pages and see if there is any information that is still relevant and move it to the Mistral documentation. If you are aware of anything that is in the wiki, but not in the docs (and should be) then please submit a patch or open a bug.

After we consolidate all of the information into the Mistral docs I hope to coordinate an effort to improve the documentation.

Cheers,
Dougal

[0]: https://wiki.openstack.org/wiki/Mistral
[1]: https://wiki.openstack.org/w/index.php?title=Mistral&oldid=152120
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From dougal at redhat.com Tue Mar 6 14:31:58 2018 From: dougal at redhat.com (Dougal Matthews) Date: Tue, 6 Mar 2018 14:31:58 +0000 Subject: [openstack-dev] [mistral] What's new in latest CloudFlow? In-Reply-To: References: Message-ID: I checked with one of the tripleo-ui developers. They are using redux thunks to execute the API calls and they pointed me to this part of the code. https://github.com/openstack/tripleo-ui/blob/master/src/js/services/KeystoneApiService.js Hopefully that is useful - it looks like the custom code required is fairly small. Cheers, Dougal On 1 March 2018 at 16:37, Shaanan, Guy (Nokia - IL/Kfar Sava) < guy.shaanan at nokia.com> wrote: > Hi Dougal, > > Yes, it probably does help. > > I haven’t found any proper Keystone JavaScript library (if anyone knows > about one let me know). > > > > *From:* Dougal Matthews [mailto:dougal at redhat.com] > *Sent:* Thursday, March 1, 2018 17:43 > *To:* OpenStack Development Mailing List (not for usage questions) < > openstack-dev at lists.openstack.org> > *Subject:* Re: [openstack-dev] [mistral] What's new in latest CloudFlow? > > > > Hey Guy, > > Thanks for sharing this update. I need to find time to try it out. The > biggest issue for me is the lack of keystone support. > > > > I wonder if any of the code in tripleo-ui could be used to help with > KeyStone support. It is a front-end JavaScript GUI. > https://github.com/openstack/tripleo-ui > > > > Cheers, > > Dougal > > > > On 26 February 2018 at 09:10, Shaanan, Guy (Nokia - IL/Kfar Sava) < > guy.shaanan at nokia.com> wrote: > > CloudFlow [1] is an open-source web-based GUI tool that helps visualize > and debug Mistral workflows. > > > > With the latest release [2] of CloudFlow (v0.5.0) you can: > > * Visualize the flow of workflow executions > > * Identify the execution path of a single task in huge workflows > > * Search Mistral by any entity ID > > * Identify long-running tasks at a glance > > * Easily distinguish between simple task (an action) and a sub workflow > execution > > * Follow tasks with a `retry` and/or `with-items` > > * 1-click to copy task's input/output/publish/params values > > * See complete workflow definition and per task definition YAML > > * And more... > > > > CloudFlow is easy to install and run (and even easier to upgrade), and we > appreciate any feedback and contribution. > > > > CloudFlow currently supports unauthenticated Mistral or authentication > with KeyCloak (openid-connect implementation). A support for Keystone will > be added in the near future. > > > > You can try CloudFlow now on your Mistral Pike/Queens, or try it on the > online demo [3]. 
> > > > [1] https://github.com/nokia/CloudFlow > > [2] https://github.com/nokia/CloudFlow/releases/latest > > [3] http://yaqluator.com:8000 > > > > > > Thanks, > > *-----------------------------------------------------* > > *Guy Shaanan* > > Full Stack Web Developer, CI & Internal Tools > > CloudBand @ Nokia Software, Nokia, ISRAEL > > Guy.Shaanan at nokia.com > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.k.mooney at intel.com Tue Mar 6 14:45:40 2018 From: sean.k.mooney at intel.com (Mooney, Sean K) Date: Tue, 6 Mar 2018 14:45:40 +0000 Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> Message-ID: <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com> From: Matthew Booth [mailto:mbooth at redhat.com] Sent: Saturday, March 3, 2018 4:15 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions On 2 March 2018 at 14:31, Jay Pipes > wrote: On 03/02/2018 02:00 PM, Nadathur, Sundar wrote: Hello Nova team, During the Cyborg discussion at Rocky PTG, we proposed a flow for FPGAs wherein the request spec asks for a device type as a resource class, and optionally a function (such as encryption) in the extra specs. This does not seem to work well for the usage model that I’ll describe below. An FPGA device may implement more than one function. For example, it may implement both compression and encryption. Say a cluster has 10 devices of device type X, and each of them is programmed to offer 2 instances of function A and 4 instances of function B. More specifically, the device may implement 6 PCI functions, with 2 of them tied to function A, and the other 4 tied to function B. So, we could have 6 separate instances accessing functions on the same device. Does this imply that Cyborg can't reprogram the FPGA at all? [Mooney, Sean K] cyborg is intended to support fixed function acclerators also so it will not always be able to program the accelerator. In this case where an fpga is preprogramed with a multi function bitstream that is statically provisioned cyborge will not be able to reprogram the slot if any of the fuctions from that slot are already allocated to an instance. In this case it will have to treat it like a fixed function device and simply allocate a unused vf of the corret type if available. In the current flow, the device type X is modeled as a resource class, so Placement will count how many of them are in use. A flavor for ‘RC device-type-X + function A’ will consume one instance of the RC device-type-X. But this is not right because this precludes other functions on the same device instance from getting used. 
One way to solve this is to declare functions A and B as resource classes themselves and have the flavor request the function RC. Placement will then correctly count the function instances. However, there is still a problem: if the requested function A is not available, Placement will return an empty list of RPs, but we need some way to reprogram some device to create an instance of function A.

Clearly, nova is not going to be reprogramming devices with an instance of a particular function.

Cyborg might need to have a separate agent that listens to the nova notifications queue and, upon seeing an event that indicates a failed build due to lack of resources, tries to reprogram a device and then tries rebuilding the original request.

It was my understanding from that discussion that we intend to insert Cyborg into the spawn workflow for device configuration in the same way that we currently insert resources provided by Cinder and Neutron. So while Nova won't be reprogramming a device, it will be calling out to Cyborg to reprogram a device, and waiting while that happens.

My understanding is (and I concede some areas are a little hazy):

* The flavor says device type X with function Y
* Placement tells us everywhere with device type X
* A weigher orders these by devices which already have an available function Y (where is this metadata stored?)
* Nova schedules to host Z
* Nova host Z asks cyborg for a local function Y and blocks
* Cyborg hopefully returns function Y which is already available
* If not, Cyborg reprograms a function Y, then returns it

Can anybody correct me/fill in the gaps?

[Mooney, Sean K] that correlates closely to my recollection also. As for the metadata, I think the weigher may need to call out to cyborg to retrieve this, as it will not be available in the host state object.

Matt

--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG
Phone: +442070094448 (UK)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jean-philippe at evrard.me Tue Mar 6 15:51:22 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Tue, 6 Mar 2018 15:51:22 +0000
Subject: [openstack-dev] [openstack-ansible] Meetings change (PTG discussion follow-up)
Message-ID:

Hello,

During the PTG, we discussed changing our meetings. I'd like to have written evidence in our mailing lists, showing what we discussed, and what we proposed to change. I propose we validate those changes if they get no opposition in the next 7 days (deadline: 13 March).

What we discussed was:
- Should the meetings be rescheduled, and at what time;
- Should the meetings alternate between US- and Europe-friendly timezones;
- What is the purpose/expected outcome of those meetings;
- What is the reason the attendance is low.

The summary is the following:
- The expected outcome of bug triage is currently (drumroll....) actively triaging bugs, which produces better deliverables (what a surprise!).
- The expected outcome of the community meeting is to discuss what we actively need to work on together, but we are doing these kinds of conversations, ad-hoc, in the channel. So if we summarize things on a regular basis to make sure everyone is aware of the conversations, we should be good.
- The timezone-friendly alternation won't impact the attendance positively.
- Right now, the Europe meetings can be postponed by one hour, but the decision should be re-discussed when daylight saving changes.
- A lot of people have meetings at 4PM UTC right now.
As such, here is the PTG proposed change:
- Moving the bug triage meeting to 5PM UTC until the next daylight saving change.
- Keep the "Focus of the week" section of the bug triage, to list what we discussed in the week (if more conversations have to happen, they can happen just after the bug triage)
- Removing the community meeting.

Any opposition there? If we are all okay, I will update our procedures next week.

Best regards,
JP

From amy at demarco.com Tue Mar 6 16:08:27 2018
From: amy at demarco.com (Amy Marrich)
Date: Tue, 6 Mar 2018 10:08:27 -0600
Subject: [openstack-dev] [openstack-ansible] Meetings change (PTG discussion follow-up)
In-Reply-To: References: Message-ID:

JP,

When the Community meeting was moved to once a month there was a lot of miscommunication as a result. If a weekly review of the channel discussions is going to be sent to the mailing list, I think that's a good alternative, but the conversations still need to take place with as many people involved as possible. What about having office hours?

Amy (spotz)

On Tue, Mar 6, 2018 at 9:51 AM, Jean-Philippe Evrard < jean-philippe at evrard.me> wrote:
> Hello,
>
> During the PTG, we discussed changing our meetings.
> I'd like to have written evidence in our mailing lists, showing what
> we discussed, and what we proposed to change. I propose we validate
> those changes if they get no opposition in the next 7 days (deadline:
> 13 March).
>
> What we discussed was:
> - Should the meetings be rescheduled, and at what time;
> - Should the meetings alternate between US- and Europe-friendly
> timezones;
> - What is the purpose/expected outcome of those meetings;
> - What is the reason the attendance is low.
>
> The summary is the following:
> - The expected outcome of bug triage is currently (drumroll....)
> actively triaging bugs, which produces better deliverables (what a
> surprise!).
> - The expected outcome of the community meeting is to discuss what
> we actively need to work on together, but we are doing these kinds
> of conversations, ad-hoc, in the channel. So if we summarize things on
> a regular basis to make sure everyone is aware of the conversations,
> we should be good.
> - The timezone-friendly alternation won't impact the attendance positively.
> - Right now, the Europe meetings can be postponed by one hour, but
> the decision should be re-discussed when daylight saving changes.
> - A lot of people have meetings at 4PM UTC right now.
>
> As such, here is the PTG proposed change:
> - Moving the bug triage meeting to 5PM UTC until the next daylight
> saving change.
> - Keep the "Focus of the week" section of the bug triage, to list what
> we discussed in the week (if more conversations have to happen, they
> can happen just after the bug triage)
> - Removing the community meeting.
>
> Any opposition there? If we are all okay, I will update our procedures
> next week.
>
> Best regards,
> JP
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emilien at redhat.com Tue Mar 6 18:18:27 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 6 Mar 2018 18:18:27 +0000
Subject: [openstack-dev] [tripleo] The Weekly Owl - 11th Edition
Message-ID:

Note: this is the eleventh edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome.

Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127583.html

+---------------------------------+
| General announcements |
+---------------------------------+

+--> Tripleo Queens RC1 was released last Friday! Congrats to everyone!
+--> PTG was last week, we expect some updates during the following days/weeks about the topics we covered.

+------------------------------+
| Continuous Integration |
+------------------------------+

+--> Rover is John and ruck is Matt. Please let them know about any new CI issues.
+--> Master promotion is 1 day, Queens is 1 day, Pike is 14 days and Ocata is 14 days.
+--> Focus is on TripleO CI infrastructure hardening, see https://trello.com/c/abar9eup/542-tripleo-ci-infrastructure-hardening
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP

+-------------+
| Upgrades |
+-------------+

+--> Work in progress: FFU, Queens update/upgrade workflows, need reviews! See etherpad!
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---------------+
| Containers |
+---------------+

+--> Containerized undercloud is the major ongoing effort in the squad. Focus is on making OVB work and also on upgrades. Target is rocky-m1.
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+----------------------+
| config-download |
+----------------------+

+--> Don't miss the deep-dive session to learn more about this awesome feature! https://www.youtube.com/watch?v=-6ojHT8P4RE
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--------------+
| Integration |
+--------------+

+--> Team is working on config-download integration and multi-cluster support.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+---------+
| UI/CLI |
+---------+

+--> Topics discussed during PTG: Network management, GUI/CLI compatibility, Config download, Root device hints.
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---------------+
| Validations |
+---------------+

+--> Custom validations spec: https://review.openstack.org/#/c/393775/
+--> Evaluate testing playbooks with containers.
+--> Evaluate OpenShift on OpenStack validations.
+--> Need reviews! (see etherpad)
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---------------+
| Networking |
+---------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--------------+
| Workflows |
+--------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+-----------+
| Security |
+-----------+

+--> New squad, which will focus on improving the overall security of TripleO.
+--> First meeting on IRC tomorrow! (See etherpad).
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

+------------+
| Owl fact |
+------------+

The tiniest owl in the world is the Elf Owl, which is 5 - 6 inches tall and weighs about 1 ½ ounces.
The largest North American owl, in appearance, is the Great Gray Owl, which is up to 32 inches tall.

Source: http://www.audubon.org/news/11-fun-facts-about-owls

Stay tuned!
--
Your fellow reporter, Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rasca at redhat.com Tue Mar 6 18:37:10 2018
From: rasca at redhat.com (Raoul Scarazzini)
Date: Tue, 6 Mar 2018 19:37:10 +0100
Subject: [openstack-dev] [TripleO][CI][QA][HA] Validating HA on upstream
In-Reply-To: <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall>
References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall>
Message-ID: <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com>

On 06/03/2018 13:27, Adam Spiers wrote:
> Hi Raoul and all,
> Sorry for joining this discussion late!
[...]
> I do not work on TripleO, but I'm part of the wider OpenStack
> sub-communities which focus on HA[0] and more recently,
> self-healing[1].  With that hat on, I'd like to suggest that maybe
> it's possible to collaborate on this in a manner which is agnostic to
> the deployment mechanism.  There is an open spec on this:
>    https://review.openstack.org/#/c/443504/
> which was mentioned in the Denver PTG session on destructive testing
> which you referenced[2].
[...]
>    https://www.opnfv.org/community/projects/yardstick
[...]
> Currently each sub-community and vendor seems to be reinventing HA
> testing by itself to some extent, which is easier to accomplish in the
> short-term, but obviously less efficient in the long-term.  It would
> be awesome if we could break these silos down and join efforts! :-)

Hi Adam,
First of all thanks for your detailed answer. Let me be honest and say that I didn't know yardstick. I need to start from scratch here to understand what this project is. In any case, the exact meaning of this thread is to involve people and have a more comprehensive look at what's around.
The point here is that, as you can see from the tripleo-ha-utils spec [1] I've created, the project is meant for TripleO specifically. On one side this is a significant limitation, but on the other one, due to the pluggable nature of the project, I think that integrations with other software like you are proposing are not impossible.
Feel free to add your comments to the review. In the meantime, I'll check yardstick to see what kind of bridge we can build to avoid reinventing the wheel.
Thanks a lot again for your involvement,

[1] https://review.openstack.org/#/c/548874/
--
Raoul Scarazzini
rasca at redhat.com

From jungleboyj at gmail.com Tue Mar 6 20:02:38 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Tue, 6 Mar 2018 14:02:38 -0600
Subject: [openstack-dev] [cinder][ptg] PTG Summary Now Available ...
Message-ID: <2bceb48a-b643-66a1-e816-ec40b4a18ea8@gmail.com>

Team,

I have collected all of our actions and agreements out of the three days of etherpads into the following summary page: [1]. The page includes links to the original etherpads and video clips.

I am planning to use the wiki to help guide our development during Rocky.

Let me know if you have any questions or concerns over the content.

Thanks!
Jay

(jungleboyj)

[1] https://wiki.openstack.org/wiki/CinderRockyPTGSummary

From giuseppe.decandia at gmail.com Tue Mar 6 20:26:49 2018
From: giuseppe.decandia at gmail.com (Pino de Candia)
Date: Tue, 6 Mar 2018 14:26:49 -0600
Subject: [openstack-dev] [docs] Permissions to upload a PNG for my new project page
In-Reply-To: <20180306142149.tvih7owzjcomrexc@yuggoth.org>
References: <20180306142149.tvih7owzjcomrexc@yuggoth.org>
Message-ID:

Hi Jeremy, that worked (succeeded in uploading and no more captcha). Thanks!

On Tue, Mar 6, 2018 at 8:21 AM, Jeremy Stanley wrote:
> On 2018-03-05 10:35:00 -0600 (-0600), Pino de Candia wrote:
> [...]
> > But when I try to upload the file I get this error:
> >
> > "You do not have permission to upload this file, for the following
> > reason:
> >
> > The action you have requested is limited to users in one of the
> > groups: Administrators, Autopatrolled users."
> [...]
>
> I have a feeling PTG attendance and associated travel challenges
> have delayed the rate at which the volunteers patrolling recent
> edits in the wiki are verifying validity of edits made by new users.
> As such, your account wasn't marked as verified by anyone until I
> did so just now. Please try again.
>
> Unfortunately we've found that without careful control over
> operations like file uploading or page renames we quickly get
> overrun with spam, so we limit that exposure by vetting accounts
> first. You'll also hopefully find that you no longer get presented
> with a captcha challenge when making page edits now that you're
> verified.
> --
> Jeremy Stanley
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lbragstad at gmail.com Tue Mar 6 21:20:27 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 6 Mar 2018 15:20:27 -0600
Subject: [openstack-dev] [keystone] changing meeting time
Message-ID:

Hey all,

Per one of the outcomes from the PTG, I've proposed a new time slot for the keystone weekly meeting [0]. Note that it requires us to move meeting rooms as well. I'd like to get +1/-1s on the review from people looking to attend before asking a core to review. Let's discuss in review.

Thanks,
Lance

[0] https://review.openstack.org/#/c/550260/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From lbragstad at gmail.com Tue Mar 6 21:22:20 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 6 Mar 2018 15:22:20 -0600
Subject: [openstack-dev] [keystone] [policy] No meeting 2018-03-07
Message-ID: <6143e70b-018b-6bcd-f68e-865469aa754a@gmail.com>

Just a reminder that we won't be holding a policy meeting tomorrow [0], since the PTG was last week and people are either still traveling or recovering. We'll plan to pick things back up next week.

Thanks,
Lance

[0] http://eavesdrop.openstack.org/#Keystone_Policy_Meeting
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From lbragstad at gmail.com Tue Mar 6 21:28:57 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 6 Mar 2018 15:28:57 -0600
Subject: [openstack-dev] [keystone] [infra] Post PTG performance testing needs
Message-ID:

Hey all,

Last week during the PTG the keystone team sat down with a few of the infra folks to discuss performance testing. The major hurdle here has always been having dedicated hosts to use for performance testing, regardless of whether that's rally, tempest, or a home-grown script. Otherwise results vary wildly from run to run in the gate due to differences between providers or noisy neighbor problems.

Opening up the discussion here because it sounded like some providers (mnaser, mtreinish) had some thoughts on how we can reserve specific hardware for these cases.

Thoughts?
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From cboylan at sapwetik.org Tue Mar 6 21:46:48 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 06 Mar 2018 13:46:48 -0800
Subject: [openstack-dev] [keystone] [infra] Post PTG performance testing needs
In-Reply-To: References: Message-ID: <1520372808.498510.1293935440.4D0451C8@webmail.messagingengine.com>

On Tue, Mar 6, 2018, at 1:28 PM, Lance Bragstad wrote:
> Hey all,
>
> Last week during the PTG the keystone team sat down with a few of the
> infra folks to discuss performance testing. The major hurdle here has
> always been having dedicated hosts to use for performance testing,
> regardless of whether that's rally, tempest, or a home-grown script.
> Otherwise results vary wildly from run to run in the gate due to
> differences between providers or noisy neighbor problems.
>
> Opening up the discussion here because it sounded like some providers
> (mnaser, mtreinish) had some thoughts on how we can reserve specific
> hardware for these cases.
>
> Thoughts?

Currently the Infra team has access to a variety of clouds, but due to how scheduling works we can't rule out noisy neighbors (or even being our own noisy neighbor). mtreinish also has data showing that runtimes are too noisy to do statistical analysis on, even within a single cloud region. So this is indeed an issue in the current setup.

One approach that has been talked about in the past is to measure performance-impacting operations using metrics other than execution time. For example, the number of SQL queries or rabbit requests. I think this would also be valuable but won't give you proper performance measurements.

That brought us back to the idea of possibly working with some cloud providers like mnaser and/or mtreinish to have a small number of dedicated instances to run performance tests on. We could then avoid the noisy neighbor problem as well. For the infra team we would likely need to have at least two providers providing these resources so that we could handle the loss of one without backing up job queues. I don't think the hardware needs to have any other special properties as we don't care about performance on specific hardware as much as comparing performance of the project over time on known hardware.

Curious to hear what others may have to say.
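
As a rough illustration of the query-counting idea, a test could hook SQLAlchemy's cursor events around an operation and assert on the statement count rather than on wall-clock time. This is only a sketch; the in-memory engine and the measured statement below are placeholders, not keystone code:

    import contextlib

    from sqlalchemy import create_engine, event


    @contextlib.contextmanager
    def count_queries(engine):
        # Count SQL statements emitted on this engine during the block.
        counter = {'queries': 0}

        def _on_execute(conn, cursor, statement, parameters, context,
                        executemany):
            counter['queries'] += 1

        event.listen(engine, 'before_cursor_execute', _on_execute)
        try:
            yield counter
        finally:
            event.remove(engine, 'before_cursor_execute', _on_execute)


    engine = create_engine('sqlite://')  # placeholder for the real DB
    with count_queries(engine) as counted:
        engine.execute('SELECT 1')       # placeholder for an API operation
    assert counted['queries'] == 1

A regression would then show up as an increased query count, which stays stable across providers in a way that runtimes do not.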
Thanks, Clark From e0ne at e0ne.info Tue Mar 6 21:46:38 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 6 Mar 2018 23:46:38 +0200 Subject: [openstack-dev] [cinder][ptg] PTG Summary Now Available ... In-Reply-To: <2bceb48a-b643-66a1-e816-ec40b4a18ea8@gmail.com> References: <2bceb48a-b643-66a1-e816-ec40b4a18ea8@gmail.com> Message-ID: Jay, thanks a lot for this great summary! Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Mar 6, 2018 at 10:02 PM, Jay S Bryant wrote: > Team, > > I have collected all of our actions and agreements out of the three days > of etherpads into the following summary page: [1] . The etherpad includes > links to the original etherpads and video clips. > > I am planning to use the wiki to help guide our development during Rocky. > > Let me know if you have any questions or concerns over the content. > > Thanks! > > Jay > > (jungleboyj) > > [1] https://wiki.openstack.org/wiki/CinderRockyPTGSummary > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Mar 6 21:56:18 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 6 Mar 2018 21:56:18 +0000 (GMT) Subject: [openstack-dev] [api] [heat] microversion_parse.middleware.MicroversionMiddleware Message-ID: Last week at the PTG, during the API-SIG session, there was discussion of extracting the microversion handling middleware that is used in the placement service into the relatively small microversion-parse library. This is so people who want to adopt microversions (or change their implementation) can share some code. This evening I've got a working version of that, and would like some feedback (and a few other things as well). The code is in a stack starting with https://review.openstack.org/#/c/495356/ In total that stack of patches moves most of the microversion handling code out of placement and adapts it (with some caveats) to general use. As a sort of proof, there's also a nova patchset which shows the removed code. If you install the above stack into the checked out nova patchset, it works as expected. That nova change is at https://review.openstack.org/#/c/550265/ Right now the microversion-parse changes are pretty rough but I don't want to go too far down the road of cleaning them up if the approach is not going to work for people. Looking at the two different patchsets will make some of the current limitations more clear, but some that I'm aware of: * It wants to use webob, because that's how it started out. This is pretty easy to fix with one challenge being managing error formatting. * At the moment it is not yet set up to align with deployment strategies such as paste (it uses old school wsgi initialization and wrapping). Also pretty easy to fix. There are some weird boundaries between version info used by the application, and version info used by the middleware. In the case of placement, there's some code left in placement for managing different methods for different versions of requests to the same URL. This kind of thing would be pretty nice to have in a library, but the current implementation is very tied to the way placement does dispatch. For services that already have their own routing dispatch system, that's kind of a non-starter. 
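
For anyone who wants to experiment while reviewing, the piece that already exists in the released library is header parsing, roughly like this (a sketch; the header value here is made up):

    import microversion_parse

    # headers as a dict, the way the WSGI middleware would see them
    headers = {'openstack-api-version': 'placement 1.10'}

    version = microversion_parse.get_version(headers,
                                             service_type='placement')
    print(version)  # '1.10'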
Anyway, if this is a topic of interest to you, the code linked above is available for review and experimentation. If it turns out to be something people like I'll start the process of making a new release and getting that release into global requirements.

The other things I'm thinking about are:

* microversion-parse needs more cores. Right now there are only three, and two of those are unable to be super active in the community any more. If you are someone who has knowledge of microversions and WSGI middleware, look at the code, and let me know.

* If this code is going to be used outside of placement, it may make sense for it to go under the umbrella of oslo. I think we may have discussed that when the microversion-parse library was initially created and at the time I took a wait and see attitude. Is now the time? I don't know.

Thanks for your attention and feedback.

--
Chris Dent (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent

From mtreinish at kortar.org Tue Mar 6 22:00:59 2018
From: mtreinish at kortar.org (Matthew Treinish)
Date: Tue, 6 Mar 2018 17:00:59 -0500
Subject: [openstack-dev] [keystone] [infra] Post PTG performance testing needs
In-Reply-To: References: Message-ID: <20180306220059.GA3574@zeong.kortar.org>

On Tue, Mar 06, 2018 at 03:28:57PM -0600, Lance Bragstad wrote:
> Hey all,
>
> Last week during the PTG the keystone team sat down with a few of the
> infra folks to discuss performance testing. The major hurdle here has
> always been having dedicated hosts to use for performance testing,
> regardless of whether that's rally, tempest, or a home-grown script.
> Otherwise results vary wildly from run to run in the gate due to
> differences between providers or noisy neighbor problems.
>
> Opening up the discussion here because it sounded like some providers
> (mnaser, mtreinish) had some thoughts on how we can reserve specific
> hardware for these cases.
>
> Thoughts?

While I like being called a provider, I'm not really one. I was more trying to find a use case for my closet cloud [1], and was volunteering to open that up to external/infra use to provide dedicated hardware for consistent performance testing. That's still an option (I mean, the boxes are just sitting there not doing anything), and I'd gladly work with infra and keystone to get that working.

But, if mnaser and vexxhost have an alternative route with their real capacity and modern hardware, that's probably a better route to go.

-Matt Treinish

[1] https://blog.kortar.org/?p=380

>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:

From e0ne at e0ne.info Tue Mar 6 22:08:39 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 7 Mar 2018 00:08:39 +0200
Subject: [openstack-dev] [horizon][ptg] Horizon PTG Highlights
Message-ID:

Hi team,

First of all, I would like to say a thank you to all who were able to attend the PTG this time. We had very productive discussions with a great team.
Below is my short summary related to the etherpad [1]:

- Current blueprints and feature proposals:
  - we agreed to allow new blueprints and feature proposals during the dev cycle up until the Feature Freeze milestone [2]
  - this should help contributors who are interested in feature development to propose and implement new features for Horizon
- Bug and review list maintenance:
  - we made good progress on Launchpad bug list cleanup in Queens
  - it would be good to have Bug Triage days - I'll start to do it on a weekly basis
  - we created an etherpad for review priorities [3] - I'll review this list before the weekly meeting - feel free to add anything you think is important to merge soon - we can discuss this list at the IRC meeting if needed
- Should we stop rewriting existing panels with Angular?
  - there are a lot of concerns about re-writing current features with Angular JS - we have a lot of unimplemented features, yet we keep re-implementing current ones
  - it would be great to have new features implemented with Angular JS, but it's not a requirement at the moment
  - it seems we're OK not to block current patches that re-implement features with Angular JS, but we do not want to start new re-implementation patches
  - there is no final decision on this topic yet
- Fetch resources in parallel
  - we agreed to go forward with Eventlet by default and make it configurable to allow the native Python threads which are used now (a rough sketch follows at the end of this mail)
  - let's ask the community about their experience with Eventlet
  - Eventlet is not the best option for Python 3 at the moment
- Interaction between Horizon and other projects
  - project teams have trouble with Horizon integration
  - there is a feature gap between Horizon and other projects
  - Horizon would like to use project capabilities
  - we need to be more active in cross-project communications
  - Horizon needs to fix integration tests
  - Ironic UI team wants to have their integration tests based on Horizon tests
  - it would be good to have Horizon plugin jobs for each Horizon commit to be sure that we don't break anything
  - Heat team asked for help with new XStatic packages
- Current state of Horizon testing
  - we want to fix our Selenium and Integration tests - there is some progress on this
  - once the general integration test framework is ready, we can start fixing tests one by one
  - need to figure out why the tempest job is not stable enough
  - translations are not enabled in unit tests - having test cases with some non-default locale seems to be good - add an option to enable localization in unit tests
- Angular and XStatic package versions
  - testing and updating were done mostly manually by Radomir and Rob
  - we agreed to update XStatic packages in Rocky if they have versions suitable for Horizon and we have capacity for this
- Horizon accessibility
  - This initiative was started some time ago but isn't maintained now
- Error handling
  - We need better user-facing error messages
  - We don't log every exception, which makes it hard for operators to investigate what went wrong
- Bandit [4]
  - we're OK to add a bandit job like some other projects have

My general feeling is: we're trying to balance between bug-fixing/stabilization and new feature development with a limited number of resources.

And last, but not least, I want to say thank you to everybody who attends the PTG, does reviews or proposes patches.
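
The configurable fetch pool mentioned above could look roughly like this sketch (the names are invented for illustration, not actual Horizon code; real parallelism of blocking calls under Eventlet also requires monkey-patching):

    import eventlet
    from concurrent import futures


    def fetch_all(fetchers, use_eventlet=True):
        # Run independent API calls in parallel and collect their
        # results in order.
        if use_eventlet:
            pool = eventlet.GreenPool(size=len(fetchers))
            return list(pool.imap(lambda f: f(), fetchers))
        with futures.ThreadPoolExecutor(max_workers=len(fetchers)) as pool:
            return list(pool.map(lambda f: f(), fetchers))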
[1] https://etherpad.openstack.org/p/horizon-ptg-rocky
[2] https://releases.openstack.org/rocky/schedule.html#r-ff
[3] https://etherpad.openstack.org/p/horizon-reviews-priority
[4] https://github.com/openstack/bandit

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

From zigo at debian.org Tue Mar 6 22:55:15 2018
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 6 Mar 2018 23:55:15 +0100
Subject: [openstack-dev] [horizon][ptg] Horizon PTG Highlights
In-Reply-To: 
References: 
Message-ID: <5365e6ce-2521-72a1-6ca3-76e2f00d066f@debian.org>

On 03/06/2018 11:08 PM, Ivan Kolodyazhny wrote:
> * Angular and XStatic packages versions
>   o testing and updating were done mostly manually by Radomir and Rob
>   o we agreed to update XStatic packages in Rocky if they have
>     versions suitable for Horizon and we have the capacity for this

Just a quick input here. Having to upgrade JS libs which I don't maintain in Debian can be a long and painful process, especially for high-profile packages like libjs-jquery. I'd appreciate it a lot if I could get a ping before/when this happens, and this has to happen as early as possible in the cycle. Also, you don't *HAVE* to upgrade them *ALL* in a single cycle and give downstream package maintainers so much work! :)

Cheers,

Thomas Goirand (zigo)

From juliaashleykreger at gmail.com Tue Mar 6 23:21:33 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Tue, 6 Mar 2018 15:21:33 -0800
Subject: [openstack-dev] [horizon][ptg] Horizon PTG Highlights
In-Reply-To: 
References: 
Message-ID: 

On Tue, Mar 6, 2018 at 2:08 PM, Ivan Kolodyazhny wrote:
> Horizon needs to fix integration tests
>
> Ironic UI team wants to have their integration tests based on Horizon tests

Not exactly. There are multiple goals, with the central end goal of preventing breaking changes in horizon from breaking ironic-ui horribly. This gives Horizon improved feedback as to whether major changes are good or not, and reduces our overall cost to maintain.

What we want and intend is to get a working integration test job in the ironic-ui repository that is triggered when a change is proposed in the horizon repository.

Thanks!

-Julia

From zbitter at redhat.com Wed Mar 7 00:31:55 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Tue, 6 Mar 2018 19:31:55 -0500
Subject: [openstack-dev] [heat] tracking the mox removal work
Message-ID: 

I know how eager you all are to start replacing mox with mock, so I created a spreadsheet to help us avoid tripping over each other in our enthusiasm to get it done:

https://ethercalc.openstack.org/heat-mox-removal

Please add links to your patches in there as you create them. This will help us to ensure that we only convert each file once :) It's also a useful resource for seeing what needs reviews. Alternatively, so is this Gerrit query:

https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+intopic:%22%255Emox%255B-_%255Dremoval%22

Happy mocking.

cheers,
Zane.
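P.S. For anyone picking up one of these conversions for the first time, the shape of the change is usually something like the sketch below. This is a generic, self-contained illustration rather than actual heat code; the Stack class and its methods here are made up for the example.

import unittest
from unittest import mock


class Stack(object):
    """Stand-in for a real class under test (illustrative only)."""

    def validate(self):
        raise RuntimeError("expensive call we want to stub out")

    def create(self):
        # The code under test calls the collaborator we stub below.
        self.validate()
        return "CREATE_COMPLETE"


class StackCreateTest(unittest.TestCase):
    # The mox version being removed looked roughly like:
    #
    #     self.m.StubOutWithMock(Stack, 'validate')
    #     Stack.validate().AndReturn(None)
    #     self.m.ReplayAll()
    #     ... exercise the code ...
    #     self.m.VerifyAll()
    #
    def test_create_validates_first(self):
        with mock.patch.object(Stack, 'validate',
                               return_value=None) as mock_validate:
            stack = Stack()
            self.assertEqual("CREATE_COMPLETE", stack.create())
            # mock records calls, so explicit assertions replace
            # mox's implicit VerifyAll().
            mock_validate.assert_called_once_with()


if __name__ == '__main__':
    unittest.main()

The main gotcha is that mox verified every recorded expectation automatically at VerifyAll() time, while mock only checks what you explicitly assert, so remember to add the assert_called* calls when converting.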
From soulxu at gmail.com Wed Mar 7 02:21:58 2018
From: soulxu at gmail.com (Alex Xu)
Date: Wed, 7 Mar 2018 10:21:58 +0800
Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
In-Reply-To: <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com>
References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com>
Message-ID: 

2018-03-06 22:45 GMT+08:00 Mooney, Sean K :

> *From:* Matthew Booth [mailto:mbooth at redhat.com]
> *Sent:* Saturday, March 3, 2018 4:15 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
>
> On 2 March 2018 at 14:31, Jay Pipes wrote:
>
> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
>
> Hello Nova team,
>
> During the Cyborg discussion at the Rocky PTG, we proposed a flow for FPGAs wherein the request spec asks for a device type as a resource class, and optionally a function (such as encryption) in the extra specs. This does not seem to work well for the usage model that I’ll describe below.
>
> An FPGA device may implement more than one function. For example, it may implement both compression and encryption. Say a cluster has 10 devices of device type X, and each of them is programmed to offer 2 instances of function A and 4 instances of function B. More specifically, the device may implement 6 PCI functions, with 2 of them tied to function A, and the other 4 tied to function B. So, we could have 6 separate instances accessing functions on the same device.
>
> Does this imply that Cyborg can't reprogram the FPGA at all?
>
> *[Mooney, Sean K] cyborg is intended to support fixed-function accelerators also, so it will not always be able to program the accelerator. In this case, where an FPGA is preprogrammed with a multi-function bitstream that is statically provisioned, cyborg will not be able to reprogram the slot if any of the functions from that slot are already allocated to an instance. In this case it will have to treat it like a fixed-function device and simply allocate an unused VF of the correct type if available.*
>
> In the current flow, the device type X is modeled as a resource class, so Placement will count how many of them are in use. A flavor for ‘RC device-type-X + function A’ will consume one instance of the RC device-type-X. But this is not right because this precludes other functions on the same device instance from getting used.
>
> One way to solve this is to declare functions A and B as resource classes themselves and have the flavor request the function RC. Placement will then correctly count the function instances. However, there is still a problem: if the requested function A is not available, Placement will return an empty list of RPs, but we need some way to reprogram some device to create an instance of function A.
>
> Clearly, nova is not going to be reprogramming devices with an instance of a particular function.
>
> Cyborg might need to have a separate agent that listens to the nova notifications queue and upon seeing an event that indicates a failed build due to lack of resources, then Cyborg can try and reprogram a device and then try rebuilding the original request.
> It was my understanding from that discussion that we intend to insert Cyborg into the spawn workflow for device configuration in the same way that we currently insert resources provided by Cinder and Neutron. So while Nova won't be reprogramming a device, it will be calling out to Cyborg to reprogram a device, and waiting while that happens.
>
> My understanding is (and I concede some areas are a little hazy):
>
> * The flavor says device type X with function Y
> * Placement tells us everywhere with device type X
> * A weigher orders these by devices which already have an available function Y (where is this metadata stored?)
> * Nova schedules to host Z
> * Nova host Z asks cyborg for a local function Y and blocks
>   * Cyborg hopefully returns function Y which is already available
>   * If not, Cyborg reprograms a function Y, then returns it
>
> Can anybody correct me/fill in the gaps?
>
> *[Mooney, Sean K] that correlates closely to my recollection also. As for the metadata, I think the weigher may need to call to cyborg to retrieve this, as it will not be available in the host state object.*

Is it the nova scheduler weigher, or do we want to support weighing in placement? Functions are traits, I think, so can we have preferred_traits? I remember we talked about that parameter in the past, but we didn't have a good use-case at that time. This is a good use-case.

> Matt
>
> --
> Matthew Booth
> Red Hat OpenStack Engineer, Compute DFG
>
> Phone: +442070094448 <+44%2020%207009%204448> (UK)

From soulxu at gmail.com Wed Mar 7 02:36:56 2018
From: soulxu at gmail.com (Alex Xu)
Date: Wed, 7 Mar 2018 10:36:56 +0800
Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
In-Reply-To: 
References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com>
Message-ID: 

2018-03-07 10:21 GMT+08:00 Alex Xu :

> [snip -- the full quote of the previous message is trimmed here]
>
> Is it the nova scheduler weigher, or do we want to support weighing in
> placement? Functions are traits, I think, so can we have preferred_traits?
> I remember we talked about that parameter in the past, but we didn't have
> a good use-case at that time. This is a good use-case.

If we call Cyborg from the nova scheduler weigher, that will also slow down the scheduling a lot.
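To make that cost concrete, the weigher being discussed would look roughly like the sketch below. It assumes Nova's host weigher plugin interface (nova.scheduler.weights.BaseHostWeigher); the CyborgClient class, its available_functions() method and the accel:function extra spec are all hypothetical stand-ins, not real Cyborg or Nova APIs.

from nova.scheduler import weights


class CyborgClient(object):
    """Hypothetical stand-in for a Cyborg API client."""

    def available_functions(self, host, function_name):
        # Imagine an HTTP round-trip to the Cyborg API here. Because
        # _weigh_object() runs once per candidate host, a large cloud
        # pays this latency hundreds of times per scheduling request,
        # which is the concern raised above.
        return 0


class AcceleratorFunctionWeigher(weights.BaseHostWeigher):
    """Rank hosts by how many instances of the requested programmed
    function (e.g. function Y) are already available on each host."""

    def __init__(self):
        super(AcceleratorFunctionWeigher, self).__init__()
        self.cyborg = CyborgClient()

    def _weigh_object(self, host_state, weight_properties):
        wanted = weight_properties.flavor.extra_specs.get('accel:function')
        if not wanted:
            return 0.0
        return float(self.cyborg.available_functions(host_state.host, wanted))

Caching the per-host function availability (or reporting it to placement as a trait, as suggested) would avoid the per-host round-trips, at the price of the data being slightly stale.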
From renat.akhmerov at gmail.com Wed Mar 7 05:33:28 2018
From: renat.akhmerov at gmail.com (Renat Akhmerov)
Date: Wed, 7 Mar 2018 12:33:28 +0700
Subject: [openstack-dev] [mistral] Retiring the Mistral Wiki pages
In-Reply-To: 
References: 
Message-ID: <7e8d63a8-9dde-4ac9-99a1-6bafd59a6981@Spark>

Thanks Dougal, I'll also look at it to see what's still relevant.

Renat Akhmerov
@Nokia

On 6 Mar 2018, 21:24 +0700, Dougal Matthews , wrote:
> Hey folks,
>
> Mistral has several Wiki pages that rank highly in Google searches. However, most of them have not been updated in months (or years in many cases). I am therefore starting to remove these and direct people to the Mistral documentation. Where possible I will link them to the relevant documentation pages.
>
> I have taken the plunge and removed the main wiki [0] page. The old content is still accessible [1], just click on "Page" at the top left and then go to history.
>
> Over the next week or so I am going to read through the old wiki pages and see if there is any information that is still relevant and move it to the Mistral documentation. If you are aware of anything that is in the wiki, but not in the docs (and should be) then please submit a patch or open a bug.
>
> After we consolidate all of the information into the Mistral docs I hope to coordinate an effort to improve the documentation.
>
> Cheers,
> Dougal
>
> [0]: https://wiki.openstack.org/wiki/Mistral
> [1]: https://wiki.openstack.org/w/index.php?title=Mistral&oldid=152120

From renat.akhmerov at gmail.com Wed Mar 7 06:19:30 2018
From: renat.akhmerov at gmail.com (Renat Akhmerov)
Date: Wed, 7 Mar 2018 13:19:30 +0700
Subject: [openstack-dev] [mistral] Retiring the Mistral Wiki pages
In-Reply-To: <7e8d63a8-9dde-4ac9-99a1-6bafd59a6981@Spark>
References: <7e8d63a8-9dde-4ac9-99a1-6bafd59a6981@Spark>
Message-ID: 

IMO, these sections should be moved to the official docs in some form:

• FAQ - https://wiki.openstack.org/w/index.php?title=Mistral&oldid=99745#FAQ
• Actions design - https://wiki.openstack.org/wiki/Mistral/Blueprints/ActionsDesign
• Description of use cases:
  • https://wiki.openstack.org/wiki/Mistral/Long_Running_Business_Process
  • https://wiki.openstack.org/wiki/Mistral/Cloud_Cron_details

All of these pages are outdated to some degree (terms etc.) and need to be refreshed, but I think they contain a lot of interesting info that could help people understand Mistral better.

There's also a page (very much outdated!)
containing info about the Mistral team: https://wiki.openstack.org/wiki/Mistral/Team

Of course, it can't be reused, but I think it would be nice to have something like it in the official docs, maybe pointing directly to the relevant Stackalytics info.

Thanks

Renat Akhmerov
@Nokia

On 7 Mar 2018, 12:33 +0700, Renat Akhmerov , wrote:
> Thanks Dougal, I'll also look at it to see what's still relevant.
>
> [snip]

From tony at bakeyournoodle.com Wed Mar 7 06:53:54 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Wed, 7 Mar 2018 17:53:54 +1100
Subject: [openstack-dev] [Release-job-failures] Release of openstack/paunch failed
In-Reply-To: 
References: 
Message-ID: <20180307065352.GA24686@thor.bakeyournoodle.com>

On Wed, Mar 07, 2018 at 06:05:56AM +0000, zuul at openstack.org wrote:
> Build failed.
>
> - release-openstack-python http://logs.openstack.org/34/34e767fbcc4dd488c94023b5ee682dd2369db7bd/release/release-openstack-python/5edd302/ : FAILURE in 16m 43s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED

Looks like this was a generic timeout:

http://logs.openstack.org/34/34e767fbcc4dd488c94023b5ee682dd2369db7bd/release/release-openstack-python/5edd302/job-output.txt.gz#_2018-03-07_06_05_03_388522

Can we re-run these jobs, if they haven't been done already?

Yours Tony.

From sean.mcginnis at gmx.com Wed Mar 7 06:55:15 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 7 Mar 2018 00:55:15 -0600
Subject: [openstack-dev] Fwd: [Release-job-failures] Release of openstack/paunch failed
References: 
Message-ID: 

This appears to have been a network-related glitch.

'ReadTimeoutError("HTTPConnectionPool(host='mirror.gra1.ovh.openstack.org', port=80): Read timed out. (read timeout=60.0)",)'

I would guess it could just be rerun.
When someone from infra gets a chance, would you be able to reenqueue this job?

Thanks,
Sean

> Begin forwarded message:
>
> From: zuul at openstack.org
> Subject: [Release-job-failures] Release of openstack/paunch failed
> Date: March 7, 2018 at 00:05:56 CST
> To: release-job-failures at lists.openstack.org
> Reply-To: openstack-dev at lists.openstack.org
>
> Build failed.
>
> - release-openstack-python http://logs.openstack.org/34/34e767fbcc4dd488c94023b5ee682dd2369db7bd/release/release-openstack-python/5edd302/ : FAILURE in 16m 43s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
>
> _______________________________________________
> Release-job-failures mailing list
> Release-job-failures at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

From a.chadin at servionica.ru Wed Mar 7 08:24:35 2018
From: a.chadin at servionica.ru (Alexander Chadin)
Date: Wed, 7 Mar 2018 08:24:35 +0000
Subject: [openstack-dev] [watcher] weekly meeting is cancelled
Message-ID: <60FD34E4-EE12-46C2-B3B1-3C3BD5B3607A@servionica.ru>

We will not be holding a weekly meeting this time since people are experiencing some jet lag. Let's meet on March 14 at 08:00 UTC as usual.

Best Regards,
____
Alex

From dougal at redhat.com Wed Mar 7 09:28:42 2018
From: dougal at redhat.com (Dougal Matthews)
Date: Wed, 7 Mar 2018 09:28:42 +0000
Subject: [openstack-dev] [mistral] PTG Summary
Message-ID: 

Hey Mistralites (maybe?),

I have been through the etherpad from the PTG and attempted to expand on the topics with details that I remember. If I have missed anything or you have any questions, please get in touch. I want to update it while the memory is as fresh as possible.

For each main topic I have added a "champion" and a "goal". These are not all complete yet and can be adjusted. I did add names next to champion for people that discussed that topic at the PTG. The goal should summarise what we need to do.

Note: "Champion" does not mean you need to do all the work - just that you are leading that effort and helping rally people around the issue. Essentially it is a collaboration role, but you can still lead the implementation if that makes sense. For example, I put myself as the Documentation champion. I do not plan on writing all the documentation; rather, I want to set up better foundations and a better process for writing documentation. This will likely be a team effort I need to coordinate.

Etherpad:
https://etherpad.openstack.org/p/mistral-ptg-rocky

Thanks everyone for coming, I think it was a useful week. It was unfortunate that the "Beast from the East" (the weather, not Renat!) stopped things a bit early on Thursday. I hope all your homeward travels worked out in the end.

Cheers,
Dougal

From rico.lin.guanyu at gmail.com Wed Mar 7 09:40:43 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Wed, 7 Mar 2018 17:40:43 +0800
Subject: [openstack-dev] [heat] weekly meeting is cancelled
Message-ID: 

Hi team,

As we have just gotten back from #*SnowpenStack* (PTG), let's skip the meeting this week.

Here are the sessions that we discussed at the PTG, so if you would like to add some input, now is the time (try to leave your name, so we might know who it is).
https://etherpad.openstack.org/p/heat-rocky-ptg

--
May The Force of OpenStack Be With You,
*Rico Lin* irc: ricolin

From aspiers at suse.com Wed Mar 7 10:20:58 2018
From: aspiers at suse.com (Adam Spiers)
Date: Wed, 7 Mar 2018 10:20:58 +0000
Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream
In-Reply-To: <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com>
References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com>
Message-ID: <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall>

Raoul Scarazzini wrote:
>On 06/03/2018 13:27, Adam Spiers wrote:
>> Hi Raoul and all,
>> Sorry for joining this discussion late!
>[...]
>> I do not work on TripleO, but I'm part of the wider OpenStack
>> sub-communities which focus on HA[0] and more recently,
>> self-healing[1].  With that hat on, I'd like to suggest that maybe
>> it's possible to collaborate on this in a manner which is agnostic to
>> the deployment mechanism.  There is an open spec on this:
>>    https://review.openstack.org/#/c/443504/
>> which was mentioned in the Denver PTG session on destructive testing
>> which you referenced[2].
>[...]
>>    https://www.opnfv.org/community/projects/yardstick
>[...]
>> Currently each sub-community and vendor seems to be reinventing HA
>> testing by itself to some extent, which is easier to accomplish in the
>> short-term, but obviously less efficient in the long-term.  It would
>> be awesome if we could break these silos down and join efforts! :-)
>
>Hi Adam,
>First of all thanks for your detailed answer. Then let me be honest
>while saying that I didn't know yardstick.

Neither did I until Sydney, despite being involved with OpenStack HA for many years ;-)  I think this shows that either a) there is room for improved communication between the OpenStack and OPNFV communities, or b) I need to take my head out of the sand more often ;-)

>I need to start from scratch
>here to understand what this project is. In any case, the exact meaning
>of this thread is to involve people and have a more comprehensive look
>at what's around.
>The point here is that, as you can see from the tripleo-ha-utils spec
>[1] I've created, the project is meant for TripleO specifically. On one
>side this is a significant limitation, but on the other one, due to the
>pluggable nature of the project, I think that integrations with other
>software like you are proposing is not impossible.

Yep.  I totally sympathise with the tension between the need to get something working quickly, vs. the need to collaborate with the community in the most efficient way.

>Feel free to add your comments to the review.

The spec looks great to me; I don't really have anything to add, and I don't feel comfortable voting in a project which I know very little about.

>In the meantime, I'll check yardstick to see which kind of bridge we
>can build to avoid reinventing the wheel.

Great, thanks!  I wish I could immediately help with this, but I haven't had the chance to learn yardstick myself yet.  We should probably try to recruit someone from OPNFV to provide advice.
I've cc'd Georg who IIRC was the person who originally told me about yardstick :-)  He is an NFV expert and is also very interested in automated testing efforts:

http://lists.openstack.org/pipermail/openstack-dev/2017-November/124942.html

so he may be able to help with this architectural challenge.

Also you should be aware that work has already started on Eris, the extreme testing framework proposed in this user story:

http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/openstack_extreme_testing.html

and in the spec you already saw:

https://review.openstack.org/#/c/443504/

You can see ongoing work here:

https://github.com/LCOO/eris
https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/13393034/Eris+-+Extreme+Testing+Framework+for+OpenStack

It looks like there is a plan to propose a new SIG for this, although personally I would be very happy to see it adopted by the self-healing SIG, since this framework is exactly what is needed for testing any self-healing mechanism.  I'm hoping that Sampath and/or Gautum will chip in here, since I think they're currently the main drivers for Eris.

I'm beginning to think that maybe we should organise a video conference call to coordinate efforts between the various interested parties.  If there is appetite for that, the first question is: who wants to be involved?  To answer that, I have created an etherpad where interested people can sign up:

https://etherpad.openstack.org/p/extreme-testing-contacts

and I've cc'd people who I think would probably be interested.

Does this sound like a good approach?

Cheers,
Adam

From cdent+os at anticdent.org Wed Mar 7 12:12:12 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 7 Mar 2018 12:12:12 +0000 (GMT)
Subject: [openstack-dev] [tc] [all] TC Report 18-10
Message-ID: 

HTML: https://anticdent.org/tc-report-18-10.html

This is a TC Report, but since everything that happened in its window of observation is preparing for the [PTG](https://www.openstack.org/ptg), being at the PTG, trying to get home from the PTG, and recovering from the PTG, perhaps think of this as "What the TC talked about [at] the PTG". As it is impossible to be everywhere at once (especially when the board meeting overlaps with other responsibilities) this will miss a lot of important stuff. I hope there are other summaries.

As you may be aware, it [snowed in Dublin](https://twitter.com/search?q=%23snowpenstack) causing plenty of disruption to the [PTG](https://twitter.com/search?q=%23openstackptg) but everyone (foundation staff, venue staff, hotel staff, attendees, uisce beatha) worked together to make a good week.

# Talking about the PTG at the PTG

At the [board meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html), the future of the PTG was a big topic. As currently constituted it presents some challenges:

* It is difficult for some people to attend because of visa and other travel-related issues.
* It is expensive to run and not everyone is convinced of the return on investment.
* Some people don't like it (they either miss the old way of doing the design summit, or midcycles, or $OTHER).
* Plenty of other reasons that I'm probably not aware of.

This same topic was reviewed at [yesterday's office hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:19:32). For now, the next 2018 PTG is going to happen (destination unknown) but plans for 2019 are still being discussed.
If you have opinions about the PTG, there will be an opportunity to express them in a forthcoming survey. Beyond that, however, it is important [that management at contributing companies](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:29:24) hear from more people (notably their employees) than the foundation about the value of the PTG.

My own position is that of the three different styles of in-person events for technical contributors to OpenStack that I've experienced (design summit, mid-cycles, PTG), the PTG is the best yet. It minimizes distractions from other obligations (customer meetings, presentations, marketing requirements) while maximizing cross-project interaction.

One idea, discussed [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:02:24) and [earlier today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T05:07:20) was to have the PTG be open to technical participants of any sort, not just so-called "OpenStack developers". Make it more of a place for people who hack on and with OpenStack to hack and talk. Leave the summit (without a forum) for presentations, marketing, pre-sales, etc.

An issue raised with conflating the PTG and the Forum is that it would remove the [inward/outward focus](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T08:20:17) concept that is supposed to distinguish the two events. I guess it depends on how we define "we" but I've always assumed that both events were for outward focus and that for any inward-focussing effort we ought to be able to use asynchronous tools more.

# Foundation and OCI

Thierry mentioned [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:08:04) that it is likely that the OpenStack Foundation will join the [Open Container Initiative](https://www.opencontainers.org/) because of [Kata](https://katacontainers.io/) and [LOCI](https://governance.openstack.org/tc/reference/projects/loci.html).

This segued into some brief concerns about the [attentions and intentions of the Foundation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:13:34), aggravated by the board meeting schedule conflict (there's agreement that will never ever happen again), and the rumor milling about the PTG.

# Friday at the PTG with the TC

The TC had scheduled a half day of discussion for Friday at the PTG. A big [agenda](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics), a fun-filled week, and the snow meant we went nearly all day (and since there's no place to go, let's talk, let's talk, let's talk) with some reasonable progress. Some highlights:

* There was some discussion on trying to move forward with the constellations concept, but I don't recall specific outcomes from that discussion.
* The team diversity tags need to be updated to reflect adjustments in the very high bars we set earlier in the history of OpenStack. We agreed to not remove projects from the tc-approved tag, as that could be taken the wrong way. Instead we'll create a new tag for projects that are in the trademark program.
* Rather than having Long Term Support, which implies too much, a better thing to do is enable [extended maintenance](https://review.openstack.org/#/c/548916/) for those parties who want to do it.
* Heat was approved to be a part of the trademark program, but then there were issues with where to put their tests and the tooling used to manage them. By the power of getting the right people in the room at the same time, we reached some consensus which is being finalized on a [proposed resolution](https://review.openstack.org/#/c/521602/).
* We need to make an official timeline for the deprecation (and eventual removal) of support for Python 2, meaning we also need to accelerate the adoption of Python 3 as the primary environment.
* In a discussion about the availability of [etcd](https://coreos.com/etcd/) it was decided that [tooz needs to be finished](https://docs.openstack.org/tooz/latest/user/compatibility.html).

See the [etherpad](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics) for additional details.

--
Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent

From balazs.gibizer at ericsson.com Wed Mar 7 12:12:25 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Wed, 7 Mar 2018 13:12:25 +0100
Subject: [openstack-dev] [nova] Notification update week 10 (PTG)
Message-ID: <1520424745.7809.1@smtp.office365.com>

Hi,

Here is the status update / focus settings mail for w10. We discussed a couple of new notification-related changes during the PTG. I tried to mention all of them below, but if I missed something then please extend my list.

Bugs
----

[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending notification during attach_interface
Fix merged. The backport for ocata is still open: https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields
No progress. We still need to understand how this problem happens to find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038 nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

[Wishlist] https://bugs.launchpad.net/nova/+bug/1639152 Send out notification about server group changes when delete instances
It was discussed at the Rocky PTG and agreed to do this. A new specless bp has been created to track the effort: https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
The bp is assigned to Takashi.

Versioned notification transformation
-------------------------------------
We already have some patches proposed to the rocky bp. I will go and review them this week.
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open

Introduce instance.lock and instance.unlock notifications
---------------------------------------------------------
The bp https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances is approved. Waiting for the implementation to be proposed.

Add the user id and project id of the user who initiated the instance action to the notification
-----------------------------------------------------------------
The bp https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications is approved.
Implementation patch exists but still needs work: https://review.openstack.org/#/c/526251/

Add request_id to the InstanceAction versioned notifications
------------------------------------------------------------
The bp https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications is approved and assigned to Keving_Zheng.

Sending full traceback in versioned notifications
-------------------------------------------------
At the PTG we discussed the need to send full tracebacks in error notifications. I will go and dig out why we decided not to send the full traceback when we created the versioned notifications.

Add versioned notifications for removing a member from a server group
---------------------------------------------------------------------
The specless bp https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications is proposed and it looks good to me.

Factor out duplicated notification sample
-----------------------------------------
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches, but I would like to progress with this through the Rocky cycle.

Weekly meeting
--------------
The next meeting will be held on the 13th of March in #openstack-meeting-4:
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180313T170000

Cheers,
gibi

From dougal at redhat.com Wed Mar 7 12:34:16 2018
From: dougal at redhat.com (Dougal Matthews)
Date: Wed, 7 Mar 2018 12:34:16 +0000
Subject: [openstack-dev] [mistral] PTG Summary
In-Reply-To: 
References: 
Message-ID: 

On 7 March 2018 at 09:28, Dougal Matthews wrote:
> Hey Mistralites (maybe?),
>
> I have been through the etherpad from the PTG and attempted to expand on
> the topics with details that I remember.
>
> [snip]
>
> Etherpad:
> https://etherpad.openstack.org/p/mistral-ptg-rocky

I forgot to add, if you were unable to attend the PTG or have anything else you want to add/discuss then please let us know.

> Thanks everyone for coming, I think it was a useful week. It was
> unfortunate that the "Beast from the East" (the weather, not Renat!)
> stopped things a bit early on Thursday. I hope all your homeward travels
> worked out in the end.
>
> Cheers,
> Dougal
From gmann at ghanshyammann.com Wed Mar 7 12:40:25 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 7 Mar 2018 21:40:25 +0900
Subject: [openstack-dev] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project
Message-ID: 

Hi All,

The QA team had a discussion at the Dublin PTG about the location of interop add-on tests. First of all, thanks to everyone (especially markvoelker, dhellmann, mugsie) for joining the sessions, and I am glad we concluded things and agreed on a solution.

The discussion carried forward from the ML discussion [1] and aimed at getting agreement about where the interop add-on program tests should live.

So far only 2 projects (heat and designate) are in the list of add-on programs on the interop side. After discussion and points from all stakeholders, the QA team agreed to host these 2 projects' interop tests. Neither project has many tests as of now, and the QA team can accommodate hosting their interop tests.

Along with that agreement, we had a few more technical points to consider while moving the designate and heat interop tests into the Tempest repo. All interop tests to be added to Tempest must be Tempest-like tests, which here means tests written using Tempest interfaces and guidelines. For example, heat has their tests in heat-tempest-plugin based on gabbi, and to move the heat interop tests to Tempest those have to be rewritten as Tempest-like tests. This is because if we accept non-Tempest-like tests in Tempest, it will be too difficult for the Tempest team to maintain them.

The project teams (designate and heat) and the QA team will work closely to move the interop tests to the Tempest repo, which might need some extra work to standardize their tests and the interfaces they use, like service clients etc.

In future, if there are more new interop add-on program proposals, we need to analyse the situation again regarding QA team bandwidth. The TC, QA or interop team needs to raise the resource requirement to the Board of Directors before any more new add-on programs are proposed. If the QA team has fewer resources and less review bandwidth, then we cannot accept more interop programs until QA gets more resources to maintain the new interop tests.

Overall Summary:
- The QA team agreed to host the interop tests for heat and designate in the Tempest repo.
- The existing TC resolution needs to be adjusted regarding the QA team resource bandwidth requirement. If there are going to be more add-on program proposals, the QA team will not accept the new interop tests if the QA team bandwidth issue still exists at that time.
- Tempest will document a clear process for interop test additions and the other items to take care of, etc.
- Project teams are to bring their tests and interfaces up to Tempest-like test and stable interface standards. The Tempest team will work closely with and help Designate and Heat on this.

Action Items:
- mugsie to abandon https://review.openstack.org/#/c/521602 with a quick summary of the discussion here at the PTG
- markvoelker to write up a clarification to the InteropWG process stating that tests should be moved into Tempest before being proposed to the BoD
- markvoelker to work with gmann before the next InteropWG+BoD discussion to frame up a note about resourcing testing for add-on/vertical programs
- dhellmann to adjust the TC resolution about the resource requirement on QA when a new add-on program is being proposed
- project teams to convert their interop tests and framework to Tempest-like tests and propose adding them to the tempest repo
- gmann to define the process in QA for interop test addition and maintenance

We have added this as one of the monitoring/helping items for QA to make sure it is done without delay. Let's work together to finish this activity.

Discussion Details: https://etherpad.openstack.org/p/qa-rocky-ptg-Interop-test-for-adds-on-project

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html

-gmann

From ramishra at redhat.com Wed Mar 7 13:01:05 2018
From: ramishra at redhat.com (Rabi Mishra)
Date: Wed, 7 Mar 2018 18:31:05 +0530
Subject: [openstack-dev] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project
In-Reply-To: 
References: 
Message-ID: 

On Wed, Mar 7, 2018 at 6:10 PM, Ghanshyam Mann wrote:
> [snip]
>
> Projects (designate and heat) and QA team will work closely to move
> interop tests to Tempest repo which might needs some extra work of
> standardizing their tests and interface used by them like service clients
> etc.

Though I've not been part of any of these discussions, this seems to be exactly the opposite of what I've been given to understand by the team, i.e. Heat is not rewriting the gabbi api tests used by the Trademark program, but would create a new tempest plugin (new repo 'orchestration-trademark-tempest-plugin') to host the heat-related tests that are currently candidates for the Trademark program?

> [snip]

--
Regards,
Rabi Mishra

From pkovar at redhat.com Wed Mar 7 13:02:30 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Wed, 7 Mar 2018 14:02:30 +0100
Subject: [openstack-dev] [docs] Documentation meeting canceled
Message-ID: <20180307140230.8de616afd4e7e4f364eb006a@redhat.com>

Hi all,

Canceling today's docs meeting as there is not much to share beyond what was in the PTG summary I sent. As always, we're in #openstack-doc if you want to talk to us!

Thanks,
pk

From cdent+os at anticdent.org Wed Mar 7 13:07:33 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 7 Mar 2018 13:07:33 +0000 (GMT)
Subject: [openstack-dev] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project
In-Reply-To: 
References: 
Message-ID: 

On Wed, 7 Mar 2018, Rabi Mishra wrote:
>> Projects (designate and heat) and QA team will work closely to move
>> interop tests to Tempest repo which might needs some extra work of
>> standardizing their tests and interface used by them like service clients
>> etc.
>
> Though I've not been part of any of these discussions, this seems to be
> exactly the opposite of what I've been given to understand by the team,
> i.e. Heat is not rewriting the gabbi api tests used by the Trademark
> program, but would create a new tempest plugin (new repo
> 'orchestration-trademark-tempest-plugin') to host the heat-related tests
> that are currently candidates for the Trademark program?
There was additional discussion on Friday with people from the TC, trademark program, heat and QA that resulted in the plan you describe, which is being codified at:

https://review.openstack.org/#/c/521602/

--
Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent

From fungi at yuggoth.org Wed Mar 7 13:12:29 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 7 Mar 2018 13:12:29 +0000
Subject: [openstack-dev] [horizon][ptg] Horizon PTG Highlights
In-Reply-To: 
References: 
Message-ID: <20180307131228.3g2k6yallt2qzofu@yuggoth.org>

On 2018-03-07 00:08:39 +0200 (+0200), Ivan Kolodyazhny wrote:
[...]
> - we agreed to go forward with Eventlet by default and make it
>   configurable to allow the native Python threads which are used now
> - let's ask the community about their experience with Eventlet
> - Eventlet is not the best option for Python 3 at the moment
[...]

There was a discussion[*] during TC office hours three weeks ago wherein we rehashed a general desire to see eventlet usage decline within OpenStack services (we recognize that the volunteer workforce needed to rearchitect existing eventlet-using services simply doesn't exist, though it was suggested in jest as a potential community goal). At a minimum, there seemed to be some consensus that we should strongly discourage new uses of eventlet because its stdlib monkey-patching has created all manner of incompatibilities with other libraries in the past. Most recently it seems to be hampering etcd adoption, which we had as a community previously agreed on using to provide a consistent DLM implementation across projects.

[*] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-15.log.html#t2018-02-15T15:12:44

--
Jeremy Stanley

From andrea.frittoli at gmail.com Wed Mar 7 13:15:20 2018
From: andrea.frittoli at gmail.com (Andrea Frittoli)
Date: Wed, 07 Mar 2018 13:15:20 +0000
Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project
In-Reply-To: 
References: 
Message-ID: 

On Wed, Mar 7, 2018 at 12:42 PM Ghanshyam Mann wrote:
> Hi All,
>
> QA had discussion in Dublin PTG about interop adds-on tests location.
>
> [snip]
>
> For example, heat has their tests in heat-tempest-plugin based
> on gabbi and to move heat interop tests to Tempest those have to be written
> as Tempest like test.
> This is because if we accept non-tempest like tests
> in Tempest then, it will be too difficult to maintain by Tempest team.
>
> [snip]
>
> Overall Summary:
> - QA team agreed to host the interop tests for heat and designate in
>   Tempest repo.
> - Existing TC resolution needs to be adjust about the QA team resource
>   bandwidth requirement. If there is going to be more adds-on program
>   proposal then, QA team will not accept the new interop tests if QA team
>   bandwidth issue still exist that time also.
> - Tempest will document the clear process about interop tests addition and
>   other more care items etc.
> - Projects team to make their tests and interface as Tempest like tests
>   and stable interfaces standards. Tempest team will closely work and help
>   Designate and Heat on this.

Thanks for the summary Ghanshyam!

We had some follow-up discussion on Friday about this, after the Heat team expressed their concern about proceeding with the plan we discussed during the session on Wednesday. A group of representatives of the Heat, Designate and Interop teams met with the TC and agreed on reviving the resolution started by mugsie in https://review.openstack.org/#/c/521602 to add an alternative to hosting tests in the Tempest repo. Unfortunately I was only there for the last few minutes of the meeting, but I understand that the proposal drafted there was to allow teams to have interop-specific Tempest plugins co-owned by the QA/Interop/add-on project teams. mugsie has updated the resolution accordingly and I think the discussion on that can continue in gerrit directly.

Just to clarify, nothing has been decided yet, but at least the new proposal was received positively by all parties involved in the discussion on Friday.

> Action Items:
> - mugsie to abandon https://review.openstack.org/#/c/521602 with quick
>   summary of discussion here at PTG

This is not valid anymore, we should discuss this further and hopefully reach an agreement.

> - markvoelker to write up clarification to InteropWG process stating that
>   tests should be moved into Tempest before being proposed to the BoD
> - markvoelker to work with gmann before next InteropWG+BoD discussion to
>   frame up a note about resourcing testing for add-on/vertical programs
> - dhellmann to adjust the TC resolution for resource requirement in QA
>   when new adds-on program is being proposed
> - project teams to convert interop test and framework as per tempest
>   like tests and propose to add to tempest repo.

If the new resolution is agreed on, this will become one of the options.

> - gmann to define process in QA about interop tests addition and
>   maintainance

This is still an option so you may still want to do it.

Andrea Frittoli (andreaf)

> We have added this as one of the monitoring/helping item for QA to make
> sure it is done without delay. Let's work together to finish this
> activity.
> > Discussion Details: > https://etherpad.openstack.org/p/qa-rocky-ptg-Interop-test-for-adds-on-project > > ..1 > http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html > > > -gmann > _______________________________________________ > Interop-wg mailing list > Interop-wg at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/interop-wg > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From fungi at yuggoth.org Wed Mar 7 13:17:42 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 7 Mar 2018 13:17:42 +0000 Subject: [openstack-dev] Fwd: [Release-job-failures] Release of openstack/paunch failed In-Reply-To: References: Message-ID: <20180307131742.d5i6ml4d2gxmhc6a@yuggoth.org>

On 2018-03-07 00:55:15 -0600 (-0600), Sean McGinnis wrote: [...] > When someone from infra gets a chance, would you be able to > reenqueue this job?

I'm just now catching up on E-mail, but I reenqueued this tag at 11:55 UTC after Thierry brought it to my attention in the #openstack-release channel. -- Jeremy Stanley

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From jaypipes at gmail.com Wed Mar 7 13:21:32 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 7 Mar 2018 08:21:32 -0500 Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com> Message-ID:

On 03/06/2018 09:36 PM, Alex Xu wrote: > 2018-03-07 10:21 GMT+08:00 Alex Xu: > > 2018-03-06 22:45 GMT+08:00 Mooney, Sean K: > > *From:* Matthew Booth [mailto:mbooth at redhat.com] > *Sent:* Saturday, March 3, 2018 4:15 PM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions > > On 2 March 2018 at 14:31, Jay Pipes wrote: > > On 03/02/2018 02:00 PM, Nadathur, Sundar wrote: > > Hello Nova team, > > During the Cyborg discussion at the Rocky PTG, we > proposed a flow for FPGAs wherein the request spec asks > for a device type as a resource class, and optionally a > function (such as encryption) in the extra specs. This > does not seem to work well for the usage model that I'll > describe below. > > An FPGA device may implement more than one function. For > example, it may implement both compression and > encryption. Say a cluster has 10 devices of device type > X, and each of them is programmed to offer 2 instances > of function A and 4 instances of function B. More > specifically, the device may implement 6 PCI functions, > with 2 of them tied to function A, and the other 4 tied > to function B. So, we could have 6 separate instances > accessing functions on the same device. > > Does this imply that Cyborg can't reprogram the FPGA at all? > > [Mooney, Sean K] Cyborg is intended to support fixed-function > accelerators also, so it will not always be able to program the > accelerator. In the case where an FPGA is preprogrammed with a > multi-function bitstream that is statically provisioned, Cyborg > will not be able to reprogram the slot if any of the functions > from that slot are already allocated to an instance.
In this > case it will have to treat it like a fixed-function device and > simply allocate an unused VF of the correct type, if available. > > In the current flow, the device type X is modeled as a > resource class, so Placement will count how many of them > are in use. A flavor for 'RC device-type-X + function A' > will consume one instance of the RC device-type-X. But > this is not right because this precludes other functions > on the same device instance from getting used. > > One way to solve this is to declare functions A and B as > resource classes themselves and have the flavor request > the function RC. Placement will then correctly count the > function instances. However, there is still a problem: > if the requested function A is not available, Placement > will return an empty list of RPs, but we need some way > to reprogram some device to create an instance of > function A. > > Clearly, nova is not going to be reprogramming devices with > an instance of a particular function. > > Cyborg might need to have a separate agent that listens to > the nova notifications queue and upon seeing an event that > indicates a failed build due to lack of resources, then > Cyborg can try and reprogram a device and then try > rebuilding the original request. > > It was my understanding from that discussion that we intend to > insert Cyborg into the spawn workflow for device configuration > in the same way that we currently insert resources provided by > Cinder and Neutron. So while Nova won't be reprogramming a > device, it will be calling out to Cyborg to reprogram a device, > and waiting while that happens. > > My understanding is (and I concede some areas are a little > hazy): > > * The flavor says device type X with function Y > > * Placement tells us everywhere with device type X > > * A weigher orders these by devices which already have an > available function Y (where is this metadata stored?) > > * Nova schedules to host Z > > * Nova host Z asks cyborg for a local function Y and blocks > > * Cyborg hopefully returns function Y, which is already > available > > * If not, Cyborg reprograms a function Y, then returns it > > Can anybody correct me/fill in the gaps? > > [Mooney, Sean K] That correlates closely with my recollection > also. As for the metadata, I think the weigher may need to call > out to cyborg to retrieve this, as it will not be available in the > host state object. > > Is it the nova scheduler weigher, or do we want to support weighing in > placement? Functions are traits, I think, so can we have > preferred_traits? I remember we talked about that parameter in the > past, but we didn't have a good use-case at that time. This is a good > use-case. > > If we call Cyborg from the nova scheduler weigher, that will also slow > down the scheduling a lot.

Right, which is why I don't want to do any weighing in Placement at all. If folks want to sort by things that require long-running code/callbacks or silly temporal things like metrics, they can do that in a custom weigher in the nova-scheduler and take the performance hit there. Best, -jay

From gmann at ghanshyammann.com Wed Mar 7 13:32:40 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 7 Mar 2018 22:32:40 +0900 Subject: [openstack-dev] [PTG][QA] QA PTG Rocky Summary Message-ID:

Hi All, First of all, thanks for joining the Rocky PTG and making it really productive and successful. I am writing the QA PTG summary.
We started assigning an 'owner' for each working item so that we have a single point of contact to track it. That will help us get each priority item completed on time.

1. Queens Retrospective --------------------------------------------------------- We discussed the Queens retrospective at the start of the PTG. We went through 1. what went well and 2. what needs to improve, and gathered some concrete action items.

Action Items: - chandankumar: Use the newly created tempest plugin jobs for stable branches for other projects. - felipemonteiro: stable branch jobs need to be done for Patrole for the in-repo zuul gate. - masayukig: Mail the ML about abandoning inactive patches, then put a comment and record the list, then abandon them - gmann: will start an etherpad and ML thread to notify the projects using plugins about best practices and to improve the interfaces used by them. - gmann: will check with SamP on progress on destructive HA testing. - mguiney: will start to put some unit tests in place for the CLIs

We will be tracking the above action items in our QA meeting so that we really work on them and do not forget them now that the PTG is finished.

Owner: gmann Etherpad link: https://etherpad.openstack.org/p/qa-queens-retrospective

2. Zuul v3 native jobs ----------------------------------------------------------- andreaf explained the devstack and tempest base jobs and the migration of jobs. That was a really helpful and good learning session. The basic idea is to finish the devstack and tempest base jobs so that they are available for project-specific jobs. We decided to have only 2 devstack base jobs: 1. a base abstract job and 2. a base job for single-node and multinode jobs. Inheriting jobs can adjust the single-node or multinode setup with the nodeset var.

Action Items: - andreaf to merge the current hierarchy of devstack jobs to make a single job for single-node and multinode jobs.

Owner: andreaf Etherpad link: https://etherpad.openstack.org/p/qa-rocky-ptg-zuul-v3-native-jobs

3. Cold upgrades capabilities (Rocky community goal) --------------------------------------------------------------------------- This is not a Rocky goal now, but we did talk about it a little bit. We discussed preparation for grenade plugin development. Masayuki will check whether we have enough documentation and a written process for implementing the plugins.

Action Items: - masayukig to check whether the current documentation about grenade plugins is enough for projects to implement plugins.

Owner: masayukig Etherpad link: https://etherpad.openstack.org/p/qa-rocky-ptg-cold-upgrades-capabilities

4. Interop test for adds-on project ------------------------------------------------------------------------- I sent a separate detailed mail on this with the outcomes and action items. Please refer to - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127994.html

5. Remove Deprecated APIs tests from Tempest ------------------------------------------------------------------------- We talked about the testing of deprecated APIs in Tempest and on stable branches. We concluded that we should test all deprecated APIs on master as well as on all stable branches; until an API is actually removed, we should keep testing it whatever its state. There are a few APIs, like glance v1 and keystone admin v2, which are being skipped now, so we are going to enable those API tests on each corresponding stable and master branch. Volume API testing will work a little differently: we will test v3 as the default in all jobs with all existing tests, and the v2 APIs will be tested with a new job running all the current tests against v2 endpoints on the tempest and cinder CI (a short sketch of why the same tests can serve both follows below).
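To make the "same tests, different endpoint" idea concrete, here is a rough sketch of a Tempest-like volume test (illustrative only - the class name and test id are invented for this example, although BaseVolumeTest, create_volume() and show_volume() are real Tempest interfaces):

    from tempest.api.volume import base
    from tempest.lib import decorators


    class VolumesSmokeTest(base.BaseVolumeTest):

        @decorators.idempotent_id('8c64f3b2-3e5a-4d21-9c6e-5b3a2f1d0e9a')
        def test_volume_create_then_show(self):
            # create_volume() and volumes_client talk to whichever volume
            # endpoint (v2 or v3) the job configured, so the test body
            # itself is version-agnostic.
            volume = self.create_volume()
            fetched = self.volumes_client.show_volume(volume['id'])['volume']
            self.assertEqual(volume['id'], fetched['id'])

Because nothing in the test hardcodes an API version, pointing the service clients at the v2 endpoint in the new job should be purely a configuration change.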
Action Items: - gmann to make all glance v2 tests run in all jobs - gmann to make keystone v2 admin tests run in all jobs - gmann to make volume tests test v3 as the default and set up the new v2 job on tempest and cinder

Owner: gmann Etherpad link: https://etherpad.openstack.org/p/qa-rocky-ptg-remove-deprecated-apis-tests

6. Backlogs from Queens Cycle ------------------------------------------------------------------------- We went through the backlog items of the Queens release which we discussed at the Denver PTG but did not complete. We picked up the items which we still want to do but which need volunteers to take them on. I will publish those items to the ML and find volunteers if I can.

Action Items: - gmann to send the backlog items to the ML to find volunteers.

Owner: Need volunteers to pick up the items Etherpad link: https://etherpad.openstack.org/p/qa-rocky-ptg-queens-backlogs

7. Consuming Kolla tempest container source image in CI ----------------------------------------------------------------------------- The Kolla tempest image ships Tempest inside a container. We can use it to test both the image and the process of creating the image. For that, we can add jobs on the tempest and kolla CI that use the Kolla Tempest image and run a few or more tempest tests.

Action Items: - chandankumar to add a non-voting job in tempest that creates the image dynamically with the currently proposed patches using kolla-ansible, runs tempest tests, and checks sanity things only for the tempest plugin - chandankumar to add the same job on the kolla CI but with a precreated image.

Owner: chandankumar Etherpad link: https://etherpad.openstack.org/p/qa-rocky-ptg-consuming-kolla-tempest-container

8. QA SIG ---------------------------------------------------------- We talked about the QA SIG and further steps on this. The QA SIG is not meant to replace the QA Program. It is meant for OpenStack users and operators - including other communities interested in OpenStack / cross-community testing. Our next step is to get more feedback from various stakeholders and start the wiki page and governance proposal patch.

Action Items: - andreaf: reach out to the OpenLab team to see if they are interested - andreaf: send out a doodle for a time for an initial meeting to discuss the scope of the QA SIG - andreaf: review the formal process for SIG creation: SIG wiki, governance patch etc. - georgk will bring it up in the OPNFV test WG and other OPNFV fora - chandankumar to reach out to the TripleO HA team to participate in the SIG

Owner: andreaf, QA Team Etherpad link: https://etherpad.openstack.org/p/qa-rocky-ptg-qa-sig

9. Future of OpenStack-Health ----------------------------------------------------------------- OpenStack-health is still a useful tool/dashboard for understanding the status of upstream development, and it is useful for downstream development too. We discussed the issues and challenges in the health dashboard: - The graph library is hard to create and modify without (deep) Javascript knowledge - Difficult to keep npm libraries updated - Shortage of developers

Action Items: - masayukig and mtreinish and other developers keep working on the current working items.

Owner: masayukig, mtreinish

10. QA Rocky Priority: -------------------------------------------------------------------- Lastly, we discussed the priority items for Rocky and listed the items and the owner of each item in the etherpad below. Zuul v3 jobs are the main priority for the QA team, along with bug triage, deprecated API testing, interop adds-on program testing etc.
We will be tracking each item weekly and try to keep up good progress on these. Etherpad link: https://etherpad.openstack.org/p/qa-rocky-ptg-rocky-priority

Let's work closely as a team and finish the mentioned items in time. -gmann

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gmann at ghanshyammann.com Wed Mar 7 13:44:25 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 7 Mar 2018 22:44:25 +0900 Subject: Re: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project In-Reply-To: References: Message-ID:

On Wed, Mar 7, 2018 at 10:15 PM, Andrea Frittoli wrote: > > On Wed, Mar 7, 2018 at 12:42 PM Ghanshyam Mann wrote: > >> Hi All, >> >> QA had a discussion at the Dublin PTG about the interop adds-on tests location. >> First of all, thanks to all (especially markvoelker, dhellmann, mugsie) for >> joining the sessions, and I am glad we concluded things and agreed on a >> solution. >> >> The discussion was carried forward from the ML discussion [1] to get >> agreement about the interop adds-on program tests location. >> >> Till now, only 2 projects (heat and designate) are in the list of the adds-on >> program from the interop side. After discussion and points from all >> stakeholders, the QA team agreed to host these 2 projects' interop tests. Neither >> project has many tests as of now, and the QA team can accommodate hosting >> their interop tests. >> >> Along with that agreement we had a few more technical points to consider >> while moving the designate and heat interop tests into the Tempest repo. All the >> interop tests to be added to Tempest must be Tempest-like tests. >> 'Tempest-like tests' here means tests written using Tempest interfaces and >> guidelines. For example, heat has its tests in heat-tempest-plugin, based >> on gabbi, and to move the heat interop tests to Tempest those have to be rewritten >> as Tempest-like tests. This is because if we accept non-Tempest-like tests >> in Tempest, it will be too difficult for the Tempest team to maintain. >> >> Projects (designate and heat) and the QA team will work closely to move >> the interop tests to the Tempest repo, which might need some extra work of >> standardizing their tests and the interfaces used by them, like service clients >> etc. >> >> In future, if there are more new interop adds-on program proposals, >> we need to analyse the situation again regarding QA team bandwidth. The TC, >> QA or interop team needs to raise the resource requirement to the Board of >> Directors before any more new adds-on programs are proposed. If the QA team >> has too few resources and too little review bandwidth, then we cannot accept more >> interop programs until QA gets more resources to maintain new interop tests. >> >> Overall Summary: >> - The QA team agreed to host the interop tests for heat and designate in the >> Tempest repo. >> - The existing TC resolution needs to be adjusted regarding the QA team resource >> bandwidth requirement. If more adds-on program >> proposals come in, the QA team will not accept the new interop tests if the QA team >> bandwidth issue still exists at that time. >> - Tempest will document a clear process for interop test addition >> and the other items to take care of. >> - Project teams to bring their tests and interfaces up to Tempest-like test >> and stable-interface standards. The Tempest team will work closely with and help >> Designate and Heat on this. >> >> Thanks for the summary Ghanshyam!
> > We had some follow-up discussion on Friday about this, after the Heat team > expressed their concern about proceeding with the plan we discussed during > the session on Wednesday. > A group of representatives of the Heat, Designate and Interop teams met > with the TC and agreed on reviving the resolution started by mugsie in > https://review.openstack.org/#/c/521602 to add an alternative to hosting > tests in the Tempest repo. Unfortunately I was only there for the last few > minutes of the meeting, but I understand that the proposal drafted there > was to allow teams to have interop-specific Tempest plugins > co-owned by the QA/Interop/add-on project teams. mugsie has updated the > resolution accordingly and I think the discussion on that can continue in > gerrit directly. >

Thanks for pointing that out. I feel co-ownership does not solve any issue here, and I am a little worried that it makes things more difficult when it comes to controlling the tests. If the tests are not Tempest-like tests, then it is a little difficult for the QA team to control them or have input on them. And if they are also owned by the project, then how do we make sure test modifications by the non-project team are controlled? I mean, I am all for a separate plugin, which is easier for the QA team, but giving ownership to QA is kind of going in the same direction (the QA team maintaining interop adds-on tests) in a more difficult way. I will check and add my points on gerrit.

> > Just to clarify, nothing has been decided yet, but at least the new > proposal was received positively by all parties involved in the discussion > on Friday. > > Action Items: >> - mugsie to abandon https://review.openstack.org/#/c/521602 with a quick >> summary of the discussion here at the PTG >> > This is not valid anymore; we should discuss this further and hopefully > reach an agreement. > >> - markvoelker to write up a clarification to the InteropWG process stating that >> tests should be moved into Tempest before being proposed to the BoD >> - markvoelker to work with gmann before the next InteropWG+BoD discussion to >> frame up a note about resourcing testing for add-on/vertical programs >> - dhellmann to adjust the TC resolution for the resource requirement in QA >> when a new adds-on program is being proposed >> - project teams to convert their interop tests and framework into Tempest-like >> tests and propose adding them to the Tempest repo. >> > If the new resolution is agreed on, this will become one of the options. > >> - gmann to define the process in QA for interop test addition and >> maintenance >> > This is still an option so you may still want to do it. > > Andrea Frittoli (andreaf) > >> >> We have added this as one of the monitoring/helping items for QA to make >> sure it is done without delay. Let's work together to finish this >> activity. >> >> Discussion Details: >> https://etherpad.openstack.org/p/qa-rocky-ptg-Interop-test-for-adds-on-project >> >> ..1 >> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html >> >> -gmann >> _______________________________________________ >> Interop-wg mailing list >> Interop-wg at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/interop-wg >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cdent+os at anticdent.org Wed Mar 7 14:22:07 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 7 Mar 2018 14:22:07 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] Notes on eventually extracting placement Message-ID: At the PTG we decided that while it was unlikely we could manage extracting Placement to its own project during Rocky, it would be useful to make incremental progress in that direction so the ground is prepared for when we do get around to it. This means making sure there are clear boundaries between what is "placement code" and what is "nova code", limiting imports of "nova code" into "placement code", and keeping (or moving) "placement code" under a single directory so that an eventual lift and shift to a new repo can maintain history[1]. The placement etherpad for rocky has some additional info: https://etherpad.openstack.org/p/nova-ptg-rocky-placement I've already done a fair amount of experimentation around these ideas, resulting in some blog posts [2] and some active reviews [3]. There's a mix in those reviews of work to consolidate placement, and work to make sure that while placement still exists in the nova hierarchy it doesn't import nova code it doesn't want to use. This leaves plenty of other things that need to happen: * Migration strategies need to be determined, mostly for data, but also in general. It may be best to simply document the options and let people do what they like. One option is to simply carry on using the nova_api db, but this presents eventual problems for schema adjustments [4]. * There are functional tests which currently live at functional/db which tests the persistence layer handling the placement-related objects. These should probably move under functional/api/openstack/placement * There are functional tests at api/openstack/placement that tests the scheduler report client (put there initially because they run the placement service using wsgi-intercept). These should be moved elsewhere. * Resource class fields are used by both nova and placement (and eventually other things) so should probably get the same treatment as os-traits [5], so we need an os-resource-classes and adjustments in both placement and nova to use it. In the meantime, a pending patch [6] puts those fields at the top of nova. Switching to os-resource-classes will also allow us to remove the resource class cache, which is confusing to manage during this transition. * We should experiment with strategies for how nova will do testing when placement is no longer in-repo. It should (dangerous word) be possible for placement to provide (or for nova to create) a fixture which is a wsgi-intercepted placement service with a real datastore (which is what is done now, but in-tree) but this is not something we traditionally do in functional tests, so it may be important to start migrating some functional tests (e.g., the stuff in test_servers) to integration (which could still be in nova's tree). * Eventually the work of creating a new repo, establishing status as an official project, setting up translation handling, and creating a core team will need to happen, but that can be put off until a time when we are actually doing the extraction. * All the things I'm forgetting. There's plenty. As stated at the PTG these are not things I can complete by myself (especially the things I'm forgetting). Volunteers are welcome and encouraged for the stuff above. Good first steps are reading the blog posts linked below, and reviewing the patches linked below. 
This will establish some of the issues and reveal things I'm forgetting. Thanks to everyone who has provided feedback on this stuff, either at the PTG, on the reviews and blog posts, or elsewhere. Even though we can't magically do the extraction _right now_, the process of experimentation and refactoring is improving placement in place and setting the foundation for doing it later.

The footnotes: [1] Some incantations with 'git filter-branch' ought to allow this. [2] Placement extraction related blog posts: * A series on placement in a container (which helps identify boundaries): * https://anticdent.org/placement-container-playground.html * https://anticdent.org/placement-container-playground-2.html * https://anticdent.org/placement-container-playground-3.html * Notes on extraction: https://anticdent.org/placement-extraction.html * Notes on scale (which also helps to identify boundaries): https://anticdent.org/placement-scale-fun.html [3] * Using a simplified, non-nova, FaultWrapper wsgi middleware: https://review.openstack.org/533752 * Moving object, database and exception handling into the placement hierarchy: https://review.openstack.org/#/c/540049/ * Refactoring wsgi-related modules to limit nova's presence in placement during the transition: https://review.openstack.org/#/c/533797/ * Cleaning up db imports so that importing database models doesn't import a big pile of nova: https://review.openstack.org/#/c/533797/ [4] We keep coming up with reasons to change the schema. The latest is adding generations to consumers. [5] https://pypi.python.org/pypi/os-traits [6] https://review.openstack.org/#/c/540049/11/nova/rc_fields.py -- Chris Dent (⊙_⊙') https://anticdent.org/ freenode: cdent tw: @anticdent

From thierry at openstack.org Wed Mar 7 14:43:06 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Mar 2018 15:43:06 +0100 Subject: [openstack-dev] [ptg] Release cycles vs. downstream consuming models discussion summary Message-ID: <6e283dce-a1ce-4878-2af8-8441beb3dc33@openstack.org>

Hi everyone, On Tuesday afternoon of the PTG week we had a track of discussions to brainstorm how to better align our release cycle and stable branch maintenance with the OpenStack downstream consumption models. You can find the notes at: https://etherpad.openstack.org/p/release-cycles-ptg-rocky

TL;DR summary: * No consensus on longer / shorter release cycles * Focus on FFU to make upgrades less painful * Longer stable branch maintenance time (18 months for Ocata) * Bootstrap the "extended maintenance" concept with a common policy * Groups most impacted by release cadence are packagers/distros/vendors * Need for finer user survey questions on upgrade models * Need more data and more discussion; next discussion at the Vancouver forum * Release Management team tracks it between events

Details: We started the discussion by establishing a taxonomy of consumption models and upgrade patterns. This exercise showed that we are lacking good data on how many people follow which pattern. The user survey asks what people are using to deploy and what they are running, but the questions are a bit too simple (some deployments run a mix of versions - what should they answer?) or incomplete (some deployment mechanisms combine high-level and low-level packaging). It also misses the question of the upgrade pattern completely. I took the action of circling back with the user survey folks to see if we could enrich (or add to) those questions in the future.
Another data point: Swift seems to be the only component with an established pattern of upgrading at every intermediary release (as opposed to random points on master, or every final release). That is probably because it is consumed standalone more than the others.

We continued by discussing upgrade motivations and upgrade issues. A lot of participants reported keeping current so as not to paint themselves into a corner with an impossible upgrade in the future. Otherwise not much surprise there.

The bulk of the discussion was around the impact of the release cadence. The most obvious user impact (pressure to upgrade) would be mostly covered by the work being done on fast-forward upgrades. Once that is a proven model of upgrading OpenStack, the release cadence stops being a problem and becomes an asset (more choice as to where you fast-forward to). The other big user impact (support ending too early) would be mostly covered by the work being done to extend maintenance on stable branches. Again, the release cadence is actually not the real cause of the pain felt there, and there is already work in progress to directly address the issue.

That said, the release cadence definitely has costs for people working downstream from the OpenStack software release. Release marketing and the community-generated roadmap are both examples of per-release work. We need to work on ways to make releases more business as usual and less of an exceptional event there.

At this point the groups most impacted by the release cadence are those working on packaging OpenStack releases, either as part of the open source project (OpenStackAnsible, TripleO...) or as part of a vendor product. It can be a lot of work to do the packaging/integration/test/certification work, and releasing more often means that this work needs to be done more often. It is difficult for those groups to "skip" a release since users are generally asking for the latest features to be made available.

We have also traditionally tied a number of other things to the release cadence: COA, events, elections. That said, nothing forces us to really tie those one-for-one, although to keep our sanity we'd likely want to keep one a multiple of the other.

Overall, the discussion on cadence concluded that ongoing work on fast-forward upgrades and longer stable branch maintenance would alleviate 80% of the release cadence pain with none of the drawbacks of releasing less often, and therefore we should focus our efforts on that for the moment.

The topic then switched to discussing stable branch maintenance and LTS in more detail. The work done on tracking upper-constraints finally paid off, with stable branches now breaking less often. The stable team is therefore comfortable extending the life of Ocata for 6 more months (for a total of 18 months). This should make Ocata the first candidate for "extended maintenance" (a new name for "LTS" that does not imply anyone providing "support").

Extended maintenance, as discussed by the group, would be about leaving branches open for as long as there is someone caring for them (and closing them once they are broken or abandoned). This inverts the current chicken-and-egg resource issue on stable maintenance: we should establish the concept, and once it exists hopefully interested parties will come. We discussed the need for a common policy around those branches (like "no feature backports") so that there is still some consistency.
mriedem volunteered to work on a TC resolution to define what exactly we meant by that (the proposal is now being discussed at https://review.openstack.org/#/c/548916/).

The afternoon concluded with the group agreeing on the need to continue the discussion on this important topic. The next stop should be a forum session at the Vancouver Summit. The release management team agreed to track progress on this between events. Thanks for reading until the end! -- Thierry Carrez (ttx)

From lbragstad at gmail.com Wed Mar 7 14:58:23 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Mar 2018 08:58:23 -0600 Subject: [openstack-dev] [keystone] [oslo] new unified limit library Message-ID:

Hi all, Per the identity-integration track at the PTG [0], I proposed a new oslo library for services to use for hierarchical quota enforcement [1]. Let me know if you have any questions or concerns about the library. If the oslo team would like, I can add an agenda item for next week's oslo meeting to discuss. Thanks, Lance [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg [1] https://review.openstack.org/#/c/550491/

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL:

From chris.friesen at windriver.com Wed Mar 7 15:31:44 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 7 Mar 2018 09:31:44 -0600 Subject: [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: References: Message-ID: <5AA005E0.7050808@windriver.com>

On 03/07/2018 08:58 AM, Lance Bragstad wrote: > Hi all, > > Per the identity-integration track at the PTG [0], I proposed a new oslo > library for services to use for hierarchical quota enforcement [1]. Let > me know if you have any questions or concerns about the library. If the > oslo team would like, I can add an agenda item for next week's oslo > meeting to discuss. > > Thanks, > > Lance > > [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg

Looks interesting. Some complications related to quotas:

1) Nova currently supports quotas for a user/group tuple that can be stricter than the overall quotas for that group. As far as I know no other project supports this. (Exercising this through the API is sketched after this list.)

2) Nova and cinder also support the ability to set the "default" quota class (which applies to any group that hasn't overridden their quota). Currently once it's set there is no way to revert to the original defaults.

3) Neutron allows you to list quotas for projects with non-default quota values. This is useful, and I'd like to see it extended to optionally just display the non-default quota values rather than all quota values for that project. If we were to support user/group quotas this would be the only way to efficiently query which user/group tuples have non-default quotas.

4) In nova, keypairs belong to the user rather than the project. (This is a bit messed up, but is the current behaviour.) The quota for these should really be outside of any group, or else we should modify nova to make them belong to the project.
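For reference, the per-user override in point 1 can be exercised through python-novaclient roughly like this (a hedged sketch - the keystoneauth session setup is elided and the IDs are placeholders, while quotas.update() and quotas.get() with user_id are the documented client calls):

    from novaclient import client

    nova = client.Client('2.1', session=sess)  # 'sess': a keystoneauth session

    # Project-wide ceiling for cores...
    nova.quotas.update('PROJECT_ID', cores=40)
    # ...and a stricter override for one user within that project.
    nova.quotas.update('PROJECT_ID', user_id='USER_ID', cores=10)

    # Reading back with user_id shows the effective per-user value (10).
    print(nova.quotas.get('PROJECT_ID', user_id='USER_ID').cores)

The user-level value must stay within the project-level value, which is what makes the "stricter than the overall quota" behaviour possible.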
Chris From lbragstad at gmail.com Wed Mar 7 15:49:18 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Mar 2018 09:49:18 -0600 Subject: [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: <5AA005E0.7050808@windriver.com> References: <5AA005E0.7050808@windriver.com> Message-ID: <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> On 03/07/2018 09:31 AM, Chris Friesen wrote: > On 03/07/2018 08:58 AM, Lance Bragstad wrote: >> Hi all, >> >> Per the identity-integration track at the PTG [0], I proposed a new oslo >> library for services to use for hierarchical quota enforcement [1]. Let >> me know if you have any questions or concerns about the library. If the >> oslo team would like, I can add an agenda item for next weeks oslo >> meeting to discuss. >> >> Thanks, >> >> Lance >> >> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg > > Looks interesting. > > Some complications related to quotas: > > 1) Nova currently supports quotas for a user/group tuple that can be > stricter than the overall quotas for that group.  As far as I know no > other project supports this. By group, do you mean keystone group? Or are you talking about the quota associated to a project? > > 2) Nova and cinder also support the ability to set the "default" quota > class (which applies to any group that hasn't overridden their > quota).  Currently once it's set there is no way to revert back to the > original defaults. This sounds like a registered limit [0], but again, I'm not exactly sure what "group" means in this context. It sounds like group is supposed to be a limit for a specific project? [0] https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html#registered-limits > > 3) Neutron allows you to list quotas for projects with non-default > quota values.  This is useful, and I'd like to see it extended to > optionally just display the non-default quota values rather than all > quota values for that project.  If we were to support user/group > quotas this would be the only way to efficiently query which > user/group tuples have non-default quotas. This might be something we can work into the keystone implementation since it's still marked as experimental [1]. We have two APIs, one returns the default limits, also known as a registered limit, for a resource and one that returns the project-specific overrides. It sounds like you're interested in the second one? [1] https://developer.openstack.org/api-ref/identity/v3/index.html#unified-limits > > 4) In nova, keypairs belong to the user rather than the project.  > (This is a bit messed up, but is the current behaviour.)  The quota > for these should really be outside of any group, or else we should > modify nova to make them belong to the project. I think the initial implementation of a unified limit pattern is targeting limits and quotas for things associated to projects. In the future, we can probably expand on the limit information in keystone to include user-specific limits, which would be great if nova wants to move away from handling that kind of stuff. > > Chris > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL:

From openstack at nemebean.com Wed Mar 7 16:21:05 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 7 Mar 2018 10:21:05 -0600 Subject: [openstack-dev] [oslo.db] oslo_db "max_retries" option In-Reply-To: <876451519797304@web34o.yandex.ru> References: <876451519797304@web34o.yandex.ru> Message-ID: <7f2940a2-c774-1157-eeb5-bd37e20657a3@nemebean.com>

On 02/27/2018 11:55 PM, Vitalii Solodilov wrote: > Hi folks! > > I have a question about the oslo_db "max_retries" option. > https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/engines.py#L381 > Why is only DBConnectionError considered a reason for reconnecting here? > Wouldn't it be a good idea to check for the more general DBError? > For example, the DB host is down at the time of engine creation, but comes back up some time later.

That sounds like it would result in a DBConnectionError since we would be unable to connect. Is that not the case, and if so what exception is raised instead?

From Tim.Bell at cern.ch Wed Mar 7 16:27:57 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 7 Mar 2018 16:27:57 +0000 Subject: [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> References: <5AA005E0.7050808@windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> Message-ID: <60EC27CD-7F2F-4328-A09D-94CB92ED7988@cern.ch>

There was discussion that Nova would deprecate the user quota feature since it really didn't fit well with the 'projects own resources' approach and was little used. At one point, some of the functionality stopped working and was repaired. The use case we had identified goes away if you have 2 level deep nested quotas (and we have now worked around it). Tim

-----Original Message----- From: Lance Bragstad Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 7 March 2018 at 16:51 To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library

On 03/07/2018 09:31 AM, Chris Friesen wrote: > On 03/07/2018 08:58 AM, Lance Bragstad wrote: >> Hi all, >> ] > > 1) Nova currently supports quotas for a user/group tuple that can be > stricter than the overall quotas for that group. As far as I know no > other project supports this. ... I think the initial implementation of a unified limit pattern is targeting limits and quotas for things associated to projects. In the future, we can probably expand on the limit information in keystone to include user-specific limits, which would be great if nova wants to move away from handling that kind of stuff. > > Chris > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Tim.Bell at cern.ch Wed Mar 7 16:33:18 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 7 Mar 2018 16:33:18 +0000 Subject: [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: <60EC27CD-7F2F-4328-A09D-94CB92ED7988@cern.ch> References: <5AA005E0.7050808@windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> <60EC27CD-7F2F-4328-A09D-94CB92ED7988@cern.ch> Message-ID: <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A@cern.ch>

Sorry, I remember more detail now...
it was using the 'owner' of the VM as part of the policy rather than quota. Is there a per-user/per-group quota in Nova? Tim

-----Original Message----- From: Tim Bell Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 7 March 2018 at 17:29 To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library

There was discussion that Nova would deprecate the user quota feature since it really didn't fit well with the 'projects own resources' approach and was little used. At one point, some of the functionality stopped working and was repaired. The use case we had identified goes away if you have 2 level deep nested quotas (and we have now worked around it). Tim

-----Original Message----- From: Lance Bragstad Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 7 March 2018 at 16:51 To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library

On 03/07/2018 09:31 AM, Chris Friesen wrote: > On 03/07/2018 08:58 AM, Lance Bragstad wrote: >> Hi all, >> ] > > 1) Nova currently supports quotas for a user/group tuple that can be > stricter than the overall quotas for that group. As far as I know no > other project supports this. ... I think the initial implementation of a unified limit pattern is targeting limits and quotas for things associated to projects. In the future, we can probably expand on the limit information in keystone to include user-specific limits, which would be great if nova wants to move away from handling that kind of stuff. > > Chris > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zhipengh512 at gmail.com Wed Mar 7 16:36:08 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 8 Mar 2018 00:36:08 +0800 Subject: [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A@cern.ch> References: <5AA005E0.7050808@windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> <60EC27CD-7F2F-4328-A09D-94CB92ED7988@cern.ch> <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A@cern.ch> Message-ID:

This is certainly a feature that will make Public Cloud providers very happy :)

On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell wrote: > Sorry, I remember more detail now... it was using the 'owner' of the VM as > part of the policy rather than quota. > > Is there a per-user/per-group quota in Nova?
> > Tim > > -----Original Message----- > From: Tim Bell > Reply-To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Date: Wednesday, 7 March 2018 at 17:29 > To: "OpenStack Development Mailing List (not for usage questions)" < > openstack-dev at lists.openstack.org> > Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library > > > There was discussion that Nova would deprecate the user quota feature > since it really didn't fit well with the 'projects own resources' approach > and was little used. At one point, some of the functionality stopped > working and was repaired. The use case we had identified goes away if you > have 2 level deep nested quotas (and we have now worked around it). > > Tim > -----Original Message----- > From: Lance Bragstad > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Wednesday, 7 March 2018 at 16:51 > To: "openstack-dev at lists.openstack.org" openstack.org> > Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit > library > > > > On 03/07/2018 09:31 AM, Chris Friesen wrote: > > On 03/07/2018 08:58 AM, Lance Bragstad wrote: > >> Hi all, > >> > ] > > > > 1) Nova currently supports quotas for a user/group tuple that > can be > > stricter than the overall quotas for that group. As far as I > know no > > other project supports this. > ... > I think the initial implementation of a unified limit pattern is > targeting limits and quotas for things associated to projects. In > the > future, we can probably expand on the limit information in > keystone to > include user-specific limits, which would be great if nova wants > to move > away from handling that kind of stuff. > > > > Chris > > > > ____________________________________________________________ > ______________ > > > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > > > > > ____________________________________________________________ > ______________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tim.Bell at cern.ch Wed Mar 7 16:44:10 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 7 Mar 2018 16:44:10 +0000 Subject: [openstack-dev] [Openstack-sigs] [keystone] [oslo] new unified limit library In-Reply-To: References: <5AA005E0.7050808@windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20@gmail.com> <60EC27CD-7F2F-4328-A09D-94CB92ED7988@cern.ch> <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A@cern.ch> Message-ID: I think nested quotas would give the same thing, i.e. you have a parent project for the group and child projects for the users. This would not need user/group quotas but continue with the ‘project owns resources’ approach. It can be generalised to other use cases like the value add partner or the research experiment working groups (http://openstack-in-production.blogspot.fr/2017/07/nested-quota-models.html) Tim From: Zhipeng Huang Reply-To: "openstack-sigs at lists.openstack.org" Date: Wednesday, 7 March 2018 at 17:37 To: "OpenStack Development Mailing List (not for usage questions)" , openstack-operators , "openstack-sigs at lists.openstack.org" Subject: Re: [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified limit library This is certainly a feature will make Public Cloud providers very happy :) On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell > wrote: Sorry, I remember more detail now... it was using the 'owner' of the VM as part of the policy rather than quota. Is there a per-user/per-group quota in Nova? Tim -----Original Message----- From: Tim Bell > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 March 2018 at 17:29 To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library There was discussion that Nova would deprecate the user quota feature since it really didn't fit well with the 'projects own resources' approach and was little used. At one point, some of the functionality stopped working and was repaired. The use case we had identified goes away if you have 2 level deep nested quotas (and we have now worked around it). Tim -----Original Message----- From: Lance Bragstad > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Wednesday, 7 March 2018 at 16:51 To: "openstack-dev at lists.openstack.org" > Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library On 03/07/2018 09:31 AM, Chris Friesen wrote: > On 03/07/2018 08:58 AM, Lance Bragstad wrote: >> Hi all, >> ] > > 1) Nova currently supports quotas for a user/group tuple that can be > stricter than the overall quotas for that group. As far as I know no > other project supports this. ... I think the initial implementation of a unified limit pattern is targeting limits and quotas for things associated to projects. In the future, we can probably expand on the limit information in keystone to include user-specific limits, which would be great if nova wants to move away from handling that kind of stuff. 
> > Chris > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Wed Mar 7 17:18:30 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Wed, 7 Mar 2018 17:18:30 +0000 Subject: [openstack-dev] [vitrage] alarm and resource equivalence Message-ID: <343AC903-C3FB-41D5-A2E0-A8E067E775C2@nokia.com> Hi, Since we need to design these days both alarm equivalence/merge [1] and resource equivalence/merge features, I thought it might be a good idea to start with a use cases document. Let’s agree on the requirements, and then see if we can come up with a design that matches both cases. I pushed the first draft for the use cases document [2], and I’ll be happy to get your comments. [1] https://review.openstack.org/#/c/547931 [2] https://review.openstack.org/#/c/550534 Thanks, Ifat. From mriedemos at gmail.com Wed Mar 7 18:11:12 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 7 Mar 2018 12:11:12 -0600 Subject: [openstack-dev] [tc] [all] TC Report 18-10 In-Reply-To: References: Message-ID: <39be1699-12ed-b81d-83df-352cc6e6b318@gmail.com> On 3/7/2018 6:12 AM, Chris Dent wrote: > # Talking about the PTG at the PTG > > At the [board > meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html), > > the future of the PTG was a big topic. As currently constituted it > presents some challenges: > > * It is difficult for some people to attend because of visa and other >   travel related issues. > * It is expensive to run and not everyone is convinced of the return >   on investment. > * Some people don't like it (they either miss the old way of doing the >   design summit, or midcycles, or $OTHER). > * Plenty of other reasons that I'm probably not aware of. All of this is true of the summit too isn't it? When talking about the PTG, I always hear someone say essentially something like, "you know, things would be better if we did ". It's funny how we seem to only remember the last 6 months of anything. > > This same topic was reviewed at [yesterday's office > hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:19:32). 
> > > For now, the next 2018 PTG is going to happen (destination unknown) but > plans for 2019 are still being discussed. > > If you have opinions about the PTG, there will be an opportunity to > express them in a forthcoming survey. Beyond that, however, it is > important [that management at contributing > companies](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:29:24) > > hear from more people (notably their employees) than the foundation > about the value of the PTG. > > My own position is that of the three different styles of in-person > events for technical contributors to OpenStack that I've experienced > (design summit, mid-cycles, PTG), the PTG is the best yet. It minimizes > distractions from other obligations (customer meetings, presentations, > marketing requirements) while maximizing cross-project interaction. Agree. > > One idea, discussed > [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:02:24) > > and [earlier > today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T05:07:20) > > was to have the PTG be open to technical participants of any sort, not > just so-called "OpenStack developers". Make it more of a place for > people who hack on and with OpenStack to hack and talk. Leave the > summit (without a forum) for presentations, marketing, pre-sales, etc. I don't understand why some people/organizations/groups think that they shouldn't attend the PTG - maybe it's something in the 'who should attend' docs on the website? But I hear time and again that operators think they shouldn't attend the PTG, but we know a few do and they are extremely valuable in the developer discussions for their perspective on how they, and other operators, run their clouds and what they want/need to see happen on the dev side. The silo effect between dev and ops communities is very weird and counter-productive IMO. And the Forum doesn't solve that problem really because not everyone can get funding to travel to the summit (Sydney, hello). Case in point: the public cloud WG session held at the PTG on Monday morning where we went through the spreadsheet of missing features; I think I was the only full time core project developer in the room which was otherwise operators (CERN, OVH, City Network and Vexxhost were there) and it was much more productive actually having us sitting together going through the list and checking things off which had either been completed already, or were bugs instead of features, or that I could just say, "this depends on that and Jane Doe is working on it, so follow up with her" or "this is a known thing, it's been discussed, but it needs a driver (project manager) - so that's your next step". That wouldn't have been possible if the public cloud WG operators weren't attending the PTG. > > An issue raised with conflating the PTG and the Forum is that it would > remove the > [inward/outward > focus](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T08:20:17) > > concept that is supposed to distinguish the two events. > > I guess it depends on how we define "we" but I've always assumed that > both events were for outward focus and that for any inward focussing > effort we ought to be able use asynchronous tools more. > I don't get the inward/outward thing. First two days of the old design summit (ops summit?) 
format were all cross-project stuff (docs, upgrades, testing, ops feedback, etc). That's the same as what happens at the PTG now too. The last three days of the old design summit (and now PTG) are vertical project discussion for the most part, but Thursday has also become a de-facto cross-project day for a lot of teams (nova/cinder, nova/neutron, nova/ironic all happened on Thursday). I'm not sure what is happening at the Forum events that is so wildly different, or more productive, than what we can do at the PTG - and we can arguably do it better at the PTG because of fewer distractions like giving talks, talking to customers, and time-boxed 40 minute slots.

>
> * Rather than having Long Term Support, which implies too much, a
>   better thing to do is enable [extended
>   maintenance](https://review.openstack.org/#/c/548916/) for those
>   parties who want to do it.
>

Good lord I'm already regretting even thinking it would be a good idea (or fun) to just throw something up as a resolution based on previous discussions about all of this. Tony, save me.

--
Thanks,
Matt

From chris.friesen at windriver.com Wed Mar 7 19:20:18 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Wed, 7 Mar 2018 13:20:18 -0600
Subject: [openstack-dev] [keystone] [oslo] new unified limit library
In-Reply-To: <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A at cern.ch>
References: <5AA005E0.7050808 at windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20 at gmail.com> <60EC27CD-7F2F-4328-A09D-94CB92ED7988 at cern.ch> <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A at cern.ch>
Message-ID: <5AA03B72.6080201 at windriver.com>

On 03/07/2018 10:33 AM, Tim Bell wrote:
> Sorry, I remember more detail now... it was using the 'owner' of the VM as part of the policy rather than quota.
>
> Is there a per-user/per-group quota in Nova?

Nova supports setting quotas for individual users within a project (as long as they are smaller than the project quota for that resource). I'm not sure how much it's actually used, or if they want to get rid of it. (Maybe melwitt can chime in.) But it's there now.

As you can see at "https://developer.openstack.org/api-ref/compute/#update-quotas", there's an optional "user_id" field in the request. Same thing for the "delete" and "detailed get" operations.

Chris

From chris.friesen at windriver.com Wed Mar 7 19:27:14 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Wed, 7 Mar 2018 13:27:14 -0600
Subject: [openstack-dev] [keystone] [oslo] new unified limit library
In-Reply-To: <4a8db303-318d-c385-c350-ef25702d8b20 at gmail.com>
References: <5AA005E0.7050808 at windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20 at gmail.com>
Message-ID: <5AA03D12.1040403 at windriver.com>

On 03/07/2018 09:49 AM, Lance Bragstad wrote:
>
>
> On 03/07/2018 09:31 AM, Chris Friesen wrote:
>> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>>> Hi all,
>>>
>>> Per the identity-integration track at the PTG [0], I proposed a new oslo
>>> library for services to use for hierarchical quota enforcement [1]. Let
>>> me know if you have any questions or concerns about the library. If the
>>> oslo team would like, I can add an agenda item for next week's oslo
>>> meeting to discuss.
>>>
>>> Thanks,
>>>
>>> Lance
>>>
>>> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
>>
>> Looks interesting.
>>
>> Some complications related to quotas:
>>
>> 1) Nova currently supports quotas for a user/group tuple that can be
>> stricter than the overall quotas for that group. As far as I know no
>> other project supports this.
> By group, do you mean keystone group? Or are you talking about the quota > associated to a project? Sorry, typo. I meant quotas for a user/project tuple, which can be stricter than the overall quotas for that project. >> 2) Nova and cinder also support the ability to set the "default" quota >> class (which applies to any group that hasn't overridden their >> quota). Currently once it's set there is no way to revert back to the >> original defaults. > This sounds like a registered limit [0], but again, I'm not exactly sure > what "group" means in this context. It sounds like group is supposed to > be a limit for a specific project? > > [0] > https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html#registered-limits Again, should be project instead of group. And registered limits seem essentially analogous. >> 3) Neutron allows you to list quotas for projects with non-default >> quota values. This is useful, and I'd like to see it extended to >> optionally just display the non-default quota values rather than all >> quota values for that project. If we were to support user/group >> quotas this would be the only way to efficiently query which >> user/group tuples have non-default quotas. > This might be something we can work into the keystone implementation > since it's still marked as experimental [1]. We have two APIs, one > returns the default limits, also known as a registered limit, for a > resource and one that returns the project-specific overrides. It sounds > like you're interested in the second one? > > [1] > https://developer.openstack.org/api-ref/identity/v3/index.html#unified-limits Again, should be user/project tuples. Yes, in this case I'm talking about the project-specific ones. (It's actually worse if you support user/project limits since with the current nova API you can potentially get combinatorial explosion if many users are part of many projects.) I think it would be useful to be able to constrain this query to report limits for a specific project, (and a specific user if that will be supported.) I also think it would be useful to be able to constrain it to report only the limits that have been explicitly set (rather than inheriting the default from the project or the registered limit). Maybe it's already intended to work this way--if so that should be explicitly documented. >> 4) In nova, keypairs belong to the user rather than the project. >> (This is a bit messed up, but is the current behaviour.) The quota >> for these should really be outside of any group, or else we should >> modify nova to make them belong to the project. > I think the initial implementation of a unified limit pattern is > targeting limits and quotas for things associated to projects. In the > future, we can probably expand on the limit information in keystone to > include user-specific limits, which would be great if nova wants to move > away from handling that kind of stuff. The quota handling for keypairs is a bit messed up in nova right now, but it's legacy behaviour at this point. It'd be nice to be able to get it right if we're switching to new quota management mechanisms. 
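For concreteness, here is a rough sketch of the existing nova per-user override mentioned earlier in this thread (the user_id query parameter is the one documented in the compute API ref; the IDs and values are made up):

PUT /v2.1/os-quota-sets/{project_id}?user_id={user_id}

{
    "quota_set": {
        "instances": 5,
        "cores": 10
    }
}

The same user_id parameter applies to the delete and detailed-get variants, which is how you would inspect or drop a per-user override.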
Chris From lbragstad at gmail.com Wed Mar 7 20:24:13 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 7 Mar 2018 14:24:13 -0600 Subject: [openstack-dev] [tc] [all] TC Report 18-10 In-Reply-To: References: Message-ID: <730dd4e7-58b5-bd3c-c3bc-ffb072394118@gmail.com> On 03/07/2018 06:12 AM, Chris Dent wrote: > > HTML: https://anticdent.org/tc-report-18-10.html > > This is a TC Report, but since everything that happened in its window > of observation is preparing for the > [PTG](https://www.openstack.org/ptg), being at the PTG, trying to get > home from the PTG, and recovering from the PTG, perhaps think of this > as "What the TC talked about [at] the PTG". As it is impossible to be > everywhere at once (especially when the board meeting overlaps with > other responsibilities) this will miss a lot of important stuff.  I > hope there are other summaries. > > As you may be aware, it [snowed in > Dublin](https://twitter.com/search?q=%23snowpenstack) causing plenty > of disruption to the > [PTG](https://twitter.com/search?q=%23openstackptg) but everyone > (foundation staff, venue staff, hotel staff, attendees, uisce beatha) > worked together to make a good week. > > # Talking about the PTG at the PTG > > At the [board > meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html), > > the future of the PTG was a big topic. As currently constituted it > presents some challenges: > > * It is difficult for some people to attend because of visa and other >   travel related issues. > * It is expensive to run and not everyone is convinced of the return >   on investment. > * Some people don't like it (they either miss the old way of doing the >   design summit, or midcycles, or $OTHER). > * Plenty of other reasons that I'm probably not aware of. > > This same topic was reviewed at [yesterday's office > hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:19:32). > > > For now, the next 2018 PTG is going to happen (destination unknown) but > plans for 2019 are still being discussed. > > If you have opinions about the PTG, there will be an opportunity to > express them in a forthcoming survey. Beyond that, however, it is > important [that management at contributing > companies](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:29:24) > > hear from more people (notably their employees) than the foundation > about the value of the PTG. > > My own position is that of the three different styles of in-person > events for technical contributors to OpenStack that I've experienced > (design summit, mid-cycles, PTG), the PTG is the best yet. It minimizes > distractions from other obligations (customer meetings, presentations, > marketing requirements) while maximizing cross-project interaction. > > One idea, discussed > [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:02:24) > > and [earlier > today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T05:07:20) > > was to have the PTG be open to technical participants of any sort, not > just so-called "OpenStack developers". Make it more of a place for > people who hack on and with OpenStack to hack and talk. Leave the > summit (without a forum) for presentations, marketing, pre-sales, etc. 
> > An issue raised with conflating the PTG and the Forum is that it would > remove the > [inward/outward > focus](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T08:20:17) > concept that is supposed to distinguish the two events. > > I guess it depends on how we define "we" but I've always assumed that > both events were for outward focus and that for any inward focussing > effort we ought to be able use asynchronous tools more. I tried bringing this up during the PTG feedback session last Thursday, but figured I would highlight it here (it also kinda resonates with Matt's note, too). Several projects have suffered from aggressive attrition, where there are only a few developers from a few companies. I fear going back to midcycles will be extremely tough with less corporate sponsorship. The PTGs are really where smaller teams can sit down with developers from other projects and work on cross-project issues. > > # Foundation and OCI > > Thierry mentioned > [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:08:04) > > that it is likely that the OpenStack Foundation will join the [Open > Container Initiative](https://www.opencontainers.org/) because of > [Kata](https://katacontainers.io/) and > [LOCI](https://governance.openstack.org/tc/reference/projects/loci.html). > > This segued into some brief concerns about the [attentions and > intentions of the > Foundation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:13:34), > > aggravated by the board meeting schedule conflict (there's agreement > that will never ever happen again), and the rumor milling about the > PTG. > > # Friday at the PTG with the TC > > The TC had scheduled a half day of discussion for Friday at the PTG. A > big [agenda](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics), a > fun filled week, and the snow meant we went nearly all day (and since > there's no place to go, let's talk, let's talk, let's talk) with some > reasonable progress. Some highlights: > > * There was some discussion on trying to move forward with >   constellations concept, but I don't recall specific outcomes from >   that discussion. > > * The team diversity tags need to be updated to reflect adjustments in >   the very high bars we set earlier in the history of OpenStack. We >   agreed to not remove projects from the tc-approved tag, as that >   could be taken the wrong way. Instead we'll create a new tag for >   projects that are in the trademark program. > > * Rather than having Long Term Support, which implies too much, a >   better thing to do is enable [extended >   maintenance](https://review.openstack.org/#/c/548916/) for those >   parties who want to do it. > > * Heat was approved to be a part of the trademark program, but then >   there were issues with where to put their tests and the tooling used >   to manage them. By the power of getting the right people in the room >   at the same time, we reached some consensus which is being finalized >   on a [proposed >   resolution](https://review.openstack.org/#/c/521602/). > > * We need to make an official timeline for the deprecation (and >   eventual removal) of support for Python 2, meaning we also need to >   accelerate the adoption of Python 3 as the primary environment. 
> > * In a discussion about the availability of >   [etcd](https://coreos.com/etcd/) it was decided that [tooz needs to >   be >   > finished](https://docs.openstack.org/tooz/latest/user/compatibility.html). > > See the > [etherpad](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics) for > additional details. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Wed Mar 7 20:29:05 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 7 Mar 2018 20:29:05 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-10 In-Reply-To: <730dd4e7-58b5-bd3c-c3bc-ffb072394118@gmail.com> References: <730dd4e7-58b5-bd3c-c3bc-ffb072394118@gmail.com> Message-ID: <9fe1d705-808c-8928-f08e-8192a7c84aa9@ham.ie> On 07/03/18 20:24, Lance Bragstad wrote: > > > On 03/07/2018 06:12 AM, Chris Dent wrote: >> >> HTML: https://anticdent.org/tc-report-18-10.html >> >> This is a TC Report, but since everything that happened in its window >> of observation is preparing for the >> [PTG](https://www.openstack.org/ptg), being at the PTG, trying to get >> home from the PTG, and recovering from the PTG, perhaps think of this >> as "What the TC talked about [at] the PTG". As it is impossible to be >> everywhere at once (especially when the board meeting overlaps with >> other responsibilities) this will miss a lot of important stuff.  I >> hope there are other summaries. >> >> As you may be aware, it [snowed in >> Dublin](https://twitter.com/search?q=%23snowpenstack) causing plenty >> of disruption to the >> [PTG](https://twitter.com/search?q=%23openstackptg) but everyone >> (foundation staff, venue staff, hotel staff, attendees, uisce beatha) >> worked together to make a good week. >> >> # Talking about the PTG at the PTG >> >> At the [board >> meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html), >> >> the future of the PTG was a big topic. As currently constituted it >> presents some challenges: >> >> * It is difficult for some people to attend because of visa and other >>   travel related issues. >> * It is expensive to run and not everyone is convinced of the return >>   on investment. >> * Some people don't like it (they either miss the old way of doing the >>   design summit, or midcycles, or $OTHER). >> * Plenty of other reasons that I'm probably not aware of. >> >> This same topic was reviewed at [yesterday's office >> hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:19:32). >> >> >> For now, the next 2018 PTG is going to happen (destination unknown) but >> plans for 2019 are still being discussed. >> >> If you have opinions about the PTG, there will be an opportunity to >> express them in a forthcoming survey. Beyond that, however, it is >> important [that management at contributing >> companies](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:29:24) >> >> hear from more people (notably their employees) than the foundation >> about the value of the PTG. 
>> >> My own position is that of the three different styles of in-person >> events for technical contributors to OpenStack that I've experienced >> (design summit, mid-cycles, PTG), the PTG is the best yet. It minimizes >> distractions from other obligations (customer meetings, presentations, >> marketing requirements) while maximizing cross-project interaction. >> >> One idea, discussed >> [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:02:24) >> >> and [earlier >> today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T05:07:20) >> >> was to have the PTG be open to technical participants of any sort, not >> just so-called "OpenStack developers". Make it more of a place for >> people who hack on and with OpenStack to hack and talk. Leave the >> summit (without a forum) for presentations, marketing, pre-sales, etc. >> >> An issue raised with conflating the PTG and the Forum is that it would >> remove the >> [inward/outward >> focus](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T08:20:17) >> concept that is supposed to distinguish the two events. >> >> I guess it depends on how we define "we" but I've always assumed that >> both events were for outward focus and that for any inward focussing >> effort we ought to be able use asynchronous tools more. > I tried bringing this up during the PTG feedback session last Thursday, > but figured I would highlight it here (it also kinda resonates with > Matt's note, too). > > Several projects have suffered from aggressive attrition, where there > are only a few developers from a few companies. I fear going back to > midcycles will be extremely tough with less corporate sponsorship. The > PTGs are really where smaller teams can sit down with developers from > other projects and work on cross-project issues. This ^ . If we go back to the Design Summits, where these small projects would get 3 or 4 40min slots, and very little chance of a mid-cycle, it will cause teams issues. >> >> # Foundation and OCI >> >> Thierry mentioned >> [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:08:04) >> >> that it is likely that the OpenStack Foundation will join the [Open >> Container Initiative](https://www.opencontainers.org/) because of >> [Kata](https://katacontainers.io/) and >> [LOCI](https://governance.openstack.org/tc/reference/projects/loci.html). >> >> This segued into some brief concerns about the [attentions and >> intentions of the >> Foundation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:13:34), >> >> aggravated by the board meeting schedule conflict (there's agreement >> that will never ever happen again), and the rumor milling about the >> PTG. >> >> # Friday at the PTG with the TC >> >> The TC had scheduled a half day of discussion for Friday at the PTG. A >> big [agenda](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics), a >> fun filled week, and the snow meant we went nearly all day (and since >> there's no place to go, let's talk, let's talk, let's talk) with some >> reasonable progress. Some highlights: >> >> * There was some discussion on trying to move forward with >>   constellations concept, but I don't recall specific outcomes from >>   that discussion. 
>> >> * The team diversity tags need to be updated to reflect adjustments in >>   the very high bars we set earlier in the history of OpenStack. We >>   agreed to not remove projects from the tc-approved tag, as that >>   could be taken the wrong way. Instead we'll create a new tag for >>   projects that are in the trademark program. >> >> * Rather than having Long Term Support, which implies too much, a >>   better thing to do is enable [extended >>   maintenance](https://review.openstack.org/#/c/548916/) for those >>   parties who want to do it. >> >> * Heat was approved to be a part of the trademark program, but then >>   there were issues with where to put their tests and the tooling used >>   to manage them. By the power of getting the right people in the room >>   at the same time, we reached some consensus which is being finalized >>   on a [proposed >>   resolution](https://review.openstack.org/#/c/521602/). >> >> * We need to make an official timeline for the deprecation (and >>   eventual removal) of support for Python 2, meaning we also need to >>   accelerate the adoption of Python 3 as the primary environment. >> >> * In a discussion about the availability of >>   [etcd](https://coreos.com/etcd/) it was decided that [tooz needs to >>   be >>   >> finished](https://docs.openstack.org/tooz/latest/user/compatibility.html). >> >> See the >> [etherpad](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics) for >> additional details. >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From georg.kunz at ericsson.com Wed Mar 7 20:59:46 2018 From: georg.kunz at ericsson.com (Georg Kunz) Date: Wed, 7 Mar 2018 20:59:46 +0000 Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream In-Reply-To: <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> Message-ID: Hi Adam, > Raoul Scarazzini wrote: > >On 06/03/2018 13:27, Adam Spiers wrote: > >> Hi Raoul and all, > >> Sorry for joining this discussion late! > >[...] > >> I do not work on TripleO, but I'm part of the wider OpenStack > >> sub-communities which focus on HA[0] and more recently, > >> self-healing[1].  With that hat on, I'd like to suggest that maybe > >> it's possible to collaborate on this in a manner which is agnostic to > >> the deployment mechanism.  There is an open spec on this> > >> https://review.openstack.org/#/c/443504/ > >> which was mentioned in the Denver PTG session on destructive testing > >> which you referenced[2]. > >[...] > >>    https://www.opnfv.org/community/projects/yardstick > >[...] 
> >> Currently each sub-community and vendor seems to be reinventing HA > >> testing by itself to some extent, which is easier to accomplish in > >> the short-term, but obviously less efficient in the long-term.  It > >> would be awesome if we could break these silos down and join efforts! > >> :-) > > > >Hi Adam, > >First of all thanks for your detailed answer. Then let me be honest > >while saying that I didn't know yardstick. > > Neither did I until Sydney, despite being involved with OpenStack HA for > many years ;-) I think this shows that either a) there is room for improved > communication between the OpenStack and OPNFV communities, or b) I > need to take my head out of the sand more often ;-) > > >I need to start from scratch > >here to understand what this project is. In any case, the exact meaning > >of this thread is to involve people and have a more comprehensive look > >at what's around. > >The point here is that, as you can see from the tripleo-ha-utils spec > >[1] I've created, the project is meant for TripleO specifically. On one > >side this is a significant limitation, but on the other one, due to the > >pluggable nature of the project, I think that integrations with other > >software like you are proposing is not impossible. > > Yep. I totally sympathise with the tension between the need to get > something working quickly, vs. the need to collaborate with the community > in the most efficient way. > > >Feel free to add your comments to the review. > > The spec looks great to me; I don't really have anything to add, and I don't > feel comfortable voting in a project which I know very little about. > > >In the meantime, I'll check yardstick to see which kind of bridge we > >can build to avoid reinventing the wheel. > > Great, thanks! I wish I could immediately help with this, but I haven't had the > chance to learn yardstick myself yet. We should probably try to recruit > someone from OPNFV to provide advice. I've cc'd Georg who IIRC was the > person who originally told me about yardstick :-) He is an NFV expert and is > also very interested in automated testing efforts: > > http://lists.openstack.org/pipermail/openstack-dev/2017- > November/124942.html > > so he may be able to help with this architectural challenge. Thank you for bringing this up here. Better collaboration and sharing of knowledge, methodologies and tools across the communities is really what I'd like to see and facilitate. Hence, I am happy to help. I have already started to advertise the newly proposed QA SIG in the OPNFV test WG and I'll happily do the same for the self-healing SIG and any HA testing efforts in general. There is certainly some overlapping interest in these testing aspects between the QA SIG and the self-healing SIG and hence collaboration between both SIGs is crucial. One remark regarding tools and frameworks: I consider the true value of a SIG to be a place for talking about methodologies and best practices: What do we need to test? What are the challenges? How can we approach this across communities? The tools and frameworks are important and we should investigate which tools are available, how good they are, how much they fit a given purpose, but at the end of the day they are tools meant to enable well designed testing methodologies. 
> Also you should be aware that work has already started on Eris, the extreme
> testing framework proposed in this user story:
>
> http://specs.openstack.org/openstack/openstack-user-stories/user-
> stories/proposed/openstack_extreme_testing.html
>
> and in the spec you already saw:
>
> https://review.openstack.org/#/c/443504/
>
> You can see ongoing work here:
>
> https://github.com/LCOO/eris
> https://openstack-
> lcoo.atlassian.net/wiki/spaces/LCOO/pages/13393034/Eris+-
> +Extreme+Testing+Framework+for+OpenStack
>
> It looks like there is a plan to propose a new SIG for this, although personally I
> would be very happy to see it adopted by the self-healing SIG, since this
> framework is exactly what is needed for testing any self-healing mechanism.
>
> I'm hoping that Sampath and/or Gautum will chip in here, since I think they're
> currently the main drivers for Eris.
>
> I'm beginning to think that maybe we should organise a video conference call
> to coordinate efforts between the various interested parties. If there is
> appetite for that, the first question is: who wants to be involved? To answer
> that, I have created an etherpad where interested people can sign up:
>
> https://etherpad.openstack.org/p/extreme-testing-contacts
>
> and I've cc'd people who I think would probably be interested. Does this
> sound like a good approach?

We discussed a very similar idea in Dublin in the context of the QA SIG. I very much like the idea of a cross-community, cross-team, and apparently even cross-SIG approach.

Cheers
Georg

From hongbin.lu at huawei.com Wed Mar 7 21:02:26 2018
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Wed, 7 Mar 2018 21:02:26 +0000
Subject: [openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters
Message-ID: <0957CD8F4B55C0418161614FEC580D6B2F8C3E1B at YYZEML702-CHM.china.huawei.com>

Hi all,

This is a follow-up to the discussion at the Dublin PTG about how the Neutron API server should handle invalid query parameters [1]. Based on the feedback there, I am sending this email to seek advice from the API-WG in this regard.

As a brief recap, we were discussing how the Neutron API server should behave if invalid query parameters are supplied. Per my understanding, the general consensus is to make the Neutron API server behave consistently with other OpenStack projects. The question for the API-WG is whether there is any guideline to clarify how OpenStack projects should handle invalid query parameters. Query parameters vary across projects, but it seems most projects support these four categories of query parameters: sorting, pagination, filtering, and fields selection. I saw the API-WG provided a guideline to define how to handle valid parameters of these categories [2], but it doesn't seem to define how to handle invalid parameters.

I wonder if the API-WG could clarify it. For example, if users provide an invalid filter when listing resources, should the API server ignore the invalid filter and return a successful response? Or should it return an error response? Below is a list of specific scenarios and examples to consider:

1. Invalid sorting. For example:
GET "/v2.0/networks?sort_dir=desc&sort_key="
GET "/v2.0/networks?sort_dir=&sort_key=xxx"
2. Invalid pagination. For example:
GET "/v2.0/networks?limit=&marker=xxx"
GET "/v2.0/networks?limit=1&marker="
3. Invalid filter. For example:
GET "/v2.0/networks?=xxx"
GET "/v2.0/networks?xxx="
4. Invalid field.
For example:
GET "/v2.0/networks?fields="

Best regards,
Hongbin

[1] https://bugs.launchpad.net/neutron/+bug/1749820
[2] https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html

________________________________
Huawei Technologies Co., Ltd.
________________________________
This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 5474 bytes
Desc: image001.png
URL:

From hongbin.lu at huawei.com Wed Mar 7 21:09:03 2018
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Wed, 7 Mar 2018 21:09:03 +0000
Subject: [openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters
Message-ID: <0957CD8F4B55C0418161614FEC580D6B2F8C3E2E at YYZEML702-CHM.china.huawei.com>

Hi all,

This is a follow-up to the discussion at the Dublin PTG about how the Neutron API server should handle invalid query parameters [1]. Based on the feedback there, I am sending this email to seek advice from the API-WG in this regard.

As a brief recap, we were discussing how the Neutron API server should behave if invalid query parameters are supplied. Per my understanding, the general consensus is to make the Neutron API server behave consistently with other OpenStack projects. The question for the API-WG is whether there is any guideline to clarify how OpenStack projects should handle invalid query parameters. Query parameters vary across projects, but it seems most projects support these four categories of query parameters: sorting, pagination, filtering, and fields selection. I saw the API-WG provided a guideline to define how to handle valid parameters of these categories [2], but it doesn't seem to define how to handle invalid parameters.

I wonder if the API-WG could clarify it. For example, if users provide an invalid filter when listing resources, should the API server ignore the invalid filter and return a successful response? Or should it return an error response? Below is a list of specific scenarios and examples to consider:

1. Invalid sorting. For example:
GET "/v2.0/networks?sort_dir=desc&sort_key="
GET "/v2.0/networks?sort_dir=&sort_key=xxx"
2. Invalid pagination. For example:
GET "/v2.0/networks?limit=&marker=xxx"
GET "/v2.0/networks?limit=1&marker="
3. Invalid filter. For example:
GET "/v2.0/networks?=xxx"
GET "/v2.0/networks?xxx="
4. Invalid field. For example:
GET "/v2.0/networks?fields="

Best regards,
Hongbin

[1] https://bugs.launchpad.net/neutron/+bug/1749820
[2] https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From cdent+os at anticdent.org Wed Mar 7 21:12:22 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 7 Mar 2018 21:12:22 +0000 (GMT)
Subject: [openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters
In-Reply-To: <0957CD8F4B55C0418161614FEC580D6B2F8C3E1B at YYZEML702-CHM.china.huawei.com>
References: <0957CD8F4B55C0418161614FEC580D6B2F8C3E1B at YYZEML702-CHM.china.huawei.com>
Message-ID:

On Wed, 7 Mar 2018, Hongbin Lu wrote:

> As a brief recap, we were discussing how the Neutron API server should behave if invalid query parameters are supplied. Per my understanding, the general consensus is to make the Neutron API server behave consistently with other OpenStack projects. The question for the API-WG is whether there is any guideline to clarify how OpenStack projects should handle invalid query parameters. Query parameters vary across projects, but it seems most projects support these four categories of query parameters: sorting, pagination, filtering, and fields selection. I saw the API-WG provided a guideline to define how to handle valid parameters of these categories [2], but it doesn't seem to define how to handle invalid parameters.
>
> I wonder if the API-WG could clarify it. For example, if users provide an invalid filter when listing resources, should the API server ignore the invalid filter and return a successful response? Or should it return an error response? Below is a list of specific scenarios and examples to consider:

It's hard to find, but there's existing guidance that touches on this. From http://specs.openstack.org/openstack/api-wg/guidelines/http.html#failure-code-clarifications :

    [I]f the API supports query parameters and a request contains an
    unknown or unsupported parameter, the server should return a 400
    Bad Request response. Invalid values in the request URL should
    never be silently ignored, as the response may not match the
    client's expectation. For example, consider the case where an
    API allows filtering on name by specifying '?name=foo' in the
    query string, and in one such request there is a typo, such as
    '?nmae=foo'. If this error were silently ignored, the user would
    get back all resources instead of just the ones named 'foo',
    which would not be correct. The error message that is returned
    should clearly indicate the problem so that the user could
    correct it and re-submit.

This same logic can be applied to invalid fields used in parameters which can only accept a limited number of inputs (such as sort_key), so in the examples you give, a 400 would be the way to ensure that the user agent is actually made aware that their request had issues.

I hope this helps. Please let the api-sig know if you think we should adjust the guidelines to make this more explicit somehow.
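To make that concrete, the rejection for the typo'd filter above could look something like this (an illustrative sketch only: the exact error body varies by service, and the NeutronError envelope shown here is an assumption about Neutron's usual error format):

GET /v2.0/networks?nmae=foo

HTTP/1.1 400 Bad Request

{
    "NeutronError": {
        "type": "HTTPBadRequest",
        "message": "Unrecognized attribute(s) 'nmae'",
        "detail": ""
    }
}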
--
Chris Dent                       (⊙_⊙')         https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From hongbin.lu at huawei.com Wed Mar 7 21:12:53 2018
From: hongbin.lu at huawei.com (Hongbin Lu)
Date: Wed, 7 Mar 2018 21:12:53 +0000
Subject: [openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters
Message-ID: <0957CD8F4B55C0418161614FEC580D6B2F8C3E42 at YYZEML702-CHM.china.huawei.com>

Hi all,

Please disregard the email below since I used the wrong template. Sorry about that. The email with the same content was re-sent in a new thread http://lists.openstack.org/pipermail/openstack-dev/2018-March/128022.html .

Best regards,
Hongbin

From: Hongbin Lu
Sent: March-07-18 4:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [api-wg][api][neutron] How to handle invalid query parameters

Hi all,

This is a follow-up to the discussion at the Dublin PTG about how the Neutron API server should handle invalid query parameters [1]. Based on the feedback there, I am sending this email to seek advice from the API-WG in this regard.

As a brief recap, we were discussing how the Neutron API server should behave if invalid query parameters are supplied. Per my understanding, the general consensus is to make the Neutron API server behave consistently with other OpenStack projects. The question for the API-WG is whether there is any guideline to clarify how OpenStack projects should handle invalid query parameters. Query parameters vary across projects, but it seems most projects support these four categories of query parameters: sorting, pagination, filtering, and fields selection. I saw the API-WG provided a guideline to define how to handle valid parameters of these categories [2], but it doesn't seem to define how to handle invalid parameters.

I wonder if the API-WG could clarify it. For example, if users provide an invalid filter when listing resources, should the API server ignore the invalid filter and return a successful response? Or should it return an error response? Below is a list of specific scenarios and examples to consider:

1. Invalid sorting. For example:
GET "/v2.0/networks?sort_dir=desc&sort_key="
GET "/v2.0/networks?sort_dir=&sort_key=xxx"
2. Invalid pagination. For example:
GET "/v2.0/networks?limit=&marker=xxx"
GET "/v2.0/networks?limit=1&marker="
3. Invalid filter. For example:
GET "/v2.0/networks?=xxx"
GET "/v2.0/networks?xxx="
4. Invalid field. For example:
GET "/v2.0/networks?fields="

Best regards,
Hongbin

[1] https://bugs.launchpad.net/neutron/+bug/1749820
[2] https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html

________________________________
Huawei Technologies Co., Ltd.
________________________________
This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 5474 bytes
Desc: image001.png
URL:

From mriedemos at gmail.com Wed Mar 7 21:35:57 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 7 Mar 2018 15:35:57 -0600
Subject: [openstack-dev] [tc] [all] TC Report 18-10
In-Reply-To: <730dd4e7-58b5-bd3c-c3bc-ffb072394118 at gmail.com>
References: <730dd4e7-58b5-bd3c-c3bc-ffb072394118 at gmail.com>
Message-ID:

On 3/7/2018 2:24 PM, Lance Bragstad wrote:
> I tried bringing this up during the PTG feedback session last Thursday

Unless you wanted to talk about snow, there was no feedback to be had at the feedback session.
Being able to actually give feedback on the PTG during the PTG feedback session is some unsolicited feedback that I'm going to give now.

--
Thanks,
Matt

From chris.friesen at windriver.com Wed Mar 7 21:55:53 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Wed, 7 Mar 2018 15:55:53 -0600
Subject: [openstack-dev] [Openstack-sigs] [keystone] [oslo] new unified limit library
In-Reply-To:
References: <5AA005E0.7050808 at windriver.com> <4a8db303-318d-c385-c350-ef25702d8b20 at gmail.com> <60EC27CD-7F2F-4328-A09D-94CB92ED7988 at cern.ch> <0C7BCB2F-BE9C-4B8B-8344-0DA03F16BA9A at cern.ch>
Message-ID: <5AA05FE9.6050708 at windriver.com>

On 03/07/2018 10:44 AM, Tim Bell wrote:
> I think nested quotas would give the same thing, i.e. you have a parent project
> for the group and child projects for the users. This would not need user/group
> quotas but continue with the 'project owns resources' approach.

Agreed, I think that if we support nested quotas with a suitable depth of nesting it could be used to handle the existing nova user/project quotas.

Chris

From lbragstad at gmail.com Thu Mar 8 00:10:42 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Wed, 7 Mar 2018 18:10:42 -0600
Subject: [openstack-dev] [keystone] batch processing with unified limits
Message-ID: <0a90f7be-1764-fa50-269a-91b2f252f05f at gmail.com>

The keystone team is parsing the unified limits discussions from last week. One of the things we went over as a group was the usability of the current API [0]. Currently, the create and update APIs support batch processing. So specifying a list of limits is valid for both. This was a part of the original proposal as a way to make it easier for operators to set all their registered limits with a single API call. The API also has unique IDs for each limit reference.

The consensus was that this felt a bit weird with a resource that contains a unique set of attributes that can make up a constraint (service, resource type, and optionally a region). We're discussing ways to make this API more consistent with how the rest of keystone works while maintaining usability for operators.

Does anyone see issues with supporting batch creation for limits and individual updates? In other words, removing the ability to update a set of limits in a single API call, but keeping the ability to create them in batches?

We were talking about this in the keystone channel [1], but opening this up on the ML to get more feedback from other people who were present in those discussions last week.
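To illustrate the batch-create-plus-individual-update shape being discussed (the request bodies below are a sketch based on the experimental API, not a final design; <service_id> is a placeholder):

POST /v3/registered_limits

{
    "registered_limits": [
        {"service_id": "<service_id>", "resource_name": "cores", "default_limit": 20},
        {"service_id": "<service_id>", "resource_name": "ram_mb", "default_limit": 51200}
    ]
}

PATCH /v3/registered_limits/{registered_limit_id}

{
    "registered_limit": {"default_limit": 40}
}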
[0] https://developer.openstack.org/api-ref/identity/v3/index.html#unified-limits
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-07.log.html#t2018-03-07T22:49:46
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From gong.yongsheng at 99cloud.net Thu Mar 8 01:07:43 2018
From: gong.yongsheng at 99cloud.net (Yongsheng Gong)
Date: Thu, 8 Mar 2018 09:07:43 +0800 (CST)
Subject: [openstack-dev] [tacker] tacker project team meeting is changed to GMT 0800 on Tuesdays
Message-ID: <6a76dc7e.5b1f.16203264960.Coremail.gong.yongsheng at 99cloud.net>

FYI
https://review.openstack.org/#/c/550326/

yong sheng gong
99CLOUD Co. Ltd.
Email: gong.yongsheng at 99cloud.net
Addr: Room 806, Tower B, Jiahua Building, No. 9 Shangdi 3rd Street, Haidian District, Beijing, China
Mobile: +86-18618199879
http://99cloud.net
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gmann at ghanshyammann.com Thu Mar 8 01:44:08 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 8 Mar 2018 10:44:08 +0900
Subject: [openstack-dev] [PTG] [Infra] [all] zuulv3 Job Template vs irrelevant files/branch var
Message-ID:

Hi All,

Before the PTG, we were discussing Job Template and irrelevant-files issues on multiple mailing threads [1]. Neither works as expected, which leads to jobs running on irrelevant files and on excluded branches.

At the Dublin PTG, during the infra help hours on Tuesday, we talked about this topic to find the best approach. First of all, thanks to Jim for explaining the zuulv3 workflow for selecting and integrating the matched jobs: how jobs are matched, and how variables like branch and irrelevant-files are handled between the job definition, the job template, and the project's pipeline list.

The current issue (explained in the ML [1]) is with the integrated-gate job template [2], where integrated jobs like tempest-full are run. Other job templates like 'system-required', 'openstack-python-jobs', etc. have the same problem.

After discussion, it turns out to be more complicated to solve these issues right now, and it might take time for Jim/the infra team to come up with a better way to handle job templates and the irrelevant_files/branch variables. We talked about a few possibilities; one is to supersede the variables defined in the job template with the project's pipeline list. For example, if irrelevant_files are defined by both the job template and the project's pipelines, then ignore/skip the job template's values for that variable (or all variables). But this is just an idea, and it is not clear how feasible it is.

But until the best approach/solution is ready, we need some workaround, as the current issue causes many jobs to run on unnecessary patches and consumes a lot of infra resources. We discussed a few of the workarounds mentioned below, and we can go with whichever one the majority of people or the infra team likes/suggests:

1. Do not use the integrated-gate template and let each project have the jobs in their pipeline list
2. Define all the irrelevant files for each project in the job template?
3. Leave it as it is.

..1 http://lists.openstack.org/pipermail/openstack-dev/2018-February/127349.html
http://lists.openstack.org/pipermail/openstack-dev/2018-February/127347.html
..2 https://github.com/openstack-infra/openstack-zuul-jobs/blob/49cd964470c081005f671d6829a14dace2c9ccc2/zuul.d/zuul-legacy-project-templates.yaml#L82
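For readers who have not hit this, a simplified sketch of the conflict (illustrative zuul YAML, not copied from the actual repos): the template attaches a job with no irrelevant-files, the project adds its own irrelevant-files, and today the two variants are effectively OR'd together, so the job can still run on, e.g., doc-only changes instead of the project's list simply winning:

- project-template:
    name: integrated-gate
    check:
      jobs:
        - tempest-full

- project:
    name: openstack/example
    templates:
      - integrated-gate
    check:
      jobs:
        - tempest-full:
            irrelevant-files:
              - ^doc/.*$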
-gmann

From gmann at ghanshyammann.com Thu Mar 8 02:36:19 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 8 Mar 2018 11:36:19 +0900
Subject: [openstack-dev] [QA] Meeting Thursday Mar 8th at 8:00 UTC
Message-ID:

Hello everyone,

Hope everyone is back home after the Dublin PTG.

This is a reminder for the QA team meeting on Thursday, Mar 8th at 8:00 UTC in the #openstack-meeting channel. The agenda for the meeting can be found here: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_Mar_8th_2018_.280800_UTC.29

We discussed new meeting/office hour times, which we will finalize in this meeting and then publish to the ML, wiki, etc. Anyone is welcome to add an item to the agenda.

-gmann

From gmann at ghanshyammann.com Thu Mar 8 02:51:04 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 8 Mar 2018 11:51:04 +0900
Subject: Re: [openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters
In-Reply-To:
References: <0957CD8F4B55C0418161614FEC580D6B2F8C3E1B at YYZEML702-CHM.china.huawei.com>
Message-ID:

On Thu, Mar 8, 2018 at 6:12 AM, Chris Dent wrote:
> On Wed, 7 Mar 2018, Hongbin Lu wrote:
>
>> As a brief recap, we were discussing how the Neutron API server should behave
>> if invalid query parameters are supplied. Per my understanding, the general
>> consensus is to make the Neutron API server behave consistently with other
>> OpenStack projects. The question for the API-WG is whether there is any guideline to
>> clarify how OpenStack projects should handle invalid query parameters. Query
>> parameters vary across projects, but it seems most projects
>> support these four categories of query parameters: sorting, pagination,
>> filtering, and fields selection. I saw the API-WG provided a guideline to define
>> how to handle valid parameters of these categories [2], but it doesn't seem
>> to define how to handle invalid parameters.
>>
>> I wonder if the API-WG could clarify it. For example, if users provide an
>> invalid filter when listing resources, should the API server ignore the
>> invalid filter and return a successful response? Or should it return an
>> error response? Below is a list of specific scenarios and examples to
>> consider:
>
>
> It's hard to find, but there's existing guidance that touches on
> this. From
> http://specs.openstack.org/openstack/api-wg/guidelines/http.html#failure-code-clarifications
> :
>
>     [I]f the API supports query parameters and a request contains an
>     unknown or unsupported parameter, the server should return a 400
>     Bad Request response. Invalid values in the request URL should
>     never be silently ignored, as the response may not match the
>     client's expectation. For example, consider the case where an
>     API allows filtering on name by specifying '?name=foo' in the
>     query string, and in one such request there is a typo, such as
>     '?nmae=foo'. If this error were silently ignored, the user would
>     get back all resources instead of just the ones named 'foo',
>     which would not be correct. The error message that is returned
>     should clearly indicate the problem so that the user could
>     correct it and re-submit.
>
> This same logic can be applied to invalid fields used in parameters
> which can only accept a limited number of inputs (such as sort_key),
> so in the examples you give, a 400 would be the way to ensure that
> the user agent is actually made aware that their request had issues.

+1. Nova also implemented query parameter validation using JSON Schema [1], returning 400 for a few sorting params (which were mainly joined tables) and ignoring the others. We had to leave the unsupported parameters ignored for now due to backward compatibility. But for newly introduced APIs, we follow the above guidelines and return 400 on any additional or wrong parameter. Example [2].
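As a rough illustration of that pattern (a made-up schema for the neutron-style examples above, not nova's actual schema), a JSON Schema that rejects unknown or malformed query parameters could look like:

{
    "type": "object",
    "properties": {
        "sort_key": {"type": "string", "enum": ["id", "name", "status"]},
        "sort_dir": {"type": "string", "enum": ["asc", "desc"]},
        "limit": {"type": "string", "pattern": "^[0-9]+$"},
        "marker": {"type": "string", "minLength": 1}
    },
    "additionalProperties": false
}

With "additionalProperties": false, a request like GET "/v2.0/networks?nmae=foo" fails validation, and the server can return a 400 naming the offending parameter.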
>
> I hope this helps. Please let the api-sig know if you think we
> should adjust the guidelines to make this more explicit somehow.

..1 https://github.com/openstack/nova/blob/c7b54a80ac25f6a01d0a150c546532f5ae2592ce/nova/api/openstack/compute/schemas/servers.py#L334
..2 https://github.com/openstack/nova/blob/c7b54a80ac25f6a01d0a150c546532f5ae2592ce/nova/api/openstack/compute/schemas/migrations.py#L43

> --
> Chris Dent (⊙_⊙') https://anticdent.org/
> freenode: cdent tw: @anticdent
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From glongwave at gmail.com Thu Mar 8 02:56:02 2018
From: glongwave at gmail.com (ChangBo Guo)
Date: Thu, 8 Mar 2018 10:56:02 +0800
Subject: [openstack-dev] [keystone] [oslo] new unified limit library
In-Reply-To:
References:
Message-ID:

Yeah, we need a unified limit library. From the oslo side we need a spec according to the new library process [0]. The spec will be useful to track the background and update the oslo wiki [1].

[0] http://specs.openstack.org/openstack/oslo-specs/specs/policy/new-libraries.html
[1] https://wiki.openstack.org/wiki/Oslo

2018-03-07 22:58 GMT+08:00 Lance Bragstad :

> Hi all,
>
> Per the identity-integration track at the PTG [0], I proposed a new oslo
> library for services to use for hierarchical quota enforcement [1]. Let
> me know if you have any questions or concerns about the library. If the
> oslo team would like, I can add an agenda item for next week's oslo
> meeting to discuss.
>
> Thanks,
>
> Lance
>
> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
> [1] https://review.openstack.org/#/c/550491/
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
ChangBo Guo(gcb)
Community Director @EasyStack
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shaohe.feng at intel.com Thu Mar 8 03:41:07 2018
From: shaohe.feng at intel.com (Feng, Shaohe)
Date: Thu, 8 Mar 2018 03:41:07 +0000
Subject: [openstack-dev] [cyborg][glance][nova]cyborg FPGA management flow discussion.
In-Reply-To: <7B5303F69BB16B41BB853647B3E5BD7054026FB2 at SHSMSX101.ccr.corp.intel.com>
References: <7B5303F69BB16B41BB853647B3E5BD7054026FB2 at SHSMSX101.ccr.corp.intel.com>
Message-ID: <7B5303F69BB16B41BB853647B3E5BD7054027D8E at SHSMSX101.ccr.corp.intel.com>

Hi All:

The POC is here:
https://github.com/shaohef/cyborg

BR
Shaohe Feng

_____________________________________________
From: Feng, Shaohe
Sent: February 12, 2018 15:06
To: openstack-dev at lists.openstack.org; openstack-operators at lists.openstack.org
Cc: Du, Dolpher ; Zhipeng Huang ; Ding, Jian-feng ; Sun, Yih Leong ; Nadathur, Sundar ; Dutch ; Rushil Chugh ; Nguyen Hung Phuong ; Justin Kilpatrick ; Ranganathan, Shobha ; zhuli ; bao.yumeng at zte.com.cn; xiaodongpan at tencent.com; kong.wei2 at zte.com.cn; li.xiang2 at zte.com.cn; Feng, Shaohe
Subject: [openstack-dev][cyborg][glance][nova]cyborg FPGA management flow discussion.
use raw glance api like: $ openstack image create --file mypath/FPGA.img fpga.img $ openstack image set --tag FPGA --property vendor=intel --property type=crypto 58b813db-1fb7-43ec-b85c-3b771c685d22 The image must have "FPGA" tag and accelerator type(such as type=crypto). B. cyborg support a new api to upload a image. This API will wrap glance api and include the above steps, also make image record in it's local DB. 2. Cyborg agent/conductor get the FPGA image info from glance. There are also two suggestions to get the FPGA image info. A. use raw glance api. Cyborg will get the images by FPGA tag and timestamp periodically and store them in it's local cache. It will use the images tags and properties to form placement taits and resource_class name. B. store the imformations when call cybort's new upload API. 3. Image download. call glance image download API to local file. and make a corresponding md5 files for checksum. GAP in image management: missing related glance image client in cyborg. resource report management for scheduler. 1. Cyborg agent/conductor need synthesize all useful information from FPGA driver and image information. The traits will be like: CUSTOM_FPGA, CUSTOM_ACCELERATOR_CRYPTO, The resource_class will be like: CUSTOM_FPGA_INTEL_PF, CUSTOM_FPGA_INTEL_VF {"inventories": "CUSTOM_FPGA_INTEL_PF": { "allocation_ratio": 1.0, "max_unit": 4, "min_unit": 1, "reserved": 0, "step_size": 1, "total": 4 } } Accelerator claim and release: 1. Cybort will support the releated API for accelerator claim and release. It can pass the follow parameters: nodename: Which host that accelerator located on, it is required. type: This accelerator type, cyborg can get image uuid by it. it is optional. image uuid: the uuid of FPGA bitstream image, . it is optional. traits: the traits info that cyborg reports to placement. resource_class: the resource_class name that reports to placement. And return the address for the accelerator. At present, it is the PCIE_ADDRESS. 2. When claim an accelerator, type and image is None, cybort will not program the fpga for user. FPGA accelerator program API: We still need to support an independent program API for some specific scenarios. Such as as a FPGA developer, I will change my verilog logical frequently and need to do verification on my guest. I upload my new bitstream image to glance, and call cyborg to program my FPGA accelerator. End user operations follow: 1. upload an bitstream image to glance if necessary and set its tags(at least FPGA is requied) and property. sucn as: --tag FPGA --property vendor=intel --property type=crypto 2. list the FPGA related traits and resource_class names by placement API. such as get "CUSTOM_FPGA_INTEL_PF" resource_class names and "CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO" traits. 3. create a new falvor wiht his expected traits and resource_class as extra spec. such as: "resourcesn:CUSTOM_FPGA_INTEL_PF=2" n is an integer or empty string. "required:CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO". 4. create the VM with this flavor. BR Shaohe Feng -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Mar 8 04:08:13 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 8 Mar 2018 12:08:13 +0800 Subject: [openstack-dev] [cyborg][glance][nova]cyborg FPGA management flow disscusion. 
In-Reply-To: <7B5303F69BB16B41BB853647B3E5BD7054027D8E@SHSMSX101.ccr.corp.intel.com> References: <7B5303F69BB16B41BB853647B3E5BD7054026FB2@SHSMSX101.ccr.corp.intel.com> <7B5303F69BB16B41BB853647B3E5BD7054027D8E@SHSMSX101.ccr.corp.intel.com> Message-ID: Thanks Shaohe, Let's schedule a video conf session next week. On Thu, Mar 8, 2018 at 11:41 AM, Feng, Shaohe wrote: > Hi All: > > The POC is here: > *https://github.com/shaohef/cyborg* > > BR > Shaohe Feng > > _____________________________________________ > *From:* Feng, Shaohe > *Sent:* 2018年2月12日 15:06 > *To:* openstack-dev at lists.openstack.org; openstack-operators at lists. > openstack.org > *Cc:* Du, Dolpher ; Zhipeng Huang < > zhipengh512 at gmail.com>; Ding, Jian-feng ; Sun, > Yih Leong ; Nadathur, Sundar < > sundar.nadathur at intel.com>; Dutch ; Rushil Chugh < > rushil.chugh at gmail.com>; Nguyen Hung Phuong ; > Justin Kilpatrick ; Ranganathan, Shobha < > shobha.ranganathan at intel.com>; zhuli ; > bao.yumeng at zte.com.cn; xiaodongpan at tencent.com; kong.wei2 at zte.com.cn; > li.xiang2 at zte.com.cn; Feng, Shaohe > *Subject:* [openstack-dev][cyborg][glance][nova]cyborg FPGA management > flow disscusion. > > > Now I am working on an FPGA management POC with Dolpher. > We have finished some code, and have discussion with Li Liu and some > cyborg developer guys. > > Here are some discussions: > > image management > 1. User should upload the FPGA image to glance and set the tags as follow: > There are two suggestions to upload an FPGA image. > A. use raw glance api like: > $ openstack image create --file mypath/FPGA.img fpga.img > $ openstack image set --tag FPGA --property vendor=intel --property > type=crypto 58b813db-1fb7-43ec-b85c-3b771c685d22 > The image must have "FPGA" tag and accelerator type(such as > type=crypto). > B. cyborg support a new api to upload a image. > This API will wrap glance api and include the above steps, also make > image record in it's local DB. > > 2. Cyborg agent/conductor get the FPGA image info from glance. > There are also two suggestions to get the FPGA image info. > A. use raw glance api. > Cyborg will get the images by FPGA tag and timestamp periodically and > store them in it's local cache. > It will use the images tags and properties to form placement taits and > resource_class name. > B. store the imformations when call cybort's new upload API. > > 3. Image download. > call glance image download API to local file. and make a corresponding md5 > files for checksum. > > GAP in image management: > missing related glance image client in cyborg. > > resource report management for scheduler. > 1. Cyborg agent/conductor need synthesize all useful information from > FPGA driver and image information. > The traits will be like: > CUSTOM_FPGA, CUSTOM_ACCELERATOR_CRYPTO, > The resource_class will be like: > CUSTOM_FPGA_INTEL_PF, CUSTOM_FPGA_INTEL_VF > {"inventories": > "CUSTOM_FPGA_INTEL_PF": { > "allocation_ratio": 1.0, > "max_unit": 4, > "min_unit": 1, > "reserved": 0, > "step_size": 1, > "total": 4 > } > } > > > Accelerator claim and release: > 1. Cybort will support the releated API for accelerator claim and release. > It can pass the follow parameters: > nodename: Which host that accelerator located on, it is required. > type: This accelerator type, cyborg can get image uuid by it. it is > optional. > image uuid: the uuid of FPGA bitstream image, . it is optional. > traits: the traits info that cyborg reports to placement. > resource_class: the resource_class name that reports to placement. 
> And return the address for the accelerator. At present, it is the > PCIE_ADDRESS. > 2. When claim an accelerator, type and image is None, cybort will not > program the fpga for user. > > FPGA accelerator program API: > We still need to support an independent program API for some specific > scenarios. > Such as as a FPGA developer, I will change my verilog logical frequently > and need to do verification on my guest. > I upload my new bitstream image to glance, and call cyborg to program my > FPGA accelerator. > > End user operations follow: > 1. upload an bitstream image to glance if necessary and set its tags(at > least FPGA is requied) and property. > sucn as: --tag FPGA --property vendor=intel --property type=crypto > 2. list the FPGA related traits and resource_class names by placement API. > such as get "CUSTOM_FPGA_INTEL_PF" resource_class names and > "CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO" traits. > 3. create a new falvor wiht his expected traits and resource_class as > extra spec. > such as: > "resourcesn:CUSTOM_FPGA_INTEL_PF=2" n is an integer or empty > string. > "required:CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO". > 4. create the VM with this flavor. > > > BR > Shaohe Feng > > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlowja at fastmail.com Thu Mar 8 05:55:50 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Wed, 07 Mar 2018 21:55:50 -0800 Subject: [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: References: Message-ID: <5AA0D066.1070600@fastmail.com> So the following was a prior effort: https://github.com/openstack/delimiter Maybe just continue down the path of that and/or take that whole repo over and iterate (or adjust the prior code, or ...)?? Or if not that's ok to, ya'll get to decide. https://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal Lance Bragstad wrote: > Hi all, > > Per the identity-integration track at the PTG [0], I proposed a new oslo > library for services to use for hierarchical quota enforcement [1]. Let > me know if you have any questions or concerns about the library. If the > oslo team would like, I can add an agenda item for next weeks oslo > meeting to discuss. > > Thanks, > > Lance > > [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg > [1] https://review.openstack.org/#/c/550491/ > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Thu Mar 8 06:25:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 8 Mar 2018 00:25:53 -0600 Subject: [openstack-dev] [release] Release countdown for week R-24 and R-23, March 12-23 Message-ID: <20180308062553.GA26162@smpop.zsmctybcx2nudpv14fpuyj253d.gx.internal.cloudapp.net> Welcome back to our regular release countdown email. 
Now that the PTG is over (hopefully no one is still waiting for their flight in DUB), we will send regular weekly countdown emails.

Development Focus
-----------------

Teams should be focusing on taking back discussions from the PTG and planning what can be done for Rocky.

General Information
-------------------

All teams should review their release liaison information and make sure it is up to date [1].

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons

While reviewing liaisons, this would also be a good time to make sure your declared release model matches the project's plans for Rocky (e.g. [2]). This should be done prior to the first milestone and can be done by proposing a change to the Rocky deliverable file for the project(s) affected [3].

[2] https://github.com/openstack/releases/blob/e0a63f7e896abdf4d66fb3ebeaacf4e17f688c38/deliverables/queens/glance.yaml#L5
[3] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/rocky

Teams should start brainstorming Forum topics. For more information on the Forum selection process, see the information posted to the mailing list [4].

[4] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127944.html

Upcoming Deadlines & Dates
--------------------------

Rocky-1 milestone: April 19 (R-19 week)
Forum at OpenStack Summit in Vancouver: May 21-24

--
Sean McGinnis (smcginnis)

From aj at suse.com Thu Mar 8 07:08:11 2018
From: aj at suse.com (Andreas Jaeger)
Date: Thu, 8 Mar 2018 08:08:11 +0100
Subject: [openstack-dev] [PTG] [Infra] [all] zuulv3 Job Template vs irrelevant files/branch var
In-Reply-To: 
References: 
Message-ID: 

On 2018-03-08 02:44, Ghanshyam Mann wrote:
> Hi All,
>
> Before the PTG, we were discussing the Job Template and irrelevant-files
> issues on multiple mailing threads [1].
>
> Neither works as expected, which leads to jobs being run on
> irrelevant files and on excluded branches.
>
> At the Dublin PTG, during the infra help hours on Tuesday, we talked
> about this topic to find the best approach.
>
> First of all, thanks to Jim for explaining the zuulv3 workflow for
> selecting and integrating the matched jobs: how jobs are matched,
> and how variables like branch and irrelevant-files are handled
> between the job definition, the job template and the project's pipeline
> list.
>
> The current issue (explained in ML [1]) is with the integrated-gate job
> template [2], where integrated jobs like tempest-full are being run.
> Other job templates like 'system-required', 'openstack-python-jobs'
> etc. are affected in the same way.
>
> After discussion, it turns out to be more complicated to solve these
> issues right now, and it might take time for Jim/the infra team to come
> up with a better way to handle job templates and the
> irrelevant_files/branch vars.
>
> We talked about a few possibilities; one way is to supersede the vars
> defined in the job template with the project's pipeline list. For
> example, if irrelevant_files are defined by both the job template and
> the project's pipelines, then ignore/skip the job template values of
> that var (or of all vars). But this is just an idea, and it is not yet
> clear how feasible it is or whether it is the best option.
>
> But until the best approach/solution is ready, we need some
> workaround, as the current issue causes many jobs to run on unrelated
> patches and consumes a lot of infra resources.
>
> We discussed the workarounds mentioned below, and we can go for one
> based on what the majority of people or the infra team like/suggest:
> 1. Do not use the integrated-gate template and let each project have the
> jobs in their own pipeline list
> 2.
Define all the irrelevant files for each project in the job template?
> 3. Leave it as it is.
>
> ..1 http://lists.openstack.org/pipermail/openstack-dev/2018-February/127349.html
> http://lists.openstack.org/pipermail/openstack-dev/2018-February/127347.html
>
> ..2 https://github.com/openstack-infra/openstack-zuul-jobs/blob/49cd964470c081005f671d6829a14dace2c9ccc2/zuul.d/zuul-legacy-project-templates.yaml#L82

I'm fine with option 2 for those projects that want to do some changes for now. Breaking up the integrated-gate will cause more maintenance problems.

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
       HRB 21284 (AG Nürnberg)
    GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From geguileo at redhat.com Thu Mar 8 10:20:36 2018
From: geguileo at redhat.com (Gorka Eguileor)
Date: Thu, 8 Mar 2018 11:20:36 +0100
Subject: [openstack-dev] [cinder] Cinder volume revert to snapshot with Ceph
In-Reply-To: 
References: 
Message-ID: <20180308102036.in6mwvumv3bbh62a@localhost>

On 06/03, 李杰 wrote:
> Hi, all
>
> This is the patch [0] about volume revert to snapshot with Ceph. Is anyone
> working on this patchset, or was a new patchset proposed to implement the
> RBD-specific functionality? Can you tell me more about this? Thank you
> very much.
> The link is here.
> Re: https://review.openstack.org/#/c/481566/
>
> Best Regards
> Lijie

Hi,

As far as I know nobody else has been working on that feature, and if you are looking to work on it please keep in mind the comments I wrote on that patch during the review.

Cheers,
Gorka.

From thierry at openstack.org Thu Mar 8 10:52:13 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 8 Mar 2018 11:52:13 +0100
Subject: [openstack-dev] [tc] [all] TC Report 18-10
In-Reply-To: <39be1699-12ed-b81d-83df-352cc6e6b318@gmail.com>
References: <39be1699-12ed-b81d-83df-352cc6e6b318@gmail.com>
Message-ID: <31982df4-38cd-6d2b-890e-102f6f7bf879@openstack.org>

Matt Riedemann wrote:
> I don't get the inward/outward thing. First two days of the old design
> summit (ops summit?) format was all cross-project stuff (docs, upgrades,
> testing, ops feedback, etc). That's the same as what happens at the PTG
> now too. The last three days of the old design summit (and now PTG) are
> vertical project discussion for the most part, but Thursday has also
> become a de-facto cross-project day for a lot of teams (nova/cinder,
> nova/neutron, nova/ironic all happened on Thursday). I'm not sure what
> is happening at the Forum events that is so wildly different, or more
> productive, than what we can do at the PTG - and arguably do it better
> at the PTG because of fewer distractions to be giving talks, talking to
> customers, and having time-boxed 40 minute slots.

The PTG has always been about taking the team discussions that happened at the Ops Summit / Design Summit and having them in a more productive environment. Beyond the suboptimal productivity (due to too many distractions / other commitments), the problem with the old Design Summit was that it prevented team members from making the best use of the Summit event. You would travel to a place where all our community gets together, only to isolate yourself with your teammates trying to get stuff done. That was silly. You should use the time there to engage *outside* of your team. And by that I don't mean inter-team work, or participating in other groups like SIGs or horizontal teams.
I mean giving talks, presenting the work you do (and how you do it) to newcomers, watching talks, engaging with happy users, learning about the state of our ecosystem, and discussing cross-community issues with a larger section of our community (at the Forum). The context switch between this inward work (work with your team, or work within any transversal work group you're interested in), and this outward work (engaging with other groups you're not a part of, listening to newcomers) is expensive. It's hard to take the time to *listen* when you try to get your work for the next 6 months organized and done. Oh, and in the above paragraphs, I'm not distinguishing "devs" from "ops". This applies to all teams, to any contributor engaged in making OpenStack a reality. Having the Public Cloud WG team meet at the PTG was great, and we should definitely have ANY OpenStack team wanting to meet and get things done at future PTGs. -- Thierry Carrez (ttx) From tpb at dyncloud.net Thu Mar 8 11:46:16 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 8 Mar 2018 06:46:16 -0500 Subject: [openstack-dev] [manila] no weekly meeting March 8 Message-ID: <20180308114616.educnoa424b6aukp@barron.net> Let's skip the manila weekly meeting March 8 since people are still catching up after travel delays or still travelling (/me confesses) and the weekly agenda shows no new non-recurring additions. We'll plan on meeting as normal at 1500 UTC March 15 in #openstack-meeting-alt. Add agenda items here [1]. Cheers, -- Tom Barron [1] https://wiki.openstack.org/wiki/Manila/Meetings -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jaypipes at gmail.com Thu Mar 8 12:51:27 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 8 Mar 2018 07:51:27 -0500 Subject: [openstack-dev] [nova][placement] PTG Summary and Rocky Priorities Message-ID: <9b6b4b7e-02d7-28e0-8d6d-53e1849827f8@gmail.com> We had a productive PTG and were able to discuss a great many scheduler-related topics. I've put together an etherpad [0] with a summary, reproduced below. Expect follow-up emails about each priority item in the scheduler track from those contributors working on that area. Best, -jay Placement/scheduler: Rocky PTG Summary == Key topics == - Aggregates - How we messed up operators using nova host aggregates for allocation ratios - Placement currently doesn't "auto-create" placement aggregates when nova host aggregates change - Standardizing trait handling for virt drivers - Placement REST API - Partial allocation patching - Removing assumptions around generation 0 - Supporting policy/RBAC -NUMA - Supporting both shared and dedicated CPU on the same host as well as the same instance - vGPU handling - Tracking ingress/egress bandwidth resources using placement - Finally supporting live migration of CPU-pinned instances == Agreements and decisions == - dansmith's "placement request filters" work is an important enabler of a number of use cases, particularly around aggregate filtering. Spec is already approved here: https://review.openstack.org/#/c/544585/ - We need a method of filtering providers that do NOT have a certain trait. This is tentatively being called "forbidden traits". 
Spec review here: https://review.openstack.org/548915
- For parity/consistency reasons, we should add the in_tree= query parameter to GET /resource_providers
- To assist operators, add some new osc-placement CLI commands for applying traits/allocation ratio to batches of resource providers in an aggregate
- We should allow image metadata to specify required traits in the same fashion as flavor extra specs. Spec review here: https://review.openstack.org/#/c/541507/
- virt drivers should begin reporting their CPU features as traits. Spec review here: https://review.openstack.org/#/c/497733/
- Furthermore, virt drivers should respect the cpu_model CONF option for overriding CPU-related traits
- We will eventually want to provide the ability to patch an already existing allocation
- Hot-attaching a network interface is the canonical use case here. We want to add the new NIC resources to the existing allocation for the instance consumer without needing to re-PUT the entire allocation
- In order to do this, we will need to add a generation field to the consumers table, allowing multiple allocation writers to ensure their view of the consumer is consistent (TODO: need a blueprint/spec for this)
- We should extricate the standard resource classes currently defined in `nova.objects.fields.ResourceClass` into a small `os-resource-classes` library (TODO: need a blueprint/spec for this)
- We should use oslo.policy in the placement API (TODO: specless blueprint for this)
- The use case here is making the transition to placement easy for operators that currently use the os-aggregates interface for managing compute resources
- Calling code should not assume the initial generation for a resource provider is zero. Spec review here: https://review.openstack.org/#/c/548903/
- Extracting placement into separate packages is not a priority, but we think incremental progress toward extraction can be made in Rocky
- Placement's microversion handling should be extracted into a separate library
- Trimming nova imports
- We should add some support to nova-manage to assist operators using the caching scheduler to migrate to placement (and get rid of the caching scheduler)
- VGPU_DISPLAY_HEAD resource class should be removed and replaced with a set of os-traits traits that indicate the maximum supported number of display heads for the vGPU type
- A new PCPU resource class should be created to describe physical CPUs (logical processors in the hardware). Virt drivers will be able to set inventories of PCPU on resource providers representing NUMA nodes and therefore use placement to track dedicated CPU resources (TODO: need a blueprint/spec for this)
- artom is going to write a spec for supporting live migration of CPU-pinned instances (and abandon the complicated old patches)
- Multiple agreements about the strict minimum bandwidth support feature in nova
- Spec has already been updated accordingly: https://review.openstack.org/#/c/502306/
- For now we keep the hostname as the information connecting the nova-compute and the neutron-agent on the same host, but we are aiming for having the hostname as an FQDN to avoid possible ambiguity.
- We agreed not to make this feature dependent on moving the nova port create to the conductor. The current scope is to support pre-created neutron ports only.
- Neutron will provide the resource request in the port API so this feature does not depend on the neutron port binding API work
- Neutron will create resource providers in placement under the compute RP.
Also Neutron will report inventories on those RPs
- Nova will do the claim of the port-related resources in placement and the consumer_id will be the instance UUID
- We should mirror nova host aggregate information to placement using an online data migration technique on the add/remove_host methods of nova.objects.Aggregate and a `nova-manage db online_migration` command

== Priorities for Rocky release cycle ==

1. Merge the update_provider_tree patch series (efried)
2. Placement request filters (dansmith)
3. Mirror aggregate information from nova to placement (jaypipes)
4. Forbidden traits (cdent)

== Non-priority Items for Rocky ==

- Add consumers.generation field and related API plumbing (efried and cdent)
- Support requested traits in image metadata (arvind)
- Provide CLI functionality to set traits and things like allocation ratios for a batch of resource providers via aggregate (ttsurya)
- Migrating off of the caching scheduler and on to placement (mriedem)
- Create `os-resource-classes` library and write migration code to replace `nova.objects.fields.ResourceClass` usage with calls to os_resource_classes
- Policy/RBAC support in Placement REST API (mriedem)
- Extract placement's microversion handling into separate library (cdent)
- CPU-pinned instance live migration support (stephenfin and artom)

[0] https://etherpad.openstack.org/p/rocky-ptg-scheduler-placement-summary

From zhipengh512 at gmail.com Thu Mar 8 12:53:20 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Thu, 8 Mar 2018 12:53:20 +0800
Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
In-Reply-To: 
References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com>
Message-ID: 

@jay I'm also against a weigher in nova/placement. This should be an optional step that depends on the vendor implementation, not a default one.

@Alex I think we should explore the idea of a preferred trait.

@Matthew: Like Sean said, Cyborg wants to support both reprogrammable FPGAs and pre-programmed ones. Therefore it is correct that, in your description, the programming operation should be a call from Nova to Cyborg, and cyborg will complete the operation while nova waits. The only problem is that the weigher step should be an optional one.

On Wed, Mar 7, 2018 at 9:21 PM, Jay Pipes wrote:
> On 03/06/2018 09:36 PM, Alex Xu wrote:
>> 2018-03-07 10:21 GMT+08:00 Alex Xu <soulxu at gmail.com>:
>>
>> 2018-03-06 22:45 GMT+08:00 Mooney, Sean K:
>>
>> From: Matthew Booth [mailto:mbooth at redhat.com]
>> Sent: Saturday, March 3, 2018 4:15 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple
>> functions
>>
>> On 2 March 2018 at 14:31, Jay Pipes wrote:
>> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
>> Hello Nova team,
>>
>> During the Cyborg discussion at Rocky PTG, we
>> proposed a flow for FPGAs wherein the request spec asks
>> for a device type as a resource class, and optionally a
>> function (such as encryption) in the extra specs. This
>> does not seem to work well for the usage model that I’ll
>> describe below.
>>
>> An FPGA device may implement more than one function. For
>> example, it may implement both compression and
>> encryption.
>> Say a cluster has 10 devices of device type
>> X, and each of them is programmed to offer 2 instances
>> of function A and 4 instances of function B. More
>> specifically, the device may implement 6 PCI functions,
>> with 2 of them tied to function A, and the other 4 tied
>> to function B. So, we could have 6 separate instances
>> accessing functions on the same device.
>>
>> Does this imply that Cyborg can't reprogram the FPGA at all?
>>
>> [Mooney, Sean K] cyborg is intended to support fixed function
>> accelerators also, so it will not always be able to program the
>> accelerator. In this case, where an FPGA is preprogrammed with a
>> multi-function bitstream that is statically provisioned, cyborg
>> will not be able to reprogram the slot if any of the functions
>> from that slot are already allocated to an instance. In this
>> case it will have to treat it like a fixed function device and
>> simply allocate an unused VF of the correct type if available.
>>
>> In the current flow, the device type X is modeled as a
>> resource class, so Placement will count how many of them
>> are in use. A flavor for ‘RC device-type-X + function A’
>> will consume one instance of the RC device-type-X. But
>> this is not right because this precludes other functions
>> on the same device instance from getting used.
>>
>> One way to solve this is to declare functions A and B as
>> resource classes themselves and have the flavor request
>> the function RC. Placement will then correctly count the
>> function instances. However, there is still a problem:
>> if the requested function A is not available, Placement
>> will return an empty list of RPs, but we need some way
>> to reprogram some device to create an instance of
>> function A.
>>
>> Clearly, nova is not going to be reprogramming devices with
>> an instance of a particular function.
>>
>> Cyborg might need to have a separate agent that listens to
>> the nova notifications queue and upon seeing an event that
>> indicates a failed build due to lack of resources, then
>> Cyborg can try and reprogram a device and then try
>> rebuilding the original request.
>>
>> It was my understanding from that discussion that we intend to
>> insert Cyborg into the spawn workflow for device configuration
>> in the same way that we currently insert resources provided by
>> Cinder and Neutron. So while Nova won't be reprogramming a
>> device, it will be calling out to Cyborg to reprogram a device,
>> and waiting while that happens.
>>
>> My understanding is (and I concede some areas are a little
>> hazy):
>>
>> * The flavor says device type X with function Y
>> * Placement tells us everywhere with device type X
>> * A weigher orders these by devices which already have an
>> available function Y (where is this metadata stored?)
>> * Nova schedules to host Z
>> * Nova host Z asks cyborg for a local function Y and blocks
>> * Cyborg hopefully returns function Y which is already
>> available
>> * If not, Cyborg reprograms a function Y, then returns it
>>
>> Can anybody correct me/fill in the gaps?
>>
>> [Mooney, Sean K] that correlates closely to my recollection
>> also. As for the metadata, I think the weigher may need to call
>> to cyborg to retrieve this as it will not be available in the
>> host state object.
>>
>> Is it the nova scheduler weigher, or do we want to support weighing in
>> placement? A function is a trait, as I think, so can we have
>> preferred_traits? I remember we talked about that parameter in the
Function is traits as I think, so can we have >> preferred_traits? I remember we talk about that parameter in the >> past, but we don't have good use-case at that time. This is good >> use-case. >> >> >> If we call the Cyborg from the nova scheduler weigher, that will slow >> down the scheduling a lot also. >> > > Right, which is why I don't want to do any weighing in Placement at all. > If folks want to sort by things that require long-running code/callbacks or > silly temporal things like metrics, they can do that in a custom weigher in > the nova-scheduler and take the performance hit there. > > Best, > -jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.harbott at x-ion.de Thu Mar 8 14:03:29 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Thu, 8 Mar 2018 14:03:29 +0000 Subject: [openstack-dev] Pros and Cons of face-to-face meetings Message-ID: With the current PTG just finished and seeing discussions happen about the format of the next[0], it seems that the advantages of these seem to be pretty clear to most, so let me use the occasion to remind everyone of the disadvantages. Every meeting that is happening is excluding those contributors that can not attend it. And with that it is violating the fourth Open principle[1], having a community that is open to everyone. If you are wondering whom this would affect, here's a non-exclusive (sic) list of valid reasons not to attend physical meetings: - Health issues - Privilege issues (like not getting visa or travel permits) - Caretaking responsibilities (children, other family, animals, plants) - Environmental concerns So when you are considering whether it is worth the money and effort to organise PTGs or similar events, I'd like you also to consider those being excluded by such activities. It is not without a reason that IRC and emails have been settled upon as preferred means of communication. I'm not saying that physical meetings should be dropped altogether, but maybe more effort can be placed into providing means of remote participation, which might at least reduce some effects. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html [1] https://governance.openstack.org/tc/reference/opens.html From paul.bourke at oracle.com Thu Mar 8 14:30:53 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Thu, 8 Mar 2018 14:30:53 +0000 Subject: [openstack-dev] [kolla] PTG Summary Message-ID: <9957a7bb-7823-311e-ba56-8b73a886b041@oracle.com> Hi all, Here's my summary of the various topics we discussed during the PTG. There were one or two I had to step out for but hopefully this serves as an overall recap. Please refer to the main etherpad[0] for more details and links to the session specific pads. 
build.py script refactor
========================
* I think there was little debate that we need this. However, discussion moved fairly quickly towards whether there are changes we can make to our images that would not require maintaining such a large build script in the first place.
* loci images are making good progress and are already in use by openstack-helm
* By moving the start scripts from the kolla images into kolla-ansible we can decouple ourselves from these images and open the possibility of consuming images from other sources such as loci.

Actions:
* Do a poc of externalising start scripts (started under https://review.openstack.org/#/c/550500/)

plugin split from main images
=============================
* Plugins continue to be a contentious issue in Kolla
* The current approach of installing all available plugins 'out of the box' is not working for certain users.
* Sam Betts had a good example of why this is not working for them; I don't feel I can summarise it properly, so will reach out to him to clarify.
* We didn't reach a conclusion on this; it seems there are pros and cons to each approach. Needs further discussion and possibly some PoCs.

ansible "--check" and "--diff" mode
===================================
* Operators would like to see some dry-run-like features in kolla-ansible.
* Would like to see the return of something like genconfig, where configs can be generated ahead of time and diffed/reviewed before deploy.
* Also some general discussion in this session on management and scaling difficulties with kolla.
* Inventory management needs to be more flexible.
* Operations are too slow once you hit about 200 nodes; operators are finding they have to use manual trickery to divide up their inventories.
* A lot of operations take place when very little has changed config-wise.

Actions:
* No specific actions came out of this at this time. I think we'd need more time on this topic to determine specific work items that can make improvements here.

Database backup & recovery
==========================
* Interesting topic, all in agreement kolla should provide some functionality in this area.
* Discussion around which areas of responsibility fall on kolla vs. the operator. E.g. 'kolla should allow for regular database backups; how those are restored is beyond project scope'
* yankcrime has done some groundwork on this as well as a poc.
* Good documentation is important here.

Actions:
* Review yankcrime's poc and provide feedback
* Form a spec detailing what mechanism we want to use to trigger backups, etc.

ceph-ansible
============
* All seem in agreement that the issues and work involved in migrating to ceph-ansible currently outweigh the benefits.
* Decided to stick with improving kolla ceph for now, with bluestore support being a priority.

Actions:
* Write a blueprint to add support for bluestore (https://blueprints.launchpad.net/kolla/+spec/kolla-ceph-bluestore)
* Update docs to better inform operators on why they may or may not want to use kolla ceph vs the alternatives.

Prometheus support for monitoring
=================================
* There have been some previous attempts to add a monitoring stack in Kolla, though none have come to fruition.
* Oracle are looking at prometheus and what it will take to integrate it into Kolla to fill this gap.

Actions:
* Write a spec to detail how this will work.
* Do the work.

self health check support
=========================
* This had some crossover with the monitoring discussion.
* Kolla has some checks in the form of our 'sanity checks', but these are underutilised and not implemented for every service. Tempest or rally would be a better fit here.

Actions:
* Remove the sanity check code from kolla-ansible - it's not fit for purpose and our assumption is no one is using it.
* Make contact with the self healing SIG, and see if we can help here. They may have recommendations for us.
* Make a spec for this.

destroy service & node
======================
* Several aspects to this:
* We would like to be able to remove an individual service as part of kolla-ansible destroy
* It is not clear what the best practice is to remove a control node in Kolla
* Likewise for compute
* This could be automated, but documentation would go a long way here also.

Actions:
* Clearly document how to remove a control/compute node from a kolla deployment.

integrate with docker-compose
=============================
* This is something Jeffrey is working on so we didn't have much to contribute in the way of discussion.

Actions:
* Review and provide feedback on https://review.openstack.org/538581

Implement rolling upgrade for all core projects
===============================================
* Started by defining the 'terms of engagement', i.e. what do we mean by rolling upgrade in kolla, what we currently have vs. what projects support, etc.
* There are two efforts under way here: 1) supporting online upgrade for all core projects that support it, 2) supporting FFU (offline) upgrade in Kolla.
* lujinluo is working on a way to do online FFU in Kolla.
* Testing - we need gates to test upgrade.

Actions:
* Finish implementation of rolling upgrade for all projects that support it in Rocky
* Improve documentation around this and upgrades in general for Kolla
* Spec in Rocky for FFU and associated efforts
* Begin looking at what would be required for upgrade gates in Kolla

Kayobe
======
* mgoddard gave us an overview of the project, what it is and potential crossover/collaboration areas with kolla.
* In short, Kayobe adds the pieces to kolla-ansible required to build an end-to-end OpenStack deployment tool, along the lines of TripleO
* There's lots of good info on this on https://etherpad.openstack.org/p/kolla-rocky-ptg-kayobe

Actions:
* None at this time.

HAProxy config customisation (customise non-OpenStack service conf)
====================================================================
* Discussion continues on the best way to handle non-INI-style config customisation in kolla.
* Similar to the plugins, we have lots of ideas but each comes with pros and cons, so it's not yet clear which is the right approach.

[0] https://etherpad.openstack.org/p/kolla-rocky-ptg-planning

From aspiers at suse.com Thu Mar 8 15:50:12 2018
From: aspiers at suse.com (Adam Spiers)
Date: Thu, 8 Mar 2018 15:50:12 +0000
Subject: [openstack-dev] [kolla] PTG Summary
In-Reply-To: <9957a7bb-7823-311e-ba56-8b73a886b041@oracle.com>
References: <9957a7bb-7823-311e-ba56-8b73a886b041@oracle.com>
Message-ID: <20180308155012.flyvncjvnmtpr7xi@pacific.linksys.moosehall>

Paul Bourke wrote:
>Hi all,
>
>Here's my summary of the various topics we discussed during the PTG.
>There were one or two I had to step out for but hopefully this serves
>as an overall recap. Please refer to the main etherpad[0] for more
>details and links to the session specific pads.

[snipped]

>self health check support
>=========================
>* This had some crossover with the monitoring discussion.
>* Kolla has some checks in the form of our 'sanity checks', but these >are underutilised and not implemented for every service. Tempest or >rally would be a better fit here. > >Actions: >* Remove the sanity check code from kolla-ansible - it's not fit for >purpose and our assumption is noone is using it. >* Make contact with the self healing SIG, and see if we can help here. >They may have recommendations for us. >* Make a spec for this. [snipped] Would be great to collaborate! As the SIG is still new we don't have regular meetings set up yet, but please join #openstack-self-healing on IRC, and you can mail the openstack-sigs list with [self-healing] in the subject. >Implement rolling upgrade for all core projects >=============================================== >* Started by defining the 'terms of engagement', i.e. what do we mean >by rolling upgrade in kolla, what we currently have vs. what projects >support, etc. >* There are two efforts under way here, 1) supporting online upgrade >for all core projects that support it, 2) supporting FFU(offline) >upgrade in Kolla. >* lujinluo is working on a way to do online FFU in Kolla. >* Testing - we need gates to test upgrade. > >Actions: >* Finish implementation of rolling upgrade for all projects that >support it in Rocky >* Improve documentation around this and upgrades in general for Kolla >* Spec in Rocky for FFU and associated efforts >* Begin looking at what would be required for upgrade gates in Kolla Yes, a spec or other docs nailing down exactly what is meant by rolling upgrade and FFU upgrade would be a great help. I was in the FFU session in Dublin and it felt to me like not everyone was on the same page yet regarding the precise definitions, making it difficult for all projects to move forward together in a coherent fashion. From chris.friesen at windriver.com Thu Mar 8 15:54:25 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 8 Mar 2018 09:54:25 -0600 Subject: [openstack-dev] [keystone] batch processing with unified limits In-Reply-To: <0a90f7be-1764-fa50-269a-91b2f252f05f@gmail.com> References: <0a90f7be-1764-fa50-269a-91b2f252f05f@gmail.com> Message-ID: <5AA15CB1.7020509@windriver.com> On 03/07/2018 06:10 PM, Lance Bragstad wrote: > The keystone team is parsing the unified limits discussions from last > week. One of the things we went over as a group was the usability of the > current API [0]. > > Currently, the create and update APIs support batch processing. So > specifying a list of limits is valid for both. This was a part of the > original proposal as a way to make it easier for operators to set all > their registered limits with a single API call. The API also has unique > IDs for each limit reference. The consensus was that this felt a bit > weird with a resource that contains a unique set of attributes that can > make up a constraints (service, resource type, and optionally a region). > We're discussing ways to make this API more consistent with how the rest > of keystone works while maintaining usability for operators. Does anyone > see issues with supporting batch creation for limits and individual > updates? In other words, removing the ability to update a set of limits > in a single API call, but keeping the ability to create them in batches? I suspect this would cover the typical usecases we have for standing up new clouds or a new service within a cloud. 
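For illustration only (the exact request/response schema was still under review at the time, so treat the payload shapes below as assumptions based on the proposal rather than the final API), that would mean keeping batch creation as a single call:

    POST /v3/registered_limits
    {"registered_limits": [
        {"service_id": "...", "resource_name": "volumes", "default_limit": 10},
        {"service_id": "...", "region_id": "RegionOne", "resource_name": "snapshots", "default_limit": 5}
    ]}

while an update would target one limit at a time, addressed by its own ID:

    PATCH /v3/registered_limits/{registered_limit_id}
    {"registered_limit": {"default_limit": 20}}

That split keeps the bulk-seeding use case intact while avoiding the odd semantics of batch updates against a resource that is really identified by its unique (service, resource type, region) tuple.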
Chris From aspiers at suse.com Thu Mar 8 16:03:53 2018 From: aspiers at suse.com (Adam Spiers) Date: Thu, 8 Mar 2018 16:03:53 +0000 Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream In-Reply-To: References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> Message-ID: <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> Georg Kunz wrote: >Hi Adam, > >>Raoul Scarazzini wrote: >>>In the meantime, I'll check yardstick to see which kind of bridge we >>>can build to avoid reinventing the wheel. >> >>Great, thanks! I wish I could immediately help with this, but I haven't had the >>chance to learn yardstick myself yet. We should probably try to recruit >>someone from OPNFV to provide advice. I've cc'd Georg who IIRC was the >>person who originally told me about yardstick :-) He is an NFV expert and is >>also very interested in automated testing efforts: >> >> http://lists.openstack.org/pipermail/openstack-dev/2017-November/124942.html >> >>so he may be able to help with this architectural challenge. > >Thank you for bringing this up here. Better collaboration and sharing of knowledge, methodologies and tools across the communities is really what I'd like to see and facilitate. Hence, I am happy to help. > >I have already started to advertise the newly proposed QA SIG in the OPNFV test WG and I'll happily do the same for the self-healing SIG and any HA testing efforts in general. There is certainly some overlapping interest in these testing aspects between the QA SIG and the self-healing SIG and hence collaboration between both SIGs is crucial. That's fantastic - thank you so much! >One remark regarding tools and frameworks: I consider the true value of a SIG to be a place for talking about methodologies and best practices: What do we need to test? What are the challenges? How can we approach this across communities? The tools and frameworks are important and we should investigate which tools are available, how good they are, how much they fit a given purpose, but at the end of the day they are tools meant to enable well designed testing methodologies. Agreed 100%. [snipped] >>I'm beginning to think that maybe we should organise a video conference call >>to coordinate efforts between the various interested parties. If there is >>appetite for that, the first question is: who wants to be involved? To answer >>that, I have created an etherpad where interested people can sign up: >> >> https://etherpad.openstack.org/p/extreme-testing-contacts >> >>and I've cc'd people who I think would probably be interested. Does this >>sound like a good approach? > >We discussed a very similar idea in Dublin in the context of the QA SIG. I very much like the idea of a cross-community, cross-team, and apparently even cross-SIG approach. Yes agreed again, this is a strong case for collaboration between the self-healing and QA SIGs. In Dublin we also discussed the idea of the self-healing and API SIGs collaborating on the related topic of health check APIs. 
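One possibly useful reference point for the health check API part of this: oslo.middleware already ships a basic healthcheck middleware that services can wire into their paste pipeline, along these lines (a sketch modelled on nova's api-paste.ini; the file path is illustrative):

    [filter:healthcheck]
    paste.filter_factory = oslo_middleware:Healthcheck.factory
    backends = disable_by_file
    disable_by_file_path = /etc/nova/healthcheck_disable

That only gives a coarse alive/disabled signal per endpoint, so I assume the cross-SIG work would be about defining something richer on top of it.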
From rosmaita.fossdev at gmail.com Thu Mar 8 16:53:08 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 8 Mar 2018 11:53:08 -0500 Subject: [openstack-dev] [glance] bi-weekly bug team meeting Message-ID: To anyone interested in attending the bug team meetings: I'm proposing a 45 minute meeting every other week beginning the week of March 19. Time is 10:00 UTC (unless that turns out to be unworkable for everybody). Please indicate your preference for day of the week on this Doodle poll: https://doodle.com/poll/39xbzeu8nh4bgkhq cheers, brian From ed at leafe.com Thu Mar 8 17:03:07 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 8 Mar 2018 11:03:07 -0600 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <715F2E37-E9A4-4C1D-BDD2-FBC4FDBA0A1B@leafe.com> Greetings OpenStack community, Well, after a wonderful week in Dublin, it's back to work for the API-SIG. We had a productive session at the PTG, and came away from it with several action items for each of us. Due to travel and digging out from a week away, none of us had started them. Oh, except for cdent, who had already started on his (showoff!). It was also noted that we received an excellent compliment [7] from notmyname for the sessions at the PTG. We do try to be inclusive and hear all voices, so that was very important to us. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None this week. # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None this week. # Guidelines Currently Under Review [3] * Add guidance on needing cache-control headers https://review.openstack.org/550468 * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://twitter.com/anticdent/status/968503362927972353 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From harlowja at fastmail.com Thu Mar 8 17:11:25 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Thu, 08 Mar 2018 09:11:25 -0800 Subject: [openstack-dev] [tc] [all] TC Report 18-10 In-Reply-To: <31982df4-38cd-6d2b-890e-102f6f7bf879@openstack.org> References: <39be1699-12ed-b81d-83df-352cc6e6b318@gmail.com> <31982df4-38cd-6d2b-890e-102f6f7bf879@openstack.org> Message-ID: <5AA16EBD.9030903@fastmail.com> Thierry Carrez wrote: > Matt Riedemann wrote: >> I don't get the inward/outward thing. First two days of the old design >> summit (ops summit?) format was all cross-project stuff (docs, upgrades, >> testing, ops feedback, etc). That's the same as what happens at the PTG >> now too. The last three days of the old design summit (and now PTG) are >> vertical project discussion for the most part, but Thursday has also >> become a de-facto cross-project day for a lot of teams (nova/cinder, >> nova/neutron, nova/ironic all happened on Thursday). I'm not sure what >> is happening at the Forum events that is so wildly different, or more >> productive, than what we can do at the PTG - and arguably do it better >> at the PTG because of fewer distractions to be giving talks, talking to >> customers, and having time-boxed 40 minute slots. > > The PTG has always been about taking the team discussions that happened > at the Ops Summit / Design Summit to have them in a more productive > environment. I am just going to say it but can we *please* stop distinguishing between ops and devs (a ops summit, like why); the fact that these emails even continue to have the word op or dev or ops communicate with devs and then devs go do something that may work (hint this kind of feedback loop is wrong) for ops pisses me off. The world has moved beyond this kind of separation and openstack needs to as well... IMHO projects that still rely on this kind of interaction are dead in the water. If you aren't as a developer at least trying to operate even a small openstack cloud (even a personal one) then you really shouldn't be continuing as a developer in openstack... > > Beyond the suboptimal productivity (due to too many distractions / other > commitments), the problem with the old Design Summit was that it > prevented team members from making the best use of the Summit event. You > would travel to a place where all our community gets together, only to > isolate yourself with your teammates trying to get stuff done. That was > silly. You should use the time there to engage *outside* of your team. > And by that I don't mean inter-team work, or participating to other > groups like SIGs or horizontal teams. I mean giving talks, presenting > the work you do (and how you do it) to newcomers, watching talks, > engaging with happy users, learning about the state of our ecosystem, > and discussing cross-community issues with a larger section of our > community (at the Forum). 
> > The context switch between this inward work (work with your team, or > work within any transversal work group you're interested in), and this > outward work (engaging with other groups you're not a part of, listening > to newcomers) is expensive. It's hard to take the time to *listen* when > you try to get your work for the next 6 months organized and done. > > Oh, and in the above paragraphs, I'm not distinguishing "devs" from > "ops". This applies to all teams, to any contributor engaged in making > OpenStack a reality. Having the Public Cloud WG team meet at the PTG was > great, and we should definitely have ANY OpenStack team wanting to meet > and get things done at future PTGs. > From thierry at openstack.org Thu Mar 8 17:27:58 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 8 Mar 2018 18:27:58 +0100 Subject: [openstack-dev] [tc] [all] TC Report 18-10 In-Reply-To: <5AA16EBD.9030903@fastmail.com> References: <39be1699-12ed-b81d-83df-352cc6e6b318@gmail.com> <31982df4-38cd-6d2b-890e-102f6f7bf879@openstack.org> <5AA16EBD.9030903@fastmail.com> Message-ID: Joshua Harlow wrote: > Thierry Carrez wrote: >> The PTG has always been about taking the team discussions that happened >> at the Ops Summit / Design Summit to have them in a more productive >> environment. > > I am just going to say it but can we *please* stop distinguishing > between ops and devs (a ops summit, like why); the fact that these > emails even continue to have the word op or dev or ops communicate with > devs and then devs go do something that may work (hint this kind of > feedback loop is wrong) for ops pisses me off. The world has moved > beyond this kind of separation and openstack needs to as well... IMHO > projects that still rely on this kind of interaction are dead in the > water. If you aren't as a developer at least trying to operate even a > small openstack cloud (even a personal one) then you really shouldn't be > continuing as a developer in openstack... I totally agree with you. Did you read my email until the end ? See: >> [...] >> Oh, and in the above paragraphs, I'm not distinguishing "devs" from >> "ops". This applies to all teams, to any contributor engaged in making >> OpenStack a reality. Having the Public Cloud WG team meet at the PTG was >> great, and we should definitely have ANY OpenStack team wanting to meet >> and get things done at future PTGs. My mention above of "Ops Summit" / "Design Summit" was pointing to the old names of the events, which "Forum" and "PTG" were specifically designed to replace, avoiding the unnecessary split. -- Thierry Carrez (ttx) From harlowja at fastmail.com Thu Mar 8 17:31:51 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Thu, 08 Mar 2018 09:31:51 -0800 Subject: [openstack-dev] [tc] [all] TC Report 18-10 In-Reply-To: References: <39be1699-12ed-b81d-83df-352cc6e6b318@gmail.com> <31982df4-38cd-6d2b-890e-102f6f7bf879@openstack.org> <5AA16EBD.9030903@fastmail.com> Message-ID: <5AA17387.2000107@fastmail.com> Thierry Carrez wrote: > Joshua Harlow wrote: >> Thierry Carrez wrote: >>> The PTG has always been about taking the team discussions that happened >>> at the Ops Summit / Design Summit to have them in a more productive >>> environment. 
>> I am just going to say it but can we *please* stop distinguishing >> between ops and devs (a ops summit, like why); the fact that these >> emails even continue to have the word op or dev or ops communicate with >> devs and then devs go do something that may work (hint this kind of >> feedback loop is wrong) for ops pisses me off. The world has moved >> beyond this kind of separation and openstack needs to as well... IMHO >> projects that still rely on this kind of interaction are dead in the >> water. If you aren't as a developer at least trying to operate even a >> small openstack cloud (even a personal one) then you really shouldn't be >> continuing as a developer in openstack... > > I totally agree with you. Did you read my email until the end ? See: > >>> [...] >>> Oh, and in the above paragraphs, I'm not distinguishing "devs" from >>> "ops". This applies to all teams, to any contributor engaged in making >>> OpenStack a reality. Having the Public Cloud WG team meet at the PTG was >>> great, and we should definitely have ANY OpenStack team wanting to meet >>> and get things done at future PTGs. > My mention above of "Ops Summit" / "Design Summit" was pointing to the > old names of the events, which "Forum" and "PTG" were specifically > designed to replace, avoiding the unnecessary split. > Ya, my mention was more of just the whole continued mention in this whole mailing list around ops and devs and the separation and ... not just really this email (or its thread); it's a common theme around here that IMHO needs to die in a fire. From rasca at redhat.com Thu Mar 8 17:44:40 2018 From: rasca at redhat.com (Raoul Scarazzini) Date: Thu, 8 Mar 2018 18:44:40 +0100 Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream In-Reply-To: <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> Message-ID: <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com> On 08/03/2018 17:03, Adam Spiers wrote: [...] > Yes agreed again, this is a strong case for collaboration between the > self-healing and QA SIGs.  In Dublin we also discussed the idea of the > self-healing and API SIGs collaborating on the related topic of health > check APIs. Guys, thanks a ton for your involvement in the topic, I am +1 to any kind of meeting we can have to discuss this (like it was proposed by Adam) so I'll offer my bluejeans channel for whatever kind of meeting we want to organize. About the best practices part Georg was mentioning I'm 100% in agreement, the testing methodologies are the first thing we need to care about, starting from what we want to achieve. That said, I'll keep studying Yardstick. Hope to hear from you soon, and thanks again! 
-- Raoul Scarazzini rasca at redhat.com From zbitter at redhat.com Thu Mar 8 17:45:11 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 8 Mar 2018 12:45:11 -0500 Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project In-Reply-To: References: Message-ID: On 07/03/18 08:44, Ghanshyam Mann wrote: > I mean i am all ok with separate plugin which is more easy for QA team > but ownership to QA is kind of going to same direction(QA team > maintaining interop ads-on tests) in more difficult way. After reading this and the logs from the QA meeting,[1] I feel like there is some confusion/miscommunication over what the proposed resolution means by 'ownership'. Basically every Git repo has to be registered to *some* project in http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml The proposal was to register the trademark test plugins to the QA project. The implications of this are fairly minimal in my view: * The project gets a say on any new repo creation requests (this will help maintain e.g. a consistent naming scheme IMO) * Contributors to the repos are considered contributors to the project, get to vote in the PTL elections, and are allowed to put the logo sticker on their laptop.[2] (This seems appropriate to me, and in the best case might even help convert some people into becoming core reviewers for QA in the long term.) * The project would have to meet any other obligations in regards to those repos that the TC delegates to project teams and PTLs - though none of the ones I can think of (releases, tracking project-wide goals) would really apply in practice to the repos we're talking about. Perhaps I am missing something that you have a specific concern with? It is *not* meant to imply that the project has an obligation to write tests (nobody expects this, in fact), nor that the core reviewers it contributes to the core review team for the repo have any stronger obligation to do reviews than any of the other core reviewers (we really want all 3 teams to contribute to reviews, since they each bring different expertise). I think we have two options that could resolve this: * Change the wording to ensure that future readers cannot interpret the resolution as placing obligations on the QA team that we didn't intend and they do not want; or * Register the Git repos to the refstack project instead. cheers, Zane. [1] http://eavesdrop.openstack.org/meetings/qa/2018/qa.2018-03-08-07.59.log.html#l-34 [2] kidding! Everyone knows you can't have the sticker until after the initiation ;) From openstack at nemebean.com Thu Mar 8 17:46:42 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 8 Mar 2018 11:46:42 -0600 Subject: [openstack-dev] [oslo] Oslo PTG Summary Message-ID: <64db6f20-a994-1555-5ed5-cdfe0f628436@nemebean.com> Hi, Here's my summary of the discussions we had in the Oslo room at the PTG. Please feel free to reply with any additions if I missed something or correct anything I've misrepresented. oslo.config drivers for secret management ----------------------------------------- The oslo.config implementation is in progress, while the Castellan driver still needs to be written. We want to land this early in Rocky as it is a significant change in architecture for oslo.config and we want it to be well-exercised before release. 
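To give a flavor of what the feature enables, here is a rough illustration (not the code under review; the secret reference and context handling are made-up examples) of the kind of Castellan lookup that would happen behind a config option, instead of reading a plain-text value from the file:

    # Illustration only: resolve a secret through Castellan rather than
    # storing it in the config file. Castellan fronts a real key manager
    # (e.g. Barbican, or Custodia via the driver linked below).
    from castellan import key_manager

    def fetch_secret(conf, context, secret_ref):
        manager = key_manager.API(conf)
        secret = manager.get(context, secret_ref)  # opaque reference, not the value
        return secret.get_encoded()                # raw bytes of the secret

The point of the driver work is that application code would not have to change at all; oslo.config would perform a lookup like this transparently when an option's value comes from a secret source.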
There are discussions with the TripleO team around adding support for this feature to its deployment tooling and there will be a functional test job for the Castellan driver with Custodia. There is a weekly meeting in #openstack-meeting-3 on Tuesdays at 1600 UTC for discussion of this feature. oslo.config driver implementation: https://review.openstack.org/#/c/513844 spec: https://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html Custodia key management support for Castellan: https://review.openstack.org/#/c/515190/ "stable" libraries ------------------ Some of the Oslo libraries are in a mature state where there are very few, if any, meaningful changes to them. With the removal of the requirements sync process in Rocky, we may need to change the release process for these libraries. My understanding was that there were no immediate action items for this, but it is something we need to be aware of. dropping support for mox3 ------------------------- There was some concern that no one from the Oslo team is actually in a position to support mox3 if something were to break (such as happened in some libraries with Python 3.6). Since there is a community goal to remove mox from all OpenStack projects in Rocky, this will hopefully not be a long-term problem, but there was some discussion that if projects needed to keep mox for some reason, they would be asked to provide a maintainer for mox3. This topic is kind of on hold pending the outcome of the community goal this cycle. automatic configuration migration on upgrade -------------------------------------------- There is a desire for oslo.config to provide a mechanism to automatically migrate deprecated options to their new location on version upgrades. This is a fairly complex topic that I can't cover adequately in a summary email, but there is a spec proposed at https://review.openstack.org/#/c/520043/ and POC changes at https://review.openstack.org/#/c/526314/ and https://review.openstack.org/#/c/526261/ One outcome of the discussion was that in the initial version we would not try to handle complex migrations, such as the one that happened when we combined all of the separate rabbit connection opts into a single connection string. To start with we will just raise a warning to the user that they need to handle those manually, but a templated or hook-based method of automating those migrations could be added as a follow-up if there is sufficient demand. oslo.messaging plans -------------------- There was quite a bit discussed under this topic. I'm going to break it down into sub-topics for clarity. oslo.messaging heartbeats ========================= Everyone seemed to be in favor of this feature, so we anticipate development moving forward in Rocky. There is an initial patch proposed at https://review.openstack.org/546763 We felt that it should be possible to opt in and out of the feature, and that the configuration should be done at the application level. This should _not_ be an operator decision as they do not have the knowledge to make it sanely. There was also a desire to have a TTL for messages. bug cleanup =========== There are quite a few launchpad bugs open against oslo.messaging that were reported against old, now unsupported versions. Since we have the launchpad bug expirer enabled in Oslo, the action item proposed for such bugs was to mark them incomplete and ask the reporter to confirm that they still occur against a supported version.
This way bugs that don't reproduce or where the reporter has lost interest will eventually be closed automatically, but bugs that do still exist can be updated with more current information. deprecations ============ The Pika driver will be deprecated in Rocky. To our knowledge, no one has ever used it and there are no known benefits over the existing Rabbit driver. Once again, the ZeroMQ driver was proposed for deprecation as well. The CI jobs for ZMQ have been broken for a while, and there doesn't seem to be much interest in maintaining them. Furthermore, the breakage seems to be a fundamental problem with the driver that would require non-trivial work to fix. Given that ZMQ has been a consistent pain point in oslo.messaging over the past few years, it was proposed that if someone does step forward and want to maintain it going forward then we should split the driver off into its own library which could then have its own core team and iterate independently of oslo.messaging. However, at this time the plan is to propose the deprecation and start that discussion first. CI == Need to migrate oslo.messaging to zuulv3 native jobs. The openstackclient library was proposed as a good example of how to do so. We also want to have voting hybrid messaging jobs (where the notification and rpc messages are sent via different backends). We will define a devstack job variant that other projects can turn on if desired. We also want to add amqp1 support to pifpaf for functional testing. Low level messaging API ======================= A proposal for a new oslo.messaging API to expose lower level messaging functionality was proposed. There is a presentation at https://docs.google.com/presentation/d/1mCOGwROmpJvsBgCTFKo4PnK6s8DkDVCp1qnRnoKL_Yo/edit?usp=sharing This seemed to generally be well-received by the room, and dragonflow and neutron reviewers were suggested for the spec. Kafka ===== Andy Smith gave an update on the status of the Kafka driver. Currently it is still experimental, and intended to be used for notifications only. There is a presentation with more details in https://docs.google.com/presentation/d/e/2PACX-1vQpaSSm7Amk9q4sBEAUi_IpyJ4l07qd3t5T_BPZkdLWfYbtSpSmF7obSB1qRGA65wjiiq2Sb7H2ylJo/pub?start=false&loop=false&delayms=3000&slide=id.p testing for Edge/FEMDC use cases ================================ Matthieu Simonin gave a presentation about the testing he has done related to messaging in the Edge/FEMDC scenario where messaging targets might be widely distributed. The slides can be found at https://docs.google.com/presentation/d/1LcF8WcihRDOGmOPIU1aUlkFd1XkHXEnaxIoLmRN4iXw/edit#slide=id.p3 In short, there is a desire to build clouds that have widely distributed nodes such that content can be delivered to users from a location as close as possible. This puts a lot of pressure on the messaging layer as compute nodes (for example) could be halfway around the world from the control nodes, which is problematic for a broker-based system such as Rabbit. There is some very interesting data comparing Rabbit with a more distributed AMQP1 system based on qpid-dispatch-router. In short, the distributed system performed much better for this use case, although there was still some concern raised about the memory usage on the client side with both drivers. Some followup is needed on the oslo.messaging side to make sure we aren't leaking/wasting resources in some messaging scenarios. For further details I suggest taking a look at the presentation. 
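As a concrete footnote on the hybrid idea above, splitting RPC and notifications across two backends is just a matter of handing oslo.messaging two different transport URLs (the hosts and credentials here are made-up examples, not anything from the planned job definitions):

    # Hybrid messaging sketch: RPC via brokered RabbitMQ, notifications
    # via an AMQP 1.0 backend such as qpid-dispatch-router.
    from oslo_config import cfg
    import oslo_messaging

    conf = cfg.CONF

    rpc_transport = oslo_messaging.get_rpc_transport(
        conf, url='rabbit://user:secret@broker-host:5672/')

    notification_transport = oslo_messaging.get_notification_transport(
        conf, url='amqp://router-host:5672/')

    notifier = oslo_messaging.Notifier(
        notification_transport, publisher_id='test.localhost',
        driver='messaging', topics=['notifications'])

The devstack job variant would do the same thing through configuration (the [DEFAULT]/transport_url and [oslo_messaging_notifications]/transport_url options) rather than in code, but the separation is identical.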
mutable configuration --------------------- This is also a community goal for Rocky, and Chang Bo is driving its adoption. There was some discussion of how to test it, and also that we should provide an example of turning on mutability for the debug option since that is the target of the community goal. The cinder patch can be found here: https://review.openstack.org/#/c/464028/ Turns out it's really simple! Nova is also using this functionality for more complex options related to upgrades, so that would be a good place to look for more advanced use cases. Full documentation for the mutable config options is at https://docs.openstack.org/oslo.config/latest/reference/mutable.html The goal status is being tracked in https://storyboard.openstack.org/#!/story/2001545 Chang Bo was also going to talk to Lance about possibly coming up with a burndown chart like the one he had for the policy in code work. oslo healthcheck middleware --------------------------- As this ended up being the only major topic for the afternoon, the session was unfortunately lightly attended. However, the self-healing SIG was talking about related topics at the same time so we ended up moving to that room and had a good discussion. Overall the feature seemed to be well-received. There is some security concern with exposing service information over an unauthenticated endpoint, but because there is no authentication supported by the health checking functionality in things like Kubernetes or HAProxy this is unavoidable. The feature won't be mandatory, so if this exposure is unacceptable it can be turned off (with a corresponding loss of functionality, of course). There was also some discussion of dropping the asynchronous nature of the checks in the initial version in order to keep the complexity to a minimum. Asynchronous testing can always be added later if it proves necessary. The full spec is at https://review.openstack.org/#/c/531456 oslo.config strict validation ----------------------------- I actually had discussions with multiple people about this during the week. In both cases, they were just looking for a minimal amount of validation that would catch an error such as "devug=True". Such a validation might be fairly simple to write now that we have the YAML-based sample config with (ideally) information about all the options available to set in a project. It should be possible to compare the options set in the config file with the ones listed in the sample config and raise warnings for any that don't exist. There is also a more complete validation spec at http://specs.openstack.org/openstack/oslo-specs/specs/ocata/oslo-validator.html and a patch proposed at https://review.openstack.org/#/c/384559/ Unfortunately there has been little movement on that as of late, so it might be worthwhile to implement something more minimalist initially and then build from there. The existing patch is quite significant and difficult to review. Conclusion ---------- I feel like there were a lot of good discussions at the PTG and we have plenty of work to keep the small Oslo team busy for the Rocky cycle. :-) Thanks to everyone who participated and I look forward to seeing how much progress we've made at the next Summit and PTG.
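One footnote before signing off, since a couple of people asked what "really simple" means for the mutable config work: the core of a change like the cinder one boils down to roughly this (a sketch against stock oslo.config, not the actual patch):

    # Opting an option into runtime mutation with oslo.config.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        # mutable=True is the whole trick: the option may now change
        # value when the service is told to reload its configuration.
        cfg.BoolOpt('debug', default=False, mutable=True,
                    help='Enable debug logging.'),
    ])

    # Typically wired to SIGHUP: re-read the config files and apply any
    # changes to mutable options without restarting the service.
    CONF.mutate_config_files()

See the mutable docs linked above for the hooks that let a service react when a value actually changes.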
-Ben From gael.therond at gmail.com Thu Mar 8 17:49:35 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 08 Mar 2018 17:49:35 +0000 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: Message-ID: Pretty easy, put the PTG online with a livestream on YouTube/Hangout/whatever platform that will then be saved and could even be watched later on! It’s just a matter of some hardware and a decent internet bandwidth that’s already available at almost every place where a PTG has taken place. Problem solved. PS: Even if I second your thoughts about the fact that some can't make it to a physical meeting for some reason (And I’m one of them), your email sounds a little bit aggressive. Missing a smiley? ;-) Le jeu. 8 mars 2018 à 15:04, Jens Harbott a écrit : > With the current PTG just finished and seeing discussions happen about > the format of the next[0], it seems that the advantages of these are > pretty clear to most, so let me use the occasion to remind > everyone of the disadvantages. > > Every meeting that is happening is excluding those contributors that > cannot attend it. And with that it is violating the fourth Open > principle[1], having a community that is open to everyone. If you are > wondering whom this would affect, here's a non-exclusive (sic) list of > valid reasons not to attend physical meetings: > > - Health issues > - Privilege issues (like not getting visas or travel permits) > - Caretaking responsibilities (children, other family, animals, plants) > - Environmental concerns > > So when you are considering whether it is worth the money and effort > to organise PTGs or similar events, I'd like you also to consider > those being excluded by such activities. It is not without a reason > that IRC and emails have been settled upon as preferred means of > communication. I'm not saying that physical meetings should be dropped > altogether, but maybe more effort can be placed into providing means > of remote participation, which might at least reduce some effects. > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html > [1] https://governance.openstack.org/tc/reference/opens.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Thu Mar 8 17:52:22 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 8 Mar 2018 11:52:22 -0600 Subject: [openstack-dev] [cinder][summit] Forum topic proposal etherpad created ... Message-ID: <31474553-382e-8551-5779-99b81a125589@gmail.com> All, I just wanted to share that I have created the etherpad for proposing topics for the Vancouver Forum.  [1] Please take a few moments to add topics there.  I will need to propose the topics we have in the next two weeks so this will need attention before that point in time. Thanks!
Jay (jungleboyj) [1] https://etherpad.openstack.org/p/YVR-cinder-brainstorming From mnaser at vexxhost.com Thu Mar 8 17:57:05 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 8 Mar 2018 12:57:05 -0500 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: Message-ID: On Thu, Mar 8, 2018 at 12:49 PM, Flint WALRUS wrote: > Pretty easy, put the PTG online with a livestream on > YouTube/Hangout/whatever platform that will then be saved and could even be > watched later on! +1 I think that we can work out a solution so that every room at the PTG has a live stream going with the ability of people to join remotely. This would be hugely beneficial for those who are unable to attend as they'll be able to join the room and be part of the discussion. I can imagine that it involves quite a bit of logistics, but with the right planning and testing, I think it could work out great. > It’s just a matter of some hardware and a decent internet bandwidth that’s > already available at almost every place where a PTG has taken place. > > Problem solved. > > PS: Even if I second your thoughts about the fact that some can't make it to a > physical meeting for some reason (And I’m one of them), your email sounds a > little bit aggressive. Missing a smiley? ;-) > > Le jeu. 8 mars 2018 à 15:04, Jens Harbott a écrit : >> >> With the current PTG just finished and seeing discussions happen about >> the format of the next[0], it seems that the advantages of these are >> pretty clear to most, so let me use the occasion to remind >> everyone of the disadvantages. >> >> Every meeting that is happening is excluding those contributors that >> cannot attend it. And with that it is violating the fourth Open >> principle[1], having a community that is open to everyone. If you are >> wondering whom this would affect, here's a non-exclusive (sic) list of >> valid reasons not to attend physical meetings: >> >> - Health issues >> - Privilege issues (like not getting visas or travel permits) >> - Caretaking responsibilities (children, other family, animals, plants) >> - Environmental concerns >> >> So when you are considering whether it is worth the money and effort >> to organise PTGs or similar events, I'd like you also to consider >> those being excluded by such activities. It is not without a reason >> that IRC and emails have been settled upon as preferred means of >> communication. I'm not saying that physical meetings should be dropped >> altogether, but maybe more effort can be placed into providing means >> of remote participation, which might at least reduce some effects.
>> >> [0] >> http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html >> [1] https://governance.openstack.org/tc/reference/opens.html >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Thu Mar 8 17:57:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 08 Mar 2018 12:57:59 -0500 Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project In-Reply-To: References: Message-ID: <1520531849-sup-5340@lrrr.local> Excerpts from Zane Bitter's message of 2018-03-08 12:45:11 -0500: > On 07/03/18 08:44, Ghanshyam Mann wrote: > > I mean i am all ok with separate plugin which is more easy for QA team > > but ownership to QA is kind of going to same direction(QA team > > maintaining interop ads-on tests) in more difficult way. > > After reading this and the logs from the QA meeting,[1] I feel like > there is some confusion/miscommunication over what the proposed > resolution means by 'ownership'. Basically every Git repo has to be > registered to *some* project in > http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml > > The proposal was to register the trademark test plugins to the QA > project. The implications of this are fairly minimal in my view: > > * The project gets a say on any new repo creation requests (this will > help maintain e.g. a consistent naming scheme IMO) > * Contributors to the repos are considered contributors to the project, > get to vote in the PTL elections, and are allowed to put the logo > sticker on their laptop.[2] (This seems appropriate to me, and in the > best case might even help convert some people into becoming core > reviewers for QA in the long term.) > * The project would have to meet any other obligations in regards to > those repos that the TC delegates to project teams and PTLs - though > none of the ones I can think of (releases, tracking project-wide goals) > would really apply in practice to the repos we're talking about. > > Perhaps I am missing something that you have a specific concern with? > > It is *not* meant to imply that the project has an obligation to write > tests (nobody expects this, in fact), nor that the core reviewers it > contributes to the core review team for the repo have any stronger > obligation to do reviews than any of the other core reviewers (we really > want all 3 teams to contribute to reviews, since they each bring > different expertise). > > I think we have two options that could resolve this: > * Change the wording to ensure that future readers cannot interpret the > resolution as placing obligations on the QA team that we didn't intend > and they do not want; or > * Register the Git repos to the refstack project instead. > > cheers, > Zane. > > [1] > http://eavesdrop.openstack.org/meetings/qa/2018/qa.2018-03-08-07.59.log.html#l-34 > [2] kidding! 
Everyone knows you can't have the sticker until after the > initiation ;) > Why would the repos be owned by anyone other than the original project team? Doug From kennelson11 at gmail.com Thu Mar 8 17:59:10 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 08 Mar 2018 17:59:10 +0000 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: Message-ID: Yes! In the past I have helped a few different projects set up a way to do a hangout that live streams to youtube so people can join the conversation actively or passively depending on their involvement. I actually helped write a blog post about it as well[1]. Cinder has been doing it for a few years now and Neutron started doing the same at the PTG in Denver. Monasca just got started live streaming in Dublin. If there are any teams reading this that want help getting things set up for next time I would be happy to help. Just let me know and I can walk you through the process. Also in the past, the Foundation has provided bluetooth speaker/microphones to those that needed them to help with virtual participation for remote attendees. Perhaps that is something we could bring back. Hope this helps! -Kendall Nelson (diablo_rojo) [1]http://superuser.openstack.org/articles/community-participation-remote/ On Thu, Mar 8, 2018 at 9:49 AM Flint WALRUS wrote: > Pretty easy, put the PTG online with a livestream on > YouTube/Hangout/whatever platform that will then be saved and could even be > watched later on! > > It’s just a matter of some hardware and a decent internet bandwidth that’s > already available at almost every place where a PTG has taken place. > > Problem solved. > > PS: Even if I second your thoughts about the fact that some can't make it to > a physical meeting for some reason (And I’m one of them), your email sounds > a little bit aggressive. Missing a smiley? ;-) > Le jeu. 8 mars 2018 à 15:04, Jens Harbott a écrit : > >> With the current PTG just finished and seeing discussions happen about >> the format of the next[0], it seems that the advantages of these are >> pretty clear to most, so let me use the occasion to remind >> everyone of the disadvantages. >> >> Every meeting that is happening is excluding those contributors that >> cannot attend it. And with that it is violating the fourth Open >> principle[1], having a community that is open to everyone. If you are >> wondering whom this would affect, here's a non-exclusive (sic) list of >> valid reasons not to attend physical meetings: >> >> - Health issues >> - Privilege issues (like not getting visas or travel permits) >> - Caretaking responsibilities (children, other family, animals, plants) >> - Environmental concerns >> >> So when you are considering whether it is worth the money and effort >> to organise PTGs or similar events, I'd like you also to consider >> those being excluded by such activities. It is not without a reason >> that IRC and emails have been settled upon as preferred means of >> communication. I'm not saying that physical meetings should be dropped >> altogether, but maybe more effort can be placed into providing means >> of remote participation, which might at least reduce some effects.
>> >> [0] >> http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html >> [1] https://governance.openstack.org/tc/reference/opens.html >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Thu Mar 8 18:06:16 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Mar 2018 18:06:16 +0000 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: Message-ID: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> On 2018-03-08 17:49:35 +0000 (+0000), Flint WALRUS wrote: > Pretty easy, put the PTG online with a livestream on > YouTube/Hangout/whatever platform that will then be saved and could even be > watched later on! > > It’s just a matter of some hardware and a decent internet bandwidth that’s > already available at almost every place where a PTG has taken place. > > Problem solved. [...] Have you ever actually tried it? I know this seems simple to "solve" with technology, but put 50 people in a room having a heated conversation (or sometimes several conversations at once) and then try to bridge some people in via phone, video conference, whatever and see how it works out in reality. The times it's been tried, either the remote participants get frustrated because nobody is paying attention to them/speaking into microphones/keeping discussion to one thread at a time, or the in-person participants get frustrated because they have to start acting like they're all on separate telephones and drag down the bandwidth of the conversation to the point where it may as well be 100% remote/separate participation anyway. We've made it work to varying degrees in the past, but it's not so simple as you would seem to imply no matter how good the technology. -- Jeremy Stanley From gr at ham.ie Thu Mar 8 18:14:39 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 8 Mar 2018 18:14:39 +0000 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> References: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> Message-ID: <8ce633c0-57f9-251d-5d7d-bd16ed202110@ham.ie> On 08/03/18 18:06, Jeremy Stanley wrote: > On 2018-03-08 17:49:35 +0000 (+0000), Flint WALRUS wrote: >> Pretty easy, put the PTG online with a livestream on >> YouTube/Hangout/whatever platform that will then be saved and could even be >> watched later on! >> >> It’s just a matter of some hardware and a decent internet bandwidth that’s >> already available at almost every place where a PTG has taken place. >> >> Problem solved. > [...] > > Have you ever actually tried it?
I know this seems simple to "solve" > with technology, but put 50 people in a room having a heated > conversation (or sometimes several conversations at once) and then > try to bridge some people in via phone, video conference, whatever > and see how it works out in reality. > > The times it's been tried, either the remote participants get > frustrated because nobody is paying attention to them/speaking into > microphones/keeping discussion to one thread at a time, or the > in-person participants get frustrated because they have to start > acting like they're all on separate telephones and drag down the > bandwidth of the conversation to the point where it may as well be > 100% remote/separate participation anyway. > > We've made it work to varying degrees in the past, but it's not so > simple as you would seem to imply no matter how good the technology. I would echo ^. As a remote employee for ~ 5 years, I have a large amount of experience being the person at the far end of a laptop / phone / etc. Some of the VC systems work well, but they are expensive, complex systems, with multiple mics, speakers, and cameras. I am not saying we shouldn't try it, but lets not get our hopes up. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From jungleboyj at gmail.com Thu Mar 8 18:16:18 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 8 Mar 2018 12:16:18 -0600 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> References: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> Message-ID: On 3/8/2018 12:06 PM, Jeremy Stanley wrote: > On 2018-03-08 17:49:35 +0000 (+0000), Flint WALRUS wrote: >> Pretty easy, put the PTG online with a livestream on >> YouTube/Hangout/whatever platform that will then be saved and could even be >> watched later on! >> >> It’s just a matter of some hardware and a decent internet bandwidth that’s >> already available to almost every places where a PTG took place. >> >> Problem solved. > [...] > > Have you ever actually tried it? I know this seems simple to "solve" > with technology, but put 50 people in a room having a heated > conversation (or sometimes several conversations at once) and then > try to bridge some people in via phone, video conference, whatever > and see how it works out in reality. > > The times it's been tried, either the remote participants get > frustrated because nobody is paying attention to them/speaking into > microphones/keeping discussion to one thread at a time, or the > in-person participants get frustrated because they have to start > acting like they're all on separate telephones and drag down the > bandwidth of the conversation to the point where it may as well be > 100% remote/separate participation anyway. > > We've made it work to varying degrees in the past, but it's not so > simple as you would seem to imply no matter how good the technology. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Jeremy, Cinder has been doing this for many years and it has worked relatively well.  It requires a good remote speaker and it also requires the people in the room to be sensitive to the needs of those who are remote.  I.E. planning topics at a time appropriate for the remote attendees, ensuring everyone speaks up, etc.  If everyone, however, works to be inclusive with remote participants it works well. We have even managed to make this work between separate mid-cycles (Cinder and Nova) in the past before we did PTGs. Jay (jungleboyj) From rraja at redhat.com Thu Mar 8 18:16:28 2018 From: rraja at redhat.com (Ramana Raja) Date: Thu, 8 Mar 2018 13:16:28 -0500 (EST) Subject: [openstack-dev] [manila] write integrity with NFS-Ganesha over CephFS In-Reply-To: <1523418029.11361946.1520510445712.JavaMail.zimbra@redhat.com> Message-ID: <188372712.11481200.1520532988133.JavaMail.zimbra@redhat.com> Hi Jeff, Currently, there is no open source backend in manila that provides scalable and highly-available NFS servers for dynamic cloud workloads. Manila's CephFS driver could integrate with your on-going work on active-active NFS over CephFS (with Kubernetes managing the lifecycle of containerized user-space NFS-Ganesha servers) [1] to fill this gap. During the manila project team gathering, we discussed this plan under the topic of high availability of share servers [2]. One of the questions was about a write integrity issue when an NFS-Ganesha server container goes down and another container comes up to replace it. Would there be any such write integrity issues when NFS clients do asynchronous writes to files with write caching in the NFS client and the NFS server (NFS-Ganesha server side caching or the libcephfs client caching), and the NFS-Ganesha server goes down? I guess this is a general NFS protocol question, or maybe things get complicated with NFS-Ganesha over CephFS? I looked up the NFSv4 protocol documentation, specifically the implementation of the COMMIT operation [3]. So if an NFS client issues an async write followed by a COMMIT operation that succeeds, then it's expected that the NFS server has flushed cached data and metadata onto stable storage, here CephFS. And if the NFS server crashes, losing cached data and metadata, then the write verifier cookie returned by the WRITE or COMMIT operation indicates to the client that the server crashed. Now it's up to the NFS client to re-transmit the uncached data and metadata.
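In rough pseudocode, my understanding of the client-side rule is the following (illustrative only; the method names are invented, this is not from any real NFS client):

    # The client caches unstable WRITE data until COMMIT, and compares
    # the server's write verifier to detect a restart that may have
    # lost server-cached data.
    def write_and_commit(client, fh, offset, data):
        reply = client.WRITE(fh, offset, data, stable=False)  # async write
        client.remember(fh, offset, data, reply.verifier)     # keep a copy

        commit = client.COMMIT(fh, offset, len(data))
        if commit.verifier != reply.verifier:
            # Verifier changed: the server restarted (here, a replacement
            # NFS-Ganesha container came up) and may have dropped cached
            # data, so re-send everything still held and COMMIT again.
            client.retransmit_uncommitted(fh)
        else:
            client.forget(fh, offset, data)  # safely on stable storage

So as long as the replacement container presents a different write verifier, clients should detect the restart and replay their uncommitted writes.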
Thanks, Ramana [1] https://jtlayton.wordpress.com/2017/11/07/active-active-nfs-over-cephfs/ [2] line 186 in https://etherpad.openstack.org/p/manila-rocky-ptg the actual spec https://review.openstack.org/#/c/504987/ [3] https://tools.ietf.org/html/rfc7530#section-16.3.5 From anteaya at anteaya.info Thu Mar 8 18:22:01 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Thu, 8 Mar 2018 13:22:01 -0500 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: Message-ID: On 2018-03-08 09:03 AM, Jens Harbott wrote: > With the current PTG just finished and seeing discussions happen about > the format of the next[0], it seems that the advantages of these are > pretty clear to most, so let me use the occasion to remind > everyone of the disadvantages. > > Every meeting that is happening is excluding those contributors that > cannot attend it. And with that it is violating the fourth Open > principle[1], having a community that is open to everyone. If you are > wondering whom this would affect, here's a non-exclusive (sic) list of > valid reasons not to attend physical meetings: > > - Health issues > - Privilege issues (like not getting visas or travel permits) > - Caretaking responsibilities (children, other family, animals, plants) > - Environmental concerns I'll add dislike for travel, which affects some of our contributors. > > So when you are considering whether it is worth the money and effort > to organise PTGs or similar events, I'd like you also to consider > those being excluded by such activities. It is not without a reason > that IRC and emails have been settled upon as preferred means of > communication. I'm not saying that physical meetings should be dropped > altogether, but maybe more effort can be placed into providing means > of remote participation, which might at least reduce some effects. > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html > [1] https://governance.openstack.org/tc/reference/opens.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Keep in mind that the alternative to the Project Teams Gathering is not no Project Teams Gathering, it is projects meeting face-to-face after each of them organized their own meeting. These used to be called mid-cycle meetups and the majority of them used to be held in the United States. Now I can't predict what will happen in the future, but should there be a void in frequency of face-to-face meetings that the PTG is currently filling, I would not be surprised to see individual projects plan their own face-to-face meetings of some kind regardless of what they are called. Thank you, Anita From jungleboyj at gmail.com Thu Mar 8 18:24:52 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 8 Mar 2018 12:24:52 -0600 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: Message-ID: On 3/8/2018 12:22 PM, Anita Kuno wrote: > On 2018-03-08 09:03 AM, Jens Harbott wrote: >> With the current PTG just finished and seeing discussions happen about >> the format of the next[0], it seems that the advantages of these are >> pretty clear to most, so let me use the occasion to remind >> everyone of the disadvantages.
>> >> Every meeting that is happening is excluding those contributors that >> cannot attend it. And with that it is violating the fourth Open >> principle[1], having a community that is open to everyone. If you are >> wondering whom this would affect, here's a non-exclusive (sic) list of >> valid reasons not to attend physical meetings: >> >> - Health issues >> - Privilege issues (like not getting visas or travel permits) >> - Caretaking responsibilities (children, other family, animals, plants) >> - Environmental concerns > > I'll add dislike for travel, which affects some of our contributors. > >> >> So when you are considering whether it is worth the money and effort >> to organise PTGs or similar events, I'd like you also to consider >> those being excluded by such activities. It is not without a reason >> that IRC and emails have been settled upon as preferred means of >> communication. I'm not saying that physical meetings should be dropped >> altogether, but maybe more effort can be placed into providing means >> of remote participation, which might at least reduce some effects. >> >> [0] >> http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html >> [1] https://governance.openstack.org/tc/reference/opens.html >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > Keep in mind that the alternative to the Project Teams Gathering is > not no Project Teams Gathering, it is projects meeting face-to-face > after each of them organized their own meeting. These used to be > called mid-cycle meetups and the majority of them used to be held in > the United States. Now I can't predict what will happen in the future, > but should there be a void in frequency of face-to-face meetings that > the PTG is currently filling, I would not be surprised to see > individual projects plan their own face-to-face meetings of some kind > regardless of what they are called. > > Thank you, > Anita > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Anita, We have already discussed this with the Cinder team and agreed that we would want to go back to our mid-cycles in the absence of a PTG. Jay (jungleboyj) From fungi at yuggoth.org Thu Mar 8 18:34:51 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 8 Mar 2018 18:34:51 +0000 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> Message-ID: <20180308183451.yplvionoenut3j5i@yuggoth.org> On 2018-03-08 12:16:18 -0600 (-0600), Jay S Bryant wrote: [...] > Cinder has been doing this for many years and it has worked > relatively well. It requires a good remote speaker and it also > requires the people in the room to be sensitive to the needs of > those who are remote. I.E. planning topics at a time appropriate > for the remote attendees, ensuring everyone speaks up, etc. If > everyone, however, works to be inclusive with remote participants > it works well.
> > We have even managed to make this work between separate mid-cycles > > (Cinder and Nova) in the past before we did PTGs. [...] I've seen it work okay when the number of remote participants is small and all are relatively known to the in-person participants. Even so, bridging Doug into the TC discussion at the PTG was challenging for all participants. When meeting in person, there is a fair amount of shared body language which helps keep the flow and balance of participants, so none of us can start to ramble too much when we need to get through a topic quickly. Remote participants are robbed of that higher bandwidth interaction which extends beyond mere intonation and, perhaps, facial expression. It's harder for someone on the other end of a phone or computer to successfully interject during a heated conversation, and also conversely harder for the in-person participants to interject when a remote speaker is saying something. -- Jeremy Stanley From jungleboyj at gmail.com Thu Mar 8 18:54:06 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 8 Mar 2018 12:54:06 -0600 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: Message-ID: Good overview.  Thank you! One additional goal I want to mention on the list, for awareness: we would like to eventually bring some consistency to the pages that the 'Contributor Guide' lands on for each of the projects.  Each needs to be a page that is friendly to new contributors, makes it easy to learn about the project, and is not overwhelming. What exactly that looks like isn't defined yet, but I have talked to Manila about this and they were interested in collaborating.  Cinder and Manila will work together to put something consistent in place, and once the SIG agrees the approach is workable we can work on spreading it to other projects. Jay (jungleboyj) On 3/5/2018 2:00 PM, Kendall Nelson wrote: > Hello Everyone :) > > It was wonderful to see and talk with so many of you last week! For > those that couldn't attend our whole day of chats or those that > couldn't attend at all, I thought I would put forth a summary of our > discussions which were mostly noted in the etherpad[1] > > #Contributor Guide# > > - Walkthrough: We walked through every section of what exists and came > up with a variety of improvements on what is there. Most of these > items have been added to our StoryBoard project[2]. This came up again > Tuesday in docs sessions and I have added those items to StoryBoard as > well. > > - Google Analytics: It was discussed we should do something about > getting the contributor portal[3] to appear higher in Google searches > about onboarding. Not sure what all this entails. NEEDS AN OWNER IF > ANYONE WANTS TO VOLUNTEER. > > #Mission Statement# > > We updated our mission statement[4]! It now states: > > To provide a place for new contributors to come for information and > advice. This group will also analyze and document successful > contribution models while seeking out and providing information to new > members of the community. > > #Weekly Meeting# > > We discussed beginning a weekly meeting- optimized for APAC/Europe and > settled on 800 UTC in #openstack-meeting on Wednesdays. Proposed > here[5]. For now I added a section to our wiki for agenda > organization[6].
The two main items we want to cover on a weekly basis > are new contributor patches in gerrit and anything that has come up on > ask.openstack.org about contributing, so > those will be standing agenda items. > > #Forum Session# > > We discussed proposing some forum sessions in order to get more > involvement from operators. Currently, our activities focus on > development activities and we would like to diversify. When this SIG > was first proposed we wanted to have two chairs- one to represent > developers and one to represent operators. We will propose a session > or two when the call for forum proposals goes out (should be today). > > #IRC Channels# > We want to get rid of #openstack-101 and begin using #openstack-dev > instead. The 101 channel isn't watched closely enough anymore and it > makes more sense to move onboarding activities (like in OpenStack > Upstream Institute) to a channel where there are people that can > answer questions rather than asking those people to move to a new channel. > For those concerned about noise, OUI is run the weekend before the > summit when most people are traveling to the Summit anyway. > > #Ongoing Onboarding Efforts# > > - GSOC: Unfortunately we didn't get accepted this year. We will try > again next year. > > - Outreachy: Applications for the next round of interns are due March > 22nd, 2018 [7]. Decisions will be made by April and then internships > run May to August. > > - WoO Mentoring: The format of mentoring is changing from 1x1 to > cohorts focused on a single goal. If you are interested in helping > out, please contact me! I NEED HELP :) > > - Contributor guide: Please see the above section. > > - OpenStack Upstream Institute: It will be run, as usual, the weekend > before the Summit in Vancouver. Depending on how much progress is made > on the contributor guide, we will make use of it as opposed to slides > like previous renditions. There have also been a number of OpenStack > Days requesting we run it there as well. More details of those to come. > > #Project Liaisons# > > The list is filling out nicely, but we still need more coverage. If > you know someone from a project not listed that might be willing to > help, please reach out to them and get them added to our list [8]. > > I thiiiiiink that is just about everything. Hopefully I at least > covered everything important :) > > Thanks Everyone! > > - Kendall Nelson (diablo_rojo) > > [1] PTG Etherpad https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG > [2] StoryBoard Tracker https://storyboard.openstack.org/#!/project/913 > > [3] Contributor Portal https://www.openstack.org/community/ > [4] Mission Statement Update https://review.openstack.org/#/c/548054/ > [5] Meeting Slot Proposal https://review.openstack.org/#/c/549849/ > [6] Meeting Agenda > https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda > [7] Outreachy https://www.outreachy.org/apply/ > [8] Project Liaisons > https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From persia at shipstone.jp Thu Mar 8 18:57:26 2018 From: persia at shipstone.jp (persia at shipstone.jp) Date: Thu, 8 Mar 2018 13:57:26 -0500 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: Message-ID: <8c972315-befb-4e3d-8a0c-c4eeb3eccd22@Spark> Jens Harbott wrote: > With the current PTG just finished and seeing discussions happen about > the format of the next[0], it seems that the advantages of these are > pretty clear to most, so let me use the occasion to remind > everyone of the disadvantages. <…> > So when you are considering whether it is worth the money and effort > to organise PTGs or similar events, I'd like you also to consider > those being excluded by such activities. It is not without a reason > that IRC and emails have been settled upon as preferred means of > communication. I'm not saying that physical meetings should be dropped > altogether, but maybe more effort can be placed into providing means > of remote participation, which might at least reduce some effects.      I have been involved in many physical gatherings in many communities over a number of years, and have concluded that there is no substitute for occasionally being able to look over someone else's screen (or have them look over yours), bumping into folk after sessions in the hotel and ending up hunting down someone else for a discussion you forgot, or even just having lots of folk in the same timezone.  Broadly speaking, I believe that the total velocity of a community is enhanced by 10-20% by having physical meetings.  Experience with groups that meet monthly, semi-monthly, quarterly, semiannually, and annually suggests that the most effective balance between the benefits of meeting and the penalties of travel is quarterly, ideally with a larger group about half the time and a smaller group the other half of the time (echoes of Thierry's comments about inward vs. outward in another thread).  That this happens to be a close match to OpenStack is either a coincidence or a sign that others in our community share some of my experience.     Sadly, there are no good solutions to extend the benefits of such meetings to folk that don't make them, but over the years many communities have adopted various practices that significantly reduce the productivity losses of non-attendees during the events.  These are not substitutes for attendance (even if only occasional attendance), but they do a lot to reduce the amount of guesswork that non-attendees must perform afterwards. Using Etherpads     If there is a discussion at a physical meeting that involves more than a couple of people in the hall, an etherpad (or similar shared documentation solution) should be used to take notes.  This both ensures that participants correctly remember the discussion (as with so much happening during a week, things easily slip out of focus), and makes the content discoverable for non-attendees, if only to inform future reviews or mailing list posts.  Ideally, there is a fairly simple way to make these documents discoverable, such as having the URLs posted to channels on a per-topic basis, or similar. Video streaming and recordings of presentations     Where there are formal presentations (e.g. the lunch presentations at the recent PTG), streaming them ensures that non-attendees are able to follow along and have context.  Ensuring they are recorded and promptly posted helps those in different timezones follow along.
Having them archived can significantly reduce the effort involved in getting up to speed for new community participants, as they can review what was presented at the conference they didn't know they wanted to attend remotely, as well as building face-to-name mappings that will serve them well if they are a future attendee. Public internet audio streams     Streaming audio from meeting rooms and selected lounge spaces is incredibly useful for larger discussions, especially those conducted fishbowl-style, with a few primary participants and others hacking in the corners.  When one knows a meeting is scheduled, and can follow the discussion and etherpad, one can be a very effective participant, even remotely (assuming compatible timezones or personal timezone shifts).  Streaming from lounges allows non-attendees to also participate in ad-hoc "hallway" discussions sometimes, albeit to a limited degree.  This can be very useful when something runs over a time box, but a few participants wish to continue chatting for another 10 minutes.  Unfortunately, this requires rather a lot of infrastructure and can be fairly expensive. Audio conferencing     I have participated in conferences where each room is wired to a two-way bridge, rather than just streaming, both as an in-room participant and an external participant.  I have been unsatisfied with the experience in both directions.  Some problems include incorrect volumes on the PA, remote folk speaking over each other (or with unexpected lag), limitations of classic telephony codecs when dealing with many people speaking simultaneously in a room (partly mono vs. stereo issues, partly distance-from-mic issues).     I have had success with arranging for specific remote contributors to connect to separate devices for specific sessions.  This usually happens by appointment, with one of the physical participants using some device to connect to the participant.  I once participated in a meeting with two such remote participants, and it worked much less well, largely due to the audio lag in each session meaning the remote participants could not converse with each other (making both feel more remote). IRC Projectors:     Having dedicated per-room IRC channels with projectors on the wall can work fairly well, as folk in-room who are not looking at their laptops will notice comments on IRC.  I have observed fairly long conversations where some participants are speaking and others typing, for good inclusivity of all involved.  These are ideally not the regular channels used for project communication, as this mechanism does not work as well for channels that are high-volume or are used for topics other than that being discussed in the room. Nothing Happens policy:     Establishing a policy that no decisions can be taken during physical meetings is hugely valuable to ensuring that everyone has a voice: if people think they have consensus, they submit the results of this to typical fora (e.g. mailing list, gerrit, etc.) for review, some of which review may happen in person, but only coincidentally.  Making this work well depends on everyone expecting that process during all steps (including the content of the conversations causing posts and changes), to avoid anyone saying "but we decided at Summit", which suddenly destroys the discussion.
(Note: “Summit” was chosen as an event that doesn’t happen anymore, to avoid blaming any specific event) Wide geographic rotation     Having regular meetings in widely different geographic locations tends to cause the population attending meetings to change over time.  This both forces attendees to recognise that not everyone is present and increases the number of folk that sometimes to not attend, which helps drive policies that support those who cannot attend (either occasionally or ever).  For example, one conference I used to attend that rotated between APAC, EMEA, and the Americas over each year tended to have about 50% all the same people, 35% people from the relevant continent and 15% random folk who happened to be able to travel that week. Continued regular team activities:     Teams that halt everything for an event, and then start again after the event end up blocking all team work (sometimes for two or three weeks) during the hiatus.  It may be that a subset of the team is productive at a meeting during that time, but ensuring that regular meetings continue to be scheduled, time is set aside to do reviews or respond to mail, etc. both helps the team gain higher productivity gains from the meeting (due to less losses from the hiatus) and ensures that non-attendees can continue to work normally despite the meeting. Regular Updates:     Reporting on events ranges from proceedings documents that appear months afterwards to active microblogging of nearly every statement.  I do not typically find either of these useful as a remote participant.  My personal preference for this consists of a) IRC notification of the start of most larger discussions, and b) summary posts of topics at conclusion (could be as simple as “We talked about foo, we reached consensus, and bar will submit a change.  The etherpad is at baz.”).  Telling folk about lack of consensus is as important as reporting consensus: that helps remote participants who aren’t able to align their personal timezone appreciate where spending their day thinking about something can have value. External Conference tracking:     Some communities in which I’ve been involved make a practice of tracking whether anyone will be at conferences hosted by others over the year.  Where there are a few folk who are going, it often makes sense to try to schedule some time to meet up.  While not a substitute for broad community meetings, this practice can help many folk who are commonly non-attendees to be able to occasionally meet with folk, and get some of the benefits of physical interaction, ideally somewhere much closer to home, and perhaps only for a day or a few hours, rather than a full 10-day international excursion. — Emmet HIKORY -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Mar 8 18:59:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 08 Mar 2018 13:59:32 -0500 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: <20180308183451.yplvionoenut3j5i@yuggoth.org> References: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> <20180308183451.yplvionoenut3j5i@yuggoth.org> Message-ID: <1520535197-sup-8174@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-03-08 18:34:51 +0000: > On 2018-03-08 12:16:18 -0600 (-0600), Jay S Bryant wrote: > [...] > > Cinder has been doing this for many years and it has worked > > relatively well. 
It requires a good remote speaker and it also > > requires the people in the room to be sensitive to the needs of > > those who are remote. I.E. planning topics at a time appropriate > > for the remote attendees, ensuring everyone speaks up, etc. If > > everyone, however, works to be inclusive with remote participants > > it works well. > > > > We have even managed to make this work between separate mid-cycles > > (Cinder and Nova) in the past before we did PTGs. > [...] > > I've seen it work okay when the number of remote participants is > small and all are relatively known to the in-person participants. > Even so, bridging Doug into the TC discussion at the PTG was > challenging for all participants. I agree, and I'll point out I was just across town (snowed in at a different hotel). The conversation the previous day with just the 5-6 people on the release team worked a little bit better, but was still challenging at times because of audio quality issues. So, yes, this can be made to work. It's not trivial, though, and the degree to which it works depends a lot on the participants on both sides of the connection. I would not expect us to be very productive with a large number of people trying to be active in the conversation remotely. Doug From duncan.thomas at gmail.com Thu Mar 8 19:00:25 2018 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Thu, 8 Mar 2018 19:00:25 +0000 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: References: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> Message-ID: On 8 March 2018 at 18:16, Jay S Bryant wrote: > Cinder has been doing this for many years and it has worked relatively well. > It requires a good remote speaker and it also requires the people in the > room to be sensitive to the needs of those who are remote. I.E. planning > topics at a time appropriate for the remote attendees, ensuring everyone > speaks up, etc. If everyone, however, works to be inclusive with remote > participants it works well. Having been both in the room and on the phone, I'd have to say it was better than nothing but a long way from 'working well'. There's definitely a huge imbalance between being in the room and able to follow everything, and being on the phone, where you have to ask for people to repeat things (if you even know something was said to ask for the repeat), speak up, stop talking over other people, etc. It always feels like a very second-class position to me. -- Duncan Thomas From Tim.Bell at cern.ch Thu Mar 8 19:18:26 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 8 Mar 2018 19:18:26 +0000 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: <1520535197-sup-8174@lrrr.local> References: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> <20180308183451.yplvionoenut3j5i@yuggoth.org> <1520535197-sup-8174@lrrr.local> Message-ID: <4F8D01FD-3660-4ACF-8AE1-4B99E6D88451@cern.ch> Fully agree with Doug. At CERN, we use video conferencing for 100s, sometimes >1000 participants for the LHC experiments, the trick we've found is to fully embrace the chat channels (so remote non-native English speakers can provide input) and chairs/vectors who can summarise the remote questions constructively, with appropriate priority. This is actually very close to the etherpad approach, we benefit from the local bandwidth if available but do not exclude those who do not have it (or the language skills to do it in real time). 
Tim

-----Original Message-----
From: Doug Hellmann
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 8 March 2018 at 20:00
To: openstack-dev
Subject: Re: [openstack-dev] Pros and Cons of face-to-face meetings

Excerpts from Jeremy Stanley's message of 2018-03-08 18:34:51 +0000:
> On 2018-03-08 12:16:18 -0600 (-0600), Jay S Bryant wrote:
> [...]
> > Cinder has been doing this for many years and it has worked
> > relatively well. It requires a good remote speaker and it also
> > requires the people in the room to be sensitive to the needs of
> > those who are remote. I.E. planning topics at a time appropriate
> > for the remote attendees, ensuring everyone speaks up, etc. If
> > everyone, however, works to be inclusive with remote participants
> > it works well.
> >
> > We have even managed to make this work between separate mid-cycles
> > (Cinder and Nova) in the past before we did PTGs.
> [...]
>
> I've seen it work okay when the number of remote participants is
> small and all are relatively known to the in-person participants.
> Even so, bridging Doug into the TC discussion at the PTG was
> challenging for all participants.

I agree, and I'll point out I was just across town (snowed in at a different hotel).

The conversation the previous day with just the 5-6 people on the release team worked a little bit better, but was still challenging at times because of audio quality issues.

So, yes, this can be made to work. It's not trivial, though, and the degree to which it works depends a lot on the participants on both sides of the connection. I would not expect us to be very productive with a large number of people trying to be active in the conversation remotely.

Doug

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From rico.lin.guanyu at gmail.com Thu Mar 8 19:34:16 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Fri, 9 Mar 2018 03:34:16 +0800
Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project
In-Reply-To: <1520531849-sup-5340@lrrr.local>
References: <1520531849-sup-5340@lrrr.local>
Message-ID: 

>
> Why would the repos be owned by anyone other than the original project
> team?
>
For normal tempest tests, which are owned and maintained by the original projects. I think there were discussions in that PTG QA session about interop tests being maintained by the QA team.
>
In the new resolution, we can make sure the QA team and project teams stay committed to the interop testing structure together (isn't that just like how the current tempest plugin structure works?). And allow the interop team to focus on the interop structure (ideally not the tests themselves).

I agree with Zane that we really want all 3 teams to contribute to reviews, since they each bring different expertise to shaping this interop structure.

Rico Lin
-------------- next part --------------
An HTML attachment was scrubbed...
downstream consuming models discussion summary
In-Reply-To: <6e283dce-a1ce-4878-2af8-8441beb3dc33@openstack.org>
References: <6e283dce-a1ce-4878-2af8-8441beb3dc33@openstack.org>
Message-ID: <4789132d-19f5-7afa-6f0b-5a5f4764dce4@gmail.com>

On 3/7/2018 8:43 AM, Thierry Carrez wrote:
> mriedem volunteered to work on a TC resolution to define
> what we exactly meant by that (the proposal is now being discussed at
> https://review.openstack.org/#/c/548916/).

A new revision is now up for this after much discussion in the review itself and in the #openstack-tc channel today:

http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-08.log.html#t2018-03-08T16:17:03

https://review.openstack.org/#/c/548916/

I'm quite sure it's now perfect in every form and all stakeholders will be equally elated at its magnificence.

--

Thanks,

Matt

From mriedemos at gmail.com Thu Mar 8 19:57:39 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 8 Mar 2018 13:57:39 -0600
Subject: [openstack-dev] [nova][placement] PTG Summary and Rocky Priorities
In-Reply-To: <9b6b4b7e-02d7-28e0-8d6d-53e1849827f8@gmail.com>
References: <9b6b4b7e-02d7-28e0-8d6d-53e1849827f8@gmail.com>
Message-ID: 

On 3/8/2018 6:51 AM, Jay Pipes wrote:
> -  VGPU_DISPLAY_HEAD resource class should be removed and replaced with
> a set of os-traits traits that indicate the maximum supported number of
> display heads for the vGPU type
>
How does a trait express a quantifiable limit? Would we end up having several different traits with varying levels of limits?

>
> - Multiple agreements about strict minimum bandwidth support feature in
> nova -  Spec has already been updated accordingly:
> https://review.openstack.org/#/c/502306/
>
>   - For now we keep the hostname as the information connecting the
> nova-compute and the neutron-agent on the same host but we are aiming
> for having the hostname as an FQDN to avoid possible ambiguity.
>
>   - We agreed not to make this feature dependent on moving the nova
> port create to the conductor. The current scope is to support
> pre-created neutron port only.

I could rat-hole in the spec, but figured it would be good to also mention it here. When we were talking about this in Dublin, someone also mentioned that depending on the network on which nova-compute creates a port, the port could have a QoS policy applied to it for bandwidth, and then nova-compute would need to allocate resources in Placement for that port (with the instance as the consumer). So then we'd be doing allocations both in the scheduler for pre-created ports and in the compute for ports that nova creates. So the scope statement here isn't entirely true, and leaves us with some technical debt until we move port creation to conductor. Or am I missing something?

>
> - Neutron will provide the resource request in the port API so this
> feature does not depend on the neutron port binding API work
>
> - Neutron will create resource providers in placement under the
> compute RP. Also Neutron will report inventories on those RPs
>
> - Nova will do the claim of the port related resources in placement
> and the consumer_id will be the instance UUID

--

Thanks,

Matt
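To make the trait question near the top of that message concrete: a trait is a boolean capability string, so expressing "maximum supported display heads" as traits would seem to force an enumeration, one trait per supported limit. A minimal sketch of what that could look like (hypothetical trait names, invented here for illustration; nothing like them exists in os-traits today):

    # One hypothetical trait per supported maximum, since a trait itself
    # carries no quantity -- this is exactly the "several different
    # traits with varying levels of limits" concern.
    VGPU_MAX_DISPLAY_HEAD_TRAITS = (
        'CUSTOM_VGPU_MAX_DISPLAY_HEADS_1',
        'CUSTOM_VGPU_MAX_DISPLAY_HEADS_2',
        'CUSTOM_VGPU_MAX_DISPLAY_HEADS_4',
    )

    def trait_for_max_heads(heads):
        """Return the trait a vGPU type would report for its limit."""
        return 'CUSTOM_VGPU_MAX_DISPLAY_HEADS_%d' % heads

Every newly supported maximum would mean minting another trait, which is the scaling question being raised.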
From anteaya at anteaya.info Thu Mar 8 19:59:38 2018
From: anteaya at anteaya.info (Anita Kuno)
Date: Thu, 8 Mar 2018 14:59:38 -0500
Subject: [openstack-dev] Pros and Cons of face-to-face meetings
In-Reply-To: 
References: 
Message-ID: <436c456d-46e4-e562-be8e-c70c06db2563@anteaya.info>

On 2018-03-08 01:24 PM, Jay S Bryant wrote:
>
>
> On 3/8/2018 12:22 PM, Anita Kuno wrote:
>> On 2018-03-08 09:03 AM, Jens Harbott wrote:
>>> With the current PTG just finished and seeing discussions happen about
>>> the format of the next[0], it seems that the advantages of these seem
>>> to be pretty clear to most, so let me use the occasion to remind
>>> everyone of the disadvantages.
>>>
>>> Every meeting that is happening is excluding those contributors that
>>> can not attend it. And with that it is violating the fourth Open
>>> principle[1], having a community that is open to everyone. If you are
>>> wondering whom this would affect, here's a non-exclusive (sic) list of
>>> valid reasons not to attend physical meetings:
>>>
>>> - Health issues
>>> - Privilege issues (like not getting visa or travel permits)
>>> - Caretaking responsibilities (children, other family, animals, plants)
>>> - Environmental concerns
>>
>> I'll add dislike for travel, which affects some of our contributors.
>>
>>>
>>> So when you are considering whether it is worth the money and effort
>>> to organise PTGs or similar events, I'd like you also to consider
>>> those being excluded by such activities. It is not without a reason
>>> that IRC and emails have been settled upon as preferred means of
>>> communication. I'm not saying that physical meetings should be dropped
>>> altogether, but maybe more effort can be placed into providing means
>>> of remote participation, which might at least reduce some effects.
>>>
>>> [0]
>>> http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html
>>>
>>> [1] https://governance.openstack.org/tc/reference/opens.html
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> Keep in mind that the alternative to the Project Teams Gathering is
>> not no Project Teams Gathering, it is projects meeting face-to-face
>> after each of them organized their own meeting. These used to be
>> called mid-cycle meetups and the majority of them used to be held in
>> the United States. Now I can't predict what will happen in the future,
>> but should there be a void in frequency of face-to-face meetings that
>> the PTG is currently filling, I would not be surprised to see
>> individual projects plan their own face-to-face meetings of some kind
>> regardless of what they are called.
>>
>> Thank you,
>> Anita
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Anita,
>
> We have already discussed this with the Cinder team and agreed that we
> would want to go back to our mid-cycles in the absence of a PTG.
> > Jay > (jungleboyj) Thanks for sharing the data point Jay, Anita > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From anteaya at anteaya.info Thu Mar 8 20:06:40 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Thu, 8 Mar 2018 15:06:40 -0500 Subject: [openstack-dev] Pros and Cons of face-to-face meetings In-Reply-To: <4F8D01FD-3660-4ACF-8AE1-4B99E6D88451@cern.ch> References: <20180308180616.fcf7nbdv3e22nppa@yuggoth.org> <20180308183451.yplvionoenut3j5i@yuggoth.org> <1520535197-sup-8174@lrrr.local> <4F8D01FD-3660-4ACF-8AE1-4B99E6D88451@cern.ch> Message-ID: <535bb9dc-a155-d594-8437-026af8996411@anteaya.info> On 2018-03-08 02:18 PM, Tim Bell wrote: > Fully agree with Doug. At CERN, we use video conferencing for 100s, sometimes >1000 participants for the LHC experiments, the trick we've found is to fully embrace the chat channels (so remote non-native English speakers can provide input) and chairs/vectors who can summarise the remote questions constructively, with appropriate priority. > > This is actually very close to the etherpad approach, we benefit from the local bandwidth if available but do not exclude those who do not have it (or the language skills to do it in real time). Just expanding on the phrase 'the etherpad approach' one instance on the Friday saw some infra team members discussing the gerrit upgrade in person and one infra team member (snowed in at the same hotel as Doug) following along on the etherpad as it was updated and weighing in on the updates (either via the etherpad or irc, I'm not sure, my laptop was not open). So again echoing the chorus, there are possibilities, but those possibilities require effort and usually prior knowledge of participants and their habits. Thank you, Anita > > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 8 March 2018 at 20:00 > To: openstack-dev > Subject: Re: [openstack-dev] Pros and Cons of face-to-face meetings > > Excerpts from Jeremy Stanley's message of 2018-03-08 18:34:51 +0000: > > On 2018-03-08 12:16:18 -0600 (-0600), Jay S Bryant wrote: > > [...] > > > Cinder has been doing this for many years and it has worked > > > relatively well. It requires a good remote speaker and it also > > > requires the people in the room to be sensitive to the needs of > > > those who are remote. I.E. planning topics at a time appropriate > > > for the remote attendees, ensuring everyone speaks up, etc. If > > > everyone, however, works to be inclusive with remote participants > > > it works well. > > > > > > We have even managed to make this work between separate mid-cycles > > > (Cinder and Nova) in the past before we did PTGs. > > [...] > > > > I've seen it work okay when the number of remote participants is > > small and all are relatively known to the in-person participants. > > Even so, bridging Doug into the TC discussion at the PTG was > > challenging for all participants. > > I agree, and I'll point out I was just across town (snowed in at a > different hotel). > > The conversation the previous day with just the 5-6 people on the > release team worked a little bit better, but was still challenging > at times because of audio quality issues. > > So, yes, this can be made to work. 
It's not trivial, though, and
> the degree to which it works depends a lot on the participants on
> both sides of the connection. I would not expect us to be very
> productive with a large number of people trying to be active in the
> conversation remotely.
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From mriedemos at gmail.com Thu Mar 8 20:13:15 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 8 Mar 2018 14:13:15 -0600
Subject: [openstack-dev] [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects
In-Reply-To: 
References: 
Message-ID: 

On 2/5/2018 11:44 PM, Massimo Sgaravatto wrote:
> But if I try to specify the long list of projects, I get a "Value ... is
> too long" error message [*].
>
> I can see two workarounds for this problem:
>
> 1) Create a host aggregate per project:
>
> HA1 including CA1, C2, ... Cx and with filter_tenant_id=p1
> HA2 including CA1, C2, ... Cx and with filter_tenant_id=p2
> etc
>
> 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates
> and having each flavor visible only by a set of projects, and tagged
> with a specific string that should match the value specified in the
> corresponding host aggregate
>
> Is this correct? Can you see better options?

This problem came up in the public cloud WG meeting at the PTG last week. The issue is that the host aggregate metadata value is limited to 255 characters, so you're pretty severely restricted in the number of projects you can isolate to that host aggregate.

There were two ideas that I remember getting discussed for possible solutions:

1. The filter could grow support for domains (or some other fancy keystone construct) such that you could nest projects and then just isolate the root project/domain to that host aggregate. I'm not sharp on keystone stuff so would need more input here, but this might not be a great solution if nova has to ask keystone for this information per run through the filters - that could get expensive. If the information is in the user request context (token) then maybe that would work.

2. Dan Smith mentioned another idea such that we could index the aggregate metadata keys like filter_tenant_id0, filter_tenant_id1, ... filter_tenant_idN and then combine those so you have one host aggregate filter_tenant_id* key per tenant.

--

Thanks,

Matt
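For illustration, the indexed-key idea in option 2 boils down to accepting any suffix on the metadata key and treating the values as one combined list. A minimal sketch (simplified, with invented helper names -- not the actual nova filter code):

    TENANT_KEY_PREFIX = 'filter_tenant_id'  # matches filter_tenant_id,
                                            # filter_tenant_id0, ...idN

    def aggregate_allows_tenant(metadata, tenant_id):
        """Check tenant_id against every filter_tenant_id* metadata key.

        Each key is assumed to hold a comma-separated list of tenant
        IDs, so no single value has to fit all tenants within the
        255-character metadata limit.
        """
        values = [v for k, v in metadata.items()
                  if k.startswith(TENANT_KEY_PREFIX)]
        if not values:
            return True  # aggregate is not tenant-isolated at all
        return any(tenant_id in (t.strip() for t in v.split(','))
                   for v in values)

With that shape an operator can spread a long project list across filter_tenant_id0, filter_tenant_id1, and so on, each value staying under the 255-character cap.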
From dms at danplanet.com Thu Mar 8 20:19:02 2018
From: dms at danplanet.com (Dan Smith)
Date: Thu, 08 Mar 2018 12:19:02 -0800
Subject: [openstack-dev] [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects
In-Reply-To: (Matt Riedemann's message of "Thu, 8 Mar 2018 14:13:15 -0600")
References: 
Message-ID: 

> 2. Dan Smith mentioned another idea such that we could index the
> aggregate metadata keys like filter_tenant_id0, filter_tenant_id1,
> ... filter_tenant_idN and then combine those so you have one host
> aggregate filter_tenant_id* key per tenant.

Yep, and that's what I've done in my request_filter implementation:

https://review.openstack.org/#/c/545002/9/nova/scheduler/request_filter.py

Basically it allows any suffix to 'filter_tenant_id' to be processed as a potentially-matching key. Note that I'm hoping we can deprecate/remove the post filter and replace it with this much more efficient version.

--Dan

From aspiers at suse.com Thu Mar 8 20:20:37 2018
From: aspiers at suse.com (Adam Spiers)
Date: Thu, 8 Mar 2018 20:20:37 +0000
Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream
In-Reply-To: <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com>
References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com>
 <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall>
 <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com>
 <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall>
 <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall>
 <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com>
Message-ID: <20180308202037.7x7oemuqainqf7zu@pacific.linksys.moosehall>

Raoul Scarazzini wrote:
>On 08/03/2018 17:03, Adam Spiers wrote:
>[...]
>> Yes agreed again, this is a strong case for collaboration between the
>> self-healing and QA SIGs.  In Dublin we also discussed the idea of the
>> self-healing and API SIGs collaborating on the related topic of health
>> check APIs.
>
>Guys, thanks a ton for your involvement in the topic, I am +1 to any
>kind of meeting we can have to discuss this (like it was proposed by
>Adam) so I'll offer my bluejeans channel for whatever kind of meeting we
>want to organize.

Awesome, thanks - bluejeans would be great.

>About the best practices part Georg was mentioning I'm 100% in
>agreement, the testing methodologies are the first thing we need to care
>about, starting from what we want to achieve.
>That said, I'll keep studying Yardstick.
>
>Hope to hear from you soon, and thanks again!

Yep - let's wait for people to catch up with the thread and hopefully we'll get enough volunteers on https://etherpad.openstack.org/p/extreme-testing-contacts for critical mass and then we can start discussing!

I think it's especially important that we have the Eris folks on board since they have already been working on this for a while.

From emilien at redhat.com Thu Mar 8 20:35:34 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 8 Mar 2018 20:35:34 +0000
Subject: [openstack-dev] [tripleo] Queens retrospective (PTG session)
Message-ID: 

We kicked off the PTG with a 45-minute retrospective on our work during the Queens cycle. Here is a short summary of what was said:

- Keep doing: weekly updates and encourage squads to update their etherpads; squads (and adjust our squads every cycle when needed); use #tripleo for IRC meetings.

- Less of: relying on deployment tests for everything; non-voting CI jobs; complexity (investigate what we could deprecate).

- More of: pay more attention to OVB jobs (consolidate CI status?); communication on ML; more tempest usage; more systematic bug triage; more deep dive sessions; more inter-squad reviews.

- Stop doing: support baremetal deployments.

- Start doing: focus on container promotions; Ansible tests (without functional jobs).

If you want to know more, please take a look at the full output here:
https://etherpad.openstack.org/p/tripleo-ptg-rocky-retro

Thanks,
--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ed at leafe.com Thu Mar 8 20:36:59 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 8 Mar 2018 14:36:59 -0600 Subject: [openstack-dev] [cyborg]No Meeting This Week In-Reply-To: References: Message-ID: <5F883771-AFAD-4B19-BDFF-10818AAFD586@leafe.com> On Mar 5, 2018, at 9:51 PM, Zhipeng Huang wrote: > As most of us are rekubrating from PTG and snowenpstack last week, let's cancel the team meeting this week. At the mean time I have solicitate the meeting summary from topic leads, and will send out a summary of the summaries later :) When is your meeting? I don’t see it listed on http://eavesdrop.openstack.org. -- Ed Leafe From zbitter at redhat.com Thu Mar 8 20:51:05 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 8 Mar 2018 15:51:05 -0500 Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project In-Reply-To: <1520531849-sup-5340@lrrr.local> References: <1520531849-sup-5340@lrrr.local> Message-ID: <11033cf1-d80a-ef35-bf0a-97a048ec94ae@redhat.com> On 08/03/18 12:57, Doug Hellmann wrote: > Why would the repos be owned by anyone other than the original project > team? A few reasons I think it makes sense in this instance: * Not every set of trademark tests will necessarily belong to a single project. Tempest itself is an example of this - in fact that's basically how the QA program came to exist. Vertical-specific trademark programs are another example that we anticipate in the future. * Allowing projects to create their own repos means that there's no co-ordination point to ensure e.g. a consistent naming scheme. Amongst other things, this could potentially cause confusion about which plugins are trademark-candidates-only and which are just regular tempest plugins. * By registering trademark plugins all in one place it makes it easy to determine how many there are, which plugins exist (e.g. are there any extant plugins that are not referenced by refstack? This is a question you can answer in 20s if they're all registered in the same place.) * The goal is for maintenance of these plugins to be a collaborative effort by the project team, the QA team, and RefStack. If the first step for a project establishing a trademark test plugin involves the project team reaching out to the QA team then that's a good foot to start on. If teams create the repos in their own projects and fly under QA's radar then QA folks might not even be aware that they've become core reviewers on the repo. I guess we have examples of both models in the community... e.g. puppet-openstack vs. Horizon plugins. I wonder if there are any lessons we can draw on to see which works better, and when. cheers, Zane. From harlowja at fastmail.com Thu Mar 8 20:53:01 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Thu, 08 Mar 2018 12:53:01 -0800 Subject: [openstack-dev] [oslo] Oslo PTG Summary In-Reply-To: <64db6f20-a994-1555-5ed5-cdfe0f628436@nemebean.com> References: <64db6f20-a994-1555-5ed5-cdfe0f628436@nemebean.com> Message-ID: <5AA1A2AD.3050204@fastmail.com> Can we get some of those doc links opened. 'You need permission to access this published document.' I am getting for a few of them :( Ben Nemec wrote: > Hi, > > Here's my summary of the discussions we had in the Oslo room at the PTG. > Please feel free to reply with any additions if I missed something or > correct anything I've misrepresented. 
> > oslo.config drivers for secret management > ----------------------------------------- > > The oslo.config implementation is in progress, while the Castellan > driver still needs to be written. We want to land this early in Rocky as > it is a significant change in architecture for oslo.config and we want > it to be well-exercised before release. > > There are discussions with the TripleO team around adding support for > this feature to its deployment tooling and there will be a functional > test job for the Castellan driver with Custodia. > > There is a weekly meeting in #openstack-meeting-3 on Tuesdays at 1600 > UTC for discussion of this feature. > > oslo.config driver implementation: https://review.openstack.org/#/c/513844 > spec: > https://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html > > Custodia key management support for Castellan: > https://review.openstack.org/#/c/515190/ > > "stable" libraries > ------------------ > > Some of the Oslo libraries are in a mature state where there are very > few, if any, meaningful changes to them. With the removal of the > requirements sync process in Rocky, we may need to change the release > process for these libraries. My understanding was that there were no > immediate action items for this, but it was something we need to be > aware of. > > dropping support for mox3 > ------------------------- > > There was some concern that no one from the Oslo team is actually in a > position to support mox3 if something were to break (such as happened in > some libraries with Python 3.6). Since there is a community goal to > remove mox from all OpenStack projects in Rocky this will hopefully not > be a long-term problem, but there was some discussion that if projects > needed to keep mox for some reason that they would be asked to provide a > maintainer for mox3. This topic is kind of on hold pending the outcome > of the community goal this cycle. > > automatic configuration migration on upgrade > -------------------------------------------- > > There is a desire for oslo.config to provide a mechanism to > automatically migrate deprecated options to their new location on > version upgrades. This is a fairly complex topic that I can't cover > adequately in a summary email, but there is a spec proposed at > https://review.openstack.org/#/c/520043/ and POC changes at > https://review.openstack.org/#/c/526314/ and > https://review.openstack.org/#/c/526261/ > > One outcome of the discussion was that in the initial version we would > not try to handle complex migrations, such as the one that happened when > we combined all of the separate rabbit connection opts into a single > connection string. To start with we will just raise a warning to the > user that they need to handle those manually, but a templated or > hook-based method of automating those migrations could be added as a > follow-up if there is sufficient demand. > > oslo.messaging plans > -------------------- > > There was quite a bit discussed under this topic. I'm going to break it > down into sub-topics for clarity. > > oslo.messaging heartbeats > ========================= > > Everyone seemed to be in favor of this feature, so we anticipate > development moving forward in Rocky. There is an initial patch proposed > at https://review.openstack.org/546763 > > We felt that it should be possible to opt in and out of the feature, and > that the configuration should be done at the application level. 
This > should _not_ be an operator decision as they do not have the knowledge > to make it sanely. > > There was also a desire to have a TTL for messages. > > bug cleanup > =========== > > There are quite a few launchpad bugs open against oslo.messaging that > were reported against old, now unsupported versions. Since we have the > launchpad bug expirer enabled in Oslo the action item proposed for such > bugs was to mark them incomplete and ask the reporter to confirm that > they still occur against a supported version. This way bugs that don't > reproduce or where the reporter has lost interest will eventually be > closed automatically, but bugs that do still exist can be updated with > more current information. > > deprecations > ============ > > The Pika driver will be deprecated in Rocky. To our knowledge, no one > has ever used it and there are no known benefits over the existing > Rabbit driver. > > Once again, the ZeroMQ driver was proposed for deprecation as well. The > CI jobs for ZMQ have been broken for a while, and there doesn't seem to > be much interest in maintaining them. Furthermore, the breakage seems to > be a fundamental problem with the driver that would require non-trivial > work to fix. > > Given that ZMQ has been a consistent pain point in oslo.messaging over > the past few years, it was proposed that if someone does step forward > and want to maintain it going forward then we should split the driver > off into its own library which could then have its own core team and > iterate independently of oslo.messaging. However, at this time the plan > is to propose the deprecation and start that discussion first. > > CI > == > > Need to migrate oslo.messaging to zuulv3 native jobs. The > openstackclient library was proposed as a good example of how to do so. > > We also want to have voting hybrid messaging jobs (where the > notification and rpc messages are sent via different backends). We will > define a devstack job variant that other projects can turn on if desired. > > We also want to add amqp1 support to pifpaf for functional testing. > > Low level messaging API > ======================= > > A proposal for a new oslo.messaging API to expose lower level messaging > functionality was proposed. There is a presentation at > https://docs.google.com/presentation/d/1mCOGwROmpJvsBgCTFKo4PnK6s8DkDVCp1qnRnoKL_Yo/edit?usp=sharing > > > This seemed to generally be well-received by the room, and dragonflow > and neutron reviewers were suggested for the spec. > > Kafka > ===== > > Andy Smith gave an update on the status of the Kafka driver. Currently > it is still experimental, and intended to be used for notifications > only. There is a presentation with more details in > https://docs.google.com/presentation/d/e/2PACX-1vQpaSSm7Amk9q4sBEAUi_IpyJ4l07qd3t5T_BPZkdLWfYbtSpSmF7obSB1qRGA65wjiiq2Sb7H2ylJo/pub?start=false&loop=false&delayms=3000&slide=id.p > > > testing for Edge/FEMDC use cases > ================================ > > Matthieu Simonin gave a presentation about the testing he has done > related to messaging in the Edge/FEMDC scenario where messaging targets > might be widely distributed. The slides can be found at > https://docs.google.com/presentation/d/1LcF8WcihRDOGmOPIU1aUlkFd1XkHXEnaxIoLmRN4iXw/edit#slide=id.p3 > > > In short, there is a desire to build clouds that have widely distributed > nodes such that content can be delivered to users from a location as > close as possible. 
This puts a lot of pressure on the messaging layer as > compute nodes (for example) could be halfway around the world from the > control nodes, which is problematic for a broker-based system such as > Rabbit. There is some very interesting data comparing Rabbit with a more > distributed AMQP1 system based on qpid-dispatch-router. In short, the > distributed system performed much better for this use case, although > there was still some concern raised about the memory usage on the client > side with both drivers. Some followup is needed on the oslo.messaging > side to make sure we aren't leaking/wasting resources in some messaging > scenarios. > > For further details I suggest taking a look at the presentation. > > mutable configuration > --------------------- > > This is also a community goal for Rocky, and Chang Bo is driving its > adoption. There was some discussion of how to test it, and also that we > should provide an example of turning on mutability for the debug option > since that is the target of the community goal. The cinder patch can be > found here: https://review.openstack.org/#/c/464028/ Turns out it's > really simple! > > Nova is also using this functionality for more complex options related > to upgrades, so that would be a good place to look for more advanced use > cases. > > Full documentation for the mutable config options is at > https://docs.openstack.org/oslo.config/latest/reference/mutable.html > > The goal status is being tracked in > https://storyboard.openstack.org/#!/story/2001545 > > Chang Bo was also going to talk to Lance about possibly coming up with a > burndown chart like the one he had for the policy in code work. > > oslo healthcheck middleware > --------------------------- > > As this ended up being the only major topic for the afternoon, the > session was unfortunately lightly attended. However, the self-healing > SIG was talking about related topics at the same time so we ended up > moving to that room and had a good discussion. > > Overall the feature seemed to be well-received. There is some security > concern with exposing service information over an un-authenticated > endpoint, but because there is no authentication supported by the health > checking functionality in things like Kubernetes or HAProxy this is > unavoidable. The feature won't be mandatory, so if this exposure is > unacceptable it can be turned off (with a corresponding loss of > functionality, of course). > > There was also some discussion of dropping the asynchronous nature of > the checks in the initial version in order to keep the complexity to a > minimum. Asynchronous testing can always be added later if it proves > necessary. > > The full spec is at https://review.openstack.org/#/c/531456 > > oslo.config strict validation > ----------------------------- > > I actually had discussions with multiple people about this during the > week. In both cases, they were just looking for a minimal amount of > validation that would catch an error such at "devug=True". Such a > validation might be fairly simple to write now that we have the > YAML-based sample config with (ideally) information about all the > options available to set in a project. It should be possible to compare > the options set in the config file with the ones listed in the sample > config and raise warnings for any that don't exist. 
> > There is also a more complete validation spec at > http://specs.openstack.org/openstack/oslo-specs/specs/ocata/oslo-validator.html > and a patch proposed at https://review.openstack.org/#/c/384559/ > > Unfortunately there has been little movement on that as of late, so it > might be worthwhile to implement something more minimalist initially and > then build from there. The existing patch is quite significant and > difficult to review. > > Conclusion > ---------- > > I feel like there were a lot of good discussions at the PTG and we have > plenty of work to keep the small Oslo team busy for the Rocky cycle. :-) > > Thanks to everyone who participated and I look forward to seeing how > much progress we've made at the next Summit and PTG. > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From juliaashleykreger at gmail.com Thu Mar 8 21:07:47 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 8 Mar 2018 13:07:47 -0800 Subject: [openstack-dev] [ironic] PTG Summary Message-ID: The Ironic PTG Summary - The blur(b) from the East In an effort to provide visibility and awareness of all the things related to Ironic, I've typed up a summary below. I've tried to keep this fairly generalized with enough context and convey action items or the instances of consensus where applicable. It goes without saying that the week went by as a complete blur. We had to abruptly change our schedule around, some fine detailed topics were missed. A special thanks to Ruby Loo for taking some time to proof read this for me. -Julia --------- >From our retrospective: As seems to be the norm with retrospectives, we did bring up a number of issues that slowed us down, hindered us, or hindered the ability to move faster. A great deal of this revolved around specifications, and the perceptions that tend to occur. Action Items: * Jroll will bring up for discussion if we can update the theme for rendered specs documentation to highlight that the specs are points in time references for design, and are not final documentation. * TheJulia will revise our specification template to attempt to be more clear about *why* we are asking the questions, also to suggest but not require proof of concept code After our retrospective, we spoke about things that can improve our velocity. This sort of discussion tends to always come up, and focused on community cultural aspects of revising/helping land code. The conclusion we quickly came to was that communication or context of the contributor is required. One of the points raised, that we did not get to, was that we should listen to contributor's perceptions, which really goes back to communication. As time went on, we shifted gears to a high level status of ironic, and there are some items to take away: * Inspector, at a high level, could use some additional work and contributors. Virtual media boot support would be helpful, and we may look at breaking some portions out and moving them into ironic. Additional High Availability work may be needed, at the same time it may not be needed. Entirely to be determined. * Ironic-ui presently has no active contributors, but is stable. Major risk right now is a breaking change coming from Horizon, which was also discussed earlier in the week with Horizon. 
>
> There is also a more complete validation spec at
> http://specs.openstack.org/openstack/oslo-specs/specs/ocata/oslo-validator.html
> and a patch proposed at https://review.openstack.org/#/c/384559/
>
> Unfortunately there has been little movement on that as of late, so it
> might be worthwhile to implement something more minimalist initially and
> then build from there. The existing patch is quite significant and
> difficult to review.
>
> Conclusion
> ----------
>
> I feel like there were a lot of good discussions at the PTG and we have
> plenty of work to keep the small Oslo team busy for the Rocky cycle. :-)
>
> Thanks to everyone who participated and I look forward to seeing how
> much progress we've made at the next Summit and PTG.
>
> -Ben
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From juliaashleykreger at gmail.com Thu Mar 8 21:07:47 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 8 Mar 2018 13:07:47 -0800
Subject: [openstack-dev] [ironic] PTG Summary
Message-ID: 

The Ironic PTG Summary - The blur(b) from the East

In an effort to provide visibility and awareness of all the things related to Ironic, I've typed up a summary below. I've tried to keep this fairly generalized, with enough context to convey action items or the instances of consensus where applicable. It goes without saying that the week went by as a complete blur. We had to abruptly change our schedule around, so some fine-detailed topics were missed. A special thanks to Ruby Loo for taking some time to proofread this for me.

-Julia

---------

From our retrospective:

As seems to be the norm with retrospectives, we did bring up a number of issues that slowed us down, hindered us, or hindered the ability to move faster. A great deal of this revolved around specifications, and the perceptions that tend to occur.

Action Items:
* Jroll will bring up for discussion if we can update the theme for rendered specs documentation to highlight that the specs are point-in-time references for design, and are not final documentation.
* TheJulia will revise our specification template to attempt to be more clear about *why* we are asking the questions, and also to suggest but not require proof of concept code.

After our retrospective, we spoke about things that can improve our velocity. This sort of discussion tends to always come up, and focused on community cultural aspects of revising/helping land code. The conclusion we quickly came to was that communication or context of the contributor is required. One of the points raised, that we did not get to, was that we should listen to contributors' perceptions, which really goes back to communication.

As time went on, we shifted gears to a high level status of ironic, and there are some items to take away:

* Inspector, at a high level, could use some additional work and contributors. Virtual media boot support would be helpful, and we may look at breaking some portions out and moving them into ironic. Additional High Availability work may be needed; at the same time, it may not be needed. Entirely to be determined.
* Ironic-ui presently has no active contributors, but is stable. The major risk right now is a breaking change coming from Horizon, which was also discussed earlier in the week with Horizon. Will add testing such that horizon's gate triggers ironic-ui testing and raises visibility to breaking changes.
* Ironic itself got a lot completed this cycle, and we should expect quite a bit this cycle in terms of clean-up from deprecation.
* Networking-baremetal received a good portion of work this cycle due to routed networks support. \o/
* Networking-generic-switch seems to be in a fairly stable state at this point. Some trunk awareness has been added, as well as some new switches and bug fixes.
* Bifrost has low activity, but at the same time we're seeing new contributors fix issues or improve things, which is a good sign.
* Sushy got authentication and introspection support added this cycle. We discussed that we may want to consider supporting RAID (in terms of client actions), as well as composable hardware.

After statuses, we shifted into discussing the future. We started the entire discussion of the future with a visioning exercise to help frame the future, so we were all using the same words and had the same scope in mind when discussing the future of Ironic. One thing worth noting is that, up front, there was a lot of alignment, but we sometimes were just using slightly different words or concepts. Taking a little more time to reconcile those differences allowed us to relate additional words to the same meaning. Truly this set the stage for all of the other topics, and gave us the common reference point to grasp if what we were talking about made sense. Expect Jroll to send out an email to the mailing list to summarize this further, and from this initial discussion we will likely draft a formal vision document that will allow us to continue having the same reference point for discussions. Maybe one day your light bulb will be provisioned with Ironic!

Deploy Steps

In terms of the future, we again returned to the concept of breaking up deployments into a series of steps. Without going deep into detail, this is a very large piece of functionality that would help solve many problems and desires that exist today, especially where some operators wish for things like deploy-time RAID, or to flash firmware as part of the baremetal node provisioning process. This work is also influenced by traits, because traits can map to actions that need to be performed automatically. In the end, we agreed to take a small step, and iterate from there: specifically, adding a deploy steps framework and splitting our current deploy process into two logical steps.

Location Awareness

"Location awareness" as we are calling it, or possibly better stated as "conductor to node affinity", is a topic that we again revisited. This is important as many operators desire a single pane of glass for their entire baremetal fleet. Some operators would like to isolate conductors per rack, per data center, per customer, per sets of data centers in close proximity, per continent. This is a common problem of creating failure domains that match the environment and have optimal performance, as opposed to deploying across a point-to-point circuit. We agreed this is something that we need to make happen, as it is a very common operational problem. We may further work on this in the future to provide a scoring and anti-affinity system, but right now our focus is hard affinity to clusters of conductors.

Graphical Consoles

We revisited the topic of graphical consoles, which is one of the topics we made very little progress on this past cycle. This is difficult because there are several different ways to architect and develop this functionality.
And then we realized libvirt offers a VNC server that we could very easily leverage, as someone was kind enough to stub it out already in our virtualized BMC services. TL;DR: we are going to pick this back up, try to reach consensus, and try to land the framework this cycle. We know we are likely to want to land a distinct driver interface to support this, since our existing console is designed around serial console usage. We also know we can use our virtualized BMC for testing.

Going beyond the qcow2

Next up on the topic list was partitioning and getting beyond our current use case. Where this topic came from was several different topics with the same central theme of "what if I don't want to make or deploy a qcow2 file?" Historically, we have resisted this as it is more a pattern of pet management. The reality behind that consensus is that we agree pets will happen, and have to be able to happen. So what does this mean for the average user? Not much right now. We still have some things to think about, such as: what would be a good way to tell Ironic about disk partitioning? And then what to do with the contents of the image? This also had an interesting shift of "what if we supported a generic TFTP interface?", which gets us towards things like being able to configure new switches and non-traditional devices upon power-up. The possibilities are somewhat endless. The surprising thing... there was not disagreement. We even had consensus that this sort of thing would be useful, and be a step towards deploying that light bulb with Ironic.

Action Items:
* Jroll to look at ways we could allow for user definable partition data, and what that might look like (one possible shape is sketched below).
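Purely as an illustration of that open question -- no such Ironic input exists yet, and every field name here is invented -- user-defined partition data might look something like:

    # Hypothetical partition layout a requester could hand to Ironic;
    # names and structure are illustrative only, pending the RFE/spec.
    partition_data = {
        'disk': '/dev/sda',
        'label': 'gpt',
        'partitions': [
            {'mount': '/boot', 'size_mib': 512, 'fs': 'ext4'},
            {'mount': '/', 'size_mib': 20480, 'fs': 'ext4'},
            {'mount': 'swap', 'size_mib': 8192, 'fs': 'swap'},
        ],
    }

Answering "what to do with the contents of the image" would then be about mapping image content onto whatever layout the user described.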
The ask from the community is to help spur further discussion to lower the bar to entry and make it easier to apply firmware updates to hardware nodes, in a way that also provides some level of visibility in that the process has completed, or that the latest firmware has been applied. This is further complicated even more by the fact that some operators have expressed need to apply firmware updates prior to the deploy being completed. Ultimately this takes us down the road of the deploy steps topic, since we should then be able to determine and handle cases where a BIOS image needs to be soft reset for in-band firmware updates, or turned off prior to out-of-band firmware updates. Action Item: * TheJulia is going to try and spur further community discussion in regards to standardization in two weeks. Cleaning - Burn-in As part of discussing cleaning changes, we discussed supporting a "burn-in" mode where hardware could be left to run load, memory, or other tests for a period of time. We did not have consensus on a generic solution, other than that this should likely involve clean-steps that we already have, and maybe another entry point into cleaning. Since we didn't really have consensus on use cases, we decided the logical thing was to write them down, and then go from there. Action Items: * Community members to document varying burn-in use cases for hardware, as they may vary based upon industry. * Community to try and come up with a couple example clean-steps. Planning for Rocky Rocky Planning was performed in record time, but in part because the ironic community performs the initial on-site prioritization via a poll of the room and then five votes per person. This is in turn transformed into our cycle priorities which is posted into gerrit. This can be viewed at https://review.openstack.org/#/c/550174/. We must stress that due to the notice of the need to vacate the building by2PM on Thursday, we chose to move up our planning session and not everyone was able to attend. Thoughts, feedback, and needs should be communicated via the posted change set for community participants that were not present during the planning process. Due to the abrupt schedule changes and need of contributors to begin re-booking flights, we lost some of our time for a little while on Thursday. This largely resulted in that we were unable to discuss miscellaneous items like communication flow changes, changing the default boot mode, alternative dhcp servers. None of which is contentious. Nova/Ironic Towards the end of Thursday, Ironic was able to convene with the Nova team to discuss topics of interest. Disk Partitioning One of the common asks, especially in large scale deployments, or where things such as RAID is needed, is to be able to define what the machine should look like by the requester. This is not a simple need to fulfill given that it is not a "cloudy" behavior. We discussed various options, and a spec is going to be proposed that will allow nova to pass a pointer of some sort to ironic that would define the disk and file system profile for the node. Action Item: * jroll to write a spec on how to allow user supplied partition/raid configuration to reach Ironic. Virt driver interactions There are several cases where the ironic virt driver in nova does things that are not ideal. Also because of long lived processes, hardware is not immediately freed to the resource tracker which can lead to issues. 
There is a mutual desire to fix these issues, and largely revolves around ensuring that we provide information correctly and set the state for the resources such that the virt driver does not encounter issues with placement. Action Item: * jroll to fix the nova-compute crash upon start-up if there are issues talking to Ironic such that it raises NotReadyYet. API Version Negotiation One of the biggest headaches that Ironic has encountered as time has gone on is the compliance with testing scenarios within the framework, as right now we force a very particular testing order. One of the things that makes this difficult is that we include a pin with our current API client usage (in nova's ironic virt driver) that locks the version the client speaks, and if the server does not speak it, the nova-compute process fails to start. The solution to this is to begin replacing the use of python-ironicclient in the virt driver with REST statements that explicitly state the API version they need to operate. This provides greater visibility, and maximum flexibility moving forward. Action Item: * TheJulia to work on updating the virt driver to use REST calls instead of the client library. And then there was Friday On Friday, the available team members discussed the bios_interface and how to handle the getting/setting of properties considering what was proposed is very different from how we presently handle RAID. Additionally the team discussed the deprecation of vif port ID's being stored in the port's (and portgroup's) extra field. This was originally how networking information was conveyed from Nova to Ironic, but that mechanism was replaced with the vif-attach and vif-detach APIs in a previous cycle. Additional items (from discussions outside ironic sessions): * Ironic to attempt to implement a CI job triggered in the horizon CI check queue to allow for some level of integration testing to help provide feedback if a horizon change breaks ironic-ui. This is the first logical step to support the future of plugins with Horizon, and lowers effort on our end to maintain. Please blame TheJulia if there are any questions. * Scientific SIG will be creating use cases for ironic as RFEs. Things like kexec from deployment ramdisk for extremely time consuming reboots, and pure booting from a ramdisk. * Scientific SIG will also be exploring things like BFV based cluster booting, so we may receive some interest and RFEs as a result. Joking about deploying a light bulb aside, it was a positive experience to talk about our mutual shared visions and really reach the same page. While last week was a complete blur, this is an exciting time, now onward to seize it! From openstack at nemebean.com Thu Mar 8 22:03:08 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 8 Mar 2018 16:03:08 -0600 Subject: [openstack-dev] [oslo] Oslo PTG Summary In-Reply-To: <5AA1A2AD.3050204@fastmail.com> References: <64db6f20-a994-1555-5ed5-cdfe0f628436@nemebean.com> <5AA1A2AD.3050204@fastmail.com> Message-ID: On 03/08/2018 02:53 PM, Joshua Harlow wrote: > > Can we get some of those doc links opened. > > 'You need permission to access this published document.' I am getting > for a few of them :( Shoot, I thought we fixed that but I guess we just projected them in the room from a laptop that had access. I've copied the owners of the documents to see if they can open up the permissions. > > Ben Nemec wrote: >> Hi, >> >> Here's my summary of the discussions we had in the Oslo room at the PTG. 
>> Please feel free to reply with any additions if I missed something or >> correct anything I've misrepresented. >> >> oslo.config drivers for secret management >> ----------------------------------------- >> >> The oslo.config implementation is in progress, while the Castellan >> driver still needs to be written. We want to land this early in Rocky as >> it is a significant change in architecture for oslo.config and we want >> it to be well-exercised before release. >> >> There are discussions with the TripleO team around adding support for >> this feature to its deployment tooling and there will be a functional >> test job for the Castellan driver with Custodia. >> >> There is a weekly meeting in #openstack-meeting-3 on Tuesdays at 1600 >> UTC for discussion of this feature. >> >> oslo.config driver implementation: >> https://review.openstack.org/#/c/513844 >> spec: >> https://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html >> >> >> Custodia key management support for Castellan: >> https://review.openstack.org/#/c/515190/ >> >> "stable" libraries >> ------------------ >> >> Some of the Oslo libraries are in a mature state where there are very >> few, if any, meaningful changes to them. With the removal of the >> requirements sync process in Rocky, we may need to change the release >> process for these libraries. My understanding was that there were no >> immediate action items for this, but it was something we need to be >> aware of. >> >> dropping support for mox3 >> ------------------------- >> >> There was some concern that no one from the Oslo team is actually in a >> position to support mox3 if something were to break (such as happened in >> some libraries with Python 3.6). Since there is a community goal to >> remove mox from all OpenStack projects in Rocky this will hopefully not >> be a long-term problem, but there was some discussion that if projects >> needed to keep mox for some reason that they would be asked to provide a >> maintainer for mox3. This topic is kind of on hold pending the outcome >> of the community goal this cycle. >> >> automatic configuration migration on upgrade >> -------------------------------------------- >> >> There is a desire for oslo.config to provide a mechanism to >> automatically migrate deprecated options to their new location on >> version upgrades. This is a fairly complex topic that I can't cover >> adequately in a summary email, but there is a spec proposed at >> https://review.openstack.org/#/c/520043/ and POC changes at >> https://review.openstack.org/#/c/526314/ and >> https://review.openstack.org/#/c/526261/ >> >> One outcome of the discussion was that in the initial version we would >> not try to handle complex migrations, such as the one that happened when >> we combined all of the separate rabbit connection opts into a single >> connection string. To start with we will just raise a warning to the >> user that they need to handle those manually, but a templated or >> hook-based method of automating those migrations could be added as a >> follow-up if there is sufficient demand. >> >> oslo.messaging plans >> -------------------- >> >> There was quite a bit discussed under this topic. I'm going to break it >> down into sub-topics for clarity. >> >> oslo.messaging heartbeats >> ========================= >> >> Everyone seemed to be in favor of this feature, so we anticipate >> development moving forward in Rocky. 
There is an initial patch proposed >> at https://review.openstack.org/546763 >> >> We felt that it should be possible to opt in and out of the feature, and >> that the configuration should be done at the application level. This >> should _not_ be an operator decision as they do not have the knowledge >> to make it sanely. >> >> There was also a desire to have a TTL for messages. >> >> bug cleanup >> =========== >> >> There are quite a few launchpad bugs open against oslo.messaging that >> were reported against old, now unsupported versions. Since we have the >> launchpad bug expirer enabled in Oslo the action item proposed for such >> bugs was to mark them incomplete and ask the reporter to confirm that >> they still occur against a supported version. This way bugs that don't >> reproduce or where the reporter has lost interest will eventually be >> closed automatically, but bugs that do still exist can be updated with >> more current information. >> >> deprecations >> ============ >> >> The Pika driver will be deprecated in Rocky. To our knowledge, no one >> has ever used it and there are no known benefits over the existing >> Rabbit driver. >> >> Once again, the ZeroMQ driver was proposed for deprecation as well. The >> CI jobs for ZMQ have been broken for a while, and there doesn't seem to >> be much interest in maintaining them. Furthermore, the breakage seems to >> be a fundamental problem with the driver that would require non-trivial >> work to fix. >> >> Given that ZMQ has been a consistent pain point in oslo.messaging over >> the past few years, it was proposed that if someone does step forward >> and want to maintain it going forward then we should split the driver >> off into its own library which could then have its own core team and >> iterate independently of oslo.messaging. However, at this time the plan >> is to propose the deprecation and start that discussion first. >> >> CI >> == >> >> Need to migrate oslo.messaging to zuulv3 native jobs. The >> openstackclient library was proposed as a good example of how to do so. >> >> We also want to have voting hybrid messaging jobs (where the >> notification and rpc messages are sent via different backends). We will >> define a devstack job variant that other projects can turn on if desired. >> >> We also want to add amqp1 support to pifpaf for functional testing. >> >> Low level messaging API >> ======================= >> >> A proposal for a new oslo.messaging API to expose lower level messaging >> functionality was proposed. There is a presentation at >> https://docs.google.com/presentation/d/1mCOGwROmpJvsBgCTFKo4PnK6s8DkDVCp1qnRnoKL_Yo/edit?usp=sharing >> >> >> >> This seemed to generally be well-received by the room, and dragonflow >> and neutron reviewers were suggested for the spec. >> >> Kafka >> ===== >> >> Andy Smith gave an update on the status of the Kafka driver. Currently >> it is still experimental, and intended to be used for notifications >> only. There is a presentation with more details in >> https://docs.google.com/presentation/d/e/2PACX-1vQpaSSm7Amk9q4sBEAUi_IpyJ4l07qd3t5T_BPZkdLWfYbtSpSmF7obSB1qRGA65wjiiq2Sb7H2ylJo/pub?start=false&loop=false&delayms=3000&slide=id.p >> >> >> >> testing for Edge/FEMDC use cases >> ================================ >> >> Matthieu Simonin gave a presentation about the testing he has done >> related to messaging in the Edge/FEMDC scenario where messaging targets >> might be widely distributed. 
The slides can be found at >> https://docs.google.com/presentation/d/1LcF8WcihRDOGmOPIU1aUlkFd1XkHXEnaxIoLmRN4iXw/edit#slide=id.p3 >> >> >> >> In short, there is a desire to build clouds that have widely distributed >> nodes such that content can be delivered to users from a location as >> close as possible. This puts a lot of pressure on the messaging layer as >> compute nodes (for example) could be halfway around the world from the >> control nodes, which is problematic for a broker-based system such as >> Rabbit. There is some very interesting data comparing Rabbit with a more >> distributed AMQP1 system based on qpid-dispatch-router. In short, the >> distributed system performed much better for this use case, although >> there was still some concern raised about the memory usage on the client >> side with both drivers. Some followup is needed on the oslo.messaging >> side to make sure we aren't leaking/wasting resources in some messaging >> scenarios. >> >> For further details I suggest taking a look at the presentation. >> >> mutable configuration >> --------------------- >> >> This is also a community goal for Rocky, and Chang Bo is driving its >> adoption. There was some discussion of how to test it, and also that we >> should provide an example of turning on mutability for the debug option >> since that is the target of the community goal. The cinder patch can be >> found here: https://review.openstack.org/#/c/464028/ Turns out it's >> really simple! >> >> Nova is also using this functionality for more complex options related >> to upgrades, so that would be a good place to look for more advanced use >> cases. >> >> Full documentation for the mutable config options is at >> https://docs.openstack.org/oslo.config/latest/reference/mutable.html >> >> The goal status is being tracked in >> https://storyboard.openstack.org/#!/story/2001545 >> >> Chang Bo was also going to talk to Lance about possibly coming up with a >> burndown chart like the one he had for the policy in code work. >> >> oslo healthcheck middleware >> --------------------------- >> >> As this ended up being the only major topic for the afternoon, the >> session was unfortunately lightly attended. However, the self-healing >> SIG was talking about related topics at the same time so we ended up >> moving to that room and had a good discussion. >> >> Overall the feature seemed to be well-received. There is some security >> concern with exposing service information over an unauthenticated >> endpoint, but because there is no authentication supported by the health >> checking functionality in things like Kubernetes or HAProxy this is >> unavoidable. The feature won't be mandatory, so if this exposure is >> unacceptable it can be turned off (with a corresponding loss of >> functionality, of course). >> >> There was also some discussion of dropping the asynchronous nature of >> the checks in the initial version in order to keep the complexity to a >> minimum. Asynchronous testing can always be added later if it proves >> necessary. >> >> The full spec is at https://review.openstack.org/#/c/531456 >> >> oslo.config strict validation >> ----------------------------- >> >> I actually had discussions with multiple people about this during the >> week. In both cases, they were just looking for a minimal amount of >> validation that would catch an error such as "devug=True". Such a >> validation might be fairly simple to write now that we have the >> YAML-based sample config with (ideally) information about all the >> options available to set in a project. It should be possible to compare >> the options set in the config file with the ones listed in the sample >> config and raise warnings for any that don't exist.
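As an aside: a minimal sketch of that check (assuming, purely for illustration, that the machine-readable sample config maps each group name to a list of option dicts with a "name" key; the real generator output format may differ) could look something like this:

    import configparser
    import yaml

    def unknown_options(conf_file, sample_yaml):
        # Flag options set in the deployed config that the sample
        # config has never heard of, e.g. a typo like "devug".
        parser = configparser.ConfigParser()
        parser.read(conf_file)
        with open(sample_yaml) as f:
            sample = yaml.safe_load(f)
        known = {(group, opt['name'])
                 for group, opts in sample.items()
                 for opt in opts}
        return [(group, name)
                for group in parser.sections()
                for name in parser[group]
                if (group, name) not in known]

Anything such a helper returns would just be warned about, not treated as fatal.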
At CERN, we use video conferencing for 100s, > sometimes >1000 participants for the LHC experiments, the trick we've found > is to fully embrace the chat channels (so remote non-native English > speakers can provide input) and chairs/vectors who can summarise the remote > questions constructively, with appropriate priority. > > This is actually very close to the etherpad approach: we benefit from > the local bandwidth if available but do not exclude those who do not have > it (or the language skills to do it in real time). > Just expanding on the phrase 'the etherpad approach', one instance on the > Friday saw some infra team members discussing the gerrit upgrade in > person and one infra team member (snowed in at the same hotel as Doug) > following along on the etherpad as it was updated and weighing in on the > updates (either via the etherpad or irc, I'm not sure, my laptop was not > open). > > So again echoing the chorus, there are possibilities, but those > possibilities require effort and usually prior knowledge of participants > and their habits. > > Thank you, > Anita > > > > > Tim > > -----Original Message----- > From: Doug Hellmann > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > Date: Thursday, 8 March 2018 at 20:00 > To: openstack-dev > Subject: Re: [openstack-dev] Pros and Cons of face-to-face meetings > > Excerpts from Jeremy Stanley's message of 2018-03-08 18:34:51 +0000: > > On 2018-03-08 12:16:18 -0600 (-0600), Jay S Bryant wrote: > > [...] > > > Cinder has been doing this for many years and it has worked > > > relatively well. It requires a good remote speaker and it also > > > requires the people in the room to be sensitive to the needs of > > > those who are remote, i.e. planning topics at a time appropriate > > > for the remote attendees, ensuring everyone speaks up, etc. If > > > everyone, however, works to be inclusive with remote participants > > > it works well. > > > > > > We have even managed to make this work between separate mid-cycles > > > (Cinder and Nova) in the past before we did PTGs. > > [...] > > > > I've seen it work okay when the number of remote participants is > > small and all are relatively known to the in-person participants. > > Even so, bridging Doug into the TC discussion at the PTG was > > challenging for all participants. > > I agree, and I'll point out I was just across town (snowed in at a > different hotel). > > The conversation the previous day with just the 5-6 people on the > release team worked a little bit better, but was still challenging > at times because of audio quality issues. > > So, yes, this can be made to work. It's not trivial, though, and > the degree to which it works depends a lot on the participants on > both sides of the connection. I would not expect us to be very > productive with a large number of people trying to be active in the > conversation remotely.
> > > > Doug > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Mar 8 23:56:40 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 9 Mar 2018 07:56:40 +0800 Subject: [openstack-dev] [cyborg]No Meeting This Week In-Reply-To: <5F883771-AFAD-4B19-BDFF-10818AAFD586@leafe.com> References: <5F883771-AFAD-4B19-BDFF-10818AAFD586@leafe.com> Message-ID: Hi Ed, it should be categorized under openstack-cyborg :) Our weekly meeting is Wed 1500 UTC in the #openstack-cyborg channel. Meeting minutes can be found at: https://wiki.openstack.org/wiki/Cyborg/MeetingLogs On Fri, Mar 9, 2018 at 4:36 AM, Ed Leafe wrote: > On Mar 5, 2018, at 9:51 PM, Zhipeng Huang wrote: > > > As most of us are recuperating from the PTG and Snowpenstack last week, let's > cancel the team meeting this week. In the meantime I have solicited the > meeting summaries from topic leads, and will send out a summary of the > summaries later :) > > When is your meeting? I don't see it listed on > http://eavesdrop.openstack.org. > > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Mar 9 00:31:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 08 Mar 2018 19:31:00 -0500 Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project In-Reply-To: <11033cf1-d80a-ef35-bf0a-97a048ec94ae@redhat.com> References: <1520531849-sup-5340@lrrr.local> <11033cf1-d80a-ef35-bf0a-97a048ec94ae@redhat.com> Message-ID: <1520554645-sup-648@lrrr.local> Excerpts from Zane Bitter's message of 2018-03-08 15:51:05 -0500: > On 08/03/18 12:57, Doug Hellmann wrote: > > Why would the repos be owned by anyone other than the original project > > team?
> > A few reasons I think it makes sense in this instance: > > * Not every set of trademark tests will necessarily belong to a single > project. Tempest itself is an example of this - in fact that's basically > how the QA program came to exist. Vertical-specific trademark programs > are another example that we anticipate in the future. > * Allowing projects to create their own repos means that there's no > co-ordination point to ensure e.g. a consistent naming scheme. Amongst > other things, this could potentially cause confusion about which plugins > are trademark-candidates-only and which are just regular tempest plugins. If these new plugins might contain "candidate" tests and all tests are potentially candidates, how are these new repos different from the existing repos that already contain all of the tests? It seems like at least part of the problem with the current system was triggered by confusion about when to move tests around to satisfy the policy. Can we avoid that problem with the new system? If we're not going to move the tests into Tempest itself and have the QA team manage them, why not simply take the tests from the repos where they already live? > * By registering trademark plugins all in one place it makes it easy to > determine how many there are, which plugins exist (e.g. are there any > extant plugins that are not referenced by refstack? This is a question > you can answer in 20s if they're all registered in the same place.) > * The goal is for maintenance of these plugins to be a collaborative > effort by the project team, the QA team, and RefStack. If the first step > for a project establishing a trademark test plugin involves the project > team reaching out to the QA team then that's a good foot to start on. If > teams create the repos in their own projects and fly under QA's radar > then QA folks might not even be aware that they've become core reviewers > on the repo. I thought the QA team no longer wanted to be responsible for these extra tests. Has that changed again? I've lost track of everyone's positions, I'm afraid. Maybe we could get people to start voting on the actual resolutions so it's easier to keep track of that? As you pointed out earlier, contributors to a repo are allowed to vote in the election for the team lead that owns the repo. We should think through the implications of that fact when we consider who will own these new repos (if we actually need anything new and we can't just use the existing repos). > I guess we have examples of both models in the community... e.g. > puppet-openstack vs. Horizon plugins. I wonder if there are any lessons > we can draw on to see which works better, and when. > > cheers, > Zane. > From xinni.ge1990 at gmail.com Fri Mar 9 03:05:22 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Fri, 9 Mar 2018 12:05:22 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules Message-ID: Hello Horizon Team, I would like to hear your opinions on how to add new xstatic modules to horizon settings. As for the Heat-dashboard project's embedded 3rd-party files issue, thanks for your advice at the Dublin PTG; we are now removing them and referencing them as new xstatic-* libs. So we installed the new xstatic files (not uploaded as openstack official repos yet) in our development environment now, but we are hesitant about how to add the newly installed xstatic lib paths to STATICFILES_DIRS in openstack_dashboard.settings so that the static files can be automatically collected by the *collectstatic* process. Currently Horizon defines BASE_XSTATIC_MODULES in openstack_dashboard/utils/settings.py and the relevant static files are added to STATICFILES_DIRS before it updates any Horizon plugin dashboard. We may want new plugin setting keywords (something similar to ADD_JS_FILES) to update horizon XSTATIC_MODULES (or directly update STATICFILES_DIRS).
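To make the idea concrete, what we are picturing is roughly the following (a hypothetical sketch only; the mapping shown is made up, but xs.base_dir is how an xstatic package exposes its files on disk):

    # resolve a new xstatic-* package's static directory so it can be
    # appended to STATICFILES_DIRS before collectstatic runs
    import xstatic.main
    import xstatic.pkg.angular  # stand-in for one of our new packages

    xs = xstatic.main.XStatic(xstatic.pkg.angular, root_url='/static')
    STATICFILES_DIRS = [
        ('horizon/lib/angular', xs.base_dir),
    ]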
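A rough sketch of what that massage might look like (untested, and it assumes the schema code only needs native str values on Python 2):

    import six

    def massage_strings(values):
        # coerce unicode entries back to native str on Python 2 so that
        # isinstance(value, str) style checks keep passing; no-op on py3
        return [value.encode('utf-8')
                if six.PY2 and isinstance(value, six.text_type)
                else value
                for value in values]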
I don't know what else that will break and I also don't know the details of the contract that the allowed pattern is describing. For example, making it a simple string value would probably also fix it but that isn't a backwards-compatible change. Yours Tony. [1] https://review.openstack.org/#/c/523829/4/murano/tests/unit/packages/hot_package/test_hot_package.py at 114 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From aaronzhu1121 at gmail.com Fri Mar 9 04:52:30 2018 From: aaronzhu1121 at gmail.com (Rong Zhu) Date: Fri, 9 Mar 2018 12:52:30 +0800 Subject: [openstack-dev] [murano][Openstack-stable-maint] Stable check of openstack/murano failed Message-ID: Hi Tony, I will fix this in stable pike, thanks for the reminder. On Fri, Mar 9, 2018 at 12:32 PM, Tony Breeds wrote: > On Thu, Mar 08, 2018 at 06:16:27AM +0000, A mailing list for the OpenStack Stable Branch test reports. wrote: >> Build failed. >> >> - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/murano/stable/pike/build-openstack-sphinx-docs/8b023b7/html/ : SUCCESS in 4m 44s >> - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/murano/stable/pike/openstack-tox-py27/82d0dae/ : FAILURE in 5m 48s > > The job is failing on the periodic-stable pipeline, which indicates that > all changes on pike will hit this same issue. > > There is a fix on master[1] but it's wrong, so rather than backporting > that to pike it'd be great if someone from the murano team could own fixing > this properly. > > Based on my 5 mins of poking it seems that reading the test yaml file is > generating a list of unicode values rather than the expected list of > string_type(). I think the answer is as simple as iterating over the > list and using six.string_types to massage the values. I don't know what > else that will break and I also don't know the details of the contract > that the allowed pattern is describing. > > For example, making it a simple string value would probably also fix it > but that isn't a backwards-compatible change. > > Yours Tony. > > [1] https://review.openstack.org/#/c/523829/4/murano/tests/unit/packages/hot_package/test_hot_package.py at 114 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Rong Zhu From adriant at catalyst.net.nz Fri Mar 9 07:42:14 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Fri, 9 Mar 2018 20:42:14 +1300 Subject: [openstack-dev] [Keystone] Weirdness around domain/project scope in role assignments Message-ID: Sooo to follow up from the discussion last night partly with Lance and Adam, I'm still not exactly sure what difference, if any, there is between a domain scoped role assignment and a project scoped role assignment. And... It appears stuff breaks when you use both, or either actually (more on that further down). My problem/confusion was why the following exists or is possible: http://paste.openstack.org/show/695978/ The amusing part: I now can't remove the above role assignments. They throw a 500: http://paste.openstack.org/show/696013/ The error itself being: http://paste.openstack.org/show/695994/ Then let's look at just project scope: http://paste.openstack.org/show/696007/ I can't seem to do 'include_names' on the project scoped role assignment, but effective works since it doesn't include the project. I have a feeling the error is because keystone isn't including projects with is_domain when doing the names mapping. So... going a little further, does domain scope still act like project scope in regards to effective roles: http://paste.openstack.org/show/695992/ The answer is yes. But again, this is domain scope, not project scope, which still results in project scope down the tree. Although here 'include_names' works, this time because keystone internally is directly checking for is_domain, I assume. Also worth mentioning that the following works (and maybe shouldn't?): http://paste.openstack.org/show/696006/ Alice has a role on a 'project' that isn't part of her domain. I can't add her to a project that isn't in her domain... but I can add her to another domain? That surely isn't expected behavior...
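For reference, the two kinds of assignment I'm comparing can be created with python-keystoneclient roughly like this (a sketch only; the IDs and auth details are placeholders):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://localhost/identity/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    # a domain scoped assignment on domain X
    keystone.roles.grant('ROLE_ID', user='USER_ID', domain='DOMAIN_X_ID')
    # a project scoped assignment on the very same domain, addressed
    # through its is_domain project ID
    keystone.roles.grant('ROLE_ID', user='USER_ID', project='DOMAIN_X_ID')

Both calls succeed today, which is exactly the duality shown in the pastes above.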
Weird broken stuff aside, I'm still not seeing a difference between domain/project role assignment scope on a project that is a domain. Is there a difference that I'm missing, and where is such a difference used? Looking at the blog post Adam linked (https://adam.younglogic.com/2018/02/openstack-hmt-cloudforms/), he isn't really making use of domain scope, just project scope on a domain, and inheritance down the tree, which is indeed a valid and useful case, but again, not domain scope assignment. Although domain scope on the same project would probably (as we see above) achieve the same result. Then looking at the policy he linked: http://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json#n52 "identity:list_projects": "rule:cloud_admin or rule:admin_and_matching_domain_id", - "cloud_admin": "role:admin and (is_admin_project:True or domain_id:admin_domain_id)", - "admin_and_matching_domain_id": "rule:admin_required and domain_id:%(domain_id)s", - "admin_required": "role:admin", I can't exactly see how it also uses domain scope. It still seems to be project scope focused. So my question then is why, on the role assignment object, do we distinguish between a domain/project when it comes to scope when a domain IS a project, and clearly things break when you set both. Can we make it so the following works (a hypothetical example): http://paste.openstack.org/show/696010/ At which point the whole idea of 'domain' scope on a role assignment goes away, since it is exactly the same thing as project scope, and the potential database 500 issues also go away since... there isn't more than 1 row. We can then start phasing out the domain scope stuff and hiding it away unless someone is explicitly still looking for it. Because in reality, right now I think we only have project scope and system scope. Domain scope == project scope, and we should probably make that clear because obviously the code base is confused on that matter. :P From zhipengh512 at gmail.com Fri Mar 9 07:46:06 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 9 Mar 2018 15:46:06 +0800 Subject: [openstack-dev] [cyborg]Dublin Rocky PTG Summary Message-ID: Hi Team, Thanks to our topic leads' efforts, below is the aggregated summary from our Dublin PTG session discussions. Please check it out and feel free to feed back any concerns you might have.

Queens Cycle Review
Etherpad: https://etherpad.openstack.org/p/cyborg-queens-retrospective
1. Adopt an MS-based release method starting in Rocky to avoid chaos
2. Establish subteams alongside the core team that could cover various important aspects
- doc team: lead - Li Liu, yumeng
- release team: lead - howard, zhuli
- driver team: lead - Shaohe, Dutch
3. Intel might consider setting one up for its FPGA card for Cyborg 3rd-party CI support
4. Promote Shaohe as a new core reviewer

Quota and Multi-tenancy Support
Etherpad: https://etherpad.openstack.org/p/cyborg-ptg-rocky-quota
Slide: https://docs.google.com/presentation/d/1DUKWW2vgqUI3Udl4UDvxgJ53Ve5LmyaBpX4u--rVrCc/edit?usp=sharing
1. Provide project- and user-level quota support
2. Treat all resources as the reserved resource type
3. Add a quota engine and quota driver for the quota support
4. Tables: quotas, quota_usage, reservation
5. Transaction operations: reserve, commit, rollback
- Concerns on rollback
- Implement a two-stage reservation and rollback: reserve - commit - rollback (if failed); see the sketch below
6. Experiment with oslo.limit for quota/nested quota support from Keystone (maybe slated for MS3)
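A rough, purely illustrative sketch of that two-stage flow (in-memory only; the real engine would persist to the quotas/quota_usage/reservation tables):

    import uuid

    _reservations = {}
    quota_usage = {'fpga': 0}
    quota_limit = {'fpga': 10}

    def reserve(resource, delta):
        # stage 1: record the intent without committing usage, counting
        # outstanding reservations against the limit as well
        pending = sum(d for res, d in _reservations.values() if res == resource)
        if quota_usage[resource] + pending + delta > quota_limit[resource]:
            raise Exception('quota exceeded for %s' % resource)
        rid = str(uuid.uuid4())
        _reservations[rid] = (resource, delta)
        return rid

    def commit(rid):
        # stage 2a: fold the reservation into the committed usage
        resource, delta = _reservations.pop(rid)
        quota_usage[resource] += delta

    def rollback(rid):
        # stage 2b: drop the reservation, leaving usage untouched
        _reservations.pop(rid, None)

    rid = reserve('fpga', 1)
    try:
        # ... create the accelerator resource here ...
        commit(rid)
    except Exception:
        rollback(rid)
        raise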
Programmability Support
Slides:
Li Liu: https://docs.google.com/presentation/d/1rzecmKhvjAJcfWHPZb8wPkoW6HDGcEHggxV9rUdYINs/edit?usp=sharing
Sundar: https://docs.google.com/presentation/d/1Bc6v_Uis_txxj1awpRuLg5KQsC2xrsqnHb54UttaBQo/edit?usp=sharing
1. Security: 2 dimensions: at-rest/in-use, authentication and/or encryption. Specific crypto algorithms, key lengths and key storage should be left to cloud operators and/or vendors. (Could consider interaction with Barbican, which could be used for key mgmt.)
- At-rest (storage): Can Glance handle any authentication/encryption algorithm that an implementation wants?
- In-use: Transfer from the repository to the compute node should be protected. This means the compute node or the FPGA itself is doing the decryption. Should the actual auth/decrypt be left to the vendor driver?
2. Licensing/policies: A cloud operator may want to set policies on image usage and enforce licenses. I suggest this be left to the implementation as well.
3. Repository: Glance is presumably the default. However, some operators have gone the proprietary way but may want to use a standardized way in the future. Do we want to enable a migration path for these folks to come to OpenStack?
4. Overall flow: ComputeNode <--> [IP Policy Engine] <--> IP Repository. Cyborg can define a standard API for ComputeNode <--> IPPolicyEngine, and PolicyEngine <--> Repository.
5. A strawman for the API:
- Request: Accelerator type, Region type
- Response: Image providing the accelerator type matching the region type
6. What if there is more than one image: A mechanism is needed to pick the most suitable images based on users' requests. Or just return warnings when there are multiple hits.
7. There is broad consensus (and no objections) to allow for the possibility of an 'IP Policy Engine' between the compute node and IP repository (Glance), with well-defined APIs from Cyborg. This is expected to enable the use cases above.
8. Add bitstream_uuid to the kv pair list. This refers to the uuid generated during synthesis time.

More Driver Support
1. Dutch will help lead the Xilinx driver development in the Queens cycle
2. Yumeng will confirm with her team about the clock driver motivation
3. Howard will contact the NVIDIA team for their driver support

Finishing Up Nova Cyborg Interaction
Etherpad: https://etherpad.openstack.org/p/cyborg-ptg-rocky-nova-cyborg-interaction
1. Tentatively agreed flow:
- Cyborg responsible for tracking available FPGA types/hardware and FPGA images/functions
- The flavor will define the FPGA type/hardware, while the image/function will be defined on the glance image. The latter can be restricted to prevent users providing their own images. It should be possible to state the required function/image in the flavor extra specs.
- It is recommended to add traits for image/function capability for each device/region. This may result in a profusion of traits, but that helps Placement do more filtering up front. Having more traits scales better than having Placement return a large list of hosts which subsequent filters/weighers need to handle.
- Placement is used to provide the FPGA type/hardware. This will filter out hosts that don't have the required hardware
- (Optional) Weighers used to attempt to favour hosts whose FPGAs already have the required image/function.
- Once a host has been chosen, the FPGA programming will take place synchronously as part of the instance creation (like VIF, storage creation).
os-acc will define the common interface for how nova can do this wiring
2. Cyborg should get "the resource provider UUID" - which will surely always resolve to the resource provider - rather than the compute hostname, which may or may not
3. Cyborg creates the RPs; nova (in the scheduler in the usual way) creates the allocations. This (allocations by nova) is for both the during-spawn and the post-spawn-attach case
4. An ``os-acc`` lib should be created to provide attach/detach ability for accelerators
- This needs to work for things other than libvirt, please
- Don't assume guest def is XML
- Don't assume sysfs exists
- Don't assume everything is PCI
- something like os-vif
- example of Nova glue to os-vif, note that it's not hypervisor specific: https://github.com/openstack/nova/blob/master/nova/network/os_vif_util.py

Meta Data Standardization
Slide: https://docs.google.com/presentation/d/1rzecmKhvjAJcfWHPZb8wPkoW6HDGcEHggxV9rUdYINs/edit?usp=sharing
1. A standardized set of metadata needs to be associated with bitstream images
2. Utilize the image_properties table in Glance
3. Each metadata item will be stored as a row in this table as a key-value pair: column [name] holds the key whereas column [value] holds the value
4. Cyborg will standardize the key-value convention as follows:

    ==========================================================================================
    | name        | value (example) | nullable | description                                 |
    ==========================================================================================
    | bs-name     | aes-128         | False    | name of the bitstream                       |
    | bs-uuid     | {uuid}          | False    | the uuid generated during synthesis         |
    | vendor      | Xilinx          | False    | vendor of the card                          |
    | board       | KU115           | False    | board type for this bitstream to load      |
    | shell_id    | {uuid}          | True     | required shell bs-uuid for this bitstream   |
    | version     | 1.0             | False    | device version number                       |
    | driver      | SDX             | False    | type of driver for this bitstream           |
    | driver_ver  | 1.0             | False    | driver version                              |
    | driver_path | /path/to/driver | False    | where to retrieve the driver binary         |
    | topology    | {CLOB}          | False    | function topology                           |
    | description | description     | True     | description                                 |
    ==========================================================================================

- [driver_path] specifies the location of the driver installation package for this bitstream
  - All the drivers related to the bitstream should be packaged in a tarball
  - There should be an installation script also packed in this tarball
  - The bitstream metadata will specify where this tarball file is located and send it to Cyborg
  - The vendor driver will untar the file and run the installation script
- [shell_id] This field is a uuid pointing to the required shell bitstream uuid for loading this user logic bitstream. If it is null, this bitstream is a shell bitstream.
- [topology] This field describes the topology of function structures after the bitstream is loaded on the FPGA. In particular, it uses JSON format to visualize how physical functions and virtual functions are related to each other.
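As a hypothetical illustration (the endpoint, token and image UUID are placeholders) of attaching these key-value pairs to a Glance image with python-glanceclient:

    import glanceclient

    glance = glanceclient.Client('2', endpoint='http://controller:9292',
                                 token='ADMIN_TOKEN')
    # store the convention's keys as image properties on the bitstream image
    glance.images.update('BITSTREAM-IMAGE-UUID',
                         **{'bs-name': 'aes-128',
                            'vendor': 'Xilinx',
                            'board': 'KU115',
                            'driver': 'SDX',
                            'driver_ver': '1.0'})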
-- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Fri Mar 9 08:29:13 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 9 Mar 2018 09:29:13 +0100 Subject: [openstack-dev] [cue] Retiring completely Message-ID: <59901da7-171d-1d66-ff30-3402fd437fae@suse.com> cue was retired as an official project in mid-2016 [1], and after talking with the previous PTL, it's time to retire the project completely. I'm proposing the needed changes now, Andreas [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/legacy.yaml#n110 -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From tommylikehu at gmail.com Fri Mar 9 09:26:35 2018 From: tommylikehu at gmail.com (TommyLike Hu) Date: Fri, 09 Mar 2018 09:26:35 +0000 Subject: [openstack-dev] [cinder] [manila] Performance concern on new quota system Message-ID: Hey all, During the Cinder and Manila cross-project discussion on the quota system last week, I raised our concern about the performance impact of the count-resources feature, and Gorka pointed out we might have missed some of the indexes when testing. So I rebuilt the environment and share some test results below:

*Test patch:* Basically this [1] is borrowed from Nova for Cinder, and only the create volume process has been updated to use the new system.

*Test Environment on AWS:*
1. 3 EC2 t2.large (2 vCPU, 8GiB, 80G SSD), each one deployed with the cinder API service with 10 API workers (coordinated by haproxy)
2. 1 RDS db.m4.xlarge (4 vCPU, 16GiB, 200G SSD), MySQL 5.7.19

*Database upgrade:*
A composite index has been added to the volumes table (volumes__composite_index):

    +---------+------------+--------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
    | Table   | Non_unique | Key_name                 | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
    +---------+------------+--------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
    | .....other indexes.....                                                                                                                                                      |
    | volumes | 1          | volumes__composite_index | 1            | project_id     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |               |
    | volumes | 1          | volumes__composite_index | 2            | deleted        | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |               |
    | volumes | 1          | volumes__composite_index | 3            | volume_type_id | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |               |
    +---------+------------+--------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
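For reference, a hypothetical migration sketch (e.g. with alembic; the actual test patch may declare it differently) that creates this index:

    from alembic import op

    def upgrade():
        # composite index matching the WHERE clause of the quota
        # counting queries: project_id + deleted (+ volume_type_id)
        op.create_index('volumes__composite_index', 'volumes',
                        ['project_id', 'deleted', 'volume_type_id'])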
Explain result for one of the SQL statements of the new quota system (*explain select count(*), sum(size) from volumes where project_id={project_id} and volume_type_id={type_id} and deleted=false*):

    +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+
    | id | select_type | table   | partitions | type | possible_keys            | key                      | key_len | ref               | rows | filtered | Extra |
    +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+
    | 1  | SIMPLE      | volumes | NULL       | ref  | volumes__composite_index | volumes__composite_index | 881     | const,const,const | 1    | 100.00   | NULL  |
    +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+

*Time comparison between the two systems (in seconds, Conc = Concurrency):*

*NOTE:*
1. *QUOTAS.check_deltas* stands for the total time consumed, including the two SQL statements below, when creating a single volume:
1. SELECT count(id), sum(size) from volumes where project_id=$(project_id) and deleted=False
2. SELECT count(id), sum(size) from volumes where project_id=$(project_id) and deleted=False and volume_type=$(volume_type)
2. *Quota Reserve/Quota Commit* stands for the total time consumed when executing QUOTA.reserve and QUOTA.commit.

1. Create 1000 volumes in a tenant which has *10000* records in the database and *180000* undeleted records in total: [image: image.png]
2. Create 1000 volumes in a tenant which has *20000* records in the database and *180000* undeleted records in total: [image: image.png]
3. Create 1000 volumes in a tenant which has *30000* records in the database and *180000* undeleted records in total: [image: image.png]
4. Create 1000 volumes in a tenant which has *40000* records in the database and *180000* undeleted records in total: [image: image.png]
5. Create 1000 volumes in a tenant which has *60000* records in the database and *180000* undeleted records in total: [image: image.png]

I only posted some of the test results here, but in general the new system becomes slower as the amount of concurrency or the number of existing volumes in the tenant keeps rising. Also, it seems our current quota system will always beat the new one in performance once there are about *30000* volumes in the tenant. I am a little worried about the performance impact if we replace our current design with the count-resources feature. I could be wrong, and maybe I missed something important during testing, so please let me know if you have any ideas or suggestions.

Thanks
TommyLike

[1]: https://review.openstack.org/#/c/536341/
-------------- next part -------------- [An HTML part and five image.png chart attachments were scrubbed]
From liu.xuefeng1 at zte.com.cn Fri Mar 9 09:28:06 2018 From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn) Date: Fri, 9 Mar 2018 17:28:06 +0800 (CST) Subject: [openstack-dev] [Senlin] Nominate changes in Senlin core team Message-ID: <201803091728063531016@zte.com.cn> Hi team, I would like to propose adding chenyb and DucTruong to the Senlin core team. Chenyb has been working on OpenStack for more than 3 years, with responsibility for integrating Nova, Senlin and Ceilometer into cloud products. He has completed many features and bug fixes for the Senlin project, and he is now the most active non-core contributor on the Senlin group projects. DucTruong works for Blizzard Entertainment; Blizzard is an active user of the Senlin project. Duc and his colleagues have completed some useful features for Senlin, and through these features they have also gained a good understanding of Senlin. Now Duc is an active code reviewer on Senlin. -- Thanks, XueFeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Mar 9 10:28:25 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 9 Mar 2018 11:28:25 +0100 Subject: [openstack-dev] [tc] Technical Committee Status update, March 9th Message-ID: <6c9a19a7-fa90-a5ca-4fd3-18a17029a1e3@openstack.org> Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of all open topics (updated twice a week) at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker If you are working on something (or plan to work on something) governance-related that is not reflected on the tracker yet, please feel free to add to it! == Recently-approved changes == * Add PowerStackers project team [1] * Add resolution about CI for external projects [2] * Add naming poll info for S release [3] * Fix small oversight in Python PTI for tests [4] * Goal updates: glance * New repos: openstack-ansible-nspawn_hosts, openstack-ansible-nspawn_container_create, puppet-monasca, openstack-ansible-os_panko, rally-openstack, ansible-role-redhat-subscription * Removed repo: horizon-cisco-ui [1] https://review.openstack.org/540165 [2] https://review.openstack.org/545065 [3] https://review.openstack.org/545010 [4] https://review.openstack.org/545639 Major news here is the final approval of the PowerStackers project team, dedicated to providing OpenStack support for the POWER CPU architecture: https://governance.openstack.org/tc/reference/projects/powerstackers.html The Technical Committee also approved a resolution to define acceptable usage of OpenStack CI resources for the testing of other projects: https://governance.openstack.org/tc/resolutions/20180215-third-party-check.html == TC track at the PTG == We held a track on Friday at the PTG to discuss Technical Committee initiatives and more generally OpenStack-wide and governance issues. Here is a quick summary of what happened there.
We started with a retrospective of how we got things done so far with this membership. Async collaborations worked well, and the weekly reports were seen as helpful. We should probably try to steer the topic of discussions in office hours a bit more. We also need to more consciously track progress on TC initiatives, with something to stand between the vision and the individual changes posted to implement it. Using a task tracker (StoryBoard) was suggested. We should also make sure to have more than one member on the TC who is able to apply approval rules on the governance repository, so that the chair does not block everything while they take time off. After that we reviewed progress against the vision. Most progress has been made on the "engaging with adjacent communities" front. Constellations are still waiting for their first practical application. Dims volunteered to drive one around containers. The situation on the diversity front has been improving, although we could always do better. We should encourage everyone to push people they see as great candidates to run. As far as growing new leaders goes, we had some success getting new people to step up and champion goals. We need to encourage those (thank-you emails from the Foundation, success story articles). Next topic was reviewing community goals, and how to improve the selection process and wider community engagement around them, as well as how to provide more active support to those who step up to champion them. From there we switched to discussing base services. The addition of etcd to the OpenStack base services has not exactly triggered wide adoption of it yet, for various reasons. We discussed various ways to make it finally pass that critical mass that would make everyone more comfortable leveraging it. Then we switched to reviewing the concept of maintenance mode and our usage of team diversity tags. Maintenance mode was still seen as a useful status to communicate to consumers of OpenStack, and we discussed ways to sustainably maintain "stable" projects over the long run. The team diversity tags, on the other hand, were designed with metrics that fit the 2014 growth pattern, but are not so great in a landscape with more stable components maintained by smaller teams. We might want to replace them with regular (qualitative rather than quantitative) organizational diversity reports that would provide better insights. Final topic of the morning was the Python 2 deprecation process, with next-gen operating systems more and more likely to drop Py2 earlier rather than later. We discussed the current state of the transition and decided to come up with a clearer timeline (some mentioned the need for all of OpenStack to support Python 3 by the T release, Q3 2019).
From there the discussion moved on to the Interop tests location issue (more on that below). Before our brains turned into complete mush, we also discussed the impact on OpenStack of the OpenStack Foundation's support for new strategic focus areas. It creates an opportunity to focus OpenStack on the open cloud infrastructure use case (calling other by-products like our CI/CD system under its own name). However we need to proactively engage with other technical leaders in those areas (like the Kata Containers Arch committee) in order to paint a good complementarity story. For more details, you can find detailed notes on the following etherpad: https://etherpad.openstack.org/p/PTG-Dublin-TC-topics == Under discussion == The PTG rebooted the discussion to clarify how the testing of interoperability programs should be organized in the age of add-on trademark programs. During the TC track there was a new strawman proposal (with agreement from some InteropWG, some QA, some Heat and Designate team members present) to have interop-specific Tempest plugins co-owned by QA/Interop/add-on project team. mugsie amended his proposal accordingly: https://review.openstack.org/#/c/521602/ cdent did post his own simplified variant of the same strawman: https://review.openstack.org/#/c/550571/ An alternative solution is to just say that the InteropWG should be able to pick tests wherever they see fit. The environment has changed over the past 4 years, so strong guidance from the TC as to where to find tests might no longer be needed. mugsie posted this alternate option as: https://review.openstack.org/#/c/550863/ The other hot topic under discussion is mriedem's resolution defining "extended maintenance", as a result of the discussions on Tuesday afternoon's "release cycles vs. downstream consumption models" track. This resolution is trying to strike the right trade-off between encouraging new resources to step up to maintain branches for a longer period of time, avoiding schisms between project stable maint teams and new extended maintenance resources, the need for a common understanding of what we mean by stable branches (no feature backport) and making sure we still test things and do not introduce regressions. Please comment at: https://review.openstack.org/#/c/548916/ == TC member actions for the coming week(s) == We should establish an etherpad to discuss potential Forum sessions we'd like to file for the Vancouver Summit. == Office hours == To be more inclusive of all timezones and more mindful of people for whom English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays For the coming week, I expect discussions to be around the Interop test location resolution(s) and the Extended maintenance proposal. Cheers, -- Thierry Carrez (ttx) From tpb at dyncloud.net Fri Mar 9 10:45:08 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 9 Mar 2018 05:45:08 -0500 Subject: [openstack-dev] [manila][ptg] team photos Message-ID: <20180309104508.fjvijpigbw5uhsv7@barron.net> Please find attached some manila team photos from the Dublin PTG. -------------- next part -------------- [Three team photo attachments (PTGDublin.JPG, DSC_4342.jpeg, dublin-ptg-2018.jpg) and a PGP signature were scrubbed]
From bdobreli at redhat.com Fri Mar 9 11:16:34 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 9 Mar 2018 12:16:34 +0100 Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream In-Reply-To: <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com> References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com> Message-ID: On 3/8/18 6:44 PM, Raoul Scarazzini wrote: > On 08/03/2018 17:03, Adam Spiers wrote: > [...] >> Yes agreed again, this is a strong case for collaboration between the >> self-healing and QA SIGs. In Dublin we also discussed the idea of the >> self-healing and API SIGs collaborating on the related topic of health >> check APIs. > > Guys, thanks a ton for your involvement in the topic, I am +1 to any > kind of meeting we can have to discuss this (like it was proposed by Please count me in as well. I can't stop dreaming of Jepsen's Nemesis [0] hammering OpenStack to make it stronger :D Jokes aside, let's do our best to consolidate on frameworks and tools and ditch NIH syndrome! [0] https://github.com/jepsen-io/jepsen/blob/master/jepsen/src/jepsen/nemesis.clj > Adam) so I'll offer my bluejeans channel for whatever kind of meeting we > want to organize. > About the best practices part Georg was mentioning I'm 100% in > agreement: the testing methodologies are the first thing we need to care > about, starting from what we want to achieve. > That said, I'll keep studying Yardstick. > > Hope to hear from you soon, and thanks again! > -- Best regards, Bogdan Dobrelya, Irc #bogdando From gmann at ghanshyammann.com Fri Mar 9 11:21:44 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 9 Mar 2018 20:21:44 +0900 Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project In-Reply-To: References: <1520531849-sup-5340@lrrr.local> Message-ID: On Fri, Mar 9, 2018 at 4:34 AM, Rico Lin wrote: > >> >> Why would the repos be owned by anyone other than the original project >> team? >> > For normal tempest tests, which are owned and maintained by the original projects. > I think there were discussions in that PTG QA session about whether interop tests > should be maintained by the QA team. >> > > In the new resolution, we can make sure the QA team and project teams will share > their obligations to the interop testing structure together (isn't that just > like how the current tempest plugin structure works?). > And allow the interop team to focus on the interop structure (ideally not the tests > themselves). > > I agree with Zane, that we really want all 3 teams to contribute to reviews, > since they each bring different expertise to form this interop structure. Big +1. The QA team will always be there to review and spend time on setting up the guidelines and best practices for interop tests. We have maintained such guidelines for tempest interop tests for 2-3 years or longer and they work perfectly.
We have maintained such guidelines for Tempest interop tests for 2-3 years or longer, and they work well.

-gmann

>
> Rico Lin
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From balazs.gibizer at ericsson.com Fri Mar 9 11:27:16 2018
From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer)
Date: Fri, 9 Mar 2018 12:27:16 +0100
Subject: [openstack-dev] [nova][placement] PTG Summary and Rocky Priorities
In-Reply-To: 
References: <9b6b4b7e-02d7-28e0-8d6d-53e1849827f8@gmail.com>
Message-ID: <1520594836.7809.5@smtp.office365.com>

>> - Multiple agreements about strict minimum bandwidth support feature
>> in nova - Spec has already been updated accordingly:
>> https://review.openstack.org/#/c/502306/
>>
>> - For now we keep the hostname as the information connecting the
>> nova-compute and the neutron-agent on the same host but we are
>> aiming for having the hostname as an FQDN to avoid possible
>> ambiguity.
>>
>> - We agreed not to make this feature dependent on moving the nova
>> port create to the conductor. The current scope is to support
>> pre-created neutron port only.
>
> I could rat-hole in the spec, but figured it would be good to also
> mention it here. When we were talking about this in Dublin, someone
> also mentioned that depending on the network on which nova-compute
> creates a port, the port could have a QoS policy applied to it for
> bandwidth, and then nova-compute would need to allocate resources in
> Placement for that port (with the instance as the consumer). So then
> we'd be doing allocations both in the scheduler for pre-created ports
> and in the compute for ports that nova creates. So the scope
> statement here isn't entirely true, and leaves us with some technical
> debt until we move port creation to conductor. Or am I missing
> something?

I was sloppy and did not include all the details here. The spec goes into a lot more detail about what needs to be supported in the first iteration and how [1]. I still think that moving the port creation to the conductor is not a hard dependency of the first iteration of this feature. I also feel that we agreed on this at the PTG.

Cheers,
gibi

[1] https://review.openstack.org/#/c/502306/15/specs/rocky/approved/bandwidth-resource-provider.rst at 111

From gmann at ghanshyammann.com Fri Mar 9 11:47:26 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 9 Mar 2018 20:47:26 +0900
Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project
In-Reply-To: <1520554645-sup-648@lrrr.local>
References: <1520531849-sup-5340@lrrr.local> <11033cf1-d80a-ef35-bf0a-97a048ec94ae@redhat.com> <1520554645-sup-648@lrrr.local>
Message-ID: 

On Fri, Mar 9, 2018 at 9:31 AM, Doug Hellmann wrote:
> Excerpts from Zane Bitter's message of 2018-03-08 15:51:05 -0500:
>> On 08/03/18 12:57, Doug Hellmann wrote:
>> > Why would the repos be owned by anyone other than the original project
>> > team?
>>
>> A few reasons I think it makes sense in this instance:
>>
>> * Not every set of trademark tests will necessarily belong to a single
>> project. Tempest itself is an example of this - in fact that's basically
>> how the QA program came to exist.
>> Vertical-specific trademark programs are another example that we
>> anticipate in the future.
>> * Allowing projects to create their own repos means that there's no
>> co-ordination point to ensure e.g. a consistent naming scheme. Amongst
>> other things, this could potentially cause confusion about which plugins
>> are trademark-candidates-only and which are just regular tempest plugins.
>
> If these new plugins might contain "candidate" tests and all tests
> are potentially candidates, how are these new repos different from
> the existing repos that already contain all of the tests? It seems
> like at least part of the problem with the current system was
> triggered by confusion about when to move tests around to satisfy
> the policy. Can we avoid that problem with the new system? If we're
> not going to move the tests into Tempest itself and have the QA
> team manage them, why not simply take the tests from the repos where
> they already live?

I completely agree with this. If the tests were going to move into Tempest, then they would all be QA things and we would own them completely, but that is not the case now, as not all projects are OK with doing that. Otherwise, if interop is going to use tests from plugins, then it should just consume the tests from their current location. For other future interop programs like NFV, HA etc., tests can live in a new repo, or if interop finds any QA project from which it can consume tests, then it can use those QA projects; an example is Extreme testing [1], which is still under review though.

>
>> * By registering trademark plugins all in one place it makes it easy to
>> determine how many there are, which plugins exist (e.g. are there any
>> extant plugins that are not referenced by refstack? This is a question
>> you can answer in 20s if they're all registered in the same place.)
>> * The goal is for maintenance of these plugins to be a collaborative
>> effort by the project team, the QA team, and RefStack. If the first step
>> for a project establishing a trademark test plugin involves the project
>> team reaching out to the QA team then that's a good foot to start on. If
>> teams create the repos in their own projects and fly under QA's radar
>> then QA folks might not even be aware that they've become core reviewers
>> on the repo.
>
> I thought the QA team no longer wanted to be responsible for these
> extra tests. Has that changed again? I've lost track of everyone's
> positions, I'm afraid. Maybe we could get people to start voting
> on the actual resolutions so it's easier to keep track of that?
>
> As you pointed out earlier, when contributors to a repo are allowed
> to vote in the election for the team lead that owns the repo. We
> should think through the implications of that fact when we consider
> who will own these new repos (if we actually need anything new and
> we can't just use the existing repos).

For me, the voting things do not matter much. What I see overall is that interop is ready to consume tests from different places, and the QA team is ready to share its review bandwidth with the interop and project teams to review the interop tests irrespective of test location, and to help build processes/guidelines around consistency, do-not-change-tests etc. And we will make sure that happens. We already follow such a practice for tempest plugin tests (fixing them, using the right interfaces there), and in Rocky we are going to start a program to stabilize Tempest plugins, where we will grep all plugins for best practices and the right way to set up and use interfaces.
Anyway, the current proposed version of the resolution looks OK to me - https://review.openstack.org/#/c/443504/

>> I guess we have examples of both models in the community... e.g.
>> puppet-openstack vs. Horizon plugins. I wonder if there are any lessons
>> we can draw on to see which works better, and when.
>>
>> cheers,
>> Zane.

[1] https://review.openstack.org/#/c/443504/

-gmann

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From geguileo at redhat.com Fri Mar 9 12:11:02 2018
From: geguileo at redhat.com (Gorka Eguileor)
Date: Fri, 9 Mar 2018 13:11:02 +0100
Subject: [openstack-dev] [cinder] [manila] Performance concern on new quota system
In-Reply-To: 
References: 
Message-ID: <20180309121102.by2qgasphnz3lvyq@localhost>

On 09/03, TommyLike Hu wrote:
> Hey all,
> During the Cinder and Manila's cross project discussion on the quota system
> last week, I raised our concern about the performance impact of the count
> resource feature, and Gorka pointed out we might have missed some of the
> indexes when testing. So I reshaped the environment and am sharing some
> test results below:
>
> *Test patch:*
> Basically this [1] is borrowed from Nova for Cinder, and only the
> create volume process has been updated to use the new system.
>
> *Test Environment on AWS:*
> 1. 3 EC2 t2.large (2 vCPU, 8 GiB, 80G SSD), each one deployed with the
> cinder API service with 10 API workers (coordinated by haproxy)
> 2. 1 RDS db.m4.xlarge (4 vCPU, 16 GiB, 200G SSD), MySQL 5.7.19
>
> *Database upgrade:*
> A composite index has been added to the volume table (volumes__composite_index):
>
> +---------+------------+--------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
> | Table   | Non_unique | Key_name                 | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
> +---------+------------+--------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
> | .....other indexes.....                                                                                                                                                       |
> | volumes | 1          | volumes__composite_index | 1            | project_id     | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |               |
> | volumes | 1          | volumes__composite_index | 2            | deleted        | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |               |
> | volumes | 1          | volumes__composite_index | 3            | volume_type_id | A         | 2           | NULL     | NULL   | YES  | BTREE      |         |               |
> +---------+------------+--------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
>
> The EXPLAIN result for one of the SQL statements of the new quota system
> (*explain select count(*), sum(size) from volumes where project_id={project_id} and
> volume_type_id={type_id} and deleted=false*):
>
> +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+
> | id | select_type | table   | partitions | type | possible_keys            | key                      | key_len | ref               | rows | filtered | Extra |
> +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+
> | 1  | SIMPLE      | volumes | NULL       | ref  | volumes__composite_index | volumes__composite_index | 881     | const,const,const | 1    | 100.00   | NULL  |
> +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+
>
> *Time comparison between the two systems (in seconds; Conc = concurrency):*
>
> *NOTE:*
> 1. *QUOTAS.check_deltas* stands for the total time consumed, including the
> two SQL statements below, when creating a single volume:
>
>    1. SELECT count(id), sum(size) from volumes where
>       project_id=$(project_id) and deleted=False
>
>    2. SELECT count(id), sum(size) from volumes where
>       project_id=$(project_id) and deleted=False and volume_type=$(volume_type)

Hi,

As I see it there are 3 questions here:

1. Do we need to change the quota system?

   I believe we all agree that the quota system in Cinder is bad, so bad
   that we can divide Cinder deployments into 2 clear categories: those
   that don't use quotas and those that have out-of-sync quotas.

2. Will the DB implementation used by Nova solve our current problems?

   As the Nova team kindly explained to us, the new quotas may not be
   perfect for limiting (we may allow going slightly above the allowed
   quota or fall slightly short), but it will at least solve our problem
   of having out-of-sync quotas that require manually changing the DB to
   fix.

3. Will the new solution introduce new problems?

   To me introducing a small performance impact on resource creation is
   an acceptable trade-off compared with the alternative, moreover
   considering that most resource creation procedures are somewhat slow
   operations.

   And let's not forget that we can always look for more efficient ways
   of doing the counting:

   - Using a single query to retrieve both counts and sums instead of 2
     queries.

   - DB triggers to do the actual counting.
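For the single-query idea, a rough and untested sketch (assuming the usual SQLAlchemy volume model from cinder.db) that folds both usage checks into one round trip using conditional aggregation:

    # Untested sketch: one query returns both the project-wide and the
    # per-type usage, instead of issuing the two SELECTs separately.
    from sqlalchemy import case, func

    def volume_usage(session, vol_model, project_id, volume_type_id):
        type_id = case([(vol_model.volume_type_id == volume_type_id,
                         vol_model.id)])
        type_size = case([(vol_model.volume_type_id == volume_type_id,
                           vol_model.size)], else_=0)
        return session.query(
            func.count(vol_model.id),                    # total count
            func.coalesce(func.sum(vol_model.size), 0),  # total size
            func.count(type_id),                         # per-type count
            func.sum(type_size),                         # per-type size
        ).filter(vol_model.project_id == project_id,
                 vol_model.deleted == False).one()  # noqa: E712

The case() expression yields NULL for non-matching rows and count() ignores NULLs, so the per-type figures stay correct while the table is only scanned once.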
To me, comparing the performance of something that doesn't work with something that does doesn't seem fair.

Cheers,
Gorka.

> 2. *Quota Reserve/Quota Commit* stands for the total time consumed when
>    executing QUOTA.reserve and QUOTA.commit.
>
> 1. Create 1000 volumes in a tenant which has *10000* records in the
>    database and *180000* undeleted records in total:
> [image: image.png]
> 2. Create 1000 volumes in a tenant which has *20000* records in the
>    database and *180000* undeleted records in total:
> [image: image.png]
> 3. Create 1000 volumes in a tenant which has *30000* records in the
>    database and *180000* undeleted records in total:
> [image: image.png]
> 4. Create 1000 volumes in a tenant which has *40000* records in the
>    database and *180000* undeleted records in total:
> [image: image.png]
> 5. Create 1000 volumes in a tenant which has *60000* records in the
>    database and *180000* undeleted records in total:
> [image: image.png]
>
> I only posted some of the test results here, but in general the new system
> becomes slower as the amount of concurrency or the number of existing
> volumes in the tenant keeps rising. Also, it seems our current quota system
> will always beat the new one in performance when there are about *30000*
> volumes in the tenant.
>
> I am a little worried about the performance impact if we replace our
> current design with the count-resource feature. I could be wrong and may
> have missed something important during testing, so please let me know if
> you have any ideas or suggestions.
>
> Thanks
> TommyLike
>
> [1]: https://review.openstack.org/#/c/536341/

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From balazs.gibizer at ericsson.com Fri Mar 9 12:26:28 2018
From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer)
Date: Fri, 9 Mar 2018 13:26:28 +0100
Subject: [openstack-dev] [nova][notification] Full traceback in ExceptionPayload
Message-ID: <1520598388.7809.6@smtp.office365.com>

Hi,

At the PTG the question was raised [1] of why we don't have the full traceback in the versioned error notifications, as the legacy notifications have it. I dug into the past and found out that this difference was intentional. During the original versioned notification spec review [2] there were a couple of rounds of back and forth about what to add to the ExceptionPayload and what not. I think the main reasons not to add the full traceback were that it could not be well defined what goes in that field (it would have been a single serialized string), and possible security implications. Then in the review we ended up agreeing on the ExceptionPayload structure [3] that was later implemented and merged.

The instance-action REST API already provides the traceback to the user (to the admin by default), and the notifications are also admin-only things, as they are emitted to the message bus by default. So I assume that security is not a bigger concern for the notifications than for the REST API. So I think the only issue we have to accept is that the traceback object in the ExceptionPayload will not be a well-defined field but a simple string containing a serialized traceback.

If there is no objection then Kevin or I can file a specless bp to extend the ExceptionPayload.

Cheers,
gibi

[1] L387 in https://etherpad.openstack.org/p/nova-ptg-rocky
[2] https://review.openstack.org/#/c/286675/
[3] https://review.openstack.org/#/c/286675/12/specs/newton/approved/versioned-notification-transformation.rst at 405

From cdent+os at anticdent.org Fri Mar 9 12:46:12 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Fri, 9 Mar 2018 12:46:12 +0000 (GMT)
Subject: [openstack-dev] [nova] [placement] placement update 18-10
Message-ID: 

Welcome back to your regularly scheduled placement update.
I'm still gathering loose ends from the PTG, so plenty of this will be context setting, but I'll try to provide the usual too many links. As mentioned in the last update I'm going to adjust these reports a bit to make them a bit more focused and also (I hope) shrink the amount of time it takes to create them. The main visible changes are: * The name: it will just be "placement update" now as resource providers doesn't really cover it. * I'm going to focus more on reviews and specs that are directly focused on how placement is changing or used, and less on those that are "nova using placement as it is". This is not to suggest that that work is in any way less important, simply an acceptance that there's a lot of work in progress and this can't cover all of it. Also there's a useful boundary there that we want to keep as strong as possible. So, for example, the way in which minimum bandwidth requirements will use traits is not in, but the fact that that work may require a way to merge traits is. * I'm adding a section on placement extraction. This is one of my primary occupations at the moment, so I'd like to keep track of it somewhere, here seems good. That work also informs the "useful boundary" stuff above. * A questions section has been added for things that seem important but I don't know about, discovered while creating these things. # Most Important Jay posted a good review of what happened at the PTG and how it will impact priorities: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128041.html There are few specs that came out of that, or were already in progress, listed below. Some of the items in Jay's message are TODOs that need a volunteer to blueprint and spec. In the meantime many things are dependent on the update provider tree work, so getting that merged sooner than later is important. # What's Changed A big new concept from the PTG is the idea of consumer uuid's getting a generation so that allocations for a single consumer can be managed from multiple parties and those parties can "confirm their view". There's also code in progress such that a generation is available immediately when first creating a resource provider. And code to ensure that generations are used when managing aggregates. There's a bit of theme here. # Questions What's the status of shared resource providers? Did we even talk about that in Dublin? # Bugs * Placement related bugs without owners: https://goo.gl/TgiPXb * In progress placement bugs: https://goo.gl/vzGGDQ # Specs * https://review.openstack.org/#/c/497733/2 Report CPU features to placement service by traits API * https://review.openstack.org/#/c/548903/ Return Generation from Resource Provider Creation * https://review.openstack.org/#/c/550244/ Propose standardized provider descriptor file * https://review.openstack.org/#/c/548915/ Express forbidden traits in placement API * https://review.openstack.org/#/c/549067/ VMware: place instances on resource pool (using update_provider_tree) * https://review.openstack.org/#/c/549184/ Spec: report client placement version discovery * https://review.openstack.org/#/c/548237/ Update placement aggregates spec to clarify generation handling * https://review.openstack.org/#/c/418393/ Provide error codes for placement API * https://review.openstack.org/#/c/502575/ WIP:Specification for using cache as a resource using cache allocation support (This is something that probably should be more placement oriented than it currently is.) 
* https://review.openstack.org/#/c/545057/
  mirror nova host aggregates to placement API
* https://review.openstack.org/#/c/541507/
  Support traits in Glance

# Main Themes

## Update Provider Tree

The ability of virt drivers to represent what resource providers they know about--whether that be NUMA, or clustered resources--is supported by the update_provider_tree method. Part of it is done, but some details remain:

https://review.openstack.org/#/q/topic:bp/update-provider-tree

## Request Filters

These are a way for the nova scheduler to doctor the request being sent to placement, using a sane interface.

https://review.openstack.org/#/q/topic:bp/placement-req-filter

## Mirror nova host aggregates to placement

This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates.

https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates

## Forbidden Traits

A way of expressing "I'd like resources that do _not_ have trait X". Just a spec so far:

https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits

## Consumer Generations

Not yet started.

# Extraction

I wrote up an email with the current state of and plan for extracting placement to its own project:

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128004.html

There are plenty of volunteer opportunities in there. One fairly major task is to create an os-resource-classes lib, akin to os-traits.

Related code:

* move resource provider objects https://review.openstack.org/#/c/540049/
  (The base of that stack needs to be split into smaller pieces.)

# Other

* https://review.openstack.org/#/c/546660/
  Purge comp_node and res_prvdr records during deletion of cells/hosts
* https://review.openstack.org/#/c/545729/
  Set [scheduler]workers=$API_WORKERS
* https://review.openstack.org/#/c/159382/
  Scheduler multiple workers support
* https://review.openstack.org/#/c/547812/
  Migrate legacy-osc-placement-dsvm-functional job in-tree
* https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky
  A huge pile of improvements to osc-placement
* https://review.openstack.org/#/q/topic:bp/generation-from-create-provider
  Get a generation when posting to create a new rp
* https://review.openstack.org/#/c/548249/
  placement: generation in provider aggregate APIs
* https://review.openstack.org/#/c/548983/
  report client: placement API version discovery
* https://review.openstack.org/#/c/546713/
  Add compute capabilities traits (to os-traits)
* https://review.openstack.org/#/c/550873/
  Add HW_NIC_SRIOV_TRUSTED trait
* https://review.openstack.org/#/c/532924/
  Add default values for allocation ratios
* https://review.openstack.org/#/c/524425/
  General policy sample file for placement
* https://review.openstack.org/#/c/546177/
  Provide framework for setting placement error codes

# End

Feh, that certainly didn't end up any smaller. Mostly because of specs. Go read some specs!

-- 
Chris Dent                      (⊙_⊙')         https://anticdent.org/
freenode: cdent                                     tw: @anticdent

From liyi8611 at gmail.com Fri Mar 9 12:44:53 2018
From: liyi8611 at gmail.com (Lee Yi)
Date: Fri, 9 Mar 2018 20:44:53 +0800
Subject: [openstack-dev] [Senlin] Nominate changing in Senlin core team
In-Reply-To: <201803091728063531016@zte.com.cn>
References: <201803091728063531016@zte.com.cn>
Message-ID: <8B5AC789-162E-4F4B-9821-86734D615CC4@gmail.com>

+1
Thank you chenyb and DucTruong for your amazing contribution!

-----------------------------------
Lee Yi / Fiberhome Corp.
liyi8611 at gmail.com

> On 9 Mar 2018, at 5:28 PM, liu.xuefeng1 at zte.com.cn wrote:
>
> Hi team,
>
> I would like to propose adding chenyb and DucTruong to the Senlin core team.
>
> Chenyb has been working on OpenStack for more than 3 years, with
> responsibility for integrating Nova, Senlin and Ceilometer into cloud
> production. He has delivered many features and bug fixes for the Senlin
> project, and he is now the most active non-core contributor on the Senlin
> group projects.
>
> DucTruong works for Blizzard Entertainment, which is an active user of the
> Senlin project. Duc and his colleagues have delivered some useful features
> for Senlin, and through these features they also gained a good understanding
> of Senlin. Duc is now an active code reviewer on Senlin.
>
> --
> Thanks
> XueFeng
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From tommylikehu at gmail.com Fri Mar 9 13:37:43 2018
From: tommylikehu at gmail.com (TommyLike Hu)
Date: Fri, 09 Mar 2018 13:37:43 +0000
Subject: [openstack-dev] [cinder] [manila] Performance concern on new quota system
In-Reply-To: <20180309121102.by2qgasphnz3lvyq@localhost>
References: <20180309121102.by2qgasphnz3lvyq@localhost>
Message-ID: 

Thanks Gorka,

To be clear, I started this discussion not because I reject this feature; instead, I like it, as it's much cleaner and simpler and, the performance impact aside, it solves several other issues which we hate badly. I wrote this to point out that we may have this issue, and to see whether we could improve it before it's actually landed. Better is better :)

*- Using a single query to retrieve both counts and sums instead of 2 queries.*

For this advice, I think I already combined count and sum into a single query.

- DB triggers to do the actual counting.

This seems a good idea, but I am not sure whether it could cover all of the cases we have in our quota system, or whether it can easily be integrated into Cinder. Can you share more detail on this?

Thanks
TommyLike

Gorka Eguileor wrote on Fri, Mar 9, 2018 at 8:11 PM:
> On 09/03, TommyLike Hu wrote:
> > Hey all,
> > During the Cinder and Manila's cross project discussion on the quota system
> > last week, I raised our concern about the performance impact of the count
> > resource feature, and Gorka pointed out we might have missed some of the
> > indexes when testing. So I reshaped the environment and am sharing some
> > test results below:
> >
> > *Test patch:*
> > Basically this [1] is borrowed from Nova for Cinder, and only the
> > create volume process has been updated to use the new system.
> >
> > *Test Environment on AWS:*
> > 1. 3 EC2 t2.large (2 vCPU, 8 GiB, 80G SSD), each one deployed with the
> > cinder API service with 10 API workers (coordinated by haproxy)
> > 2.
1 RDS db.m4.xlarge(4 vCPU, 16GiB,200G SSD) MySQL 5.7.19 > > > > *Database upgrade:* > > Composite index has been added to volume table > (volumes__composite_index): > > > +---------+------------+--------------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ > > | Table | Non_unique | Key_name | Seq_in_index | > > Column_name | Collation | Cardinality | Sub_part | Packed | Null > | > > Index_type | Comment | Index_comment | > > > +---------+------------+--------------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ > > | .....other index....... | > > | volumes | 1 | volumes__composite_index | 1 > | > > project_id | A | 2 | NULL | NULL | YES > | > > BTREE | | | > > | volumes | 1 | volumes__compoite_index | > 2 | > > deleted | A | 2 | NULL | NULL | YES > | > > BTREE | | | > > | volumes | 1 | volumes__composite_index | 3 > | > > volume_type_id | A | 2 | NULL | NULL | YES > | > > BTREE | | | > > > +---------+------------+--------------------------+--------------+---------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ > > Explain result for one of the sql statements for new quota system > (*explain > > select count(*), sum(size) from volumes where project_id={project_id} and > > volume_type_id={type_id} and deleted=false*): > > > > > +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+ > > | id | select_type | table | partitions | type | possible_keys > > | key | key_len | ref | rows | > filtered > > | Extra | > > > > > +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+ > > | 1 | SIMPLE | volumes | NULL | ref | > volumes__composite_index > > | volumes__composite_index | 881 | const,const,const | 1 | > 100.00 > > | NULL | > > > > > +----+-------------+---------+------------+------+--------------------------+--------------------------+---------+-------------------+------+----------+-------+ > > > > *Time comparsion between two system (in Seconds, * *Conc = > Concurrency**):* > > > > *NOTE**:* > > > > 1. *QUOTAS.check_deltas* stands for the total time consumed > > including two sql statements as below when creating single volume: > > > > 1. SELECT count(id), sum(size) from volumes where > > project_id=$(project_id) and deleted=False > > > > 2. SELECT count(id), sum(size) from volumes where > > project_id=$(project_id) and deleted=False and volume_type=$(volume_type) > > > > Hi, > > As I see it there are 3 questions here: > > 1. Do we need to change the quota system? > > I believe we all agree that the Quota system in Cinder is bad, so bad > that we can divide Cinder deployments in 2 clear categories, those > that don't use quotas and those that have out of sync quotas. > > 2. Will the DB implementation used by Nova solve our current problems? > > As the Nova team kindly explained us, the new quotas may not be > perfect for limiting (we may allow going slightly above allowed quota > or just get short), but it will at least solve our problems of having > out of sync quotas that require manually changing the DB to fix it. > > 3. Will the new solution introduce new problems? 
> > To me introducing a small performance impact on resource creation is > an acceptable trade-off compared with the alternative, moreover > considering that most resource creation procedures are somewhat slow > operations. > > And let's not forget that we can always look for more efficient ways > of doing the counting: > > - Using a single query to retrieve both counts and sums instead of 2 > queries. > > - DB triggers to do the actual counting. > > To me comparing the performance of something that doesn't work with > something that does doesn't seem fair. > > Cheers, > Gorka. > > > > > 2. *Quota Reserve/Quota Commit* stands for total time consumed when > > executing QUOTA.reserve and QUOTA.commit. > > > > 1. Create 1000 volumes in tenant which has *10000* records in database > > and *180000 > > *undeleted records in total: > > [image: image.png] > > 2. Create 1000 volumes in tenant which has *20000* records in > > database and *180000 > > *undeleted records in total: > > > > [image: image.png] > > 3. Create 1000 volumes in tenant which has *30000* records in > > database and *180000 > > *undeleted records in total: > > [image: image.png] > > 4. Create 1000 volumes in tenant which has* 40000* records in > > database and *180000 > > *undeleted records in total: > > > > [image: image.png] > > 5. Create 1000 volumes in tenant which has* 60000* records in > > database and *180000 > > *undeleted records in total: > > [image: image.png] > > I only posted some of the test results here, but in general, the new > system > > will become slower when the amount of concurrency or existing volumes in > > tenant keeps raising. Also it seems our current quota system will always > > beat the new one in performance when there are about *30000 *volumes in > > tenant. > > > > I am a little worried about the performance impact if we replace our > > current design with count resource feature, and I could be wrong, maybe I > > missed something important during testing, please let me know if you have > > any idea or suggestion. > > > > Thanks > > TommyLike > > > > [1]: https://review.openstack.org/#/c/536341/ > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.ames at canonical.com Fri Mar 9 13:46:28 2018 From: david.ames at canonical.com (David Ames) Date: Fri, 9 Mar 2018 14:46:28 +0100 Subject: [openstack-dev] [charms] 18.02 OpenStack Charms release Message-ID: Announcing the 18.02 release of the OpenStack Charms. The 18.02 charms have full support for the Queens OpenStack release. 112 bugs have been fixed and released across the OpenStack charms. 
For full details of the release, please refer to the release notes: https://docs.openstack.org/charm-guide/latest/18.02.html Thanks go to the following contributors for this release: Anton Kremenetsky Billy Olsen Corey Bryant David Ames Dmitrii Shcherbakov Edward Hope-Morley Felipe Reyes Frode Nordahl James Page Jason Hobbs Jill Rouleau Liam Young Nobuto Murata Ryan Beisner Seyeong Kim Tytus Kurek Xav Paice From major at mhtx.net Fri Mar 9 13:54:24 2018 From: major at mhtx.net (Major Hayden) Date: Fri, 9 Mar 2018 07:54:24 -0600 Subject: [openstack-dev] Going but not gone Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello there, I'm leaving my current role for a new opportunity and, unfortunately, this means I won't be as involved in OpenStack as much in the near future. I've spoken with our fearless OpenStack-Ansible PTL and I let JP know that I will resign from the core reviewers group immediately if I feel that I cannot meet the obligations of the role. With that said, the OpenStack community has been truly amazing. My first humble contribution[0] was a fix for broken glance tests back in 2011. I've done a little more since then and I'm proud to be a tiny part of what OpenStack has become today. I'd like to thank everyone who has reviewed one of my patches, fixed one of the bugs I created with my patches, and fixed the gate jobs that I broke with my patches. Thanks to everyone who has attended one of my talks at the Summits and thanks to everyone who has put up with my oddball suggestions at Design Summits, Forums, and PTGs. I have learned an *incredible* amount about OpenStack, Python, Linux, open source, communities, and how to be a better human. Thanks to the leaders of the OpenStack Foundation as well for their continued support. They have been excellent listeners and they took lots of time to consider my suggestions for improvements. I love you all and working in this community has been one of the best experiences in my professional career. :) [0] https://review.openstack.org/#/c/2652/ - -- Major Hayden -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEG/mSZJWWADNpjCUrc3BR4MEBH7EFAlqikg0ACgkQc3BR4MEB H7HN9Q/+PKC0TpfosAcZwotuVoSncoJc5D3RDL6RgO09Vm1xbI84BWkv6b6tJz4/ SvBmiqR7LtXUQDN1yiDg1g8Bq8gNKJO7E0hW7WqRE5rJmXAX2Gpx80pQ04mO0LBv 21OaeJSGElT5MdQYu/wz6oP8iNwjAqUaU7b/BZFXcGgpA+S9qDMaQCMK/EXnrodd hsDbBxtOridNk9j7SefgwIGZKOr4gdPCxvqnTfj0/X5Cjb+OfMU4rU6dRSIoVaiz JVrwZr7DVVyvJmF5JFtpsOJGS9SF7YkOJKia3BsmCnJWeNm9+r1n2XjSXHY240tQ gjNfqgvWbyaLddm+8ZMC77zsZu3Kaf4M2ta9F95K0/PlsShoZYBCDso23aDRsjps czR3RjT51bdGdEDNhpJkimHQLLFqrvO6NRfg6Azf+Wii3/POrtez60Nx49SQgBul PTB/i+mHl44Yn9R2VpWgqKM+WMixRxD75SRyOlDXrU0setUv/91Hz+x32cqeeiX0 C8mWOPh9POOdQPLeIalR2E4F9//CFv4nWZNSjpwIEEeXLd/Mlkyf2ue7ye+1s/5U JYo2wygRLEiLimacaoEyTRguR5/QsKtMieqKKfIYQglQDQkulWhhxOeqJmkpP10p xQp11b/GIwrXA4wVi5KA3hQEB/ST/2ENvTO76e/oGW41RK9S0gw= =5+cM -----END PGP SIGNATURE----- From mnaser at vexxhost.com Fri Mar 9 14:00:39 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 9 Mar 2018 09:00:39 -0500 Subject: [openstack-dev] Going but not gone In-Reply-To: References: Message-ID: Major, I've only recently had the chance to work with you more recently on several projects but it's always been great to see the type of work that you do. From seeing the work that you do inside OpenStack, to your extremely informative blog and the talks you've given. You'll be greatly missed in the OpenStack community and I (and surely believe the majority of OpenStackers) hope that we cross paths again in the future. :) Best of luck! 
Regards, Mohammed On Fri, Mar 9, 2018 at 8:54 AM, Major Hayden wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hello there, > > I'm leaving my current role for a new opportunity and, unfortunately, this means I won't be as involved in OpenStack as much in the near future. I've spoken with our fearless OpenStack-Ansible PTL and I let JP know that I will resign from the core reviewers group immediately if I feel that I cannot meet the obligations of the role. > > With that said, the OpenStack community has been truly amazing. My first humble contribution[0] was a fix for broken glance tests back in 2011. I've done a little more since then and I'm proud to be a tiny part of what OpenStack has become today. > > I'd like to thank everyone who has reviewed one of my patches, fixed one of the bugs I created with my patches, and fixed the gate jobs that I broke with my patches. Thanks to everyone who has attended one of my talks at the Summits and thanks to everyone who has put up with my oddball suggestions at Design Summits, Forums, and PTGs. I have learned an *incredible* amount about OpenStack, Python, Linux, open source, communities, and how to be a better human. > > Thanks to the leaders of the OpenStack Foundation as well for their continued support. They have been excellent listeners and they took lots of time to consider my suggestions for improvements. > > I love you all and working in this community has been one of the best experiences in my professional career. :) > > [0] https://review.openstack.org/#/c/2652/ > > - -- > Major Hayden > -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCAAdFiEEG/mSZJWWADNpjCUrc3BR4MEBH7EFAlqikg0ACgkQc3BR4MEB > H7HN9Q/+PKC0TpfosAcZwotuVoSncoJc5D3RDL6RgO09Vm1xbI84BWkv6b6tJz4/ > SvBmiqR7LtXUQDN1yiDg1g8Bq8gNKJO7E0hW7WqRE5rJmXAX2Gpx80pQ04mO0LBv > 21OaeJSGElT5MdQYu/wz6oP8iNwjAqUaU7b/BZFXcGgpA+S9qDMaQCMK/EXnrodd > hsDbBxtOridNk9j7SefgwIGZKOr4gdPCxvqnTfj0/X5Cjb+OfMU4rU6dRSIoVaiz > JVrwZr7DVVyvJmF5JFtpsOJGS9SF7YkOJKia3BsmCnJWeNm9+r1n2XjSXHY240tQ > gjNfqgvWbyaLddm+8ZMC77zsZu3Kaf4M2ta9F95K0/PlsShoZYBCDso23aDRsjps > czR3RjT51bdGdEDNhpJkimHQLLFqrvO6NRfg6Azf+Wii3/POrtez60Nx49SQgBul > PTB/i+mHl44Yn9R2VpWgqKM+WMixRxD75SRyOlDXrU0setUv/91Hz+x32cqeeiX0 > C8mWOPh9POOdQPLeIalR2E4F9//CFv4nWZNSjpwIEEeXLd/Mlkyf2ue7ye+1s/5U > JYo2wygRLEiLimacaoEyTRguR5/QsKtMieqKKfIYQglQDQkulWhhxOeqJmkpP10p > xQp11b/GIwrXA4wVi5KA3hQEB/ST/2ENvTO76e/oGW41RK9S0gw= > =5+cM > -----END PGP SIGNATURE----- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lbragstad at gmail.com Fri Mar 9 14:12:01 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 9 Mar 2018 08:12:01 -0600 Subject: [openstack-dev] [keystone] changing meeting time In-Reply-To: References: Message-ID: <65b8de95-a940-c180-07c8-dd3168eee49c@gmail.com> The proposed time merged, so it's official. New meeting time and ical are available on eavesdrop [0]. [0] http://eavesdrop.openstack.org/#Keystone_Team_Meeting On 03/06/2018 03:20 PM, Lance Bragstad wrote: > Hey all, > > Per one of the outcomes from the PTG, I've proposed a new time slot for > the keystone weekly meeting [0]. Note that it requires us to move > meeting rooms as well. I'd like to get +1/-1s on the review from people > looking to attend before asking a core to review. > > Let's discuss in review. 
> Thanks,
>
> Lance
>
> [0] https://review.openstack.org/#/c/550260/

From mriedemos at gmail.com Fri Mar 9 14:46:37 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 9 Mar 2018 08:46:37 -0600
Subject: [openstack-dev] [nova][notification] Full traceback in ExceptionPayload
In-Reply-To: <1520598388.7809.6@smtp.office365.com>
References: <1520598388.7809.6@smtp.office365.com>
Message-ID: 

On 3/9/2018 6:26 AM, Balázs Gibizer wrote:
> The instance-action REST API has already provide the traceback to the
> user (to the admin by default) and the notifications are also admin only
> things as they are emitted to the message bus by default. So I assume
> that security is not a bigger concern for the notification than for the
> REST API. So I think the only issue we have to accept is that the
> traceback object in the ExceptionPayload will not be a well defined
> field but a simple string containing a serialized traceback.
>
> If there is no objection then Kevin or I can file a specless bp to
> extend the ExceptionPayload.

I think that's probably fine. As you said, if we already provide tracebacks in instance action event details (and faults), then the serialized traceback in the error notification payload also seems fine, and is what the legacy notifications did, so it's not like there wasn't precedent.

I don't think we need a blueprint for this, it's just a bug.

-- 
Thanks,

Matt

From amotoki at gmail.com Fri Mar 9 14:47:21 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Fri, 9 Mar 2018 14:47:21 +0000
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To: 
References: 
Message-ID: 

Hi Xinni,

2018-03-09 12:05 GMT+09:00 Xinni Ge :
> Hello Horizon Team,
>
> I would like to hear your opinions about how to add new xstatic modules to
> horizon settings.
>
> As for the Heat-dashboard project embedded 3rd-party files issue, thanks for
> your advice at the Dublin PTG; we are now removing them and referencing them
> as new xstatic-* libs.

Thanks for moving this forward.

> So we installed the new xstatic files (not uploaded as openstack official
> repos yet) in our development environment now, but hesitate to decide how to
> add the newly installed xstatic lib path to STATICFILES_DIRS in
> openstack_dashboard.settings so that the static files can be automatically
> collected by the *collectstatic* process.
>
> Currently Horizon defines BASE_XSTATIC_MODULES in
> openstack_dashboard/utils/settings.py and the relevant static files are
> added to STATICFILES_DIRS before it updates any Horizon plugin dashboard.
> We may want new plugin setting keywords (something similar to ADD_JS_FILES)
> to update horizon XSTATIC_MODULES (or directly update STATICFILES_DIRS).

IMHO it is better to allow horizon plugins to add xstatic modules through the horizon plugin settings. I don't think it is a good idea to add a new entry to BASE_XSTATIC_MODULES based on horizon plugin usage. It makes it difficult to track why and where a xstatic module in BASE_XSTATIC_MODULES is used. Multiple horizon plugins can add the same entry, so the horizon code that handles plugin settings should hopefully merge multiple entries into a single one.

My vote is to enhance the horizon plugin settings.
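As a straw man, a plugin's "enabled" file could then declare something like the following; note that ADD_XSTATIC_MODULES is a hypothetical keyword (it does not exist in Horizon today) and the (module, files) tuple format simply mirrors the existing BASE_XSTATIC_MODULES entries:

    # Hypothetical entry in a plugin's "enabled" file, in the spirit of
    # ADD_JS_FILES; Horizon would merge duplicate entries from multiple
    # plugins before extending STATICFILES_DIRS.
    ADD_XSTATIC_MODULES = [
        ('xstatic.pkg.some_new_lib', ['some-new-lib.min.js']),
    ]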
Akihiro

> Looking forward to hearing any suggestions from you guys, and
> Best Regards,
>
> Xinni Ge
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From lbragstad at gmail.com Fri Mar 9 14:58:03 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 9 Mar 2018 08:58:03 -0600
Subject: [openstack-dev] [Keystone] Weirdness around domain/project scope in role assignments
In-Reply-To: 
References: 
Message-ID: 

On 03/09/2018 01:42 AM, Adrian Turjak wrote:
> Sooo to follow up from the discussion last night partly with Lance and
> Adam, I'm still not exactly sure what difference, if any, there is
> between a domain scoped role assignment, and a project scoped role
> assignment. And... It appears stuff breaks when you used both, or either
> actually (more on that further down).
>
> My problem/confusion was why the following exists or is possible:
> http://paste.openstack.org/show/695978/
> The amusing part, I now can't remove the above role assignments. They
> throw a 500:
> http://paste.openstack.org/show/696013/
> The error itself being:
> http://paste.openstack.org/show/695994/

Thanks for the traces, I've opened a bug [0]. We should find a way to handle the ambiguity.

[0] https://bugs.launchpad.net/keystone/+bug/1754677

> Then lets look at just project scope:
> http://paste.openstack.org/show/696007/
> I can't seem to do 'include_names' on the project scoped role
> assignment, but effective works since it doesn't include the project. I
> have a feeling the error is because keystone isn't including projects
> with is_domain when doing the names mapping.
>
> So... going a little further, does domain scope still act like project
> scope in regards to effective roles:
> http://paste.openstack.org/show/695992/
> The answer is yes. But again, this is domain scope, not project scope
> which still results in project scope down the tree. Although here
> 'include_names' works, this time because keystone internally is directly
> checking for is_domain I assume.
>
> Also worth mentioning that the following works (and maybe shouldn't?):
> http://paste.openstack.org/show/696006/
> Alice has a role on a 'project' that isn't part of her domain. I can't
> add her to a project that isn't in her domain... but I can add her to
> another domain? That surely isn't expected behavior...
>
> Weird broken stuff aside, I'm still not seeing a difference between
> domain/project role assignment scope on a project that is a domain. Is
> there a difference that I'm missing, and where is such a difference used?

There used to be a more distinct difference between projects and domains before we decided to chase reseller use cases [1]. Projects and domains were two distinct things prior to that. Domains ended up becoming the root of the tree and they were munged with projects. The goal of reseller was to try and make it easier for a customer to set up domains under the domain they got from the provider. With that came a slew of security problems that resulted in punting the ability to have a domain anywhere but the root of the tree. The reseller case hasn't been picked up since.
[1] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html > > Looking at the blog post Adam linked > (https://adam.younglogic.com/2018/02/openstack-hmt-cloudforms/), he > isn't  really making use of domain scope, just project scope on a > domain, and inheritance down the tree, which is indeed a valid and > useful case, but again, not domain scope assignment. Although domain > scope on the same project would probably (as we see above) achieve the > same result. > > Then looking at the policy he linked: > http://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json#n52 > "identity:list_projects": "rule:cloud_admin or > rule:admin_and_matching_domain_id", >     - "cloud_admin": "role:admin and (is_admin_project:True or > domain_id:admin_domain_id)", >     - "admin_and_matching_domain_id": "rule:admin_required and > domain_id:%(domain_id)s", >         -  "admin_required": "role:admin", > > I can't exactly see how it also uses domain scope. It still seems to be > project scope focused. > > So my question then is why on the role assignment object do we > distinguish between a domain/project when it comes to scope when a > domain IS a project, and clearly things break when you set both. > > Can we make it so the following works (a hypothetical example): > http://paste.openstack.org/show/696010/ > At which point the whole idea of 'domain' scope on a role assignment > goes away since and is exactly the same thing as project scope, and also > the potential database 500 issues goes away since... there isn't more > than 1 row. We can then start phasing out the domain scope stuff and > hiding it away unless someone is explicitly still looking for it. > > Because in reality, right now I think we only have project scope, and > system scope. Domain scope == project scope and we should probably make > that clear because obviously the code base is confused on that matter. :P > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From zbitter at redhat.com Fri Mar 9 15:06:25 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 9 Mar 2018 10:06:25 -0500 Subject: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project In-Reply-To: <1520554645-sup-648@lrrr.local> References: <1520531849-sup-5340@lrrr.local> <11033cf1-d80a-ef35-bf0a-97a048ec94ae@redhat.com> <1520554645-sup-648@lrrr.local> Message-ID: <502cd4b5-64b9-a882-7b8e-d7a8ccee0b77@redhat.com> On 08/03/18 19:31, Doug Hellmann wrote: > If these new plugins might contain "candidate" tests and all tests > are potentially candidates, how are these new repos different from > the existing repos that already contain all of the tests? We have to allow candidate tests because otherwise there's a chicken-and-egg type problem where we can never add tests to an existing trademark program. But the definition of 'candidate' that I'm thinking of is along the lines of "under active discussion to be added to the next iteration of refstack". Perhaps that should be documented. 
To answer your question directly, the main way they're different is that by putting tests in one of these repos, teams are committing to not really change them, and certainly to ~never break backwards compat.

> It seems
> like at least part of the problem with the current system was
> triggered by confusion about when to move tests around to satisfy
> the policy. Can we avoid that problem with the new system? If we're
> not going to move the tests into Tempest itself and have the QA
> team manage them, why not simply take the tests from the repos where
> they already live?

I think a lesson we've learned from Tempest is that there are two parts to successfully reviewing a change:

1) Determine whether the change affects a test that is part of the trademark program.
2) Don't make the change if it does.

2 is easy, but 1 is hard. By separating the trademark tests into a separate repo, we make 1 easy as well. This reduces risk and increases review throughput.

> I thought the QA team no longer wanted to be responsible for these
> extra tests. Has that changed again? I've lost track of everyone's
> positions, I'm afraid.

Their position as I understand it is:

* They're not going to write tests
* They're happy to document the process, offer advice, and review tests as time allows
* They don't want tests to be thrown over the transom and made their problem for the rest of time

That doesn't conflict with anybody's goals here, so it's mostly a matter of documenting it clearly so that somebody reading without all this context won't get the wrong idea.

cheers,
Zane.

From ryan.beisner at canonical.com Fri Mar 9 15:25:36 2018
From: ryan.beisner at canonical.com (Ryan Beisner)
Date: Fri, 9 Mar 2018 16:25:36 +0100
Subject: [openstack-dev] [charms] 18.02 OpenStack Charms release
In-Reply-To: 
References: 
Message-ID: 

URL correction for 18.02 OpenStack Charms release notes:

https://docs.openstack.org/charm-guide/latest/1802.html

Cheers,

Ryan

On Fri, Mar 9, 2018 at 2:46 PM, David Ames wrote:
> Announcing the 18.02 release of the OpenStack Charms.
>
> The 18.02 charms have full support for the Queens OpenStack release.
> 112 bugs have been fixed and released across the OpenStack charms.
>
> For full details of the release, please refer to the release notes:
>
> https://docs.openstack.org/charm-guide/latest/18.02.html
>
> Thanks go to the following contributors for this release:
>
> Anton Kremenetsky
> Billy Olsen
> Corey Bryant
> David Ames
> Dmitrii Shcherbakov
> Edward Hope-Morley
> Felipe Reyes
> Frode Nordahl
> James Page
> Jason Hobbs
> Jill Rouleau
> Liam Young
> Nobuto Murata
> Ryan Beisner
> Seyeong Kim
> Tytus Kurek
> Xav Paice
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doka.ua at gmx.com Fri Mar 9 15:30:46 2018
From: doka.ua at gmx.com (Volodymyr Litovka)
Date: Fri, 9 Mar 2018 17:30:46 +0200
Subject: [openstack-dev] [neutron] route metrics inside VR
Message-ID: 

Dear colleagues,

for some reasons (see the explanation below), I'm trying to deploy the following network configuration:

                   Network
 +-------------------------------------------+
  Subnet-1                         Subnet-2
 +---+----+--+                   +----+------+
     |    |        +----+             |
     |    |        |    |             |
     |    +--------+ VR +-------------+
     |             |    |
  +--+-+           +----+
  |    |
  | VM |
  |    |
  +----+

where VR is Neutron's virtual router, connected to two subnets, which belong to the same network:

Subnet-1 is the "LAN" interface (25.0.0.1/8) connected to qr-64c53cf8-d9
Subnet-2 is the external gateway (51.x.x.x) connected to qg-16bdddb1-d5 with SNAT enabled

The reason why I'm trying to use this configuration is pretty simple - it allows switching the VM between different address scopes (e.g. "grey" and "white") while preserving its port/MAC (which is created in the "Network" and remains there while I'm switching the VM between different subnets).

Such a configuration produces the following command list when creating the VR:

14:45:18.043 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 'ip', '-4', 'addr', 'add', '25.0.0.1/8', 'scope', 'global', 'dev', 'qr-64c53cf8-d9', 'brd', '25.255.255.255']
14:45:19.815 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 'ip', '-4', 'addr', 'add', '51.x.x.x/24', 'scope', 'global', 'dev', 'qg-16bdddb1-d5', 'brd', '51.x.x.255']
14:45:20.283 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 'ip', '-4', 'route', 'replace', '25.0.0.0/8', 'dev', 'qg-16bdddb1-d5', 'scope', 'link']
14:45:20.919 Running command: ['ip', 'netns', 'exec', 'qrouter-UUID', 'ip', '-4', 'route', 'replace', 'default', 'via', '51.x.x.254', 'dev', 'qg-16bdddb1-d5']

Since 25/8 is an extra subnet of "Network", Neutron installs this entry (by using 'ip route replace') despite the fact that there should be a connected route (via qr-64c53cf8-d9). Due to the current implementation, all traffic from the VR to the directly connected "Subnet-1" goes over "Subnet-2" (through NAT), and thus a VM in Subnet-1 can't access the VR - it "pings" the local address (25.0.0.1) while replies return from another (NAT) address.

Can this behaviour be safely changed by using "ip route add [...] metric <N>" instead of "ip route replace"?
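For illustration, what I have in mind is the agent running something like this instead (an untested sketch; the metric value 100 is arbitrary, anything higher than the connected route's metric would do):

    # Untested sketch: add the extra-subnet route with an explicit high
    # metric instead of replacing it, so that the kernel's connected
    # route on qr-64c53cf8-d9 (metric 0) takes precedence:
    ['ip', 'netns', 'exec', 'qrouter-UUID',
     'ip', '-4', 'route', 'add', '25.0.0.0/8',
     'dev', 'qg-16bdddb1-d5', 'scope', 'link', 'metric', '100']

With both routes present, the kernel would prefer the lower-metric connected route on qr-64c53cf8-d9, while the qg-side route would remain as a fallback.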
Thank you.

-- 
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison

From ayoung at redhat.com Fri Mar 9 15:37:13 2018
From: ayoung at redhat.com (Adam Young)
Date: Fri, 9 Mar 2018 10:37:13 -0500
Subject: [openstack-dev] [Keystone] Weirdness around domain/project scope in role assignments
In-Reply-To: 
References: 
Message-ID: 

On Fri, Mar 9, 2018 at 2:42 AM, Adrian Turjak wrote:
> Sooo to follow up from the discussion last night partly with Lance and
> Adam, I'm still not exactly sure what difference, if any, there is
> between a domain scoped role assignment, and a project scoped role
> assignment. And... It appears stuff breaks when you used both, or either
> actually (more on that further down).
>
> My problem/confusion was why the following exists or is possible:
> http://paste.openstack.org/show/695978/
> The amusing part, I now can't remove the above role assignments.
> They throw a 500:
> http://paste.openstack.org/show/696013/
> The error itself being:
> http://paste.openstack.org/show/695994/

This is a bug. It looks like the one() call assumes there is only ever one record that comes back that matches, and this matches multiple. A 500 is never appropriate.

> Then let's look at just project scope:
> http://paste.openstack.org/show/696007/
> I can't seem to do 'include_names' on the project scoped role
> assignment, but effective works since it doesn't include the project. I
> have a feeling the error is because keystone isn't including projects
> with is_domain when doing the names mapping.

Probably correct. Bug on this, too.

> So... going a little further, does domain scope still act like project
> scope in regards to effective roles:
> http://paste.openstack.org/show/695992/
> The answer is yes. But again, this is domain scope, not project scope
> which still results in project scope down the tree. Although here
> 'include_names' works, this time because keystone internally is directly
> checking for is_domain I assume.

Interesting. That might have been a "works as designed" with the idea that assigning a role on a domain that is inherited is used by anything underneath it. It actually makes sense, as domains can't nest, so this may be intentional syntactic sugar on top of the format I used:

> Also worth mentioning that the following works (and maybe shouldn't?):
> http://paste.openstack.org/show/696006/
> Alice has a role on a 'project' that isn't part of her domain. I can't
> add her to a project that isn't in her domain... but I can add her to
> another domain? That surely isn't expected behavior...

That is a typo. You added an additional character at the end of the ID:

86a8b3dc1b8844fd8c2af8dd50cc21386
86a8b3dc1b8844fd8c2af8dd50cc2138

> Weird broken stuff aside, I'm still not seeing a difference between
> domain/project role assignment scope on a project that is a domain. Is
> there a difference that I'm missing, and where is such a difference used?
>
> Looking at the blog post Adam linked
> (https://adam.younglogic.com/2018/02/openstack-hmt-cloudforms/), he
> isn't  really making use of domain scope, just project scope on a
> domain, and inheritance down the tree, which is indeed a valid and
> useful case, but again, not domain scope assignment. Although domain
> scope on the same project would probably (as we see above) achieve the
> same result.
>
> Then looking at the policy he linked:
> http://git.openstack.org/cgit/openstack/keystone/tree/etc/policy.v3cloudsample.json#n52
> "identity:list_projects": "rule:cloud_admin or
> rule:admin_and_matching_domain_id",
>     - "cloud_admin": "role:admin and (is_admin_project:True or
> domain_id:admin_domain_id)",
>     - "admin_and_matching_domain_id": "rule:admin_required and
> domain_id:%(domain_id)s",
>         -  "admin_required": "role:admin",
>
> I can't exactly see how it also uses domain scope. It still seems to be
> project scope focused.

It is subtle. domain_id:admin_domain_id means that the token has a domain_id, which means it is a domain scoped token.
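To illustrate (token shapes only, heavily trimmed from real v3 token bodies; the ID is just the one from your paste):

    # Illustrative only: the scope section of the token is what policy
    # attributes like domain_id are taken from.
    project_scoped = {'token': {'project': {'id': '86a8b3dc1b8844fd8c2af8dd50cc2138'}}}
    domain_scoped = {'token': {'domain': {'id': '86a8b3dc1b8844fd8c2af8dd50cc2138'}}}
    # 'domain_id:admin_domain_id' can only match the second shape, because
    # only a domain scoped token carries a top-level domain_id.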
> Can we make it so the following works (a hypothetical example):
> http://paste.openstack.org/show/696010/
> At which point the whole idea of 'domain' scope on a role assignment
> goes away, since it is exactly the same thing as project scope, and also
> the potential database 500 issues go away since... there isn't more
> than 1 row. We can then start phasing out the domain scope stuff and
> hiding it away unless someone is explicitly still looking for it.
>
> Because in reality, right now I think we only have project scope, and
> system scope. Domain scope == project scope and we should probably make
> that clear because obviously the code base is confused on that matter. :P

I'd love it if Domains went away and we only had projects. We'd have to find a way to implement it such that people using domains today don't get broken. We could also add a 3-value toggle on the inheritance: none, children_only, both, to get it down to a single entry. That would be an implementation detail that the end users would not see.

The one potential benefit to domain scope, which is not used today, is to say: this applies to all of the projects inside this domain. Each project knows both its parent and its domain, so it is a way to jump levels of the tree.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lbragstad at gmail.com  Fri Mar  9 16:07:55 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 9 Mar 2018 10:07:55 -0600
Subject: [openstack-dev] Going but not gone
In-Reply-To: References: Message-ID: <96ad2442-1cc0-cf5b-76bd-a975f9ed242e@gmail.com>

I'll be sad to see you go, Major. It's been a pleasure working with you across departments, companies, and communities. Thanks for being a fantastic mentor! Best of luck in your new adventure. Here's to having the chance to work with you again in the future.

Lance

On 03/09/2018 07:54 AM, Major Hayden wrote:
> Hello there,
>
> I'm leaving my current role for a new opportunity and, unfortunately,
> this means I won't be as involved in OpenStack as much in the near
> future. I've spoken with our fearless OpenStack-Ansible PTL and I let
> JP know that I will resign from the core reviewers group immediately
> if I feel that I cannot meet the obligations of the role.
>
> With that said, the OpenStack community has been truly amazing. My
> first humble contribution[0] was a fix for broken glance tests back in
> 2011. I've done a little more since then and I'm proud to be a tiny
> part of what OpenStack has become today.
>
> I'd like to thank everyone who has reviewed one of my patches, fixed
> one of the bugs I created with my patches, and fixed the gate jobs
> that I broke with my patches. Thanks to everyone who has attended one
> of my talks at the Summits and thanks to everyone who has put up with
> my oddball suggestions at Design Summits, Forums, and PTGs. I have
> learned an *incredible* amount about OpenStack, Python, Linux, open
> source, communities, and how to be a better human.
>
> Thanks to the leaders of the OpenStack Foundation as well for their
> continued support. They have been excellent listeners and they took
> lots of time to consider my suggestions for improvements.
> > I love you all and working in this community has been one of the best
> > experiences in my professional career. :)
> >
> > [0] https://review.openstack.org/#/c/2652/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From balazs.gibizer at ericsson.com  Fri Mar  9 16:08:12 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Fri, 9 Mar 2018 17:08:12 +0100
Subject: [openstack-dev] [nova][notification] Full traceback in ExceptionPayload
In-Reply-To: References: <1520598388.7809.6@smtp.office365.com>
Message-ID: <1520611692.7809.8@smtp.office365.com>

On Fri, Mar 9, 2018 at 3:46 PM, Matt Riedemann wrote:
> On 3/9/2018 6:26 AM, Balázs Gibizer wrote:
>> The instance-action REST API already provides the traceback to
>> the user (to the admin by default) and the notifications are also
>> admin only things as they are emitted to the message bus by
>> default. So I assume that security is not a bigger concern for the
>> notification than for the REST API. So I think the only issue we
>> have to accept is that the traceback object in the ExceptionPayload
>> will not be a well defined field but a simple string containing a
>> serialized traceback.
>>
>> If there is no objection then Kevin or I can file a specless bp to
>> extend the ExceptionPayload.
>
> I think that's probably fine. As you said, if we already provide
> tracebacks in instance action event details (and faults), then the
> serialized traceback in the error notification payload also seems
> fine, and is what the legacy notifications did, so it's not like there
> wasn't precedent.
>
> I don't think we need a blueprint for this, it's just a bug.

I thought about a bp because it was explicitly defined in the original spec not to have the traceback, so to me it does not feel like a bug.

Cheers,
gibi

>
> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From marios at redhat.com  Fri Mar  9 16:14:22 2018
From: marios at redhat.com (Marios Andreou)
Date: Fri, 9 Mar 2018 18:14:22 +0200
Subject: [openstack-dev] [tripleo] Upgrades squad Rocky PTG session
Message-ID: 

Hi all,

Sofer (chem), Jirka (jistr), Lucas (social) and myself (marios) ran the Upgrades squad session during the Rocky PTG in Dublin. Here is a brief summary:

* session etherpad at https://etherpad.openstack.org/p/tripleo-ptg-rocky-upgrades
* current/ongoing + more in Rocky... improvements in the CI and tech debt (moving to using the tripleo-upgrade role now),
* containerized undercloud upgrade is coming in Rocky (emilien investigating),
* Rocky will be a stabilization cycle with focus on improvements to the operator experience including validations, backup/restore, documentation and cli/ui.
* Integration with UI might be considered during Rocky, to be revisited with the UI squad.
FWIW I also made a brief blog post summarising the sessions I was at including the first day of TripleO sessions - there are links to the session etherpads there if useful. You can find it here from March 6th http://tripleo.org/planet.html thanks, marios -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Fri Mar 9 16:22:10 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 9 Mar 2018 10:22:10 -0600 Subject: [openstack-dev] Going but not gone In-Reply-To: References: Message-ID: I've already said this but don't be a stranger. You've been a great influence and mentor to a lot of people, me included. You'll do great in the new gig and you'll be a real asset to your new team! Amy (spotz) On Fri, Mar 9, 2018 at 7:54 AM, Major Hayden wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hello there, > > I'm leaving my current role for a new opportunity and, unfortunately, this > means I won't be as involved in OpenStack as much in the near future. I've > spoken with our fearless OpenStack-Ansible PTL and I let JP know that I > will resign from the core reviewers group immediately if I feel that I > cannot meet the obligations of the role. > > With that said, the OpenStack community has been truly amazing. My first > humble contribution[0] was a fix for broken glance tests back in 2011. I've > done a little more since then and I'm proud to be a tiny part of what > OpenStack has become today. > > I'd like to thank everyone who has reviewed one of my patches, fixed one > of the bugs I created with my patches, and fixed the gate jobs that I broke > with my patches. Thanks to everyone who has attended one of my talks at the > Summits and thanks to everyone who has put up with my oddball suggestions > at Design Summits, Forums, and PTGs. I have learned an *incredible* amount > about OpenStack, Python, Linux, open source, communities, and how to be a > better human. > > Thanks to the leaders of the OpenStack Foundation as well for their > continued support. They have been excellent listeners and they took lots of > time to consider my suggestions for improvements. > > I love you all and working in this community has been one of the best > experiences in my professional career. 
:) > > [0] https://review.openstack.org/#/c/2652/ > > - -- > Major Hayden > -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCAAdFiEEG/mSZJWWADNpjCUrc3BR4MEBH7EFAlqikg0ACgkQc3BR4MEB > H7HN9Q/+PKC0TpfosAcZwotuVoSncoJc5D3RDL6RgO09Vm1xbI84BWkv6b6tJz4/ > SvBmiqR7LtXUQDN1yiDg1g8Bq8gNKJO7E0hW7WqRE5rJmXAX2Gpx80pQ04mO0LBv > 21OaeJSGElT5MdQYu/wz6oP8iNwjAqUaU7b/BZFXcGgpA+S9qDMaQCMK/EXnrodd > hsDbBxtOridNk9j7SefgwIGZKOr4gdPCxvqnTfj0/X5Cjb+OfMU4rU6dRSIoVaiz > JVrwZr7DVVyvJmF5JFtpsOJGS9SF7YkOJKia3BsmCnJWeNm9+r1n2XjSXHY240tQ > gjNfqgvWbyaLddm+8ZMC77zsZu3Kaf4M2ta9F95K0/PlsShoZYBCDso23aDRsjps > czR3RjT51bdGdEDNhpJkimHQLLFqrvO6NRfg6Azf+Wii3/POrtez60Nx49SQgBul > PTB/i+mHl44Yn9R2VpWgqKM+WMixRxD75SRyOlDXrU0setUv/91Hz+x32cqeeiX0 > C8mWOPh9POOdQPLeIalR2E4F9//CFv4nWZNSjpwIEEeXLd/Mlkyf2ue7ye+1s/5U > JYo2wygRLEiLimacaoEyTRguR5/QsKtMieqKKfIYQglQDQkulWhhxOeqJmkpP10p > xQp11b/GIwrXA4wVi5KA3hQEB/ST/2ENvTO76e/oGW41RK9S0gw= > =5+cM > -----END PGP SIGNATURE----- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Mar 9 16:38:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 9 Mar 2018 16:38:47 +0000 Subject: [openstack-dev] Going but not gone In-Reply-To: References: Message-ID: <20180309163847.gderefpubd53btt2@yuggoth.org> On 2018-03-09 07:54:24 -0600 (-0600), Major Hayden wrote: > I'm leaving my current role for a new opportunity and, > unfortunately, this means I won't be as involved in OpenStack as > much in the near future. I've spoken with our fearless > OpenStack-Ansible PTL and I let JP know that I will resign from > the core reviewers group immediately if I feel that I cannot meet > the obligations of the role. [...] Remember: it's only a non-OpenStack job until you can convince your employer it should be OpenStack-related! ;) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mchandras at suse.de Fri Mar 9 16:51:14 2018 From: mchandras at suse.de (Markos Chandras) Date: Fri, 9 Mar 2018 16:51:14 +0000 Subject: [openstack-dev] Going but not gone In-Reply-To: References: Message-ID: <99e0bf07-97ef-f09f-6970-55b48b67176e@suse.de> On 03/09/2018 01:54 PM, Major Hayden wrote: > Hello there, > > I'm leaving my current role for a new opportunity and, unfortunately, this means I won't be as involved in OpenStack as much in the near future. I've spoken with our fearless OpenStack-Ansible PTL and I let JP know that I will resign from the core reviewers group immediately if I feel that I cannot meet the obligations of the role. > > With that said, the OpenStack community has been truly amazing. My first humble contribution[0] was a fix for broken glance tests back in 2011. I've done a little more since then and I'm proud to be a tiny part of what OpenStack has become today. > > I'd like to thank everyone who has reviewed one of my patches, fixed one of the bugs I created with my patches, and fixed the gate jobs that I broke with my patches. Thanks to everyone who has attended one of my talks at the Summits and thanks to everyone who has put up with my oddball suggestions at Design Summits, Forums, and PTGs. 
I have learned an *incredible* amount about OpenStack, Python, Linux, open source, communities, and how to be a better human. > > Thanks to the leaders of the OpenStack Foundation as well for their continued support. They have been excellent listeners and they took lots of time to consider my suggestions for improvements. > > I love you all and working in this community has been one of the best experiences in my professional career. :) > > [0] https://review.openstack.org/#/c/2652/ Hello Major, I wish you all the best on your new adventure! It's been a real pleasure working with you (virtually or during the PTG) fixing all sort of strange bugs. I hope you will still find some time to come and fix CentOS in OSA whenever necessary :) -- markos SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg From colleen at gazlene.net Fri Mar 9 17:59:04 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 09 Mar 2018 18:59:04 +0100 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 5 March 2018 Message-ID: <1520618344.4170534.1297613800.035C2037@webmail.messagingengine.com> # Keystone Team Update - Week of 5 March 2018 ## News ### PTG Summaries Last week many of us attended the PTG in Dublin and made significant progress on a lot of keystone topics. Here are some recaps: - https://www.lbragstad.com/blog/keystone-rocky-ptg-summary - http://www.gazlene.net/dublin-ptg.html ### URL whitelisting for application credentials One of the major topics at the PTG was the next steps for application credentials. To make them truly useful we need to enable finer-grained access control than what we can currently provide with our traditional "scope RBAC" system. It turns out we already had a spec proposed[1] that predated application credentials but that largely fills the gaps here. A lot of the elements in this proposal were very similar to the RBAC in middleware proposal[2] and Adam had major concerns that the approach taken here would conflict with the path to eventually properly fixing RBAC in keystone. We were able to get on a call together and come to a compromise, which is that operators must be able to pre-approve allowed API paths that a user can add to their application credential whitelists, but allowing wildcards in the pre-approved list is acceptable. This can enable a safety net for users to avoid them accidentally enabling something they didn't intend, and it will put us on a path toward fully managed policy mappings in keystone eventually. ### Unified Limits next steps Lance proposed creating a new Oslo library[3] to continue the next stage of work of unifying quota implementations in keystone. We will also need to propose an Oslo spec[4] to coordinate this work with the Oslo team. We're also trying to work out some of the oddities in the current API implementation and hoping to come out with a consistent and useful interface[5]. ### Changing meeting time We proposed changing the meeting time[6] to make it easier for one of our newer contributors to join. The meeting change was merged[7] so next week's meeting will be at 1600 UTC in #openstack-meeting-alt. ### Domain and Project scope Adrian brought us a fun puzzle[8][9][10] involving ambiguity between how role assignments are handled between domains and projects. Some bugs were opened to correct some logic errors but the open question is what kind of future we see for domains and projects. 
[1] https://review.openstack.org/#/c/396331/
[2] https://review.openstack.org/#/c/391624/
[3] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128006.html
[4] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128032.html
[5] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128027.html
[6] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127970.html
[7] https://review.openstack.org/#/c/550260/
[8] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-08.log.html#t2018-03-08T23:43:31
[9] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-09.log.html#t2018-03-09T02:49:24
[10] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128093.html

## Open Specs

Search query: https://goo.gl/eyTktx

We have four specs proposed for the Rocky cycle so far.

### Repropose JWT specification for Rocky[11]

We already wrote a "this would be nice" spec about implementing JSON Web Tokens as a new token format, and this cycle we have some of the token provider refactoring far enough along that we're ready to commit to implementing it.

### Add whitelist-extension-for-app-creds[12]

As discussed above, this was a major topic at the PTG and the next logical step in making application credentials useful.

### Add specification for a capabilities API[13]

Another topic we discussed at the PTG was expanding on our JSON-home document to provide a way for users to query what they have permissions to do within keystone.

### Hierarchical Unified Limits[14]

With our initial limits API supporting a flat project structure, the next step is supporting hierarchical project models.

[11] https://review.openstack.org/541903
[12] https://review.openstack.org/396331
[13] https://review.openstack.org/547162
[14] https://review.openstack.org/540803

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 4 changes this week. It might be a bit unfair to count this week since many of us are still recovering from travel and digesting the events of the PTG.

## Changes that need Attention

Search query: https://goo.gl/tW5PiH

There are 41 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots.

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

Welcome to the new cycle! We haven't proposed deadlines yet, but at the PTG we discussed moving our feature freeze deadline up to avoid the rush, as well as aiming to finish client work earlier in order to avoid pressuring the OSC team at the end of the cycle.

## Shout-outs

Thanks to Johannes Grassler for stepping up to work on the application credentials whitelist effort after we failed to give adequate attention to his proposal in earlier cycles.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter

From cdent+os at anticdent.org  Fri Mar  9 18:05:13 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Fri, 9 Mar 2018 18:05:13 +0000 (GMT)
Subject: [openstack-dev] [keystone] batch processing with unified limits
In-Reply-To: <0a90f7be-1764-fa50-269a-91b2f252f05f@gmail.com>
References: <0a90f7be-1764-fa50-269a-91b2f252f05f@gmail.com>
Message-ID: 

On Wed, 7 Mar 2018, Lance Bragstad wrote:

> Currently, the create and update APIs support batch processing. So
> specifying a list of limits is valid for both.
> This was a part of the
> original proposal as a way to make it easier for operators to set all
> their registered limits with a single API call. The API also has unique
> IDs for each limit reference. The consensus was that this felt a bit
> weird with a resource that contains a unique set of attributes that can
> make up a constraint (service, resource type, and optionally a region).
> We're discussing ways to make this API more consistent with how the rest
> of keystone works while maintaining usability for operators. Does anyone
> see issues with supporting batch creation for limits and individual
> updates? In other words, removing the ability to update a set of limits
> in a single API call, but keeping the ability to create them in batches?

Lance and I spoke about this in IRC [1]; he was after some input from the api-sig. We agreed that PUT to /v3/limits (and its friends) is probably not ideal because:

* It's not aligned with how PUT is used elsewhere in keystone.

* In order for the PUT to be correct (according to HTTP rules) it would have to be an entire set updating all the limits, even those that haven't changed, and this could be large and annoying. PUT to update one limit is cleaner.

* While POSTs to create all the limits are quite voluminous, any changes to existing limits are likely to be smaller, thus the cost of non-batch updates by individual PUTs is not as severe.

* PATCH is still available if batch updates are desired, but doing PATCH well and correctly is challenging.

So we agreed that, given that the API is currently experimental, changing it to not support batch PUT is probably the right thing to do.

[1] http://p.anticdent.org/1G54

--
Chris Dent                      ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From Louie.Kwan at windriver.com  Fri Mar  9 18:37:54 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Fri, 9 Mar 2018 18:37:54 +0000
Subject: [openstack-dev] [python-masakariclient] Installation issues
Message-ID: <47EFB32CD8770A4D9590812EE28C977E96279856@ALA-MBD.corp.ad.wrs.com>

Two issues:

1. Just downloaded the latest and got some issues; the CLI fails:

ubuntu at yow-tic-demo1:~$ masakari segment-list
('Problem with auth parameters', <open file '...', mode 'w' at 0x7fe5188681e0>)

2. Documentation: http://docs.openstack.org/developer/python-masakariclient

* Not Found - the documentation is no longer valid

I checked the bug list; these seem to be new issues?

Any info will be much appreciated. Thanks.

Louie

Note:

sudo su - stack
cd /home/stack
git clone https://github.com/openstack/python-masakariclient.git
cd python-masakariclient/
sudo python setup.py install
source ~/admin-openrc.sh
# To check the cli is working or not
masakari segment-list

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Louie.Kwan at windriver.com  Fri Mar  9 19:16:35 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Fri, 9 Mar 2018 19:16:35 +0000
Subject: [openstack-dev] [python-masakariclient] Installation issues
In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E96279856@ALA-MBD.corp.ad.wrs.com>
References: <47EFB32CD8770A4D9590812EE28C977E96279856@ALA-MBD.corp.ad.wrs.com>
Message-ID: <47EFB32CD8770A4D9590812EE28C977E962798A6@ALA-MBD.corp.ad.wrs.com>

It may be related to the profile changes?
> /usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/connection.py(49)create_connection() -> LOG.debug('Connection: %s', conn) (Pdb) n > /usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/connection.py(50)create_connection() -> LOG.debug('masakari client initialized: %s', conn.ha) (Pdb) print LOG (Pdb) n ConfigException: ConfigEx...meters',) > /usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/connection.py(50)create_connection() -> LOG.debug('masakari client initialized: %s', conn.ha) (Pdb) n > /usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/connection.py(51)create_connection() -> except Exception as e: (Pdb) print e *** NameError: name 'e' is not defined (Pdb) n > /usr/local/lib/python2.7/dist-packages/masakariclient/sdk/ha/connection.py(52)create_connection() -> raise e (Pdb) print e Problem with auth parameters (Pdb) ________________________________ From: Kwan, Louie Sent: Friday, March 09, 2018 1:37 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [python-masakariclient] Installation issues Two issues: 1. Just download the latest and got some issues, cannot ? ubuntu at yow-tic-demo1:~$ masakari segment-list ('Problem with auth parameters', ', mode 'w' at 0x7fe5188681e0>) 2. Documentation: http://docs.openstack.org/developer/python-masakariclient · Not Found · The documentation is no long valid Checked the bug list, it seems new issues? Any info will be much appreciated. Thanks. Louie Note: sudo su - stack cd /home/stack git clone https://github.com/openstack/python-masakariclient.git cd python-masakariclient/ sudo python setup.py install source ~/admin-openrc.sh # To check the cli is working or not masakari segment-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Mar 9 20:45:37 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 9 Mar 2018 14:45:37 -0600 Subject: [openstack-dev] [murano][Openstack-stable-maint] Stable check of openstack/murano failed In-Reply-To: References: Message-ID: <8DDF0D73-C107-448B-9A3D-F6B86F8A6DB6@gmx.com> > On Mar 8, 2018, at 22:52, Rong Zhu wrote: > > Hi, Tony > > I will fix this in stable pike, thanks for the reminder. > > Just to make sure the point wasn’t missed - this needs to be fixed in master yet. The change that was made there is not correct, so rather than backporting that just to get the tests to pass, it needs to be properly fixed, and only then backported. > > > On Fri, Mar 9, 2018 at 12:32 PM, Tony Breeds wrote: >> On Thu, Mar 08, 2018 at 06:16:27AM +0000, A mailing list for the OpenStack Stable Branch test reports. wrote: >>> Build failed. >>> >>> - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/murano/stable/pike/build-openstack-sphinx-docs/8b023b7/html/ : SUCCESS in 4m 44s >>> - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/murano/stable/pike/openstack-tox-py27/82d0dae/ : FAILURE in 5m 48s >> >> The job is failing on the periodic-stable pipeline which indicates that >> all changes on pike will hit this same issue. >> >> There is fix on master[1] but it's wrong so rather than back porting >> that pike it'd be great if someone from the murano team could own fixing >> this properly. >> >> Based on my 5mins of poking it seems that reading the test yaml file is >> generating a list of unicode values rather than the expected list of >> string_type(). 
>> I think the answer is as simple as iterating over the
>> list and using six.string_type to massage the value. I don't know what
>> else that will break, and I also don't know the details of the contract
>> that allowed pattern is describing.
>>
>> For example, making it a simple string value would probably also fix it,
>> but that isn't a backwards compatible change.
>>
>> Yours Tony.
>>
>> [1] https://review.openstack.org/#/c/523829/4/murano/tests/unit/packages/hot_package/test_hot_package.py at 114
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Thanks,
> Rong Zhu
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sean.mcginnis at gmx.com  Fri Mar  9 20:50:08 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 9 Mar 2018 14:50:08 -0600
Subject: [openstack-dev] [cinder] [manila] Performance concern on new quota system
In-Reply-To: References: <20180309121102.by2qgasphnz3lvyq@localhost>
Message-ID: <93200DF9-062E-4B8F-9108-0E46AFBB4127@gmx.com>

> On Mar 9, 2018, at 07:37, TommyLike Hu wrote:
>
> Thanks Gorka,
> To be clear, I started this discussion not because I reject this feature; instead I like it, as it's much cleaner and simpler, and compared with the performance impact it solves several other issues which we hate badly. I wrote this to point out that we may have this issue, and to see whether we could improve it before it actually lands. Better is better:)
>
> > - Using a single query to retrieve both counts and sums instead of 2
> > queries.
>
> For this advice, I think I already combined count and sum into a single query.
>
> > - DB triggers to do the actual counting.

Please, no DB triggers. :)

> This seems a good idea, but I am not sure whether it could cover all of the cases we have in our quota system and whether it can be easily integrated into cinder; can you share more detail on this?
>
> Thanks
> TommyLike

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com  Fri Mar  9 21:31:27 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 09 Mar 2018 16:31:27 -0500
Subject: [openstack-dev] [tc] Technical Committee Status update, March 9th
In-Reply-To: <6c9a19a7-fa90-a5ca-4fd3-18a17029a1e3@openstack.org>
References: <6c9a19a7-fa90-a5ca-4fd3-18a17029a1e3@openstack.org>
Message-ID: <1520630999-sup-3503@lrrr.local>

Excerpts from Thierry Carrez's message of 2018-03-09 11:28:25 +0100:
[...]
> We started with a retrospective of how we got things done so far with
> this membership. Async collaborations worked well, and the weekly
> reports were seen as helpful. We should probably try to steer the topic
> of discussions in office hours a bit more. We also need to more

I created https://etherpad.openstack.org/p/tc-office-hour-conversation-starters and seeded it with one topic. Please feel free to add others.
Doug From thingee at gmail.com Sat Mar 10 03:42:55 2018 From: thingee at gmail.com (Mike Perez) Date: Fri, 9 Mar 2018 19:42:55 -0800 Subject: [openstack-dev] Developer Mailing List Digest March 3-9th Message-ID: <20180310034255.GG32596@gmail.com> HTML version: https://www.openstack.org/blog/?p=8361 Contribute to the Dev Digest by summarizing OpenStack Dev List threads: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs Success Bot Says ================ * kong: Qinling now supports Node.js runtime(experimental) * AJaeger: Jenkins user and jenkins directory on images are gone. /usr/local/jenkins is only created for legacy jobs * eumel8: Zanata 4 is now here [0] * smcginnis: Queens has been released!! * kong: welcome openstackstatus to #openstack-qinling channel! * Tell us yours in OpenStack IRC channels using the command "#success " * More: https://wiki.openstack.org/wiki/Successes [0] - https://www.translate.openstack.org Thanks Bot Says =============== * Thanks dhellmann for setting up community wide goals + good use of storyboard [0] * Thanks ianw for kind help on upgrading to Zanata 4 which has much better UI and improved APIs! * Tell us yours in OpenStack IRC channels using the command "#thanks " * More: https://wiki.openstack.org/wiki/Thanks [0] - https://storyboard.openstack.org/#!/project/923 Community Summaries =================== * Release countdown [0] * TC report [1] * Technical Committee status update [2] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128036.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127991.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128098.html PTG Summaries ============= Here's some summaries that people wrote for their project at the PTG: * Documentation and i18n [0] * First Contact SIG [1] * Cinder [2] * Mistral [3] * Interop [4] * QA [5] * Release cycle versus downstream consuming models [6] * Nova Placements [7] * Kolla [8] * Oslo [9] * Ironic [10] * Cyborg [11] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127936.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127937.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127968.html [3] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127988.html [4] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/127994.html [5] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128002.html [6] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128005.html [7] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128041.html [8] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128044.html [9] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128055.html [10] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128085.html [11] - http://lists.openstack.org/pipermail/openstack-dev/2018-March/128094.html OpenStack Queens is Officially Released! ======================================== Congratulations to all the teams who contributed to this release! Release notes of different projects for Queens are available [0] and a list of projects [1] that still need to approve their release note patches! 
[0] - https://releases.openstack.org/queens/
[1] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127813.html

Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127812.html

Release Cycles vs. Downstream consumers PTG Summary
===================================================

Notes can be found on the original etherpad [0]. A TC resolution is in review [1].

TLDR summary:

* No consensus on longer / shorter release cycles
* Focus on FFU to make upgrades less painful
* Longer stable branch maintenance time (18 months for Ocata)
* Bootstrap the "extended maintenance" concept with common policy
* Group most impacted by release cadence are packagers/distros/vendors
* Need for finer user survey questions on upgrade models
* Need more data and more discussion, next discussion at Vancouver forum
* Release Management team tracks it between events

[0] - https://etherpad.openstack.org/p/release-cycles-ptg-rocky
[1] - https://review.openstack.org/#/c/548916/

Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128005.html

Pros and Cons of face-to-face Meetings
======================================

Some contributors might not be able to attend the PTG for various reasons:

* Health issues
* Privilege issues (like not getting visa or travel permits)
* Caretaking responsibilities (children, other family, animals, plants)
* Environmental concerns

There is a concern that this is preventing us from meeting our four opens [1] when people are not able to attend the events. The PTG sessions are not recorded, but there is a Superuser article on how teams can do this themselves [0]. At the PTG in Denver, the OpenStack Foundation provided bluetooth speakers for teams to help with remote participation. The consensus is that this may not be trivial for everyone and it could still be a challenge for remote participants due to things like audio quality. Due to the weather, some people at the PTG in Dublin had to participate remotely from their hotel rooms and found it challenging to take part.

[0] - http://superuser.openstack.org/articles/community-participation-remote/

Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-March/thread.html#128043

From tengqim at cn.ibm.com  Sat Mar 10 14:38:55 2018
From: tengqim at cn.ibm.com (Qiming Teng)
Date: Sat, 10 Mar 2018 22:38:55 +0800
Subject: [openstack-dev] [Senlin] Nominate changing in Senlin core team
In-Reply-To: <201803091728063531016@zte.com.cn>
References: <201803091728063531016@zte.com.cn>
Message-ID: <20180310143854.GB38701@node2>

+1 to both.

- Qiming

On Fri, Mar 09, 2018 at 05:28:06PM +0800, liu.xuefeng1 at zte.com.cn wrote:
> Hi team, I would like to propose adding chenyb and DucTruong to the Senlin core team.
>
> Chenyb has been working on OpenStack for more than 3 years, with the responsibility of integrating Nova, Senlin and Ceilometer in cloud production. He has finished many features and fixed many bugs for the Senlin project, and he is now the most active non-core contributor on the Senlin group projects.
>
> DucTruong works for Blizzard Entertainment, which is an active user of the Senlin project. Duc and his colleagues have finished some useful features for Senlin, and from these features they also got a good understanding of Senlin. Now Duc is an active code reviewer on Senlin.
> --
> Thanks,
> XueFeng

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From giuseppe.decandia at gmail.com  Sat Mar 10 20:54:52 2018
From: giuseppe.decandia at gmail.com (Pino de Candia)
Date: Sat, 10 Mar 2018 14:54:52 -0600
Subject: [openstack-dev] [tatu] Tatu devstack plugin ready
Message-ID: 

Hi Folks,

Tatu's devstack plugin is finally working (but I've only tested it on Ubuntu 16 with a Fedora 25 VM image). Try it out with this local.conf:

https://github.com/openstack/tatu/blob/master/devstack/local.conf

And then follow these (minimal) steps to set up your client's ssh certificate and known_hosts file:

https://github.com/openstack/tatu/blob/master/TRY_IT.rst

I've also:
- updated the top-level README.rst to provide a lot more details about what's happening under the hood
- added a top-level INSTALLATION.rst that explains how to do a manual install.

cheers,
Pino

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org  Sun Mar 11 11:09:35 2018
From: zigo at debian.org (Thomas Goirand)
Date: Sun, 11 Mar 2018 12:09:35 +0100
Subject: [openstack-dev] [keystone] Keystone failing with error 104 (connection reset by peer) if using uwsgi
Message-ID: <824a18a9-be00-53c2-5929-82026f973224@debian.org>

Hi,

I've attempted to switch Keystone to using uwsgi instead of Apache in the Debian packages for Queens. Unfortunately, I got random failures with error 104, both in the client output and in the keystone logs. 104 is in fact "TCP connection reset by peer" (and this shows in the logs). So I've switched back, but I'd prefer using uwsgi if possible.

Here are the parameters I had in the .ini for uwsgi:

http-socket = :35357
wsgi-file = /usr/bin/keystone-wsgi-admin
buffer-size = 65535
master = true
enable-threads = true
processes = 12
thunder-lock = true
plugins = python3
lazy-apps = true
paste-logger = true
logto = /var/log/keystone/keystone-admin.log
name = keystone-admin
uid = keystone
gid = keystone
chdir = /var/lib/keystone
die-on-term = true

Has this happened to anyone else? Is there one option above which is wrong? Why is this happening?

Cheers,

Thomas Goirand (zigo)

From aakashkt0 at gmail.com  Sun Mar 11 19:00:51 2018
From: aakashkt0 at gmail.com (Aakash Kt)
Date: Mon, 12 Mar 2018 00:30:51 +0530
Subject: [openstack-dev] [openstack][charms] Openstack + OVN
Message-ID: 

Hi,

I had previously put in a mail about the development of the openstack-ovn charm. Sorry it took me this long to get back; I was involved in other projects.

I have submitted a charm spec for the above charm. Here is the review link: https://review.openstack.org/#/c/551800/

Please look into it and we can further discuss how to proceed.

Thank you,
Aakash
OVN4NFV (OPNFV)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lbragstad at gmail.com  Sun Mar 11 19:12:34 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Sun, 11 Mar 2018 14:12:34 -0500
Subject: [openstack-dev] [keystone] Keystone failing with error 104 (connection reset by peer) if using uwsgi
In-Reply-To: <824a18a9-be00-53c2-5929-82026f973224@debian.org>
References: <824a18a9-be00-53c2-5929-82026f973224@debian.org>
Message-ID: 

Hey Thomas,

Outside of the uwsgi config, are you following a specific guide for your install? I'd like to try and recreate the issue.
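Even a crude client-side loop will usually show whether connections are being reset - a quick sketch, assuming keystone is answering plain HTTP on :35357 as in your config:

import requests

# hammer the admin endpoint; error 104 surfaces here as a ConnectionError
# ("Connection reset by peer")
for i in range(1000):
    try:
        requests.get('http://localhost:35357/v3/', timeout=5)
    except requests.ConnectionError as exc:
        print(i, exc)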
Do you happen to have any more logging information? Thanks On Mar 11, 2018 06:10, "Thomas Goirand" wrote: > Hi, > > I've attempted to switch Keystone to using uwsgi instead of Apache in > the Debian packages for Queens. Unfortunately, I had random failure with > error 104 in both output of the client and keystone logs. 104 is in fact > "TCP connection reset by peer" (and this shows in the logs). So I've > switched back, but I'd prefer using uwsgi if possible. > > Here's the parameters I had in the .ini for uwsgi: > > http-socket = :35357 > wsgi-file = /usr/bin/keystone-wsgi-admin > buffer-size = 65535 > master = true > enable-threads = true > processes = 12 > thunder-lock = true > plugins = python3 > lazy-apps = true > paste-logger = true > logto = /var/log/keystone/keystone-admin.log > name = keystone-admin > uid = keystone > gid = keystone > chdir = /var/lib/keystone > die-on-term = true > > Has this happened to anyone else? Is there one option above which is > wrong? Why is this happening? > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xuanlangjian at gmail.com Sun Mar 11 22:55:35 2018 From: xuanlangjian at gmail.com (x Lyn) Date: Mon, 12 Mar 2018 06:55:35 +0800 Subject: [openstack-dev] =?utf-8?b?W1Nlbmxpbl3CoE5vbWluYXRlwqBjaGFuZ2lu?= =?utf-8?q?g_in_Senlin_core_team?= In-Reply-To: <201803091728063531016@zte.com.cn> References: <201803091728063531016@zte.com.cn> Message-ID: +1 to both! Ethan > On 9 Mar 2018, at 5:28 PM, liu.xuefeng1 at zte.com.cn wrote: > > > > Hi team, > > I would like to propose adding chenyb and DucTruong to the Senlin core team. > > Chenyb has been working on Openstack more than 3 years, with the responsibility of intergation Nova, Senlin and Ceilometer cloud production. He has finished many features and bugs for Senlin project, now he is the most active non-core contributor on Senlin group projects. > > DucTruong works for Blizzard Entertainment, Blizzard company is an active user of Senlin project. Duc and his colleagues have finished some useful features for Senlin, from this feautres they also got a good understand about Senlin. Now Duc is a active code reviewer on Senlin. > > > > > -- > Thanks > XueFeng > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sun Mar 11 23:09:11 2018 From: zigo at debian.org (Thomas Goirand) Date: Mon, 12 Mar 2018 00:09:11 +0100 Subject: [openstack-dev] [keystone] Keystone failing with error 104 (connection reset by peer) if using uwsgi In-Reply-To: References: <824a18a9-be00-53c2-5929-82026f973224@debian.org> Message-ID: On 03/11/2018 08:12 PM, Lance Bragstad wrote: > Hey Thomas,  > > Outside of the uwsgi config, are you following a specific guide for your > install? 
Under the packages that I maintain in Debian, there's nothing more to do than "apt-get install keystone", reply to a few Debconf questions, and you get a working installation. That is to say, I don't think I did any mistake here. > I'd like to try and recreate the issue. If you wish, I can build a package for you to try, if you're ok with that. Would that be ok? Would you prefer to use Sid or Stretch? It's rather easy to do, as the revert to Apache is just a single git commit. > Do you happen to have any more logging information? That's what was really frustrating: no log at all on the server side, just the client... Cheers, Thomas Goirand (zigo) From tnakamura.openstack at gmail.com Mon Mar 12 00:54:25 2018 From: tnakamura.openstack at gmail.com (Tetsuro Nakamura) Date: Mon, 12 Mar 2018 09:54:25 +0900 Subject: [openstack-dev] [nova] [placement] placement update 18-10 In-Reply-To: References: Message-ID: # Questions > What's the status of shared resource providers? Did we even talk > about that in Dublin? In terms of bug fixes related to allocation candidates, I'll try to answer that question :) Most of the bugs that have been reported in https://bugs.launchpad.net/nova/+bug/1731072 are sorted out and already fixed in Queens. But we have some items left. * https://review.openstack.org/#/c/533396 AllocationCandidates.get_by_filters ignores shared RPs when the RC exists in both places * https://review.openstack.org/#/c/519601/ * https://review.openstack.org/#/c/533437/ AllocationCandidates.get_by_filters does not handle indirectly connected sharing RPs -> In the PTG, we discussed if we need “anchor” RPs in the response of the API, and if I get it correctly the agreement was "let’s re-open this once we face a concrete use case." I have updated the patches according to that conclusion. * https://review.openstack.org/#/c/533195/ Placement returns no allocation candidate for request that needs both compute resources and custom shared resources -> This is already fixed, and trivial comment fix is left and ready for review. * No fix proposed - https://bugs.launchpad.net/nova/+bug/1724633 AllocationCandidates.get_by_filters hits incorrectly when traits are split across the main RP and aggregates -> This is hard to fix as long as traits belong not to resource classes but to resource providers. While the current design allows a consumer to pick resource classes from multiple resource providers (in the same aggregate), we have no way to know which trait corresponds to which resource class. Besides these bugs, how we collaborate and merge existing logic of shared resource provider and now being constructed logic of nested resource provider remains one of the challenges in Rocky in my understanding. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From luckyvega.g at gmail.com Mon Mar 12 01:04:41 2018 From: luckyvega.g at gmail.com (Vega Cai) Date: Mon, 12 Mar 2018 01:04:41 +0000 Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team Message-ID: Hi team, I would like to nominate Baisen Song (songbaisen) for tricircle core reviewer. Baisen has actively joined the discussion of feature development and has contributed important patches since Queens, like resource deletion reliability and openstack-sdk new version adaption. I really think his experience will help us substantially improve tricircle. BR Zhiyuan -- BR Zhiyuan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joehuang at huawei.com Mon Mar 12 01:12:48 2018 From: joehuang at huawei.com (joehuang) Date: Mon, 12 Mar 2018 01:12:48 +0000 Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team In-Reply-To: References: Message-ID: <5E7A3D1BF5FD014E86E5F971CF446EFF565770EC@DGGEML501-MBS.china.huawei.com> +1. Baisen has contributed lots of patches in Tricircle. Best Regards Chaoyi Huang (joehuang) ________________________________ From: Vega Cai [luckyvega.g at gmail.com] Sent: 12 March 2018 9:04 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team Hi team, I would like to nominate Baisen Song (songbaisen) for tricircle core reviewer. Baisen has actively joined the discussion of feature development and has contributed important patches since Queens, like resource deletion reliability and openstack-sdk new version adaption. I really think his experience will help us substantially improve tricircle. BR Zhiyuan -- BR Zhiyuan -------------- next part -------------- An HTML attachment was scrubbed... URL: From xinni.ge1990 at gmail.com Mon Mar 12 01:54:21 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Mon, 12 Mar 2018 10:54:21 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi, Akihiro Thanks for the quick reply. I agree with your opinion that BASE_XSTATIC_MODULES should not be modified. It is much better to enhance horizon plugin settings, and I think maybe there could be one option like ADD_XSTATIC_MODULES. This option adds the plugin's xstatic files in STATICFILES_DIRS. I am considering to add a bug report to describe it at first, and give a patch later maybe. Is that ok with the Horizon team? Best Regards. Xinni On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki wrote: > Hi Xinni, > > 2018-03-09 12:05 GMT+09:00 Xinni Ge : > > Hello Horizon Team, > > > > I would like to hear about your opinions about how to add new xstatic > > modules to horizon settings. > > > > As for Heat-dashboard project embedded 3rd-party files issue, thanks for > > your advices in Dublin PTG, we are now removing them and referencing as > new > > xstatic-* libs. > > Thanks for moving this forward. > > > So we installed the new xstatic files (not uploaded as openstack official > > repos yet) in our development environment now, but hesitate to decide > how to > > add the new installed xstatic lib path to STATICFILES_DIRS in > > openstack_dashboard.settings so that the static files could be > automatically > > collected by *collectstatic* process. > > > > Currently Horizon defines BASE_XSTATIC_MODULES in > > openstack_dashboard/utils/settings.py and the relevant static fils are > added > > to STATICFILES_DIRS before it updates any Horizon plugin dashboard. > > We may want new plugin setting keywords ( something similar to > ADD_JS_FILES) > > to update horizon XSTATIC_MODULES (or directly update STATICFILES_DIRS). > > IMHO it is better to allow horizon plugins to add xstatic modules > through horizon plugin settings. I don't think it is a good idea to > add a new entry in BASE_XSTATIC_MODULES based on horizon plugin > usages. It makes difficult to track why and where a xstatic module in > BASE_XSTATIC_MODULES is used. > Multiple horizon plugins can add a same entry, so horizon code to > handle plugin settings should merge multiple entries to a single one > hopefully. > My vote is to enhance the horizon plugin settings. 
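To make the idea concrete, what I have in mind is something like the following in a plugin's enabled file. This is only a sketch - the setting name ADD_XSTATIC_MODULES, its shape, and the module names below are assumptions for illustration; nothing is implemented yet:

# e.g. openstack_dashboard/local/enabled/_16xx_heat_dashboard.py (hypothetical)
ADD_XSTATIC_MODULES = [
    ('xstatic.pkg.some_new_lib', ['some-new-lib.js']),
]

Horizon would merge these entries while processing the plugin's enabled files and append the corresponding xstatic base directories to STATICFILES_DIRS, the same way it already handles the BASE_XSTATIC_MODULES entries.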
> > Akihiro > > > > > Looking forward to hearing any suggestions from you guys, and > > Best Regards, > > > > Xinni Ge > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- 葛馨霓 Xinni Ge -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Mon Mar 12 02:06:58 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 12 Mar 2018 10:06:58 +0800 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan Message-ID: ​​Kolla core reviewer team, It is my pleasure to nominate caoyuan for kolla core team. caoyuan's output is fantastic over the last cycle. And he is the most active non-core contributor on Kolla project for last 180 days[1]. He focuses on configuration optimize and improve the pre-checks feature. Consider this nomination a +1 vote from me. A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 is a veto. Voting is open for 7 days until Mar 12th, or a unanimous response is reached or a veto vote occurs. [1] http://stackalytics.com/report/contribution/kolla-group/180 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Mon Mar 12 02:12:49 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 12 Mar 2018 10:12:49 +0800 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan In-Reply-To: References: Message-ID: sorry for a typo. The vote is open for 7 days until Mar 19th. On Mon, Mar 12, 2018 at 10:06 AM, Jeffrey Zhang wrote: > ​​Kolla core reviewer team, > > It is my pleasure to nominate caoyuan for kolla core team. > > caoyuan's output is fantastic over the last cycle. And he is the most > active non-core contributor on Kolla project for last 180 days[1]. He > focuses on configuration optimize and improve the pre-checks feature. > > Consider this nomination a +1 vote from me. > > A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 > is a veto. Voting is open for 7 days until Mar 12th, or a unanimous > response is reached or a veto vote occurs. > > [1] http://stackalytics.com/report/contribution/kolla-group/180 > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Mon Mar 12 04:40:57 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 12 Mar 2018 11:40:57 +0700 Subject: [openstack-dev] [mistral] PTG Summary In-Reply-To: References: Message-ID: On 7 Mar 2018, 16:29 +0700, Dougal Matthews , wrote: > Hey Mistralites (maybe?), > > I have been through the etherpad from the PTG and attempted to expand on the topics with details that I remember. If I have missed anything or you have any questions, please get in touch. I want to update it while the memory is as fresh as possible. > > For each main topic I have added a "champion" and a "goal". 
These are not all complete yet and can be adjusted. I did add names next to champion for people that discussed that topic at the PTG. The goal should summarise what we need to do.
>
> Note: "Champion" does not mean you need to do all the work - just that you are leading that effort and helping rally people around the issue. Essentially it is a collaboration role, but you can still lead the implementation if that makes sense. For example, I put myself as the Documentation champion. I do not plan on writing all the documentation; rather, I want to set up better foundations and a better process for writing documentation. This will likely be a team effort I need to coordinate.
>
> Etherpad:
> https://etherpad.openstack.org/p/mistral-ptg-rocky

Thanks Dougal, looks nice :)

> It was unfortunate that the "Beast from the East" (the weather, not Renat!) stopped things a bit early on Thursday.

Haha :) I probably wouldn't even be offended if you meant me ;)

Renat Akhmerov
@Nokia

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zigo at debian.org  Mon Mar 12 08:17:19 2018
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 12 Mar 2018 09:17:19 +0100
Subject: [openstack-dev] [cinder] [oslo] cinder.conf generation is broken for my_ip, building non-reproducibly
Message-ID: <54b240eb-fa3b-1f61-5022-09b3c2e92a84@debian.org>

Hi,

When inspecting Cinder's (Queens release) cinder.conf, I can see:

# Warning: Failed to format sample for my_ip
# unhashable type: 'HostAddress'

So it seems there's an issue in either Cinder or Oslo. How can I investigate and fix this? It's very likely that I'm once more the only person in the OpenStack community that is really checking config file generation (it used to be like that for past releases), and therefore the only one who noticed it.

Also, looking at the code, this seems to be yet another instance of "package cannot be built reproducibly" [1], with the build host config leaking into the configuration (well, once that's fixed...). Indeed, in the code I can read:

cfg.HostAddressOpt('my_ip',
                   default=netutils.get_my_ipv4(),
                   help='IP address of this host'),

This means that, when that's repaired, building Cinder will write something like this:

#my_ip = 1.2.3.4

With 1.2.3.4 being the value of netutils.get_my_ipv4(). This is easily fixed by adding something like this: sample_default=''

I'm writing this here for Cinder, but there have been numerous cases like this already, the most common mistake being the hostname of the build host leaking into the configuration. While this is easily fixed at the packaging level by fixing the config file after generating it with oslo.config, often that config file is also built with the sphinx doc, and then that file isn't built reproducibly. That's harder to detect, and more easily fixed upstream.

Cheers,

Thomas Goirand (zigo)

[1] https://reproducible-builds.org/
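Concretely, the suggestion amounts to a one-line addition to the option definition - a sketch (the placeholder text is an assumption; any literal that doesn't leak the build host would do):

from oslo_config import cfg
from oslo_utils import netutils

# sample_default is written into the generated sample config verbatim,
# instead of evaluating get_my_ipv4() on the build host
cfg.HostAddressOpt('my_ip',
                   default=netutils.get_my_ipv4(),
                   sample_default='<host_ipv4>',
                   help='IP address of this host'),

With sample_default set, the oslo.config sample generator emits the literal placeholder into cinder.conf, so the generated file no longer depends on the machine it was built on.

From zhipengh512 at gmail.com  Mon Mar 12 08:27:49 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Mon, 12 Mar 2018 16:27:49 +0800
Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.03.14 Agenda (No Time Change For US)
Message-ID: 

Hi Team,

We will resume the team meeting this week. The meeting starting time is still ET 10:00am/PT 7:00am, whereas in China it is moved one hour early to 10:00pm. For Europe please refer to UTC1400 as the baseline.

This week we will have a special 2 hour meeting.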
In the first hour we will have Shaohe demo the PoC the Intel dev team conducted, and in the second half we will confirm the tasks and milestones for Rocky based upon the PTG discussion (summary sent out last Friday). A ZOOM link will be provided before the meeting :)

If there are any other topics anyone would like to propose, feel free to reply to this email thread.

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From james.page at ubuntu.com Mon Mar 12 09:08:02 2018
From: james.page at ubuntu.com (James Page)
Date: Mon, 12 Mar 2018 09:08:02 +0000
Subject: [openstack-dev] [openstack][charms] Openstack + OVN
In-Reply-To: References: Message-ID:

Hi Aakash

On Sun, 11 Mar 2018 at 19:01 Aakash Kt wrote:
> Hi,
>
> I had previously put in a mail about the development of the openstack-ovn charm. Sorry it took me this long to get back, I was involved in other projects.
>
> I have submitted a charm spec for the above charm.
> Here is the review link: https://review.openstack.org/#/c/551800/
>
> Please look into it and we can further discuss how to proceed.

I'll give feedback directly on the review.

Thanks!

James
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcin.juszkiewicz at linaro.org Mon Mar 12 09:09:19 2018
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Mon, 12 Mar 2018 10:09:19 +0100
Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan
In-Reply-To: References: Message-ID: <3350cdb8-36b2-b0d1-6fe8-8191ba3e3d14@linaro.org>

On 12.03.2018 at 03:06, Jeffrey Zhang wrote:
> It is my pleasure to nominate caoyuan for kolla core team.

+1

From gael.therond at gmail.com Mon Mar 12 09:11:09 2018
From: gael.therond at gmail.com (Flint WALRUS)
Date: Mon, 12 Mar 2018 09:11:09 +0000
Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan
In-Reply-To: <3350cdb8-36b2-b0d1-6fe8-8191ba3e3d14@linaro.org> References: <3350cdb8-36b2-b0d1-6fe8-8191ba3e3d14@linaro.org> Message-ID:

+1

On Mon, 12 Mar 2018 at 10:09, Marcin Juszkiewicz <marcin.juszkiewicz at linaro.org> wrote:

> On 12.03.2018 at 03:06, Jeffrey Zhang wrote:
> > It is my pleasure to nominate caoyuan for kolla core team.
>
> +1
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From berendt at betacloud-solutions.de Mon Mar 12 09:39:23 2018
From: berendt at betacloud-solutions.de (Christian Berendt)
Date: Mon, 12 Mar 2018 10:39:23 +0100
Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan
In-Reply-To: References: Message-ID: <99E24501-9521-4F77-A83D-44C35A172731@betacloud-solutions.de>

+1

> On 12. Mar 2018, at 03:06, Jeffrey Zhang wrote:
>
> It is my pleasure to nominate caoyuan for kolla core team.
--
Christian Berendt
Chief Executive Officer (CEO)
Mail: berendt at betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139

From openstack at sheep.art.pl Mon Mar 12 11:00:26 2018
From: openstack at sheep.art.pl (Radomir Dopieralski)
Date: Mon, 12 Mar 2018 12:00:26 +0100
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To: References: Message-ID:

Yes, please do that. We can then discuss the technical details in the review.

On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge wrote:
> Hi, Akihiro
>
> Thanks for the quick reply.
>
> I agree with your opinion that BASE_XSTATIC_MODULES should not be modified.
> It is much better to enhance the horizon plugin settings, and I think maybe there could be one option like ADD_XSTATIC_MODULES. This option would add the plugin's xstatic files to STATICFILES_DIRS.
> I am considering adding a bug report to describe it first, and maybe proposing a patch later.
> Is that ok with the Horizon team?
>
> Best Regards.
> Xinni
>
> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki wrote:
>> Hi Xinni,
>>
>> 2018-03-09 12:05 GMT+09:00 Xinni Ge:
>> > Hello Horizon Team,
>> >
>> > I would like to hear your opinions about how to add new xstatic modules to the horizon settings.
>> >
>> > As for the Heat-dashboard project's embedded 3rd-party files issue, thanks for your advice at the Dublin PTG, we are now removing them and referencing them as new xstatic-* libs.
>>
>> Thanks for moving this forward.
>>
>> > So we have installed the new xstatic files (not uploaded as openstack official repos yet) in our development environment now, but hesitate to decide how to add the newly installed xstatic lib path to STATICFILES_DIRS in openstack_dashboard.settings so that the static files can be automatically collected by the *collectstatic* process.
>> >
>> > Currently Horizon defines BASE_XSTATIC_MODULES in openstack_dashboard/utils/settings.py and the relevant static files are added to STATICFILES_DIRS before it updates any Horizon plugin dashboard.
>> > We may want new plugin setting keywords (something similar to ADD_JS_FILES) to update horizon's XSTATIC_MODULES (or directly update STATICFILES_DIRS).
>>
>> IMHO it is better to allow horizon plugins to add xstatic modules through the horizon plugin settings. I don't think it is a good idea to add a new entry in BASE_XSTATIC_MODULES based on horizon plugin usage. It makes it difficult to track why and where an xstatic module in BASE_XSTATIC_MODULES is used.
>> Multiple horizon plugins can add the same entry, so the horizon code that handles plugin settings should hopefully merge multiple entries into a single one.
>> My vote is to enhance the horizon plugin settings.
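For the review, the plugin-side piece could look roughly like the sketch below. Note that ADD_XSTATIC_MODULES is only the name proposed in this thread (nothing is merged yet), and the tuple format here just mirrors how BASE_XSTATIC_MODULES lists its entries, so treat both as assumptions to be settled in the review:

    # e.g. heat_dashboard/enabled/_9999_xstatic.py -- illustrative path
    # Proposed (not yet existing) plugin setting: xstatic packages this
    # plugin needs collected into STATICFILES_DIRS by collectstatic.
    ADD_XSTATIC_MODULES = [
        ('xstatic.pkg.angular_material', ['angular-material.js']),
    ]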
>>
>> Akihiro
>>
>> > Looking forward to hearing any suggestions from you guys, and
>> > Best Regards,
>> >
>> > Xinni Ge
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> 葛馨霓 Xinni Ge
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kyle.oh95 at gmail.com Mon Mar 12 11:11:16 2018
From: kyle.oh95 at gmail.com (Jaewook Oh)
Date: Mon, 12 Mar 2018 20:11:16 +0900
Subject: [openstack-dev] [horizon] [devstack] horizon 'network create' panel does not distinguished
Message-ID:

Hello, this is Jaewook from Korea.

Today I reinstalled devstack, but a weird dashboard was displayed. The dashboard shows all panels at once. Please look at the image.

For example, the Create Network panel shows 'Network', 'Subnet', 'Subnet Details'. *But all menus are shown in the Network tab, not distinguished at all. And when I click 'Subnet' or 'Subnet Details', nothing happens.*

And also when I click a dropdown menu such as 'Select a project', it shows the projects, but I cannot select one. *Even though I clicked it, it still shows 'Select a project'.*

The OpenStack version is 3.14.0 and the Queens release. I installed it with the devstack master version.

What I suspect is *'heat-dashboard'*. Before I added 'enable_plugin ~~ heat-dashboard', this didn't happen. But after adding it, this error happened.

I have no idea what to do but reinstall it. Is this error an already known issue?

I would very much appreciate it if somebody could help me..

Best Regards,
Jaewook.
================================================
*Jaewook Oh* (오재욱)
IISTRC - Internet Infra System Technology Research Center
369 Sangdo-ro, Dongjak-gu,
06978, Seoul, Republic of Korea
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: horizon_error.png
Type: image/png
Size: 168494 bytes
Desc: not available
URL:

From ekuvaja at redhat.com Mon Mar 12 11:29:03 2018
From: ekuvaja at redhat.com (Erno Kuvaja)
Date: Mon, 12 Mar 2018 11:29:03 +0000
Subject: [openstack-dev] [glance] Priorities for WC 12th of March
Message-ID:

Hi all,

Queens is released, we had a good and productive PTG, so let's get the ball rolling and start working through our Rocky priorities.

This week I'd like the focus to be, as discussed in our weekly meeting on Thursday, on python-glanceclient support for the 'web-download' import method and any specs that still need review from the PTG topic list [0]. You can review the topic etherpads for your convenience.
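For anyone picking up the client work, the server-side interaction the new option has to drive is small. A rough sketch with plain requests (the endpoint, token and image URI are placeholders; the final client UX is exactly what still needs the review):

    import requests

    GLANCE = 'http://controller:9292'  # placeholder endpoint
    HEADERS = {'X-Auth-Token': '<keystone-token>'}  # placeholder token

    # Create an empty image record first.
    image = requests.post(GLANCE + '/v2/images', headers=HEADERS,
                          json={'name': 'cirros', 'disk_format': 'qcow2',
                                'container_format': 'bare'}).json()

    # Then trigger the interoperable import with the web-download
    # method, pointing glance-api at the URI to fetch the data from.
    requests.post(GLANCE + '/v2/images/%s/import' % image['id'],
                  headers=HEADERS,
                  json={'method': {'name': 'web-download',
                                   'uri': 'https://example.com/cirros.img'}})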
Let's get those nailed down so the people committed to those features can start working on the implementation.

Another announcement for Glance (Core) reviewers so that you are aware. We've had an unenforced policy in Glance (unenforced to keep other changes to the spec repo from depending on one person) that only the PTL +W's any specs, so we can ensure that all our cores and invested parties are aware and on board. This is clearly something that was forgotten and that we failed to communicate to our new cores, so we did change the repo ACL accordingly [1].

[0] https://etherpad.openstack.org/p/glance-rocky-ptg
[1] https://review.openstack.org/#/c/551268/

I wish you all a great and productive week!

Best,
Erno jokke Kuvaja

From Tim.Bell at cern.ch Mon Mar 12 11:33:45 2018
From: Tim.Bell at cern.ch (Tim Bell)
Date: Mon, 12 Mar 2018 11:33:45 +0000
Subject: [openstack-dev] [ironic] PTG Summary
In-Reply-To: References: Message-ID: <5FA9BB80-FE73-49F3-BCCF-C1E43CD4BDE3@cern.ch>

Julia,

A basic summary of how CERN does burn-in is at http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html

Given that the burn-in takes weeks to run, we'd see it as a different step to cleaning (with some parts in common such as firmware upgrades to latest levels)

Tim

-----Original Message-----
From: Julia Kreger
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 8 March 2018 at 22:10
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [ironic] PTG Summary

...
Cleaning - Burn-in

As part of discussing cleaning changes, we discussed supporting a "burn-in" mode where hardware could be left to run load, memory, or other tests for a period of time. We did not have consensus on a generic solution, other than that this should likely involve clean-steps that we already have, and maybe another entry point into cleaning. Since we didn't really have consensus on use cases, we decided the logical thing was to write them down, and then go from there.

Action Items:
* Community members to document varying burn-in use cases for hardware, as they may vary based upon industry.
* Community to try and come up with a couple example clean-steps.

From dtantsur at redhat.com Mon Mar 12 11:45:43 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 12 Mar 2018 12:45:43 +0100
Subject: [openstack-dev] [ironic] PTG Summary
In-Reply-To: <5FA9BB80-FE73-49F3-BCCF-C1E43CD4BDE3@cern.ch> References: <5FA9BB80-FE73-49F3-BCCF-C1E43CD4BDE3@cern.ch> Message-ID:

Hi Tim,

Thanks for the information.

I personally don't see problems with cleaning running weeks, when needed. What I'd avoid is replicating the same cleaning machinery but with a different name. I think we should try to make cleaning work for this case instead.

Dmitry

On 03/12/2018 12:33 PM, Tim Bell wrote:
> Julia,
>
> A basic summary of how CERN does burn-in is at http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html
>
> Given that the burn-in takes weeks to run, we'd see it as a different step to cleaning (with some parts in common such as firmware upgrades to latest levels)
>
> Tim
>
> -----Original Message-----
> From: Julia Kreger
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Thursday, 8 March 2018 at 22:10
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: [openstack-dev] [ironic] PTG Summary
>
> ...
> Cleaning - Burn-in > > As part of discussing cleaning changes, we discussed supporting a > "burn-in" mode where hardware could be left to run load, memory, or > other tests for a period of time. We did not have consensus on a > generic solution, other than that this should likely involve > clean-steps that we already have, and maybe another entry point into > cleaning. Since we didn't really have consensus on use cases, we > decided the logical thing was to write them down, and then go from > there. > > Action Items: > * Community members to document varying burn-in use cases for > hardware, as they may vary based upon industry. > * Community to try and come up with a couple example clean-steps. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From Tim.Bell at cern.ch Mon Mar 12 12:00:19 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Mon, 12 Mar 2018 12:00:19 +0000 Subject: [openstack-dev] [ironic] PTG Summary In-Reply-To: References: <5FA9BB80-FE73-49F3-BCCF-C1E43CD4BDE3@cern.ch> Message-ID: <58567272-D678-4129-A416-7F96C36792A2@cern.ch> My worry with re-running the burn-in every time we do cleaning is for resource utilisation. When the machines are running the burn-in, they're not doing useful physics so I would want to minimise the number of times this is run over the life time of a machine. It may be possible to do something like the burn in with a dedicated set of steps but still use the cleaning state machine. Having a cleaning step set (i.e. burn-in means cpuburn,memtest,badblocks,benchmark) would make it more friendly for the administrator. Similarly, retirement could be done with additional steps such as reset2factory. Tim -----Original Message----- From: Dmitry Tantsur Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, 12 March 2018 at 12:47 To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [ironic] PTG Summary Hi Tim, Thanks for the information. I personally don't see problems with cleaning running weeks, when needed. What I'd avoid is replicating the same cleaning machinery but with a different name. I think we should try to make cleaning work for this case instead. Dmitry On 03/12/2018 12:33 PM, Tim Bell wrote: > Julia, > > A basic summary of CERN does burn-in is at http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html > > Given that the burn in takes weeks to run, we'd see it as a different step to cleaning (with some parts in common such as firmware upgrades to latest levels) > > Tim > > -----Original Message----- > From: Julia Kreger > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 8 March 2018 at 22:10 > To: "OpenStack Development Mailing List (not for usage questions)" > Subject: [openstack-dev] [ironic] PTG Summary > > ... > Cleaning - Burn-in > > As part of discussing cleaning changes, we discussed supporting a > "burn-in" mode where hardware could be left to run load, memory, or > other tests for a period of time. We did not have consensus on a > generic solution, other than that this should likely involve > clean-steps that we already have, and maybe another entry point into > cleaning. 
Since we didn't really have consensus on use cases, we > decided the logical thing was to write them down, and then go from > there. > > Action Items: > * Community members to document varying burn-in use cases for > hardware, as they may vary based upon industry. > * Community to try and come up with a couple example clean-steps. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dtantsur at redhat.com Mon Mar 12 12:08:57 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 12 Mar 2018 13:08:57 +0100 Subject: [openstack-dev] [ironic] PTG Summary In-Reply-To: <58567272-D678-4129-A416-7F96C36792A2@cern.ch> References: <5FA9BB80-FE73-49F3-BCCF-C1E43CD4BDE3@cern.ch> <58567272-D678-4129-A416-7F96C36792A2@cern.ch> Message-ID: Inline. On 03/12/2018 01:00 PM, Tim Bell wrote: > My worry with re-running the burn-in every time we do cleaning is for resource utilisation. When the machines are running the burn-in, they're not doing useful physics so I would want to minimise the number of times this is run over the life time of a machine. You only have to run it every time if you put the step into automated cleaning. However, we also have manual cleaning, which is run explicitly. > > It may be possible to do something like the burn in with a dedicated set of steps but still use the cleaning state machine. Yep, this is what manual cleaning is about: an operator explicitly requests it with a given set of steps. See https://docs.openstack.org/ironic/latest/admin/cleaning.html#manual-cleaning > > Having a cleaning step set (i.e. burn-in means cpuburn,memtest,badblocks,benchmark) would make it more friendly for the administrator. Similarly, retirement could be done with additional steps such as reset2factory. ++ We may even add a reference set of clean steps to IPA, but we'll need your help implementing them. I am personally not familiar with how to do burn-in right (though IIRC Julia is). > > Tim > > -----Original Message----- > From: Dmitry Tantsur > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Monday, 12 March 2018 at 12:47 > To: "openstack-dev at lists.openstack.org" > Subject: Re: [openstack-dev] [ironic] PTG Summary > > Hi Tim, > > Thanks for the information. > > I personally don't see problems with cleaning running weeks, when needed. What > I'd avoid is replicating the same cleaning machinery but with a different name. > I think we should try to make cleaning work for this case instead. 
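For completeness, kicking off such a run through manual cleaning is just a provision state call with explicit steps. A rough python-ironicclient sketch (the credentials are placeholders, and 'burnin_cpu' is a made-up step name standing in for whatever a custom/vendor hardware manager would expose):

    from ironicclient import client

    # Placeholder token/endpoint; a keystoneauth session works as well.
    ironic = client.get_client(1, os_auth_token='<token>',
                               ironic_url='http://controller:6385')

    # The node has to be in the manageable state; only the steps
    # listed here are run, in order.
    ironic.node.set_provision_state(
        '<node-uuid>', 'clean',
        cleansteps=[{'interface': 'deploy', 'step': 'burnin_cpu',
                     'args': {'hours': 72}}])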
> > Dmitry > > On 03/12/2018 12:33 PM, Tim Bell wrote: > > Julia, > > > > A basic summary of CERN does burn-in is at http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html > > > > Given that the burn in takes weeks to run, we'd see it as a different step to cleaning (with some parts in common such as firmware upgrades to latest levels) > > > > Tim > > > > -----Original Message----- > > From: Julia Kreger > > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > Date: Thursday, 8 March 2018 at 22:10 > > To: "OpenStack Development Mailing List (not for usage questions)" > > Subject: [openstack-dev] [ironic] PTG Summary > > > > ... > > Cleaning - Burn-in > > > > As part of discussing cleaning changes, we discussed supporting a > > "burn-in" mode where hardware could be left to run load, memory, or > > other tests for a period of time. We did not have consensus on a > > generic solution, other than that this should likely involve > > clean-steps that we already have, and maybe another entry point into > > cleaning. Since we didn't really have consensus on use cases, we > > decided the logical thing was to write them down, and then go from > > there. > > > > Action Items: > > * Community members to document varying burn-in use cases for > > hardware, as they may vary based upon industry. > > * Community to try and come up with a couple example clean-steps. > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From paul.bourke at oracle.com Mon Mar 12 12:34:28 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Mon, 12 Mar 2018 12:34:28 +0000 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan In-Reply-To: References: Message-ID: +1 On 12/03/18 02:06, Jeffrey Zhang wrote: > ​​Kolla core reviewer team, > > It is my pleasure to nominate caoyuan for kolla core team. > > caoyuan's output is fantastic over the last cycle. And he is the most > active non-core contributor on Kolla project for last 180 days[1]. He > focuses on configuration optimize and improve the pre-checks feature. > > Consider this nomination a +1 vote from me. > > A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 > is a veto. Voting is open for 7 days until Mar 12th, or a unanimous > response is reached or a veto vote occurs. 
> > [1] http://stackalytics.com/report/contribution/kolla-group/180 > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at sheep.art.pl Mon Mar 12 12:48:03 2018 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Mon, 12 Mar 2018 13:48:03 +0100 Subject: [openstack-dev] [horizon] [devstack] horizon 'network create' panel does not distinguished In-Reply-To: References: Message-ID: Do you get any errors in the JavaScript console or in the network tab of the inspector? On Mon, Mar 12, 2018 at 12:11 PM, Jaewook Oh wrote: > Hello, this is Jaewook from Korea. > > Today I reinstalled devstack, but something weird dashboard was displayed. > > Dashboard shows panels everything. > > Please looking at the image. > > > > > For example, Create Network panel shows 'Network', 'Subnet', 'Subnet > Details'. > > *But every menus are in Network tab, no distinguished at all. And when I > click the 'Subnet' or 'Subnet Details', nothing happen.* > > And also when I click the dropdown menu such as 'Select a project', it > shows the projects, but I cannot not select it. *Even though I clicked > it, it still shows 'Select a project'.* > > The OpenStack version is 3.14.0 and Queens release. > I installed it with devstack master version. > > What I suspect is* 'heat-dashboard'.* > Before I add 'enable plugin ~~ heat-dashboard', it didn't happened. > But after adding it, this error happened. > > I have no idea but to reinstall it. > > Is this error already known issue? > > I would very appreciate if somebody help me.. > > Best Regards, > Jaewook. > ================================================ > *Jaewook Oh* (오재욱) > IISTRC - Internet Infra System Technology Research Center > 369 Sangdo-ro, Dongjak-gu, > 06978, Seoul, Republic of Korea > ​ > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: horizon_error.png Type: image/png Size: 168494 bytes Desc: not available URL: From doug at doughellmann.com Mon Mar 12 12:54:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 12 Mar 2018 08:54:55 -0400 Subject: [openstack-dev] [cinder] [oslo] cinder.conf generation is broken for my_ip, building non-reproducibly In-Reply-To: <54b240eb-fa3b-1f61-5022-09b3c2e92a84@debian.org> References: <54b240eb-fa3b-1f61-5022-09b3c2e92a84@debian.org> Message-ID: <1520858854-sup-4309@lrrr.local> Excerpts from Thomas Goirand's message of 2018-03-12 09:17:19 +0100: > Hi, > > When inspecting Cinder's (Queens release) cinder.conf, I can see: > > # Warning: Failed to format sample for my_ip > # unhashable type: 'HostAddress' This part sounds like it might be a bug in oslo.config, which does define a HostAddress class, but that class appears to have the needed method. Please file a bug. > > So it seems there's an issue in either Cinder or Oslo. How can I > investigate and fix this? 
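For the my_ip case specifically, the usual way to stop the build host from leaking into the sample is to give the option an explicit sample_default, so the generator never renders the computed value. Roughly, against the snippet you quoted (a sketch only; the placeholder text is a matter of taste):

    from oslo_config import cfg
    from oslo_utils import netutils

    opts = [
        cfg.HostAddressOpt('my_ip',
                           # still resolved at runtime
                           default=netutils.get_my_ipv4(),
                           # what the generated sample file shows
                           sample_default='<host_ip_address>',
                           help='IP address of this host'),
    ]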
> > It's very likely that I'm once more the only person in the OpenStack > community that is really checking config file generation (it used to be > like that for past releases), and therefore the only one who noticed it. > > Also, looking at the code, this seems to be yet-another-instance of > "package cannot be built reproducible" [1] with the build host config > leaking in the configuration (well, once that's fixed...). Indeed, in > the code I can read: > > cfg.HostAddressOpt('my_ip', > default=netutils.get_my_ipv4(), > help='IP address of this host'), > > This means that, when that's repaired, build Cinder will write something > like this: > > #my_ip = 1.2.3.4 > > With 1.2.3.4 being the value of netutils.get_my_ipv4(). This is easily > fixed by adding something like this: > > sample-default='' > > I'm writing this here for Cinder, but there's been numerous cases like > this already. The most common mistake being the hostname of the build > host leaking in the configuration. While this is easily fixed at the > packaging level fixing the config file after generating it with > oslo.config, often that config file is also built with the sphinx doc, > and then that file isn't built reproducibly. That's harder to detect, > and easier fixed upstream. These sorts of issues are bugs in the consumers of oslo.config, and should be filed there, since the library can't choose reasonable default values. Doug > > Cheers, > > Thomas Goirand (zigo) > > [1] https://reproducible-builds.org/ > From kyle.oh95 at gmail.com Mon Mar 12 12:55:07 2018 From: kyle.oh95 at gmail.com (Jaewook Oh) Date: Mon, 12 Mar 2018 21:55:07 +0900 Subject: [openstack-dev] [horizon] [devstack] horizon 'network create' panel does not distinguished In-Reply-To: References: Message-ID: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com> Thanks for feedback! As you said, I got errors in the JavaScript console. Below is the error log : 3bf910c7ae4c.js:652 JQMIGRATE: Logging is active fddd6f634ef8.js:2299 Uncaught TypeError: Cannot read property 'layout' of undefined at Object.25../arrows (fddd6f634ef8.js:2299) at s (fddd6f634ef8.js:2252) at fddd6f634ef8.js:2252 at Object.1../lib/dagre (fddd6f634ef8.js:2252) at s (fddd6f634ef8.js:2252) at e (fddd6f634ef8.js:2252) at fddd6f634ef8.js:2252 at fddd6f634ef8.js:2252 at fddd6f634ef8.js:2252 25../arrows @ fddd6f634ef8.js:2299 s @ fddd6f634ef8.js:2252 (anonymous) @ fddd6f634ef8.js:2252 1../lib/dagre @ fddd6f634ef8.js:2252 s @ fddd6f634ef8.js:2252 e @ fddd6f634ef8.js:2252 (anonymous) @ fddd6f634ef8.js:2252 (anonymous) @ fddd6f634ef8.js:2252 (anonymous) @ fddd6f634ef8.js:2252 3bf910c7ae4c.js:699 Uncaught Error: [$injector:modulerr] Failed to instantiate module horizon.app due to: Error: [$injector:modulerr] Failed to instantiate module horizon.dashboard.project.heat_dashboard.template_generator due to: Error: [$injector:nomod] Module 'horizon.dashboard.project.heat_dashboard.template_generator' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument. 
http://errors.angularjs.org/1.5.8/$injector/nomod?p0=horizon.dashboard.project.heat_dashboard.template_generator
    at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
    at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
    at createInjector (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:913:464)
    at doBootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:792:36)
    at bootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:793:58)

[The same "Module 'horizon.dashboard.project.heat_dashboard.template_generator' is not available" error and its stack trace repeat several more times, partly URL-encoded; trimmed here for readability.]

I don't know exactly which error I should search for..

Best Regards,
Jaewook.

> On 12 Mar 2018, at 9:48 PM, Radomir Dopieralski wrote:
>
> Do you get any errors in the JavaScript console or in the network tab of the inspector?
>
> On Mon, Mar 12, 2018 at 12:11 PM, Jaewook Oh > wrote:
> Hello, this is Jaewook from Korea.
> > Today I reinstalled devstack, but something weird dashboard was displayed. > > Dashboard shows panels everything. > > Please looking at the image. > > > > > For example, Create Network panel shows 'Network', 'Subnet', 'Subnet Details'. > > But every menus are in Network tab, no distinguished at all. And when I click the 'Subnet' or 'Subnet Details', nothing happen. > > And also when I click the dropdown menu such as 'Select a project', it shows the projects, but I cannot not select it. Even though I clicked it, it still shows 'Select a project'. > > The OpenStack version is 3.14.0 and Queens release. > I installed it with devstack master version. > > What I suspect is 'heat-dashboard'. > Before I add 'enable plugin ~~ heat-dashboard', it didn't happened. > But after adding it, this error happened. > > I have no idea but to reinstall it. > > Is this error already known issue? > > I would very appreciate if somebody help me.. > > Best Regards, > Jaewook. > ================================================ > Jaewook Oh (오재욱) > IISTRC - Internet Infra System Technology Research Center > 369 Sangdo-ro, Dongjak-gu, > 06978, Seoul, Republic of Korea > ​ > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Mon Mar 12 13:55:47 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 12 Mar 2018 08:55:47 -0500 Subject: [openstack-dev] [keystone] Keystone failing with error 104 (connection reset by peer) if using uwsgi In-Reply-To: References: <824a18a9-be00-53c2-5929-82026f973224@debian.org> Message-ID: On 03/11/2018 06:09 PM, Thomas Goirand wrote: > On 03/11/2018 08:12 PM, Lance Bragstad wrote: >> Hey Thomas,  >> >> Outside of the uwsgi config, are you following a specific guide for your >> install? > Under the packages that I maintain in Debian, there's nothing more to do > than "apt-get install keystone", reply to a few Debconf questions, and > you get a working installation. That is to say, I don't think I did any > mistake here. Yeah, that's kind of what I figured, but thought I should ask in the event there was anything suspect in our installation guide. > >> I'd like to try and recreate the issue. > If you wish, I can build a package for you to try, if you're ok with > that. Would that be ok? Would you prefer to use Sid or Stretch? It's > rather easy to do, as the revert to Apache is just a single git commit. If you have a package for Stretch, that'd be great. > >> Do you happen to have any more logging information? > That's what was really frustrating: no log at all on the server side, > just the client... > > Cheers, > > Thomas Goirand (zigo) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From balazs.gibizer at ericsson.com Mon Mar 12 13:57:23 2018
From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer)
Date: Mon, 12 Mar 2018 14:57:23 +0100
Subject: [openstack-dev] [nova] Notification update week 11
Message-ID: <1520863043.5767.1@smtp.office365.com>

Hi,

Here is the status update / focus settings mail for w11.

Bugs
----
No new bugs, and no changes in the existing bugs since last week's report http://lists.openstack.org/pipermail/openstack-dev/2018-March/127992.html

Versioned notification transformation
-------------------------------------
We already have some patches proposed to the rocky bp. Let's go and review them.
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open

Introduce instance.lock and instance.unlock notifications
---------------------------------------------------------
The bp https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances is approved. An implementation patch exists but still needs work https://review.openstack.org/#/c/526251/

Add the user id and project id of the user who initiated the instance action to the notification
-----------------------------------------------------------------
The bp https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications is approved. An implementation patch exists but still needs work https://review.openstack.org/#/c/536243/

Add request_id to the InstanceAction versioned notifications
------------------------------------------------------------
The bp https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications is approved and assigned to Kevin_Zheng. A patch has been proposed https://review.openstack.org/#/c/551982/ and needs review.

Sending full traceback in versioned notifications
-------------------------------------------------
Based on a short investigation it seems that it was a conscious decision not to include the full traceback. See details in the ML post http://lists.openstack.org/pipermail/openstack-dev/2018-March/128105.html I will file a specless bp to add the full traceback if nobody objects in the ML thread.

Add versioned notifications for removing a member from a server group
---------------------------------------------------------------------
The specless bp https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications is proposed and it looks good to me.

Factor out duplicated notification samples
-----------------------------------------
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches, but I would like to progress with this through the Rocky cycle.

Weekly meeting
--------------
The next meeting will be held on the 13th of March on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180313T170000

Cheers,
gibi

From andrea.frittoli at gmail.com Mon Mar 12 14:09:00 2018
From: andrea.frittoli at gmail.com (Andrea Frittoli)
Date: Mon, 12 Mar 2018 14:09:00 +0000
Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native
In-Reply-To: References: Message-ID:

Dear all,

post-PTG updates:

- the devstack patches for multinode support are now merged on master. You can now build your multinode zuulv3 native devstack/tempest test jobs using the same base jobs as for single node, and setting a multinode nodeset.
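As a taste of what that looks like in a project's .zuul.yaml, a minimal sketch (the job name and node labels are illustrative; the 'subnode' group follows what the devstack multinode roles expect, so double check it against the role docs linked below):

    - job:
        name: my-project-tempest-multinode
        parent: tempest-full
        nodeset:
          nodes:
            - name: controller
              label: ubuntu-xenial
            - name: compute1
              label: ubuntu-xenial
          groups:
            - name: subnode
              nodes:
                - compute1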
Documentation landed as well, so you can now find docs on roles [0], jobs [1] and a migration guide [2] which will show you which base jobs to start with and how to migrate those devstack-gate flags from legacy jobs to the zuul v3 jobs.

- the multinode patches merged, including the switch of test-matrix (on master), and the base jobs now start to include the list of devstack services. In doing so I used the new neutron service names. That may be causing issues for devstack plugins looking for the old service names, so if you encounter an issue please reach out in the openstack-qa / openstack-infra rooms. We could still roll back to the old names; however, the beginning of the cycle is probably the best time to sort out issues related to the new names and the new logic in the neutron - devstack code.

Coming up next:

- backport of the devstack patches to stable (queens and pike), so we can switch the Tempest job to devstack multinode mode and develop grenade zuulv3 native jobs. I do not plan on backporting the new neutron names to any stable branch; let me know if there is any reason to do otherwise.

- work on grenade is at a very early stage [3]; so far I got devstack running successfully on stable/queens from the /opt/stack/old folder using the zuulv3 roles. Next up is actually doing the migration and running all relevant checks.

Andrea Frittoli (andreaf)

[0] https://docs.openstack.org/devstack/latest/zuul_roles.html
[1] https://docs.openstack.org/devstack/latest/zuul_jobs.html
[2] https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html
[3] https://review.openstack.org/#/q/status:open+branch:master+topic:grenade_zuulv3

On Tue, Feb 20, 2018 at 9:22 PM Andrea Frittoli wrote:
> Dear all,
>
> updates:
>
> - host/group vars: zuul now supports declaring host and group vars in the job definition [0][1] - thanks corvus and infra team! This is a great help towards writing the devstack and tempest base multinode jobs [2][3]
> * NOTE: zuul merges dict variables through job inheritance. Variables in host/group_vars override global ones. I will write some examples to further clarify this.
>
> - stable/pike: devstack ansible changes have been backported to stable/pike, so we can now run zuulv3 jobs against stable/pike too - thank you tosky! The next change in progress related to pike is to provide tempest-full-pike for branchless repositories [4]
>
> - documentation: devstack now publishes documentation on its ansible roles [5]. More devstack documentation patches are in progress to provide a jobs reference, examples and a job migration how-to [6].
>
> Andrea Frittoli (andreaf)
>
> [0] https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host_vars
> [1] https://docs.openstack.org/infra/zuul/user/config.html#attr-job.group_vars
> [2] https://review.openstack.org/#/c/545696/
> [3] https://review.openstack.org/#/c/545724/
> [4] https://review.openstack.org/#/c/546196/
> [5] https://docs.openstack.org/devstack/latest/roles.html
> [6] https://review.openstack.org/#/c/545992/
>
> On Mon, Feb 19, 2018 at 2:46 PM Andrea Frittoli wrote:
>> Dear all,
>>
>> updates:
>> - tempest-full-queens and tempest-full-py3-queens are now available for testing of branchless repositories [0]. They are used for tempest and devstack-gate. If you own a tempest plugin in a branchless repo, you may consider adding similar jobs to your plugin if you use it for tests on stable/queens as well.
>> - if you have migrated jobs based on devstack-tempest please let me know;
>> I'm building reference docs and I'd like to include as many examples as
>> possible
>> - work on multi-node is in progress, but not ready yet - you can follow
>> the patches in the multinode branch [1]
>> - updates on some of the points from my previous email are inline below
>>
>> Andrea Frittoli (andreaf)
>>
>> [0] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n73
>> [1] https://review.openstack.org/#/q/status:open++branch:master+topic:multinode
>>
>> On Thu, Feb 15, 2018 at 11:31 PM Andrea Frittoli <andrea.frittoli at gmail.com> wrote:
>>>
>>> Dear all,
>>>
>>> this is the first of a series of ~regular updates on the migration of
>>> Tempest / Grenade jobs to Zuul v3 native.
>>>
>>> The QA team together with the infra team are working on providing the
>>> OpenStack community with a set of base Tempest / Grenade jobs that can be
>>> used as a basis to write new CI jobs / migrate existing legacy ones with
>>> minimal effort and very little or no Ansible knowledge as a precondition.
>>>
>>> The effort is tracked in an etherpad [0]; I'm trying to keep the
>>> etherpad up to date but it may not always be a source of truth.
>>>
>>> Useful jobs available so far:
>>> - devstack-tempest [0] is a simple tempest/devstack job that runs
>>> keystone, glance, nova, cinder, neutron, swift and tempest with the *smoke* filter
>>> - tempest-full [1] is similar but runs a full test run - it replaces the
>>> legacy tempest-dsvm-neutron-full from the integrated gate
>>> - tempest-full-py3 [2] runs a full test run on python3 - it replaces the
>>> legacy tempest-dsvm-py35
>>>
>> Some more details on this topic: what I did not mention in my previous
>> email is that the autogenerated Tempest / Grenade CI jobs (legacy-*
>> playbooks) are not meant to be used as a basis for Zuul v3 native jobs. To
>> create Zuul v3 Tempest / Grenade native jobs for your projects you need to
>> throw away the legacy playbooks and define new jobs in .zuul.yaml, as
>> documented in the zuul v3 docs [2].
>> The parent job for a single node Tempest job will usually be
>> devstack-tempest. Example migrated jobs are available, for instance: [3] [4].
>>
>> [2] https://docs.openstack.org/infra/manual/zuulv3.html#howto-update-legacy-jobs
>> [3] http://git.openstack.org/cgit/openstack/sahara-tests/tree/.zuul.yaml#n21
>> [4] https://review.openstack.org/#/c/543048/5
>>
>>> Both tempest-full and tempest-full-py3 are part of the integrated-gate
>>> templates, starting from stable/queens on.
>>> The other stable branches still run the legacy jobs, since the
>>> devstack ansible changes have not been backported (yet). If we do backport,
>>> it will be up to pike at maximum.
>>>
>>> Those jobs work in single node mode only at the moment. Enabling
>>> multinode via job configuration only requires a new Zuul feature [4][5] that
>>> should be available soon; the new feature allows defining host/group
>>> variables in the job definition, which means setting variables which are
>>> specific to one host or a group of hosts.
>>> Multinode DVR and Ironic jobs will require migration of the ovs-* roles
>>> from devstack-gate to devstack as well.
>>>
>>> Grenade jobs (single and multinode) are still legacy, even if the
>>> *legacy* word has been removed from the name.
>>> They are currently temporarily hosted in the neutron repository. They
>>> are going to be implemented as Zuul v3 native in the grenade repository.
>>> Roles are documented, and a couple of migration tips for DEVSTACK_GATE
>>> flags are available in the etherpad [0]; more comprehensive examples /
>>> docs will be available as soon as possible.
>>>
>>> Please let me know if you find this update useful and / or if you would
>>> like to see different information in it.
>>> I will send further updates as soon as significant changes / new
>>> features become available.
>>>
>>> Andrea Frittoli (andreaf)
>>>
>>> [0] https://etherpad.openstack.org/p/zuulv3-native-devstack-tempest-jobs
>>> [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n1
>>> [2] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n29
>>> [3] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n47
>>> [4] https://etherpad.openstack.org/p/zuulv3-group-variables
>>> [5] https://review.openstack.org/#/c/544562/

From tenobreg at redhat.com Mon Mar 12 14:39:52 2018
From: tenobreg at redhat.com (Telles Nobrega)
Date: Mon, 12 Mar 2018 14:39:52 +0000
Subject: [openstack-dev] [sahara] PTG Summary

Hello Saharans and interested folks,

This PTG was a very interesting experience; for those not familiar with what happened there, here goes a quick summary.

The event became known as the SnowOpenStack PTG due to some snow that got in the way of the event. But that didn't get in the way of determined people who wanted to do good work anyway. This PTG was different for Sahara: we only had 2 people present, so a few topics couldn't be fully discussed.

We started our week as Sahara joining the Storyboard room in order to better understand their status and what steps we need to take to migrate from launchpad to storyboard. The outcome of that meeting was better than expected. There are only a few things needed from our side to fully migrate. Tosky got ahead on this and already updated the migration script to support sahara and other projects' needs, and tested the migration with success. Now we only need to wait on the storyboard team to test it as well, and then we need to document that we are migrating to storyboard. Action item on this for all involved in the project: take some time to read up on Storyboard and get familiar with it.

Then we started the Sahara meetings:

We started with the traditional retrospective; we looked over the work list from Queens, marking items as Done, Partially Done and Unstarted. Please all take a look at that and let me know if anything not prioritized from that list is necessary.

Now to specific discussion topics.

Plugins upgrades, deprecation and removals
------------------------------------------------------------

CDH:
- Upgrade to 5.12, 5.13 and 5.14
- Deprecate 5.7, 5.9 if we are able to upgrade all three versions, or two of them
- Remove 5.5.0

MapR:
- Upgrade to 6.0 and update latest packages for 5.2
- Remove all references to 5.1

Ambari:
- Upgrade Ambari to 2.6 and HDP to 2.6 as well
- Use the Ambari 2.6 image with HDP 2.6, 2.5 and 2.4
- Deprecate HDP version 2.3 and work on the removal of ambari 2.4 for S

Spark:
- Upgrade to 2.3.0 and check 2.2.1 as a subversion of 2.2.0
- Deprecate 1.6, 2.1 (see the move out of trusty discussion)
- Remove 1.3

Storm:
- Upgrade to 1.2.1 and 1.2.0 (under the same tag 1.2)
- Deprecate 1.0.1
- Remove 0.9.2

Vanilla:
- Upgrade to vanilla 3.0.0 and check if we can add 2.7.5

Sahara CI
-------------
We are still facing issues with our third-party CI.
We plan to have nightly jobs running each day for a plugin, so we won't need too many resources; that seems possible with zuulv3. Also, there is an issue with the experimental queue: right now we run all experimental jobs even when not necessary, so we plan to split this into separate queues so we can run specific tests.

Python 3 support
-----------------------
Python 2 support is ending and we need to fully migrate to Python 3. Right now we have unit tests in place; tempest should be easily resolved; scenario jobs are there but failing with swift issues. Once we have all of those we will have a better grasp of what we need to do and how much work it will be. In any case, we need to get a jump on these soon.

SSL/TLS everywhere
----------------------------
We need to check our status of secure communication with other projects as well as between Sahara and its clusters. Right now Sahara should be able to communicate with CDH using SSL, but it is hard coded and we need to change that and test how it goes. Also we need to check certificate management. For Ambari, a CA is generated but not recognized on RHEL/CentOS>=7.4; we need to make sure this works.

Move out of trusty
------------------------
Trusty support will be dropped soon and we need to make sure we don't have any dependencies on our side. For that we need to work on removing plugin versions that are only supported on trusty.

Plugins outside Sahara
-------------------------------
After discussion with Doug Hellmann, it seems like we have a good plan to finally have this done. The goal is to have a new project (sahara-plugins) that will require stuff from sahara, and sahara itself will load plugins from the sahara-plugins project dynamically with stevedore; this way we don't have a circular dependency. The bulk of the work is done; the spec should be on its way soon.

APIv2
--------
We have it available as experimental, but we are missing the CLI, tempest client, tempest tests and scenario tests.

Boot from volume
-----------------------
This spec has been standing there for a long time and we need to get this done.

Bugs
-------
Spark amount of resources:
- We need to check if we can define it in the config file
- Set the default to 50% of flavor capacity
- If we can get this info from args we won't need to change much, just add some conditions to make sure we get this info and it makes it into the command line

File copy times out when the file is too big:
- Paramiko PUT is not the ideal solution because it will need to write the file locally
- Pipelined + buffer also proved an inefficient solution
- We will now test putfo + StringIO for a file-like object
- And if that doesn't work either, we will fall back to scp

This email ran longer than expected; more details can be found at the etherpad [1] alongside our priority list for Rocky.

Let me know if you need help understanding anything at the etherpad.

Thanks to all who were present, and to those who tried their best to help even though they weren't there.

[1] https://etherpad.openstack.org/p/sahara-rocky-ptg
--

TELLES NOBREGA
SOFTWARE ENGINEER
Red Hat Brasil
Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo
tenobreg at redhat.com

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by Great Place to Work.
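For the file-copy bug in the "Bugs" section above, the putfo + file-like-object approach could look roughly like the sketch below. The helper name is invented for illustration, and io.BytesIO is used instead of StringIO since paramiko transfers bytes; open_sftp() and putfo() are real paramiko calls.

    # Sketch only: stream an in-memory payload over SFTP without first
    # writing a temporary local file (the problem with plain Paramiko PUT).
    import io

    import paramiko


    def copy_data_to_remote(ssh_client, data, remote_path):
        """Copy a string to remote_path over an existing paramiko SSHClient."""
        sftp = ssh_client.open_sftp()
        try:
            # putfo() accepts any file-like object with a read() method.
            sftp.putfo(io.BytesIO(data.encode('utf-8')), remote_path)
        finally:
            sftp.close()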
From kgiusti at gmail.com Mon Mar 12 15:39:19 2018
From: kgiusti at gmail.com (Ken Giusti)
Date: Mon, 12 Mar 2018 11:39:19 -0400
Subject: [openstack-dev] [oslo] Oslo PTG Summary
In-Reply-To: <5AA1A2AD.3050204@fastmail.com>
References: <64db6f20-a994-1555-5ed5-cdfe0f628436@nemebean.com> <5AA1A2AD.3050204@fastmail.com>

Hi Josh - I'm able to view all of them, but I probably have special google powers ;)

Which links are broken for you?

thanks,

On Thu, Mar 8, 2018 at 3:53 PM, Joshua Harlow wrote:
>
> Can we get some of those doc links opened.
>
> 'You need permission to access this published document.' I am getting for a
> few of them :(
>
> Ben Nemec wrote:
>> Hi,
>>
>> Here's my summary of the discussions we had in the Oslo room at the PTG.
>> Please feel free to reply with any additions if I missed something or
>> correct anything I've misrepresented.
>>
>> oslo.config drivers for secret management
>> -----------------------------------------
>>
>> The oslo.config implementation is in progress, while the Castellan
>> driver still needs to be written. We want to land this early in Rocky as
>> it is a significant change in architecture for oslo.config and we want
>> it to be well-exercised before release.
>>
>> There are discussions with the TripleO team around adding support for
>> this feature to its deployment tooling and there will be a functional
>> test job for the Castellan driver with Custodia.
>>
>> There is a weekly meeting in #openstack-meeting-3 on Tuesdays at 1600
>> UTC for discussion of this feature.
>>
>> oslo.config driver implementation: https://review.openstack.org/#/c/513844
>> spec:
>> https://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html
>> Custodia key management support for Castellan:
>> https://review.openstack.org/#/c/515190/
>>
>> "stable" libraries
>> ------------------
>>
>> Some of the Oslo libraries are in a mature state where there are very
>> few, if any, meaningful changes to them. With the removal of the
>> requirements sync process in Rocky, we may need to change the release
>> process for these libraries. My understanding was that there were no
>> immediate action items for this, but it was something we need to be
>> aware of.
>>
>> dropping support for mox3
>> -------------------------
>>
>> There was some concern that no one from the Oslo team is actually in a
>> position to support mox3 if something were to break (such as happened in
>> some libraries with Python 3.6). Since there is a community goal to
>> remove mox from all OpenStack projects in Rocky this will hopefully not
>> be a long-term problem, but there was some discussion that if projects
>> needed to keep mox for some reason they would be asked to provide a
>> maintainer for mox3. This topic is kind of on hold pending the outcome
>> of the community goal this cycle.
>>
>> automatic configuration migration on upgrade
>> --------------------------------------------
>>
>> There is a desire for oslo.config to provide a mechanism to
>> automatically migrate deprecated options to their new location on
>> version upgrades.
This is a fairly complex topic that I can't cover >> adequately in a summary email, but there is a spec proposed at >> https://review.openstack.org/#/c/520043/ and POC changes at >> https://review.openstack.org/#/c/526314/ and >> https://review.openstack.org/#/c/526261/ >> >> One outcome of the discussion was that in the initial version we would >> not try to handle complex migrations, such as the one that happened when >> we combined all of the separate rabbit connection opts into a single >> connection string. To start with we will just raise a warning to the >> user that they need to handle those manually, but a templated or >> hook-based method of automating those migrations could be added as a >> follow-up if there is sufficient demand. >> >> oslo.messaging plans >> -------------------- >> >> There was quite a bit discussed under this topic. I'm going to break it >> down into sub-topics for clarity. >> >> oslo.messaging heartbeats >> ========================= >> >> Everyone seemed to be in favor of this feature, so we anticipate >> development moving forward in Rocky. There is an initial patch proposed >> at https://review.openstack.org/546763 >> >> We felt that it should be possible to opt in and out of the feature, and >> that the configuration should be done at the application level. This >> should _not_ be an operator decision as they do not have the knowledge >> to make it sanely. >> >> There was also a desire to have a TTL for messages. >> >> bug cleanup >> =========== >> >> There are quite a few launchpad bugs open against oslo.messaging that >> were reported against old, now unsupported versions. Since we have the >> launchpad bug expirer enabled in Oslo the action item proposed for such >> bugs was to mark them incomplete and ask the reporter to confirm that >> they still occur against a supported version. This way bugs that don't >> reproduce or where the reporter has lost interest will eventually be >> closed automatically, but bugs that do still exist can be updated with >> more current information. >> >> deprecations >> ============ >> >> The Pika driver will be deprecated in Rocky. To our knowledge, no one >> has ever used it and there are no known benefits over the existing >> Rabbit driver. >> >> Once again, the ZeroMQ driver was proposed for deprecation as well. The >> CI jobs for ZMQ have been broken for a while, and there doesn't seem to >> be much interest in maintaining them. Furthermore, the breakage seems to >> be a fundamental problem with the driver that would require non-trivial >> work to fix. >> >> Given that ZMQ has been a consistent pain point in oslo.messaging over >> the past few years, it was proposed that if someone does step forward >> and want to maintain it going forward then we should split the driver >> off into its own library which could then have its own core team and >> iterate independently of oslo.messaging. However, at this time the plan >> is to propose the deprecation and start that discussion first. >> >> CI >> == >> >> Need to migrate oslo.messaging to zuulv3 native jobs. The >> openstackclient library was proposed as a good example of how to do so. >> >> We also want to have voting hybrid messaging jobs (where the >> notification and rpc messages are sent via different backends). We will >> define a devstack job variant that other projects can turn on if desired. >> >> We also want to add amqp1 support to pifpaf for functional testing. 
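To make the hybrid-job idea in the CI notes above concrete: oslo.messaging can already load separate transports for RPC and for notifications from the service configuration (the [DEFAULT]/transport_url and [oslo_messaging_notifications]/transport_url options), so a hybrid job mostly just varies the config file. A minimal sketch, assuming a rabbit URL for RPC and an AMQP 1.0 URL for notifications:

    from oslo_config import cfg
    import oslo_messaging

    conf = cfg.CONF

    # Reads [DEFAULT]/transport_url, e.g. rabbit://...
    rpc_transport = oslo_messaging.get_rpc_transport(conf)

    # Reads [oslo_messaging_notifications]/transport_url if set,
    # e.g. amqp://..., falling back to the RPC transport otherwise.
    notification_transport = oslo_messaging.get_notification_transport(conf)

    notifier = oslo_messaging.Notifier(notification_transport,
                                       publisher_id='demo',
                                       driver='messaging')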
>> Low level messaging API
>> =======================
>>
>> A new oslo.messaging API exposing lower-level messaging functionality
>> was proposed. There is a presentation at
>> https://docs.google.com/presentation/d/1mCOGwROmpJvsBgCTFKo4PnK6s8DkDVCp1qnRnoKL_Yo/edit?usp=sharing
>>
>> This seemed to generally be well-received by the room, and dragonflow
>> and neutron reviewers were suggested for the spec.
>>
>> Kafka
>> =====
>>
>> Andy Smith gave an update on the status of the Kafka driver. Currently
>> it is still experimental, and intended to be used for notifications
>> only. There is a presentation with more details in
>> https://docs.google.com/presentation/d/e/2PACX-1vQpaSSm7Amk9q4sBEAUi_IpyJ4l07qd3t5T_BPZkdLWfYbtSpSmF7obSB1qRGA65wjiiq2Sb7H2ylJo/pub?start=false&loop=false&delayms=3000&slide=id.p
>>
>> testing for Edge/FEMDC use cases
>> ================================
>>
>> Matthieu Simonin gave a presentation about the testing he has done
>> related to messaging in the Edge/FEMDC scenario, where messaging targets
>> might be widely distributed. The slides can be found at
>> https://docs.google.com/presentation/d/1LcF8WcihRDOGmOPIU1aUlkFd1XkHXEnaxIoLmRN4iXw/edit#slide=id.p3
>>
>> In short, there is a desire to build clouds that have widely distributed
>> nodes such that content can be delivered to users from a location as
>> close as possible. This puts a lot of pressure on the messaging layer, as
>> compute nodes (for example) could be halfway around the world from the
>> control nodes, which is problematic for a broker-based system such as
>> Rabbit. There is some very interesting data comparing Rabbit with a more
>> distributed AMQP1 system based on qpid-dispatch-router. In short, the
>> distributed system performed much better for this use case, although
>> there was still some concern raised about the memory usage on the client
>> side with both drivers. Some followup is needed on the oslo.messaging
>> side to make sure we aren't leaking/wasting resources in some messaging
>> scenarios.
>>
>> For further details I suggest taking a look at the presentation.
>>
>> mutable configuration
>> ---------------------
>>
>> This is also a community goal for Rocky, and Chang Bo is driving its
>> adoption. There was some discussion of how to test it, and also that we
>> should provide an example of turning on mutability for the debug option,
>> since that is the target of the community goal. The cinder patch can be
>> found here: https://review.openstack.org/#/c/464028/ Turns out it's
>> really simple!
>>
>> Nova is also using this functionality for more complex options related
>> to upgrades, so that would be a good place to look for more advanced use
>> cases.
>>
>> Full documentation for the mutable config options is at
>> https://docs.openstack.org/oslo.config/latest/reference/mutable.html
>>
>> The goal status is being tracked in
>> https://storyboard.openstack.org/#!/story/2001545
>>
>> Chang Bo was also going to talk to Lance about possibly coming up with a
>> burndown chart like the one he had for the policy-in-code work.
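To make the quoted mutable-config point concrete: opting an option into mutability is a one-line change, and the service re-reads its config with mutate_config_files(). Both are real oslo.config hooks; the SIGHUP wiring below is just a sketch of how a service might use them.

    import signal

    from oslo_config import cfg

    OPTS = [
        # mutable=True is the one-line change the community goal asks
        # for on each service's 'debug' option.
        cfg.BoolOpt('debug', default=False, mutable=True),
    ]

    CONF = cfg.CONF
    CONF.register_opts(OPTS)


    def _reload(signum, frame):
        # Re-read the config files and apply changes to mutable options.
        CONF.mutate_config_files()


    signal.signal(signal.SIGHUP, _reload)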
>> oslo healthcheck middleware
>> ---------------------------
>>
>> As this ended up being the only major topic for the afternoon, the
>> session was unfortunately lightly attended. However, the self-healing
>> SIG was talking about related topics at the same time, so we ended up
>> moving to that room and had a good discussion.
>>
>> Overall the feature seemed to be well-received. There is some security
>> concern with exposing service information over an unauthenticated
>> endpoint, but because there is no authentication supported by the health
>> checking functionality in things like Kubernetes or HAProxy, this is
>> unavoidable. The feature won't be mandatory, so if this exposure is
>> unacceptable it can be turned off (with a corresponding loss of
>> functionality, of course).
>>
>> There was also some discussion of dropping the asynchronous nature of
>> the checks in the initial version in order to keep the complexity to a
>> minimum. Asynchronous testing can always be added later if it proves
>> necessary.
>>
>> The full spec is at https://review.openstack.org/#/c/531456
>>
>> oslo.config strict validation
>> -----------------------------
>>
>> I actually had discussions with multiple people about this during the
>> week. In both cases, they were just looking for a minimal amount of
>> validation that would catch an error such as "devug=True". Such a
>> validation might be fairly simple to write now that we have the
>> YAML-based sample config with (ideally) information about all the
>> options available to set in a project. It should be possible to compare
>> the options set in the config file with the ones listed in the sample
>> config and raise warnings for any that don't exist.
>>
>> There is also a more complete validation spec at
>> http://specs.openstack.org/openstack/oslo-specs/specs/ocata/oslo-validator.html
>> and a patch proposed at https://review.openstack.org/#/c/384559/
>>
>> Unfortunately there has been little movement on that as of late, so it
>> might be worthwhile to implement something more minimalist initially and
>> then build from there. The existing patch is quite significant and
>> difficult to review.
>>
>> Conclusion
>> ----------
>>
>> I feel like there were a lot of good discussions at the PTG and we have
>> plenty of work to keep the small Oslo team busy for the Rocky cycle. :-)
>>
>> Thanks to everyone who participated and I look forward to seeing how
>> much progress we've made at the next Summit and PTG.
>>
>> -Ben

--
Ken Giusti (kgiusti at gmail.com)
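A rough sketch of the minimal validation idea from Ben's summary above: compare the options set in a config file against the machine-readable sample that oslo-config-generator can emit. The YAML layout keys used below are assumptions about that generated sample, not a documented schema.

    # Sketch only: the structure of the generated YAML sample is assumed.
    import configparser

    import yaml


    def unknown_options(conf_path, sample_yaml_path):
        """Return (section, option) pairs set in the config but not known."""
        cp = configparser.ConfigParser()
        cp.read(conf_path)
        with open(sample_yaml_path) as f:
            sample = yaml.safe_load(f)
        known = set()
        for group, group_data in sample.get('options', {}).items():
            for opt in group_data.get('opts', []):
                known.add((group, opt['name']))
        return [(section, option)
                for section in cp.sections()
                for option in cp[section]
                if (section, option) not in known]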
From jeremyfreudberg at gmail.com Mon Mar 12 15:44:54 2018
From: jeremyfreudberg at gmail.com (Jeremy Freudberg)
Date: Mon, 12 Mar 2018 11:44:54 -0400
Subject: [openstack-dev] [sahara][all] Sahara Rocky Virtual PTG scheduling

Hi all,

Due to my unexpected absence from Dublin we have decided that a virtual PTG is a good idea. Let's try to find 90-120 minutes somewhere in our busy schedules to convene remotely.

https://www.when2meet.com/?6755109-XpWjd

Please use the poll linked above to choose some times which work for you. I've already started it with times that potentially work for me, but I can become even more flexible if needed.

(Be warned that the poll site is a bit glitchy... make sure that you are seeing the right times. Whatever time zone you are viewing it in should show the first slot on Monday as equivalent to 1300 UTC. Also be warned that depending on what time zone you are viewing in, the date may "roll over" mid-column.)

All interested parties are welcome.

We will decide the exact medium of communication based on what media the confirmed participants are able to use. In any case the outcomes will be logged to the mailing list.

Best,
Jeremy

From doug at doughellmann.com Mon Mar 12 15:54:35 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 12 Mar 2018 11:54:35 -0400
Subject: [openstack-dev] [oslo] Oslo PTG Summary
References: <64db6f20-a994-1555-5ed5-cdfe0f628436@nemebean.com> <5AA1A2AD.3050204@fastmail.com>

I can’t see https://docs.google.com/presentation/d/e/2PACX-1vQpaSSm7Amk9q4sBEAUi_IpyJ4l07qd3t5T_BPZkdLWfYbtSpSmF7obSB1qRGA65wjiiq2Sb7H2ylJo/pub?start=false&loop=false&delayms=3000&slide=id.p

> On Mar 12, 2018, at 11:39 AM, Ken Giusti wrote:
>
> Hi Josh - I'm able to view all of them, but I probably have special
> google powers ;)
>
> Which links are broken for you?
>
> thanks,
>
> On Thu, Mar 8, 2018 at 3:53 PM, Joshua Harlow wrote:
>>
>> Can we get some of those doc links opened.
>>
>> 'You need permission to access this published document.' I am getting for a
>> few of them :(
>>
>> Ben Nemec wrote:
[... Ben Nemec's full PTG summary snipped; it is quoted in full in Ken Giusti's message above ...]
From jeremyfreudberg at gmail.com Mon Mar 12 15:56:49 2018
From: jeremyfreudberg at gmail.com (Jeremy Freudberg)
Date: Mon, 12 Mar 2018 11:56:49 -0400
Subject: Re: [openstack-dev] [sahara] PTG Summary

Thanks Luigi and Telles for striving to work hard, even in my absence. :)
Some notes inline.

On Mon, Mar 12, 2018 at 10:39 AM, Telles Nobrega wrote:
> Hello Saharans and interested folks,
> [...]
> MapR:
> - Upgrade to 6.0 and update latest packages for 5.2

Looks like MapR 6 offers some new management services which we should acknowledge and add.
> - Remove all references to 5.1
> [...]
> Vanilla:
> - Upgrade to vanilla 3.0.0 and check if we can add 2.7.5

2.7.5 seems much lower priority to me. Even 2.8.3 or the eventual 2.8.4 seems more important. But I'm not sure.

> Sahara CI
> -------------
> We are still facing issues with our third-party CI. [...] we plan to split this
> into separate queues so we can run specific tests.

It looks like the wheels of progress are starting to turn on my end...

> [...]
> Plugins outside Sahara
> -------------------------------
> [...]
> The bulk of the work is done; the spec should be on its way soon.

+1

> APIv2
> --------
> We have it available as experimental, but we are missing the CLI, tempest client,
> tempest tests and scenario tests.

And a few other notes, which I'll bring up at the virtual PTG.

> [...]
From tenobreg at redhat.com Mon Mar 12 16:25:42 2018
From: tenobreg at redhat.com (Telles Nobrega)
Date: Mon, 12 Mar 2018 16:25:42 +0000
Subject: Re: [openstack-dev] [sahara][all] Sahara Rocky Virtual PTG scheduling

Thanks for putting this together Jeremy.

On Mon, Mar 12, 2018 at 12:44 PM Jeremy Freudberg wrote:
> Hi all,
>
> Due to my unexpected absence from Dublin we have decided that a
> virtual PTG is a good idea. Let's try to find 90-120 minutes somewhere
> in our busy schedules to convene remotely.
> [...]
>
> Best,
> Jeremy

--
TELLES NOBREGA
SOFTWARE ENGINEER
Red Hat Brasil
Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo
tenobreg at redhat.com

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by Great Place to Work.
From nikolai.defigueiredo at netronome.com Mon Mar 12 16:33:36 2018
From: nikolai.defigueiredo at netronome.com (Nikolai de Figueiredo)
Date: Mon, 12 Mar 2018 18:33:36 +0200
Subject: [openstack-dev] [Neutron] [FWaaS] Stateless Security Groups or Firewalls Discussion

Hi all,

I have just discovered an RFE [1] that was submitted on the 5th of March (a few days ago) proposing an extension to Neutron, and specifically to security groups, to enable stateless filtering. The topic of stateless firewalls was discussed at the Rocky PTG with the FWaaS sub-team, but we didn't discuss the feature from the perspective of security groups [2].

From the sound of the RFE, Nokia is volunteering resources to implement this extension for similar reasons as were discussed by the firewall team. I would like to firstly bring this to the attention of the team and secondly begin the discussions around the inevitable need for a specification. My initial question is as in the subject line: should this be a security group extension, a firewall extension, or both?

Further discussion may continue in the shared Google document [3].

Regards
Nikolai

[1] https://bugs.launchpad.net/neutron/+bug/1753466
[2] https://etherpad.openstack.org/p/fwaas-rocky-planning
[3] https://docs.google.com/document/d/1pWU5wSIlba7oixpKT8CSxpJnFCJt8zJj18c-mVE9uUM/edit?usp=sharing

--
Nikolai de Figueiredo
Software Engineer
Netronome | Unit 7, Corporate Corner, 2 Marco Polo St, Centurion, 0157
Phone: +27 12 665 4427 | Skype: live:sphvengaurd | www.netronome.com

From lbragstad at gmail.com Mon Mar 12 16:45:28 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Mon, 12 Mar 2018 11:45:28 -0500
Subject: [openstack-dev] [keystone] [oslo] new unified limit library
In-Reply-To: <5AA0D066.1070600@fastmail.com>
References: <5AA0D066.1070600@fastmail.com>

I missed the document describing the process for this sort of thing [0]. So I'm backtracking a bit to go through a more formal process.

[0] http://specs.openstack.org/openstack/oslo-specs/specs/policy/new-libraries.html

# Proposed new library oslo.limit

This is a proposal to create a new library dedicated to enabling more consistent quota and limit enforcement across OpenStack.

## Proposed library mission

Enforcing quotas and limits across OpenStack has traditionally been a tough problem to solve. Determining enforcement requires quota knowledge from the service along with information about the project owning the resource. Up until the Queens release, quota calculation and enforcement has been left to the services to implement, forcing them to understand the complexities of keystone project structure. During the Pike and Queens PTGs, there were several productive discussions towards redesigning the current approach to quota enforcement. Because keystone is the authority on project structure, it makes sense to allow keystone to hold the association between a resource limit and a project. This means services still need to calculate quota and usage, but the problem should be easier for services to implement since developers shouldn't need to re-implement possible hierarchies of projects and their associated limits.
Instead, we can offload some of that work to a common library for services to consume that handles enforcing quota calculation based on limits associated with projects in keystone. This proposal is to have a new library called oslo.limit that fills that need.

## Consuming projects

The services consuming this work will be any service that currently implements a quota system, or plans to implement one. Since keystone already supports unified limits and the association of limits to projects, the implementation for consuming projects is easier. Instead of having to re-write that implementation, developers need to ensure quota calculation is passed to the oslo.limit library somewhere in the API's validation layer. The pattern described here is very similar to the pattern currently used by services that leverage oslo.policy for authorization decisions.

## Alternative libraries

It looks like there was an existing library that attempted to solve some of these problems, called delimiter [1]. It looks like delimiter could be used to talk to keystone about quota enforcement, whereas the existing approach with oslo.limit would be to use keystone directly. Someone more familiar with the library (harlowja?) can probably shed more light on its intended uses (I couldn't find much documentation), but the presentation linked in a previous note was helpful.

[1] https://github.com/openstack/delimiter

## Proposed adoption model/plan

The unified limit API [2] in keystone is currently marked as experimental, but the keystone team is actively collecting and addressing feedback that will result in stabilizing the API. Stabilization changes that affect the oslo.limit library will also be addressed before version 1.0.0 is released. From there, we can look to incorporate the library into various services that either have an existing quota implementation, or services that have a quota requirement but no implementation. This should help us refine the interfaces between services and oslo.limit, while providing a facade to handle the complexities of project hierarchies. This should enable adoption by simplifying the process and making it easier for quota to be implemented in a consistent way across services.

[2] https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html

## Reviewer activity

At first thought, it makes sense to model the reviewer structure after the oslo.policy library, where the core team consists of people not only interested in limits and quota, but also people familiar with the keystone implementation of the unified limits API.

## Implementation

### Primary Authors:
  Lance Bragstad (lbragstad at gmail.com) lbragstad
  You?

### Other contributors:
  You?

## Work Items

* Create a new library called oslo.limit
* Create a core group for the project
* Define the minimum we need to enforce quota calculations in oslo.limit
* Propose an implementation that allows services to test out quota enforcement via unified limits

## References

Rocky PTG Etherpad for unified limits: https://etherpad.openstack.org/p/unified-limits-rocky-ptg

## Revision History

Introduced in Rocky

On 03/07/2018 11:55 PM, Joshua Harlow wrote:
> So the following was a prior effort:
>
> https://github.com/openstack/delimiter
>
> Maybe just continue down the path of that and/or take that whole repo
> over and iterate (or adjust the prior code, or ...)?? Or if not that's
> ok too, ya'll get to decide.
>
> https://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal
>
> Lance Bragstad wrote:
>> Hi all,
>>
>> Per the identity-integration track at the PTG [0], I proposed a new oslo
>> library for services to use for hierarchical quota enforcement [1]. Let
>> me know if you have any questions or concerns about the library. If the
>> oslo team would like, I can add an agenda item for next week's oslo
>> meeting to discuss.
>>
>> Thanks,
>>
>> Lance
>>
>> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
>> [1] https://review.openstack.org/#/c/550491/

From sombrafam at gmail.com Mon Mar 12 16:46:30 2018
From: sombrafam at gmail.com (Erlon Cruz)
Date: Mon, 12 Mar 2018 13:46:30 -0300
Subject: [openstack-dev] [cinder][summit] Forum topic proposal etherpad created ...
In-Reply-To: <31474553-382e-8551-5779-99b81a125589@gmail.com>
References: <31474553-382e-8551-5779-99b81a125589@gmail.com>

I think I missed something about this. By Forum, do you mean the Summit? That will happen during the Vancouver Summit, right? Will the Forum be something similar to the PTG? What is the target audience? Operators, users, admins?

Erlon

2018-03-08 14:52 GMT-03:00 Jay S Bryant:
> All,
>
> I just wanted to share the fact that I have created the etherpad for
> proposing topics for the Vancouver Forum. [1]
>
> Please take a few moments to add topics there. I will need to propose the
> topics we have in the next two weeks so this will need attention before
> that point in time.
>> >> I am planning to use the wiki to help guide our development during Rocky. >> >> Let me know if you have any questions or concerns over the content. >> >> Thanks! >> >> Jay >> >> (jungleboyj) >> >> [1] https://wiki.openstack.org/wiki/CinderRockyPTGSummary >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdake at cisco.com Mon Mar 12 17:05:46 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Mon, 12 Mar 2018 17:05:46 +0000 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan In-Reply-To: References: Message-ID: <16E94906-8710-42CE-90F4-C72DEC804E10@cisco.com> +1 From: Jeffrey Zhang Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Sunday, March 11, 2018 at 7:13 PM To: OpenStack Development Mailing List Subject: Re: [openstack-dev] [kolla][vote] core nomination for caoyuan sorry for a typo. The vote is open for 7 days until Mar 19th. On Mon, Mar 12, 2018 at 10:06 AM, Jeffrey Zhang > wrote: Kolla core reviewer team, It is my pleasure to nominate caoyuan for the kolla core team. caoyuan's output is fantastic over the last cycle. And he is the most active non-core contributor on the Kolla project for the last 180 days[1]. He focuses on configuration optimization and improving the pre-checks feature. Consider this nomination a +1 vote from me. A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 is a veto. Voting is open for 7 days until Mar 12th, or a unanimous response is reached or a veto vote occurs. [1] http://stackalytics.com/report/contribution/kolla-group/180 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Mar 12 17:08:06 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 12 Mar 2018 12:08:06 -0500 Subject: [openstack-dev] [keystone] [oslo] new unified limit library In-Reply-To: References: <5AA0D066.1070600@fastmail.com> Message-ID: Huge +1 from me on bringing sanity to the various quota implementations in OpenStack. This is something we discussed back in Paris, but the conclusion at the time was that it wasn't practical for it to live in Oslo because there was no place for Oslo to store the data (Oslo libs dictating db schemas to consuming projects would get messy fast). Now that Keystone is providing a central storage location we should be able to make more progress. On 03/12/2018 11:45 AM, Lance Bragstad wrote: > I missed the document describing the process for this sort of thing [0]. > So I'm backtracking a bit to go through a more formal process. > > [0] > http://specs.openstack.org/openstack/oslo-specs/specs/policy/new-libraries.html > > # Proposed new library oslo.limit > > This is a proposal to create a new library dedicated to enabling more > consistent quota and limit enforcement across OpenStack.
> > ## Proposed library mission > > Enforcing quotas and limits across OpenStack has traditionally been a > tough problem to solve. Determining enforcement requires quota knowledge > from the service along with information about the project owning the > resource. Up until the Queens release, quota calculation and enforcement > has been left to the services to implement, forcing them to understand > complexities of keystone project structure. During the Pike and Queens > PTG, there were several productive discussions towards redesigning the > current approach to quota enforcement. Because keystone is the authority > of project structure, it makes sense to allow keystone to hold the > association between a resource limit and a project. This means services > still need to calculate quota and usage, but the problem should be > easier for services to implement since developers shouldn't need to > re-implement possible hierarchies of projects and their associated > limits. Instead, we can offload some of that work to a common library > for services to consume that handles enforcing quota calculations based > on limits associated with projects in keystone. This proposal is to have a > new library called oslo.limit that fills that need. > > ## Consuming projects > > The services consuming this work will be any service that currently > implements a quota system, or plans to implement one. Since keystone > already supports unified limits and association of limits with projects, > the implementation for consuming projects is easier. Instead of having > to re-write that implementation, developers need to ensure the quota > calculation is passed to the oslo.limit library somewhere in the API's > validation layer. The pattern described here is very similar to the > pattern currently used by services that leverage oslo.policy for > authorization decisions. > > ## Alternative libraries > > It looks like there was an existing library that attempted to solve some > of these problems, called delimiter [1]. It looks like delimiter could > be used to talk to keystone about quota enforcement, whereas the > existing approach with oslo.limit would be to use keystone directly. > Someone more familiar with the library (harlowja?) can probably shed > more light on its intended uses (I couldn't find much documentation), > but the presentation linked in a previous note was helpful. > > [1] https://github.com/openstack/delimiter I took a look at delimiter as well. There are a couple of points that I think justify a new library: 1) Delimiter appears to implement a pluggable backend strategy, which isn't what we're planning for oslo.limit. We'll only be targeting Keystone as the storage backend. 2) Given that, it makes sense for the library to live in the oslo namespace. Projects like Delimiter that live outside the oslo namespace are generally supposed to be usable without OpenStack, but a hard dependency on Keystone makes that impossible. Given that we aren't overstocked with developers these days, I'd rather not try to solve quota problems for the whole world. If at some point we get oslo.limit to a good place and can factor some bits out to delimiter then great, but I'd rather solve quota for just OpenStack first at this point. 3) It doesn't look like delimiter ever got to the point where it had adoption in OpenStack. It isn't present in global-requirements that I can see and it's not clear to me from looking at the repo whether it even could be used in its current state.
In the end I think it might be just as much work to get delimiter to a point where it could be used and we'd still be left with some less-than-ideal design points. > > ## Proposed adoption model/plan > > The unified limit API [2] in keystone is currently marked as > experimental, but the keystone team is actively collecting and > addressing feedback that will result in stabilizing the API. > Stabilization changes that affect the oslo.limit library will also be > addressed before version 1.0.0 is released. From there, we can look to > incorporate the library into various services that either have an > existing quota implementation, or services that have a quota requirement > but no implementation. > > This should help us refine the interfaces between services and > oslo.limit, while providing a facade to handle complexities of project > hierarchies. This should enable adoption by simplifying the process and > making it easier for quota to be implemented in a consistent way across > services. > > [2] > https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html > > ## Reviewer activity > > At first thought, it makes sense to model the reviewer structure after > the oslo.policy library, where the core team consists of people not only > interested in limits and quota, but also people familiar with the > keystone implementation of the unified limits API. +1 > > ## Implementation > > ### Primary Authors: > >   Lance Bragstad (lbragstad at gmail.com) lbragstad >   You? > > ### Other contributors: > >   You? > > ## Work Items > > * Create a new library called oslo.limit > * Create a core group for the project > * Define the minimum we need to enforce quota calculations in oslo.limit > * Propose an implementation that allows services to test out quota > enforcement via unified limits > > ## References > > Rocky PTG Etherpad for unified limits: > https://etherpad.openstack.org/p/unified-limits-rocky-ptg > > ## Revision History > > Introduced in Rocky > > > On 03/07/2018 11:55 PM, Joshua Harlow wrote: >> So the following was a prior effort: >> >> https://github.com/openstack/delimiter >> >> Maybe just continue down the path of that and/or take that whole repo >> over and iterate (or adjust the prior code, or ...)?? Or if not that's >> ok to, ya'll get to decide. >> >> https://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal >> >> >> Lance Bragstad wrote: >>> Hi all, >>> >>> Per the identity-integration track at the PTG [0], I proposed a new oslo >>> library for services to use for hierarchical quota enforcement [1]. Let >>> me know if you have any questions or concerns about the library. If the >>> oslo team would like, I can add an agenda item for next week's oslo >>> meeting to discuss.
>>> >>> Thanks, >>> >>> Lance >>> >>> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg >>> [1] https://review.openstack.org/#/c/550491/ >>> >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Mon Mar 12 17:17:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 12 Mar 2018 17:17:50 +0000 Subject: [openstack-dev] [cinder][summit] Forum topic proposal etherpad created ... In-Reply-To: References: <31474553-382e-8551-5779-99b81a125589@gmail.com> Message-ID: <20180312171749.fqeosi2di6fpsyao@yuggoth.org> On 2018-03-12 13:46:30 -0300 (-0300), Erlon Cruz wrote: > I think I missed something about this. By Forum you mean Summit? That will > happen during the Vancouver Summit, right? Will the forum be something > similar to the PTG? What is the target audience? Operators, users, admins? Yes, the Forum takes place at the Summit conference. It is the term we've been using for open discourse replacing the planning activity which went on at the "Design Summit" before we split out the Project Team Gathering as a separate event. See https://wiki.openstack.org/wiki/Forum for a more in-depth description, but it's targeted at all who have a vested interest in design and planning for the future of OpenStack (devs/ops/users/admins/everyone). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gcerami at redhat.com Mon Mar 12 18:18:26 2018 From: gcerami at redhat.com (Gabriele Cerami) Date: Mon, 12 Mar 2018 18:18:26 +0000 Subject: [openstack-dev] [TripleO] Proposal for quickstart devmode replacement Message-ID: <20180312181826.nojo2y4d26ngj5vz@localhost> Hi, we recently changed our set of scripts to align the user workflow with the CI workflow. One of the results of this change was the reproducer script, which is now the official way to spawn a live environment to debug a set of changes. One of the negative results of this change was the deprecation of devmode. How do you spawn a live environment with no set of changes as a base? We are trying to fill this gap with this proposal https://review.openstack.org/548005 The well-known quickstart.sh script called with the proper set of arguments will use the reproducer script, but with a set of options that will not actually use any upstream change to test.
For example, the command quickstart.sh --generate-reproducer --jobtype ovb-1ctlr_1comp_1ceph-featureset024 --credentials-file myrdocreds.sh will spawn a live environment with the exact set of features described in the job periodic-tripleo-ci-ovb-1ctlr_1comp_1ceph-featureset024 Other combinations that don't follow upstream job configurations are possible by modifying environment variables, but they are not our primary focus at the moment. We are trying to gather as much feedback as possible to make this proposal a worthy successor of the devmode script, so please point out which parts of the old functionality you think are missing, and what new functionality you would really like to see in it. Thanks. From haleyb.dev at gmail.com Mon Mar 12 18:43:09 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 12 Mar 2018 14:43:09 -0400 Subject: [openstack-dev] [neutron] Bug deputy report Message-ID: <550fcc21-e186-4246-6880-114d25997b0e@gmail.com> Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. Critical bugs ------------- None High bugs --------- * https://bugs.launchpad.net/bugs/1753504 - Remove mox/mox3 usage from testing Multiple people have taken ownership of fixing this * https://bugs.launchpad.net/bugs/1754062 - openstack client does not pass prefixlen when creating subnet Looks like both a bug in the api-ref and in the openstacksdk * https://bugs.launchpad.net/bugs/1754327 - Tempest scenario jobs failing due to no FIP connectivity https://review.openstack.org/#/c/550832/ proposed to try and track down reason for failure (thanks Slawek!) * https://bugs.launchpad.net/bugs/1755243 - AttributeError when updating DvrEdgeRouter objects running on network nodes Bug submitter has already posted a patch * https://bugs.launchpad.net/bugs/1754563 - Arp_responder function has failed since Ocata Looks like there is maybe a missing setup_privsep() call in the L2 agent code? Yes, and Slawek just fixed it in master, so we'll need to backport just the change in one file. haleyb pushed a change. Medium bugs ----------- * https://bugs.launchpad.net/bugs/1753540 - When isolated/force metadata is enabled, metadata proxy doesn't get automatically started/stopped when needed Daniel Alvarez sent patch for master and backports (thanks!)
* https://bugs.launchpad.net/bugs/1754770 - Duplicate iptables rule detected in Linuxbridge agent logs Slawek took ownership as messages started after another change Low bugs -------- * https://bugs.launchpad.net/bugs/1753384 - The old QoS policy ID is returned when updating the QoS policy ID, when the revision plugin is enabled Guoshuai Li took ownership Bugs that need further triage ----------------------------- * https://bugs.launchpad.net/bugs/1753434 - Unbound ports floating ip not working with address scopes in DVR HA Found on Pike, need to determine if already fixed in master * https://bugs.launchpad.net/bugs/1754695 - Incorrect state of the Openflow table Restarting neutron_openvswitch_agent container fixed issue * https://bugs.launchpad.net/bugs/1754600 - the detail for openstack quota show is not supported Probably duplicate of https://bugs.launchpad.net/neutron/+bug/1716043 since that was opened to track CLI quota change RFE bugs for drivers team ------------------------- * https://bugs.launchpad.net/bugs/1753466 - [RFE] Support stateless security groups * https://bugs.launchpad.net/bugs/1754123 - [RFE] Support filter with floating IP address substring Thanks, -Brian From miguel at mlavalle.com Mon Mar 12 18:45:27 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 12 Mar 2018 13:45:27 -0500 Subject: [openstack-dev] [Neutron] Dublin PTG Summary Message-ID: Hi All! First of all, I want to thank the team for the productive week we had in Dublin. Following below is a high level summary of the discussions we had. If there is something I left out, please reply to this email thread to add it. However, if you want to continue the discussion on any of the individual points summarized below, please start a new thread, so we don't have a lot of conversations going on attached to this update. You can find the etherpad we used during the PTG meetings here: https://etherpad.openstack.org/p/neutron-ptg-rocky Retrospective ========== * The team missed one community goal in the Pike cycle ( https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html) and one in the Queens cycle (https://governance.openstack.org/tc/goals/queens/policy-in-code.html) - Akihiro Motoki will work on https://governance.openstack.org/tc/goals/queens/policy-in-code.html during Rocky - We need volunteers to complete https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html and the two new goals for the Rocky cycle: https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html and https://governance.openstack.org/tc/goals/rocky/mox_removal.html. Akihiro Motoki will lead the effort for mox removal - We decided to add a section to our weekly meeting agenda where we are going to track the progress towards catching up with the community goals during the Rocky cycle * As part of the neutron-lib effort, we have found networking projects that are very inactive. Examples are networking-brocade (no updates since May of 2016) and networking-ofagent (no updates since March of 2017). Miguel Lavalle will contact these projects' leads to ascertain their situation. If they are indeed inactive, we will not support them as part of neutron-lib updates and will also try to remove them from code search * We will continue our efforts to recruit new contributors and develop core reviewers. During the conversation on this topic, Nikolai de Figueiredo and Pawel Suder announced that they will become active in Neutron.
Both of them, along with Hongbin Lu, indicated they are interested in working towards becoming core reviewers. * The team went through the blueprints in the backlog. Here is the status for those blueprints that are not discussed in other sections of this summary: - Adopt oslo.versionedobjects for database interactions. This is a continuing effort. The contact is Ihar Hrachyshka (ihrachys). Contributors are wanted. There is a weekly meeting led by Ihar where this topic is covered: http://eavesdrop.openstack.org/#Neutron_Upgrades_Meeting - Enable adoption of an existing subnet into a subnetpool. The final patch in the series to implement this feature is: https://review.openstack.org/#/c/348080. Pawel Suder will drive this patch to completion - Neutron in-tree API reference (https://blueprints.launchpad.net/neutron/+spec/neutron-in-tree-api-ref). There are two remaining TODOs to complete this blueprint: https://bugs.launchpad.net/neutron/+bug/1752274 and https://bugs.launchpad.net/neutron/+bug/1752275. We need volunteers for these two work items - Add TCP/UDP port forwarding extension to L3. The spec was merged recently: https://specs.openstack.org/openstack/neutron-specs/specs/queens/port-forwarding.html. Implementation effort is in progress: https://review.openstack.org/#/c/533850/ and https://review.openstack.org/#/c/535647/ - Pure Python driven Linux network configuration (https://bugs.launchpad.net/neutron/+bug/1492714). This effort has been going on for several cycles gradually adopting pyroute2. Slawek Kaplonski is continuing it with https://review.openstack.org/#/c/545355 and https://review.openstack.org/#/c/548267 Port behind port API proposal ====================== * Omer Anson proposed to extend the Trunk Port API to generalize the support for port behind port use cases such as containers nested as MACVLANs within a VM or HA proxy port behind amphora VM port: https://bugs.launchpad.net/bugs/1730845 - After discussing the proposed use cases, the agreement was to develop a specification making sure input is provided by the Kuryr and Octavia teams ML2 and Mechanism drivers ===================== * Hongbin Lu presented a proposal (https://bugs.launchpad.net/neutron/+bug/1722720) to add a new value "auto" to the port attribute admin_state_up. - This is to support SR-IOV ports, where admin_state_up == "auto" would mean that the VF link state follows that of the PF. This may be useful when VMs use the link as a trigger for their own HA mechanism - The agreement was not to overload the admin_state_up attribute with more values, since it reflects the desired administrative state of the port, and to add a new attribute for the intended purpose * Zhang Yanxian presented a specification (https://review.openstack.org/506066) to support SR-IOV bonds whereby a Neutron port is associated with two VFs in separate PFs. This is useful in NFV scenarios, where link redundancy is necessary. - Nikolai de Figueiredo agreed to help to drive this effort forward, starting with the specification on both the Neutron and Nova sides - Sam Betts indicated this type of bond is also of interest for Ironic. He requested to be kept in the loop * Ruijing Guo proposed to support VLAN transparency in the Neutron OVS agent. - There is a previous incomplete effort to provide this support: https://bugs.launchpad.net/neutron/+bug/1705719. Patches are here: https://review.openstack.org/#/q/project:openstack/neutron+topic:bug/1705719 - Agreement was for Ruijing to look at the existing patches to re-start the effort.
Thomas Morin may provide help for this - While on this topic, the conversation temporarily forked to the use of registers instead of ovsdb port tags in the L2 agent br-int and possibly removing br-tun. Thomas Morin committed to draft an RFE for this. * Mike Kolesnik, Omer Anson, Irena Berezovsky, Takashi Yamamoto, Lucas Alvares, Ricardo Noriega, Miguel Ajo, Isaku Yamahata presented the proposal to implement a common mechanism to achieve synchronization between Neutron's DB and the DBs of sub-projects / SDN frameworks - Currently each sub-project / SDN framework has its own solution for this problem. The group thinks that a common solution can be achieved - The agreement was to create a specification where the common solution can be fleshed out - The synchronization mechanism will exist in Neutron * Mike Kolesnik (networking-odl) requested feedback from members of other Neutron sub-projects about the value of inheriting ML2 Neutron's unit tests to get "free testing" for mechanism drivers - The conclusion was that there is no value in that practice for the sub-projects - Sam Betts and Miguel Lavalle will explore moving unit test utils to neutron-lib to enable subprojects to create their own base classes - Mike Kolesnik will document a guideline for sub-projects not to inherit unit tests from Neutron API topics ======== * Isaku Yamahata presented a proposal for a new API for cloud admins to retrieve the physical networks configured in compute hosts - This information is currently stored in configuration files. In agent-less environments it is difficult to retrieve - The agreement was to extend the agent API to expose the physnet as a standard attribute. This will be fed by a pseudo-agent * Isaku Yamahata presented a proposal for a new API to report mechanism driver health - The overall idea is to report mechanism driver status, similar to the agents API which reports agent health. In the case of the mechanism drivers API, it would report connectivity to the backend SDN controller or MQ server and report its health/config periodically - Thomas Morin pointed out that this is relevant not only for ML2 mechanism drivers but also for all drivers of different services - The agreement was to start with a specification where we scope the proposal into something manageable for implementation * Yushiro Furukawa proposed to add support of 'snat' as a loggable resource type: https://bugs.launchpad.net/neutron/+bug/1752290 - The agreement was to implement it in Rocky - Brian Haley agreed to be the approver * Hongbin Lu indicated that if users provide different kinds of invalid query parameters, the behavior of the Neutron API looks unpredictable ( https://bugs.launchpad.net/neutron/+bug/1749820) - The proposal is to improve the predictability of the Neutron API by handling invalid query parameters consistently - The proposal was accepted. It will need to provide API discoverability when behavior changes on filter parameter validation - It was also recommended to discuss this with the API SIG to get their guidance.
The discussion already started in the mailing list: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128021.html Openflow Manager and Common Classification Framework ========================================== * The Openflow manager implementation needs reviews to continue making progress - The approved spec is here: https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/l2-extension-ovs-flow-management.html - The code is here: https://review.openstack.org/323963 - Thomas Morin, David Shaughnessy and Miguel Lavalle discussed and reviewed the implementation during the last day of the PTG. The result of that conversation was reflected in the patch. Thomas and Miguel committed to continue reviewing the patch * The Common Classification Framework (https://specs.openstack.org/openstack/neutron-specs/specs/pike/common-classification-framework.html) needs to be adopted by its potential consumers: QoS, SFC, FWaaS - David Shaughnessy and Miguel Lavalle met with Slawek Kaplonski over IRC the last day of the PTG (http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-03-02.log.html#t2018-03-02T12:00:34) to discuss the adoption of the framework in QoS code. The agreement was to have a PoC for the DSCP marking rule, since it uses OpenFlow and wouldn't involve big backend changes - David Shaughnessy and Yushiro Furukawa are going to meet to discuss adoption of the framework in FWaaS Neutron to Neutron interconnection ========================= * Thomas Morin walked the team through an overview of his proposal ( https://review.openstack.org/#/c/545826) for Neutron to Neutron interconnection, whereby the following requirements are satisfied: - Interconnection is consumable on-demand, without admin intervention - Have network isolation and allow the use of private IP addressing end to end - Avoid the overhead of packet encryption * Feedback was positive and the agreement is to continue developing and reviewing the specification L3 and L3 flavors ============ * Isaku Yamahata shared with the team that the implementation of routers using the L3 flavors framework gives rise to the need to specify the order in which callbacks are executed in response to events - Over the past couple of months several alternatives have been considered: callback cascading among resources, SQLAlchemy events, assigning priorities to callbacks responding to the same event - The agreement was an approach based on assigning a priority structure to callbacks in neutron-lib: https://review.openstack.org/#/c/541766 * Isaku Yamahata shared with the team the progress made with the PoC for an Openflow based DVR: https://review.openstack.org/#/c/472289/ and https://review.openstack.org/#/c/528336/ - There was a discussion on whether we need to ask the OVS community to do ipv6 modification to support this PoC. The conclusion was that the feature already exists - There was also an agreement for David Chou to add Tempest testing for the scenario of mixed agents neutron-lib ======== * The team reviewed two neutron-lib specs, providing feedback through Gerrit: - A spec to rehome db api and utils into neutron-lib: https://review.openstack.org/#/c/473531. - A spec to decouple neutron db models and ovo for neutron-lib: https://review.openstack.org/#/c/509564/. There is agreement from Ihar Hrachyshka that OVO base classes should go into neutron-lib.
But he asked not to move neutron.objects.db.api yet, since it's still in flux * Manjeet Singh Bhatia proposed making the payload consistent for all the callbacks so all the operations of an object get the same type of payload. ( https://bugs.launchpad.net/neutron/+bug/1747747) - The agreement was for Manjeet to document all the instances in the code where this is happening so he and others can work on making the payloads consistent Proposal to migrate neutronclient python bindings to OpenStack SDK ================================================== * Akihiro Motoki proposed to change the first priority of neutron-related python bindings to OpenStack SDK rather than the neutronclient python bindings, given that OpenStack SDK became official in Queens ( http://lists.openstack.org/pipermail/openstack-dev/2018-February/127726.html ) - The proposal is to implement all Neutron features in OpenStack SDK as first-class citizens and have the neutronclient OSC plugin consume the corresponding OpenStack SDK APIs - New features should be supported in OpenStack SDK and the OSC/neutronclient OSC plugin as the first priority - If a new feature depends on the neutronclient python bindings, it can be implemented in the neutronclient python bindings first and ported as part of the existing feature transition - Existing features only supported in the neutronclient python bindings are ported into OpenStack SDK, and the neutronclient OSC plugin will consume them once they are implemented in OpenStack SDK - There is no plan to drop the neutronclient python bindings since quite a few projects consume them. They will be maintained as-is - Projects like Nova that consume a small set of neutron features can continue using the neutronclient python bindings. Projects like Horizon or Heat that would like to support a wide range of features might be better off switching to OpenStack SDK - Proposal was accepted Cross project planning with Nova ======================== * Minimum bandwidth support in the Nova scheduler. The summary of the outcome of the discussion and further work done after the PTG is the following: - Minimum bandwidth support guarantees a port minimum bandwidth. Strict minimum bandwidth support requires cooperation with the Nova scheduler, to avoid physical interface bandwidth overcommitment - Neutron will create in each host networking RPs (Resource Providers) under the compute RP with proper traits and then will report resource inventories based on the discovered and / or configured resource inventory in the host - The hostname will be used by Neutron to find the compute RP created by Nova for the compute host. This convention can create ambiguity in deployments with multiple cells, where hostnames may not be unique. However this problem is not exclusive to this effort, so its solution will be considered out of scope - Two new standard Resource Classes will be defined to represent the bandwidth in each direction, named `NET_BANDWIDTH_INGRESS_BITS_SEC` and `NET_BANDWIDTH_EGRESS_BITS_SEC` - New traits will be defined to distinguish a network back-end agent: `NET_AGENT_SRIOV`, `NET_AGENT_OVS`. Also new traits will be used to indicate which physical network a given Network RP is connected to - Neutron will express a port's bandwidth needs through the port API in a new attribute named "resource_request" that will include ingress bandwidth, egress bandwidth, the physical net and the agent type - The first implementation of this feature will support server create with pre-created Neutron ports having QoS policy with minimum bandwidth rules.
Server create with networks having QoS policy minimum bandwidth rules will be out of scope of the first implementation, because currently, in this case, the corresponding port creations happen after the scheduling decision has been made - For the first implementation, Neutron should reject a QoS minimum bandwidth policy rule created on a bound port - The following cases don't involve any interaction with Nova and as a consequence, Neutron will have to adjust the resource allocations: QoS policy rule bandwidth amount change on a bound port and QoS aware sub port create under a bound parent port - For more detailed discussion, please go to the following specs: https://review.openstack.org/#/c/502306 and https://review.openstack.org/#/c/508149 * Provide Port Binding Information for Nova Live Migration ( https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/portbinding_information_for_nova.html and https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/neutron-new-port-binding-api.html). - There was no discussion around this topic - There was only an update to both teams about the solid progress that has been made on both sides: https://review.openstack.org/#/c/414251/ and https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/neutron-new-port-binding-api - The plan is to finish this in Rocky * NUMA aware switches https://review.openstack.org/#/c/541290/ - The agreement on this topic was to do this during Rocky entirely in Nova using a config option which is a list of JSON blobs * Miguel Lavalle and Hongbin Lu proposed to add the device_id of the associated port to the floating IP resource - The use case is to allow Nova to filter instances by floating IPs - The agreement was that this would be adding an entirely new contract to Nova with new query parameters. This will not be implemented in Nova, especially since the use case can already be fulfilled by making 3 API calls in a client: find floating IP via filter (Neutron), use that to filter port to get the device_id (Neutron), use that to get the server (Nova); a client-side sketch of this flow follows below
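For reference, the three-call flow looks roughly like this (illustrative python-neutronclient / python-novaclient style calls; the floating IP address is made up, so treat this as pseudocode):

    # Neutron: find the floating IP by address filter, take its port_id
    fips = neutron.list_floatingips(floating_ip_address='203.0.113.10')
    port_id = fips['floatingips'][0]['port_id']
    # Neutron: the port's device_id holds the server UUID
    port = neutron.show_port(port_id)['port']
    # Nova: fetch the server by that UUID
    server = nova.servers.get(port['device_id'])

Team photos ========= * Thanks to Kendall Nelson, the official PTG team photos can be found here: https://www.dropbox.com/sh/dtei3ovfi7z74vo/AABT7UR5el6iXRx5WihkbOB3a/Neutron?dl=0 * Thanks to Nikolai de Figueiredo for sharing with us pictures of our team dinner. Please find a couple of them attached to this message -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Neutron dinner Dublin 1.jpg Type: image/jpeg Size: 4024788 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Neutron dinner Dublin 2.jpg Type: image/jpeg Size: 3106795 bytes Desc: not available URL: From ramamani.yeleswarapu at intel.com Mon Mar 12 18:53:01 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 12 Mar 2018 18:53:01 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.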
This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Rocky Priorities - https://review.openstack.org/#/c/550174/ - Remaining Rescue patches - https://review.openstack.org/#/c/546919/ - Fix a bug for unrescuing with whole disk image - https://review.openstack.org/#/c/524118/ - Devstack changes to enable testing add support for rescue mode - https://review.openstack.org/#/c/538119/ - Rescue mode standalone tests - BIOS interface specification - https://review.openstack.org/#/c/496481/ Old priorities list that needs to be reconciled ---------------------------------------------- - Fix the multitenant grenade - https://bugs.launchpad.net/ironic/+bug/1744139 - Testing another possibility - Disable .pyc file creation https://review.openstack.org/544750 MERGED - Avoids library incompatibility issue by disabling .pyc files from being written to disk in the scenario. - backport to stable/queens: https://review.openstack.org/#/c/545089/ MERGED - The nova issue noted under critical bugs is also needed to make multitenant grenade reliable again. - Required Backports/Nice to haves below - CRITICAL bugs (must be fixed and backported to queens before the release) - Nova - Placement has issues after upgrade if ironic is unreachable for too long - Current WIP: https://review.openstack.org/#/c/545479/ - https://bugs.launchpad.net/nova/+bug/1750450 Required Queens Backports ------------------------- - detached VIF reappearing: https://bugs.launchpad.net/ironic/+bug/1750785 - workaround: https://review.openstack.org/546584 abandoned - decided to revert the original patch: https://review.openstack.org/546705 MERGED - backport to stable/queens: https://review.openstack.org/546719 APPROVED Nice to have backports ---------------------- - Ansible docs - https://review.openstack.org/#/c/525501/ MERGED - backport https://review.openstack.org/#/c/546079/ MERGED - inspector: do not try passing non-MACs as switch_id: https://review.openstack.org/542214 APPROVED - stable/queens - https://review.openstack.org/543961 MERGED - Fix for CLEANING on conductor restart: https://review.openstack.org/349971 MERGED - backport: https://review.openstack.org/#/c/545893/ MERGED - Reset reservations on take over: https://review.openstack.org/546273 Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: https://review.openstack.org/#/c/543883/ - rescue support for irmc-virtual-media boot oneview: Subproject priorities --------------------- bifrost: ironic-inspector (or its client): networking-baremetal: networking-generic-switch: sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 19 Feb 2018 and 12 Mar 2018) - Ironic: 211 bugs (+3) + 248 wishlist items (+5). 5 new (+2), 152 in progress (-3), 1 critical, 33 high and 24 incomplete (+1) - Inspector: 14 bugs (+1) + 26 wishlist items. 0 new, 14 in progress (+2), 0 critical, 3 high (+1) and 4 incomplete - Nova bugs with Ironic tag: 15 (-1). 
1 new (-1), 0 critical, 0 high - via http://dashboard-ironic.7e14.starter-us-west-2.openshiftapps.com/ - the dashboard was abruptly deleted and needs a new home :( - use it locally with `tox -erun` if you need to - critical: - sushy: https://bugs.launchpad.net/sushy/+bug/1754514 (basic auth broken when SessionService is not present) - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarballs.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Essential Priorities ==================== Ironic client API version negotiation (TheJulia, dtantsur) ---------------------------------------------------------- - RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145 - Nova bug https://bugs.launchpad.net/nova/+bug/1739440 - gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145 - status as of 12 Feb 2018: - TODO: - API-SIG guideline on consuming versions in SDKs https://review.openstack.org/532814 on review - Rocky cycle work is different and will be client oriented in nova. TheJulia will abandon the remaining patches proposed for the client Classic drivers deprecation (dtantsur) -------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 12 Mar 2018: - switch documentation to hardware types: - need help from vendors updating their pages! - ilo: https://review.openstack.org/#/c/542593/ MERGED - idrac looks fine for now - api-ref examples: TODO - ironic-inspector: - documentation: https://review.openstack.org/#/c/545285/ - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ - change the default discovery driver: https://review.openstack.org/#/c/550464/ - migration of CI to hardware types - IPA: TODO - ironic-lib: TODO? - python-ironicclient: TODO? - python-ironic-inspector-client: TODO? - virtualbmc: TODO?
Traits support planning (mgoddard, johnthetubaguy, dtantsur) - To be removed from rocky cycle list -------------------------------------------------------------------------------------------------- - status as of 12 Feb 2018: - deploy templates spec: https://review.openstack.org/504952 needs reviews - depends on deploy-steps spec: https://review.openstack.org/#/c/412523 - traits API: - need to validate node's instance_info['traits'] at deploy time (https://bugs.launchpad.net/ironic/+bug/1722194/comments/31) - https://review.openstack.org/#/c/543461 - will need to backport this to stable/queens - notes on next steps: https://etherpad.openstack.org/p/ironic-node-instance-traits Reference architecture guide (dtantsur, sambetts) ------------------------------------------------- - status as of 19 Feb 2018: - dtantsur is returning to this after the release - TheJulia suggested we do it right on the PTG - list of cases from the PTG - Admin-only provisioner - small and/or rare: TODO - non-HA acceptable, noop/flat network acceptable - large and/or frequent: TODO - HA required, neutron network or noop (static) network - Bare metal cloud for end users - smaller single-site: TODO - non-HA, ironic conductors on controllers and noop/flat network acceptable - larger single-site: TODO - HA, split out ironic conductors, neutron networking, virtual media > iPXE > PXE/TFTP - split out TFTP servers if you need them? - larger multi-site: TODO - cells v2 - ditto as single-site otherwise? High Priorities =============== Neutron event processing (vdrok, vsaienk0, sambetts) ---------------------------------------------------- - status as of 27 Sep 2017: - spec at https://review.openstack.org/343684, ready for reviews, replies from authors - WIP code at https://review.openstack.org/440778 Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates.
I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation - RFEs (Rocky) - https://bugs.launchpad.net/networking-baremetal/+bug/1749166 - https://bugs.launchpad.net/networking-baremetal/+bug/1749162 Rescue mode (rloo, stendulker) ------------------------------ - Status as of 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Add documentation for rescue mode: https://review.openstack.org/#/c/431622/ MERGED - Devstack changes to enable testing add support for rescue mode: https://review.openstack.org/#/c/524118/ - We need to be careful with this, in that we can't use python-ironicclient changes that have not been released. - Update "standalone" job for supporting rescue mode: https://review.openstack.org/#/c/537821/ - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Bugs: - unrescue fails with partition user image: https://review.openstack.org/#/c/544278/ MERGED - rescue ramdisk doesn't boot on UEFI: https://review.openstack.org/#/c/545186/ MERGED - Can't Merge until we do a client release with rescue support (in Rocky): - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. - To get the nova patch merged, we need: - release new python-ironicclient - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirements (this patch needs to be posted manually) - code patch: https://review.openstack.org/#/c/416487/ - CI is needed for the nova part to land - tiendc is working on CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2018: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though.
- (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ Graphical console interface (pas-ha, vdrok, rpioso) --------------------------------------------------- - status as of 8 Jan 2018: - spec on review: https://review.openstack.org/#/c/306074/ - there is a nova part here, which has to be approved too - dtantsur is worried by the absence of progress here - (TheJulia) I think for rocky, it might be worth making it a prime focus, or making it a background goal. BIOS config framework (dtantsur, yolanda, rpioso) ------------------------------------------------- - status as of 8 Jan 2018: - spec under active review: https://review.openstack.org/#/c/496481/ OpenStack Priorities ==================== Mox --- - TheJulia needs to just declare this done. SIGHUP support -------------- - Proposed for ironic by rloo -- this is done: https://review.openstack.org/474331 MERGED \o/ - TODO: - ironic-inspector - networking-baremetal Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this includes all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not a priority for them - Right now the patch passes all gate jobs except agent_- drivers. - (TheJulia) It seems we might not have py3 compatibility with swift until the T- cycle. - updating setup.cfg (part of requirements for the goal): - ironic: https://review.openstack.org/#/c/539500/ - MERGED - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507067 - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - delayed as the HA work seems to take a different direction Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. `openstack` command does not work. - TheJulia will try to look at this this week. Drivers: -------- Cisco UCS (sambetts) Last updated 2018/02/05 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Cisco CIMC driver CI back up and working on every patch - Cisco UCSM driver CI in development - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed...
URL: From tpb at dyncloud.net Mon Mar 12 18:56:34 2018 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 12 Mar 2018 14:56:34 -0400 Subject: [openstack-dev] [quotas] [cyborg]Dublin Rocky PTG Summary In-Reply-To: References: Message-ID: <20180312185634.csicgfaredk4i5jr@barron.net> Just a remark below w.r.t. quota support in some other projects fwiw. On 09/03/18 15:46 +0800, Zhipeng Huang wrote: > Hi Team, >Thanks to our topic leads' efforts, below is the aggregated summary from >our dublin ptg session discussion. Please check it out and feel free to >feedback any concerns you might have. < -- snip -- > > Quota and Multi-tenancy Support > Etherpad: https://etherpad.openstack.org/p/cyborg-ptg-rocky-quota > Slide: > https://docs.google.com/presentation/d/1DUKWW2vgqUI3Udl4UDvxgJ53Ve5LmyaBpX4u--rVrCc/edit?usp=sharing > > 1. Provide project and user level quota support > 2. Treat all resources as the reserved resource type > 3. Add quota engine and quota driver for the quota support > 4. Tables: quotas, quota_usage, reservation > 5. Transactions operation: reserve, commit, rollback > > - Concerns on rollback > > > - Implement a two-stage reservation and rollback > > > - reserve - commit - rollback (if failed) > Note that cinder and manila followed the nova implementation of a two-stage reservation/commit/rollback model but the resulting system has been buggy. Over time, the quota system's notion of resource usage gets out of sync with actual resource usage. Nova has since dropped the reserve/commit/rollback model [0] and cinder and manila are considering making a similar change. Currently we create reservation records and update quota usage in the API service and then remove the reservation records and update quota usage in another service at commit or rollback time, or on reservation timeout. Nova now avoids the double bookkeeping of resource usage and the need to update these records correctly across separate services by directly checking resource counts in the api at the time requests are received. If we can do the same thing in cinder and manila a whole class of tough, recurrent bugs can be eliminated.
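To make the contrast concrete, here is a rough illustrative sketch; the helper names (QUOTAS, do_create_volume, count_volume_usage) are made up, not actual cinder or manila code:

    class OverQuota(Exception):
        pass

    limits = {'volumes': 10, 'gigabytes': 1000}  # per-project limits (illustrative)

    # Two-stage model: usage lives in separate bookkeeping tables that
    # reserve/commit/rollback must keep consistent across services.
    def create_volume_two_stage(context, size):
        reservations = QUOTAS.reserve(context, volumes=1, gigabytes=size)
        try:
            volume = do_create_volume(context, size)  # may run in another service
            QUOTAS.commit(context, reservations)      # miss this step and usage drifts
            return volume
        except Exception:
            QUOTAS.rollback(context, reservations)
            raise

    # Counting model: check actual resource counts at request time; there
    # are no reservation records to leak or drift. Nova additionally
    # re-checks the counts after allocating to narrow the race window.
    def create_volume_counted(context, size):
        count, gigabytes = count_volume_usage(context.project_id)
        if count + 1 > limits['volumes'] or gigabytes + size > limits['gigabytes']:
            raise OverQuota()
        return do_create_volume(context, size)

The main concern expressed thus far with this "resource counting" approach is that there may be some negative performance impact since the current approach provides cached usage information to the api service. As you can see here [1] there probably is not yet agreement on the degree of performance impact but there does seem to be agreement that we need first to get a quota system that is correct and reliable, then optimize for performance as needed. Best regards, -- Tom Barron [0] https://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/cells-count-resources-to-check-quota-in-api.html [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128108.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From whayutin at redhat.com Mon Mar 12 20:04:01 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 12 Mar 2018 16:04:01 -0400 Subject: [openstack-dev] [TripleO] Proposal for quickstart devmode replacement In-Reply-To: <20180312181826.nojo2y4d26ngj5vz@localhost> References: <20180312181826.nojo2y4d26ngj5vz@localhost> Message-ID: On Mon, Mar 12, 2018 at 2:18 PM, Gabriele Cerami wrote: > Hi, > > we recently changed our set of scripts to align the user workflow > with the CI workflow.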
One of the results of this change was the reproducer script, which is now > the official way to spawn a live environment to debug a set of changes. > One of the negative results of this change was the deprecation of > devmode. How do you spawn a live environment with no set of changes as a > base? > We are trying to fill this gap with this proposal > > https://review.openstack.org/548005 > > The well-known quickstart.sh script called with the proper set of > arguments will use the reproducer script, but with a set of options that > will not actually use any upstream change to test. > > For example, the command > > quickstart.sh --generate-reproducer --jobtype ovb-1ctlr_1comp_1ceph-featureset024 > --credentials-file myrdocreds.sh > > will spawn a live environment with the exact set of features described > in the job periodic-tripleo-ci-ovb-1ctlr_1comp_1ceph-featureset024 > > Other combinations that don't follow upstream job configurations are > possible by modifying environment variables, but they are not our primary > focus at the moment. > > We are trying to gather as much feedback as possible to make this > proposal a worthy successor of the devmode script, so please point out > which parts of the old functionality you think are missing, and what new > functionality you would really like to see in it. > > Thanks. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Many folks in the community have requested this feature, so we are looking for feedback, please. Basically it's taking the tooling found in http://tripleo.org/contributor/reproduce-ci.html and allowing you to create a plain old vanilla deployment for your development. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Mon Mar 12 20:28:40 2018 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 12 Mar 2018 16:28:40 -0400 Subject: [openstack-dev] [manila] [summit] Forum topic proposal etherpad Message-ID: <20180312202840.p2kad4arhxmjf5xb@barron.net> Please add proposed topics for manila to this etherpad [1] for the Vancouver Forum. In a couple of weeks we'll use this list to submit abstracts for the next stage of the process [2]. As a reminder, the Forum is the part of the Summit conference dedicated to open discourse among operators, developers, users -- all who have a vested interest in design and planning of the future of OpenStack [3]. I've added a few topics to prime the pump. [1] https://etherpad.openstack.org/p/YVR-manila-brainstorming [2] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127944.html [3] quoting http://lists.openstack.org/pipermail/openstack-dev/2018-March/128180.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jungleboyj at gmail.com Mon Mar 12 20:28:40 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 12 Mar 2018 15:28:40 -0500 Subject: [openstack-dev] [cinder] [oslo] cinder.conf generation is broken for my_ip, building non-reproducibly In-Reply-To: <54b240eb-fa3b-1f61-5022-09b3c2e92a84@debian.org> References: <54b240eb-fa3b-1f61-5022-09b3c2e92a84@debian.org> Message-ID: Thomas, Thanks for finding this.  I have opened a bug and submitted a patch.
Jay (jungleboyj)

On 3/12/2018 3:17 AM, Thomas Goirand wrote:
> Hi,
>
> When inspecting Cinder's (Queens release) cinder.conf, I can see:
>
> # Warning: Failed to format sample for my_ip
> # unhashable type: 'HostAddress'
>
> So it seems there's an issue in either Cinder or Oslo. How can I
> investigate and fix this?
>
> It's very likely that I'm once more the only person in the OpenStack
> community who is really checking config file generation (it used to be
> like that for past releases), and therefore the only one who noticed it.
>
> Also, looking at the code, this seems to be yet another instance of
> "package cannot be built reproducibly" [1], with the build host config
> leaking into the configuration (well, once that's fixed...). Indeed, in
> the code I can read:
>
> cfg.HostAddressOpt('my_ip',
>                    default=netutils.get_my_ipv4(),
>                    help='IP address of this host'),
>
> This means that, when that's repaired, building Cinder will write
> something like this:
>
> #my_ip = 1.2.3.4
>
> with 1.2.3.4 being the value of netutils.get_my_ipv4(). This is easily
> fixed by adding something like this:
>
> sample_default=''
>
> I'm writing this here for Cinder, but there have been numerous cases
> like this already, the most common mistake being the hostname of the
> build host leaking into the configuration. While this is easily fixed
> at the packaging level by fixing the config file after generating it
> with oslo.config, often that config file is also built with the sphinx
> doc, and then that file isn't built reproducibly. That's harder to
> detect, and easier fixed upstream.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> [1] https://reproducible-builds.org/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From e0ne at e0ne.info Mon Mar 12 21:20:37 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Mon, 12 Mar 2018 23:20:37 +0200
Subject: [openstack-dev] [horizon][ptg] Team Photo
Message-ID: 

Hi team,

Thanks to everyone who joined the PTG. You can find our photo in the
attachment.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: horizon_ptg_photo.jpeg
Type: image/jpeg
Size: 6055018 bytes
Desc: not available
URL: 

From kakuma at valinux.co.jp Mon Mar 12 22:24:43 2018
From: kakuma at valinux.co.jp (fumihiko kakuma)
Date: Tue, 13 Mar 2018 07:24:43 +0900
Subject: [openstack-dev] [Neutron] Dublin PTG Summary
In-Reply-To: 
References: 
Message-ID: <20180313072443.9F95.A24531E9@valinux.co.jp>

Hi Miguel,

> * As part of the neutron-lib effort, we have found networking projects that
> are very inactive. Examples are networking-brocade (no updates since May of
> 2016) and networking-ofagent (no updates since March of 2017). Miguel
> Lavalle will contact these projects' leads to ascertain their situation. If
> they are indeed inactive, we will not support them as part of neutron-lib
> updates and will also try to remove them from code search

networking-ofagent has been removed in the Newton release, so it will
not be necessary to support it as part of neutron-lib updates.

Thanks
kakuma.
On Mon, 12 Mar 2018 13:45:27 -0500 Miguel Lavalle wrote:

> Hi All!
>
> First of all, I want to thank the team for the productive week we had
> in Dublin. Below is a high level summary of the discussions we had. If
> there is something I left out, please reply to this email thread to add
> it. However, if you want to continue the discussion on any of the
> individual points summarized below, please start a new thread, so we
> don't have a lot of conversations going on attached to this update.
>
> You can find the etherpad we used during the PTG meetings here:
> https://etherpad.openstack.org/p/neutron-ptg-rocky
>
>
> Retrospective
> ==========
>
> * The team missed one community goal in the Pike cycle
> (https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html)
> and one in the Queens cycle
> (https://governance.openstack.org/tc/goals/queens/policy-in-code.html)
>
> - Akihiro Motoki will work on
> https://governance.openstack.org/tc/goals/queens/policy-in-code.html
> during Rocky
>
> - We need volunteers to complete
> https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
> and the two new goals for the Rocky cycle:
> https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
> and https://governance.openstack.org/tc/goals/rocky/mox_removal.html.
> Akihiro Motoki will lead the effort for mox removal
>
> - We decided to add a section to our weekly meeting agenda where we are
> going to track the progress towards catching up with the community
> goals during the Rocky cycle
>
> * As part of the neutron-lib effort, we have found networking projects
> that are very inactive. Examples are networking-brocade (no updates
> since May of 2016) and networking-ofagent (no updates since March of
> 2017). Miguel Lavalle will contact these projects' leads to ascertain
> their situation. If they are indeed inactive, we will not support them
> as part of neutron-lib updates and will also try to remove them from
> code search
>
> * We will continue our efforts to recruit new contributors and develop
> core reviewers. During the conversation on this topic, Nikolai de
> Figueiredo and Pawel Suder announced that they will become active in
> Neutron. Both of them, along with Hongbin Lu, indicated that they are
> interested in working towards becoming core reviewers.
>
> * The team went through the blueprints in the backlog. Here is the
> status for those blueprints that are not discussed in other sections of
> this summary:
>
> - Adopt oslo.versionedobjects for database interactions. This is a
> continuing effort. The contact is Ihar Hrachyshka (ihrachys).
> Contributors are wanted. There is a weekly meeting led by Ihar where
> this topic is covered:
> http://eavesdrop.openstack.org/#Neutron_Upgrades_Meeting
>
> - Enable adoption of an existing subnet into a subnetpool. The final
> patch in the series to implement this feature is
> https://review.openstack.org/#/c/348080. Pawel Suder will drive this
> patch to completion
>
> - Neutron in-tree API reference
> (https://blueprints.launchpad.net/neutron/+spec/neutron-in-tree-api-ref).
> There are two remaining TODOs to complete this blueprint:
> https://bugs.launchpad.net/neutron/+bug/1752274 and
> https://bugs.launchpad.net/neutron/+bug/1752275. We need volunteers for
> these two work items
>
> - Add TCP/UDP port forwarding extension to L3. The spec was merged
> recently:
> https://specs.openstack.org/openstack/neutron-specs/specs/queens/port-forwarding.html.
> Implementation effort is in progress:
> https://review.openstack.org/#/c/533850/ and
> https://review.openstack.org/#/c/535647/
>
> - Pure Python driven Linux network configuration
> (https://bugs.launchpad.net/neutron/+bug/1492714). This effort has been
> going on for several cycles, gradually adopting pyroute2. Slawek
> Kaplonski is continuing it with https://review.openstack.org/#/c/545355
> and https://review.openstack.org/#/c/548267
>
>
> Port behind port API proposal
> ======================
>
> * Omer Anson proposed to extend the Trunk Port API to generalize the
> support for port-behind-port use cases, such as containers nested as
> MACVLANs within a VM, or an HA proxy port behind an amphora VM port:
> https://bugs.launchpad.net/bugs/1730845
>
> - After discussing the proposed use cases, the agreement was to develop
> a specification, making sure input is provided by the Kuryr and Octavia
> teams
>
>
> ML2 and Mechanism drivers
> =====================
>
> * Hongbin Lu presented a proposal
> (https://bugs.launchpad.net/neutron/+bug/1722720) to add a new value
> "auto" to the port attribute admin_state_up.
>
> - This is to support SR-IOV ports, where admin_state_up == "auto" would
> mean that the VF link state follows that of the PF. This may be useful
> when VMs use the link as a trigger for their own HA mechanism
> - The agreement was not to overload the admin_state_up attribute with
> more values, since it reflects the desired administrative state of the
> port, but to add a new attribute for the intended purpose
>
> * Zhang Yanxian presented a specification
> (https://review.openstack.org/506066) to support SR-IOV bonds, whereby
> a Neutron port is associated with two VFs in separate PFs. This is
> useful in NFV scenarios, where link redundancy is necessary.
>
> - Nikolai de Figueiredo agreed to help to drive this effort forward,
> starting with the specification on both the Neutron and the Nova sides
> - Sam Betts indicated this type of bond is also of interest for Ironic.
> He requested to be kept in the loop
>
> * Ruijing Guo proposed to support VLAN transparency in the Neutron OVS
> agent.
>
> - There is a previous incomplete effort to provide this support:
> https://bugs.launchpad.net/neutron/+bug/1705719. Patches are here:
> https://review.openstack.org/#/q/project:openstack/neutron+topic:bug/1705719
> - The agreement was for Ruijing to look at the existing patches to
> re-start the effort. Thomas Morin may provide help for this
> - While on this topic, the conversation temporarily forked to the use
> of registers instead of ovsdb port tags in the L2 agent br-int, and
> possibly removing br-tun. Thomas Morin committed to draft an RFE for
> this.
>
> * Mike Kolesnik, Omer Anson, Irena Berezovsky, Takashi Yamamoto, Lucas
> Alvares, Ricardo Noriega, Miguel Ajo and Isaku Yamahata presented the
> proposal to implement a common mechanism to achieve synchronization
> between Neutron's DB and the DBs of sub-projects / SDN frameworks
>
> - Currently each sub-project / SDN framework has its own solution for
> this problem.
> The group thinks that a common solution can be achieved
> - The agreement was to create a specification where the common solution
> can be fleshed out
> - The synchronization mechanism will exist in Neutron
>
> * Mike Kolesnik (networking-odl) requested feedback from members of
> other Neutron sub-projects about the value of inheriting ML2 Neutron's
> unit tests to get "free testing" for mechanism drivers
>
> - The conclusion was that there is no value in that practice for the
> sub-projects
> - Sam Betts and Miguel Lavalle will explore moving unit test utils to
> neutron-lib to enable subprojects to create their own base classes
> - Mike Kolesnik will document a guideline for sub-projects not to
> inherit unit tests from Neutron
>
>
> API topics
> ========
>
> * Isaku Yamahata presented a proposal for a new API for cloud admins to
> retrieve the physical networks configured in compute hosts
>
> - This information is currently stored in configuration files. In
> agent-less environments it is difficult to retrieve
> - The agreement was to extend the agent API to expose the physnet as a
> standard attribute. This will be fed by a pseudo-agent
>
> * Isaku Yamahata presented a proposal for a new API to report mechanism
> driver health
>
> - The overall idea is to report mechanism driver status, similar to the
> agents API, which reports agent health. In the case of the mechanism
> drivers API, it would report connectivity to the backend SDN controller
> or MQ server and report its health/config periodically
> - Thomas Morin pointed out that this is relevant not only for ML2
> mechanism drivers but also for all drivers of different services
> - The agreement was to start with a specification where we scope the
> proposal into something manageable for implementation
>
> * Yushiro Furukawa proposed to add support for 'snat' as a loggable
> resource type: https://bugs.launchpad.net/neutron/+bug/1752290
>
> - The agreement was to implement it in Rocky
> - Brian Haley agreed to be the approver
>
> * Hongbin Lu indicated that if users provide different kinds of invalid
> query parameters, the behavior of the Neutron API looks unpredictable
> (https://bugs.launchpad.net/neutron/+bug/1749820)
>
> - The proposal is to improve the predictability of the Neutron API by
> handling invalid query parameters consistently
> - The proposal was accepted. It will need to provide API
> discoverability when behavior changes on filter parameter validation
> - It was also recommended to discuss this with the API SIG to get their
> guidance. The discussion has already started in the mailing list:
> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128021.html
>
>
> Openflow Manager and Common Classification Framework
> ==========================================
>
> * The Openflow manager implementation needs reviews to continue making
> progress
>
> - The approved spec is here:
> https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/l2-extension-ovs-flow-management.html
> - The code is here: https://review.openstack.org/323963
> - Thomas Morin, David Shaughnessy and Miguel Lavalle discussed and
> reviewed the implementation during the last day of the PTG. The result
> of that conversation was reflected in the patch.
Thomas and Miguel committed > to continue reviewing the patch > > * The Common Classification Framework (https://specs.openstack.org/o > penstack/neutron-specs/specs/pike/common-classification-framework.html) > needs to be adopted by its potential consumers: QoS, SFC, FWaaS > > - David Shaughnessy and Miguel Lavalle met with Slawek Kaplonski over > IRC the last day of the PTG (http://eavesdrop.openstack.or > g/irclogs/%23openstack-neutron/%23openstack-neutron.2018-03- > 02.log.html#t2018-03-02T12:00:34) to discuss the adoption of the framework > in QoS code. The agreement was to have a PoC for the DSCP marking rule, > since it uses OpenFlow and wouldn't involve big backend changes > > - David Shaughnessy and Yushiro Furukawa are going to meet to discuss > adoption of the framework in FWaaS > > > Neutron to Neutron interconnection > ========================= > > * Thomas Morin walked the team through an overview of his proposal ( > https://review.openstack.org/#/c/545826) for Neutron to Neutron > interconnection, whereby the following requirements are satisfied: > > - Interconnection is consumable on-demand, without admin intervention > - Have network isolation and allow the use of private IP addressing end > to end > - Avoid the overhead of packet encryption > > * Feedback was positive and the agreement is to continue developing and > reviewing the specification > > > L3 and L3 flavors > ============ > > * Isaku Yamahata shared with the team that the implementation of routers > using the L3 flavors framework gives rise to the need of specifying the > order in which callbacks are executed in response to events > > - Over the past couple of months several alternatives have been > considered: callback cascading among resources, SQLAlchemy events, > assigning priorities to callbacks responding to the same event > - The agreement was an approach based on assigning a priority structure > to callbacks in neutron-lib: https://review.openstack.org/#/c/541766 > > * Isaku Yamahata shared with the team the progress made with the PoC for an > Openflow based DVR: https://review.openstack.org/#/c/472289/ and > https://review.openstack.org/#/c/528336/ > > - There was a discussion on whether we need to ask the OVS community to > do ipv6 modification to support this PoC. The conclusion was that the > feature already exists > - There was also an agreement for David Chou add Tempest testing for the > scenario of mixed agents > > > neutron-lib > ======== > > * The team reviewed two neutron-lib specs, providing feedback through > Gerrit: > > - A spec to rehome db api and utils into neutron-lib: > https://review.openstack.org/#/c/473531. > - A spec to decouple neutron db models and ovo for neutron-lib: > https://review.openstack.org/#/c/509564/. There is agreement from Ihar > Ihrachys that OVO base classes should go into neutron-lib. But he asked not > to move yet neutron.objects.db.api since it's still in flux > > * Manjeet Singh Bhatia proposed making payload consistent for all the > callbacks so all the operations of an object get same type of payload. 
>
>
> Proposal to migrate neutronclient python bindings to OpenStack SDK
> ==================================================
>
> * Akihiro Motoki proposed to change the first priority for
> neutron-related python bindings to the OpenStack SDK rather than the
> neutronclient python bindings, given that the OpenStack SDK became
> official in Queens
> (http://lists.openstack.org/pipermail/openstack-dev/2018-February/127726.html)
>
> - The proposal is to implement all Neutron features in the OpenStack
> SDK as the first citizen, and the neutronclient OSC plugin consumes the
> corresponding OpenStack SDK APIs
> - New features should be supported in the OpenStack SDK and the
> OSC/neutronclient OSC plugin as the first priority
> - If a new feature depends on the neutronclient python bindings, it can
> be implemented in the neutronclient python bindings first, and they are
> ported as part of the existing feature transition
> - Existing features only supported in the neutronclient python bindings
> are ported into the OpenStack SDK, and the neutronclient OSC plugin
> will consume them once they are implemented in the OpenStack SDK
> - There is no plan to drop the neutronclient python bindings, since not
> a small number of projects consume them. They will be maintained as-is
> - Projects like Nova that consume a small set of neutron features can
> continue using the neutronclient python bindings. Projects like Horizon
> or Heat that would like to support a wide range of features might be
> better off switching to the OpenStack SDK
> - The proposal was accepted
>
>
> Cross project planning with Nova
> ========================
>
> * Minimum bandwidth support in the Nova scheduler. The summary of the
> outcome of the discussion, and of further work done after the PTG, is
> the following:
>
> - Minimum bandwidth support guarantees a port minimum bandwidth. Strict
> minimum bandwidth support requires cooperation with the Nova scheduler,
> to avoid physical interface bandwidth overcommitment
> - Neutron will create in each host networking RPs (Resource Providers)
> under the compute RP with the proper traits, and then will report
> resource inventories based on the discovered and / or configured
> resource inventory in the host
> - The hostname will be used by Neutron to find the compute RP created
> by Nova for the compute host. This convention can create ambiguity in
> deployments with multiple cells, where hostnames may not be unique.
> However, this problem is not exclusive to this effort, so its solution
> will be considered out of scope
> - Two new standard Resource Classes will be defined to represent the
> bandwidth in each direction, named `NET_BANDWIDTH_INGRESS_BITS_SEC` and
> `NET_BANDWIDTH_EGRESS_BITS_SEC`
> - New traits will be defined to distinguish a network back-end agent:
> `NET_AGENT_SRIOV`, `NET_AGENT_OVS`. Also, new traits will be used to
> indicate which physical network a given Network RP is connected to
> - Neutron will express a port's bandwidth needs through the port API in
> a new attribute named "resource_request" that will include the ingress
> bandwidth, the egress bandwidth, the physical net and the agent type
> - The first implementation of this feature will support server create
> with pre-created Neutron ports having a QoS policy with minimum
> bandwidth rules. Server create with networks having a QoS policy
> minimum bandwidth rule will be out of scope for the first
> implementation because, currently, in this case the corresponding port
> creations happen after the scheduling decision has been made
> - For the first implementation, Neutron should reject a QoS minimum
> bandwidth policy rule created on a bound port
> - The following cases don't involve any interaction with Nova and, as a
> consequence, Neutron will have to adjust the resource allocations: a
> QoS policy rule bandwidth amount change on a bound port, and a QoS
> aware sub-port create under a bound parent port
> - For more detailed discussion, please go to the following specs:
> https://review.openstack.org/#/c/502306 and
> https://review.openstack.org/#/c/508149
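>
> To picture the model, a host's networking RP might end up reporting an
> inventory along these lines (a sketch only -- the resource class names
> are the ones proposed above, while the numbers and the trait list are
> invented for illustration):
>
>     # Per-NIC inventory a Neutron agent could report to placement.
>     bandwidth_inventory = {
>         'NET_BANDWIDTH_INGRESS_BITS_SEC': {'total': 10 * 1000 ** 3},
>         'NET_BANDWIDTH_EGRESS_BITS_SEC': {'total': 10 * 1000 ** 3},
>     }
>     # Traits identifying the backend and, eventually, the physnet.
>     provider_traits = ['NET_AGENT_OVS']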
>
> * Provide Port Binding Information for Nova Live Migration
> (https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/portbinding_information_for_nova.html
> and
> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/neutron-new-port-binding-api.html)
>
> - There was no discussion around this topic
> - There was only an update to both teams about the solid progress that
> has been made on both sides: https://review.openstack.org/#/c/414251/
> and
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/neutron-new-port-binding-api
> - The plan is to finish this in Rocky
>
> * NUMA aware switches https://review.openstack.org/#/c/541290/
>
> - The agreement on this topic was to do this during Rocky, entirely in
> Nova, using a config option which is a list of JSON blobs
>
> * Miguel Lavalle and Hongbin Lu proposed to add the device_id of the
> associated port to the floating IP resource
>
> - The use case is to allow Nova to filter instances by floating IPs
> - The agreement was that this would be adding an entirely new contract
> to Nova with new query parameters. This will not be implemented in
> Nova, especially since the use case can already be fulfilled by making
> 3 API calls in a client: find the floating IP via a filter (Neutron),
> use that to filter ports to get the device_id (Neutron), and use that
> to get the server (Nova)
>
>
> Team photos
> =========
>
> * Thanks to Kendall Nelson, the official PTG team photos can be found
> here:
> https://www.dropbox.com/sh/dtei3ovfi7z74vo/AABT7UR5el6iXRx5WihkbOB3a/Neutron?dl=0
>
> * Thanks to Nikolai de Figueiredo for sharing with us pictures of our
> team dinner. Please find a couple of them attached to this message

-- 
fumihiko kakuma

From harlowja at fastmail.com Mon Mar 12 22:45:50 2018
From: harlowja at fastmail.com (Joshua Harlow)
Date: Mon, 12 Mar 2018 15:45:50 -0700
Subject: [openstack-dev] [keystone] [oslo] new unified limit library
In-Reply-To: 
References: <5AA0D066.1070600@fastmail.com>
Message-ID: <5AA7031E.1080609@fastmail.com>

The following may give you some more insight into delimiter,

https://review.openstack.org/#/c/284454/

-Josh

Lance Bragstad wrote:
> I missed the document describing the process for this sort of thing [0].
> So I'm back tracking a bit to go through a more formal process.
>
> [0]
> http://specs.openstack.org/openstack/oslo-specs/specs/policy/new-libraries.html
>
> # Proposed new library oslo.limit
>
> This is a proposal to create a new library dedicated to enabling more
> consistent quota and limit enforcement across OpenStack.
>
> ## Proposed library mission
>
> Enforcing quotas and limits across OpenStack has traditionally been a
> tough problem to solve.
> Determining enforcement requires quota knowledge
> from the service along with information about the project owning the
> resource. Up until the Queens release, quota calculation and
> enforcement have been left to the services to implement, forcing them
> to understand the complexities of keystone project structure. During
> the Pike and Queens PTGs, there were several productive discussions
> towards redesigning the current approach to quota enforcement. Because
> keystone is the authority on project structure, it makes sense to allow
> keystone to hold the association between a resource limit and a
> project. This means services still need to calculate quota and usage,
> but the problem should be easier for services to implement, since
> developers shouldn't need to re-implement possible hierarchies of
> projects and their associated limits. Instead, we can offload some of
> that work to a common library for services to consume that handles
> enforcing quota calculation based on limits associated to projects in
> keystone. This proposal is to have a new library called oslo.limit that
> fills that need.
>
> ## Consuming projects
>
> The services consuming this work will be any service that currently
> implements a quota system, or plans to implement one. Since keystone
> already supports unified limits and the association of limits to
> projects, the implementation for consuming projects is easier. Instead
> of having to re-write that implementation, developers need to ensure
> the quota calculation is passed to the oslo.limit library somewhere in
> the API's validation layer. The pattern described here is very similar
> to the pattern currently used by services that leverage oslo.policy for
> authorization decisions.
>
> ## Alternative libraries
>
> It looks like there was an existing library that attempted to solve
> some of these problems, called delimiter [1]. It looks like delimiter
> could be used to talk to keystone about quota enforcement, whereas the
> existing approach with oslo.limit would be to use keystone directly.
> Someone more familiar with the library (harlowja?) can probably shed
> more light on its intended uses (I couldn't find much documentation),
> but the presentation linked in a previous note was helpful.
>
> [1] https://github.com/openstack/delimiter
>
> ## Proposed adoption model/plan
>
> The unified limit API [2] in keystone is currently marked as
> experimental, but the keystone team is actively collecting and
> addressing feedback that will result in stabilizing the API.
> Stabilization changes that affect the oslo.limit library will also be
> addressed before version 1.0.0 is released. From there, we can look to
> incorporate the library into various services that either have an
> existing quota implementation, or services that have a quota
> requirement but no implementation.
>
> This should help us refine the interfaces between services and
> oslo.limit, while providing a facade to handle the complexities of
> project hierarchies. This should enable adoption by simplifying the
> process and making it easier for quota to be implemented in a
> consistent way across services.
>
> [2]
> https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html
>
> ## Reviewer activity
>
> At first thought, it makes sense to model the reviewer structure after
> the oslo.policy library, where the core team consists of people not
> only interested in limits and quota, but also people familiar with the
> keystone implementation of the unified limits API.
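>
> To make the intended pattern concrete, a service-side call could look
> roughly like the sketch below. This is purely illustrative -- the
> proposal deliberately leaves the interface to be defined, so the import
> path, the Enforcer name and the callback signature are all assumptions
> modeled on the oslo.policy-style usage described under "Consuming
> projects" above:
>
>     from oslo_limit import limit  # assumed import path
>
>     USAGE = {'volumes': 2}  # stand-in for a real usage query
>
>     def count_resources(project_id, resource_names):
>         return {name: USAGE.get(name, 0) for name in resource_names}
>
>     enforcer = limit.Enforcer(usage_callback=count_resources)
>
>     # Somewhere in the API's validation layer, before creating the
>     # resource; the limits themselves come from keystone's unified
>     # limits API, so the service only supplies current usage.
>     enforcer.enforce(project_id='...', deltas={'volumes': 1})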
>
> ## Implementation
>
> ### Primary Authors:
>
> Lance Bragstad (lbragstad at gmail.com) lbragstad
> You?
>
> ### Other contributors:
>
> You?
>
> ## Work Items
>
> * Create a new library called oslo.limit
> * Create a core group for the project
> * Define the minimum we need to enforce quota calculations in oslo.limit
> * Propose an implementation that allows services to test out quota
> enforcement via unified limits
>
> ## References
>
> Rocky PTG Etherpad for unified limits:
> https://etherpad.openstack.org/p/unified-limits-rocky-ptg
>
> ## Revision History
>
> Introduced in Rocky
>
>
> On 03/07/2018 11:55 PM, Joshua Harlow wrote:
>> So the following was a prior effort:
>>
>> https://github.com/openstack/delimiter
>>
>> Maybe just continue down the path of that and/or take that whole repo
>> over and iterate (or adjust the prior code, or ...)?? Or if not,
>> that's ok too, y'all get to decide.
>>
>> https://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal
>>
>>
>> Lance Bragstad wrote:
>>> Hi all,
>>>
>>> Per the identity-integration track at the PTG [0], I proposed a new oslo
>>> library for services to use for hierarchical quota enforcement [1]. Let
>>> me know if you have any questions or concerns about the library. If the
>>> oslo team would like, I can add an agenda item for next week's oslo
>>> meeting to discuss.
>>>
>>> Thanks,
>>>
>>> Lance
>>>
>>> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
>>> [1] https://review.openstack.org/#/c/550491/
>>>
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From prometheanfire at gentoo.org Mon Mar 12 23:47:14 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Mon, 12 Mar 2018 18:47:14 -0500
Subject: [openstack-dev] [requirements][zun] update websocket-client and kubernetes libraries
Message-ID: <20180312234714.2536pwr3u3lujfuc@gentoo.org>

Requirements plans to update both versions, removing the current cap on
websocket-client. The plan to do so is as follows.

a. Remove the cap on websocket-client
b. Merge the gr-update into python-zunclient
c. Make a release of python-zunclient
d. Alter the constrained versions of websocket-client and kubernetes
   - to be co-installable with openstack libs (python-zunclient)
e. Raise the minimum acceptable version for kubernetes and raise the
   minimum version of websocket-client
   - raise to above the versions kubernetes had problems with

-- 
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From xinni.ge1990 at gmail.com Tue Mar 13 00:45:33 2018
From: xinni.ge1990 at gmail.com (Xinni Ge)
Date: Tue, 13 Mar 2018 09:45:33 +0900
Subject: [openstack-dev] [horizon] [devstack] horizon 'network create' panel does not distinguished
In-Reply-To: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com>
References: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com>
Message-ID: 

Hello Jaewook and everyone,

It looks like the error is caused by some angular module of
heat-dashboard not being loaded correctly.

I tried to reproduce it in my devstack by installing stable/queens
Horizon/Heat-dashboard, but couldn't see the same error.

Maybe you want to try the following steps to restart the web server and
see if the issue can be fixed.
Of course you can also remove the troubled panel from heat-dashboard; I
will also describe how to do that as follows.

1. remove heat-dashboard related settings
    rm horizon/openstack_dashboard/local/enabled/_16*  # (particularly try to remove _1650_project_template_generator_panel.py to fix it)
    rm horizon/openstack_dashboard/local/local_settings.d/_1699_orchestration_settings.py*
    rm horizon/openstack_dashboard/conf/heat_policy.json

2. let horizon re-collect static files, and compress
    python manage.py collectstatic --clear
    python manage.py compress

3. restart the apache server
    sudo service apache2 restart

Hope the problem can be solved and everything goes well.
And if anybody sees the same error, please share more details about it.

Best Regards,
Xinni

On Mon, Mar 12, 2018 at 9:55 PM, Jaewook Oh wrote:

> Thanks for the feedback!
>
> As you said, I got errors in the JavaScript console.
>
> Below is the error log:
>
> 3bf910c7ae4c.js:652 JQMIGRATE: Logging is active
> fddd6f634ef8.js:2299 Uncaught TypeError: Cannot read property 'layout' of undefined
>     at Object.25../arrows (fddd6f634ef8.js:2299)
>     at s (fddd6f634ef8.js:2252)
>     at Object.1../lib/dagre (fddd6f634ef8.js:2252)
>     [... further frames in fddd6f634ef8.js snipped ...]
> 3bf910c7ae4c.js:699 Uncaught Error: [$injector:modulerr] Failed to instantiate module horizon.app due to:
> Error: [$injector:modulerr] Failed to instantiate module horizon.dashboard.project.heat_dashboard.template_generator due to:
> Error: [$injector:nomod] Module 'horizon.dashboard.project.heat_dashboard.template_generator' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
> http://errors.angularjs.org/1.5.8/$injector/nomod?p0=horizon.dashboard.project.heat_dashboard.template_generator
>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
>     at ensure (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:816:320)
>     at module (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:818:8)
>     at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
> [... several screens of repeated modulerr/nomod stack traces and
> URL-encoded copies of the same error message from errors.angularjs.org
> snipped for readability ...]
>     at createInjector (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:913:464)
>     at doBootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:792:36)
>     at bootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:793:58)
>     at angularInit (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:789:556)
>     at HTMLDocument. (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:1846:1383)
>     at fire (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:208:299)
>
> I don't know exactly which error I have to search for..
>
> Best Regards,
> Jaewook.
>
> On 2018. 3. 12. at 9:48 PM, Radomir Dopieralski wrote:
>
> Do you get any errors in the JavaScript console or in the network tab
> of the inspector?
>
> On Mon, Mar 12, 2018 at 12:11 PM, Jaewook Oh wrote:
>
>> Hello, this is Jaewook from Korea.
>>
>> Today I reinstalled devstack, but a weird dashboard was displayed.
>>
>> The dashboard shows all the panel contents at once. Please look at
>> the image.
>>
>> For example, the Create Network panel shows 'Network', 'Subnet', and
>> 'Subnet Details'.
>>
>> But all the menus are in the Network tab, not distinguished at all.
>> And when I click 'Subnet' or 'Subnet Details', nothing happens.
>>
>> Also, when I click a dropdown menu such as 'Select a project', it
>> shows the projects, but I cannot select one. Even though I clicked
>> it, it still shows 'Select a project'.
>>
>> The OpenStack version is 3.14.0 and the Queens release.
>> I installed it with the devstack master version.
>>
>> What I suspect is 'heat-dashboard'.
>> Before I added 'enable_plugin ~~ heat-dashboard' it didn't happen,
>> but after adding it this error happened.
>>
>> I have no idea what to do other than reinstall it.
>>
>> Is this error already a known issue?
>>
>> I would very much appreciate it if somebody could help me..
>>
>> Best Regards,
>> Jaewook.
>> ================================================
>> Jaewook Oh (오재욱)
>> IISTRC - Internet Infra System Technology Research Center
>> 369 Sangdo-ro, Dongjak-gu,
>> 06978, Seoul, Republic of Korea
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
葛馨霓 Xinni Ge

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amotoki at gmail.com Tue Mar 13 00:50:57 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 13 Mar 2018 09:50:57 +0900
Subject: [openstack-dev] [horizon] [devstack] horizon 'network create' panel does not distinguished
In-Reply-To: 
References: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com>
Message-ID: 

The details of this issue are tracked in
https://bugs.launchpad.net/bugs/1755140, and it turned out that it is
caused by an older version of the enabled file of the template
generator. I strongly recommend covering this in the release notes.

Akihiro

On 2018/03/13 9:45 AM, "Xinni Ge" wrote:

> Hello Jaewook and everyone,
>
> It looks like the error is caused by some angular module of
> heat-dashboard not being loaded correctly.
>
> I tried to reproduce it in my devstack by installing stable/queens
> Horizon/Heat-dashboard, but couldn't see the same error.
>
> Maybe you want to try the following steps to restart the web server
> and see if the issue can be fixed.
> Of course you can also remove the troubled panel from heat-dashboard;
> I will also describe how to do that as follows.
>
> 1. remove heat-dashboard related settings
>     rm horizon/openstack_dashboard/local/enabled/_16*  # (particularly try to remove _1650_project_template_generator_panel.py to fix it)
>     rm horizon/openstack_dashboard/local/local_settings.d/_1699_orchestration_settings.py*
>     rm horizon/openstack_dashboard/conf/heat_policy.json
>
> 2. let horizon re-collect static files, and compress
>     python manage.py collectstatic --clear
>     python manage.py compress
>
> 3. restart the apache server
>     sudo service apache2 restart
>
> Hope the problem can be solved and everything goes well.
> And if anybody sees the same error, please share more details about it.
>
> Best Regards,
> Xinni
>
> On Mon, Mar 12, 2018 at 9:55 PM, Jaewook Oh wrote:
>
>> Thanks for the feedback!
>>
>> As you said, I got errors in the JavaScript console.
On 2018/03/13 at 9:45 AM, "Xinni Ge" wrote:

> Hello, Jaewook and everyone
>
> It looks like the error is caused by some Angular module of
> heat-dashboard not being loaded correctly.
>
> I tried to reproduce it in my devstack by installing stable/queens
> Horizon/heat-dashboard, but couldn't see the same error.
>
> Maybe you want to try the following steps to restart the web server and
> see if the issue can be fixed. Of course you can also remove the
> troubled panel from heat-dashboard; I describe how to do that below as
> well.
>
> 1. remove the heat-dashboard related settings
>     rm horizon/openstack_dashboard/local/enabled/_16*  # (particularly try to remove _1650_project_template_generator_panel.py to fix it)
>     rm horizon/openstack_dashboard/local/local_settings.d/_1699_orchestration_settings.py*
>     rm horizon/openstack_dashboard/conf/heat_policy.json
>
> 2. let horizon re-collect the static files, and compress them
>     python manage.py collectstatic --clear
>     python manage.py compress
>
> 3. restart the apache server
>     sudo service apache2 restart
>
> Hope the problem can be solved and everything goes well.
> And if anybody sees the same error, please share more details about it.
>
> Best Regards,
> Xinni
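As a quick sanity check after following these steps, one can ask Horizon
which Angular modules it will bootstrap from a Django shell. This is a
minimal sketch, assuming Horizon's usual aggregation of each enabled
file's ADD_ANGULAR_MODULES into HORIZON_CONFIG['angular_modules']; the
module name is the one from the error in this thread.

# Run inside `python manage.py shell` from the horizon checkout.
from django.conf import settings

modules = settings.HORIZON_CONFIG.get('angular_modules', [])
wanted = 'horizon.dashboard.project.heat_dashboard.template_generator'

# If the module is still listed here after the enabled files were
# removed, stale byte-compiled settings are probably still being picked
# up, and the $injector:nomod error will persist until they are cleaned
# up as well.
print(wanted in modules)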
>
> On Mon, Mar 12, 2018 at 9:55 PM, Jaewook Oh wrote:
>
>> Thanks for the feedback!
>>
>> As you said, I got errors in the JavaScript console.
>>
>> Below is the error log:
>>
>> 3bf910c7ae4c.js:652 JQMIGRATE: Logging is active
>> fddd6f634ef8.js:2299 Uncaught TypeError: Cannot read property 'layout' of undefined
>> at Object.25../arrows (fddd6f634ef8.js:2299)
>> at s (fddd6f634ef8.js:2252)
>> at fddd6f634ef8.js:2252
>> at Object.1../lib/dagre (fddd6f634ef8.js:2252)
>> at s (fddd6f634ef8.js:2252)
>> at e (fddd6f634ef8.js:2252)
>> at fddd6f634ef8.js:2252
>> at fddd6f634ef8.js:2252
>> at fddd6f634ef8.js:2252
>> 25../arrows @ fddd6f634ef8.js:2299
>> s @ fddd6f634ef8.js:2252
>> (anonymous) @ fddd6f634ef8.js:2252
>> 1../lib/dagre @ fddd6f634ef8.js:2252
>> s @ fddd6f634ef8.js:2252
>> e @ fddd6f634ef8.js:2252
>> (anonymous) @ fddd6f634ef8.js:2252
>> (anonymous) @ fddd6f634ef8.js:2252
>> (anonymous) @ fddd6f634ef8.js:2252
>> 3bf910c7ae4c.js:699 Uncaught Error: [$injector:modulerr] Failed to instantiate module horizon.app due to:
>> Error: [$injector:modulerr] Failed to instantiate module horizon.dashboard.project.heat_dashboard.template_generator due to:
>> Error: [$injector:nomod] Module 'horizon.dashboard.project.heat_dashboard.template_generator' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
>> http://errors.angularjs.org/1.5.8/$injector/nomod?p0=horizon.dashboard.project.heat_dashboard.template_generator
>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:818:59
>> at ensure (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:816:320)
>> at module (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:818:8)
>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:925:35
>> at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>> at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:925:84
>> at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>> at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>> http://errors.angularjs.org/1.5.8/$injector/modulerr?p0=horizon.dashboard.project.heat_dashboard.template_generator&p1=... [long URL-encoded copy of the nomod error and stack above, elided]
>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:927:7
>> at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>> at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>> at createInjector (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:913:464)
>> at doBootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:792:36)
>> at bootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:793:58)
>> http://errors.angularjs.org/1.5.8/$injector/modulerr?p0=horizon.app&p1=... [long URL-encoded copy of the nested modulerr error, elided; the same pair of errors is then printed a second time]
>> at angularInit (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:789:556)
>> at HTMLDocument. (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:1846:1383)
>> at fire (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:208:299)
>> (anonymous) @ 3bf910c7ae4c.js:699
>> (anonymous) @ 3bf910c7ae4c.js:927
>> forEach @ 3bf910c7ae4c.js:703
>> loadModules @ 3bf910c7ae4c.js:924
>> createInjector @ 3bf910c7ae4c.js:913
>> doBootstrap @ 3bf910c7ae4c.js:792
>> bootstrap @ 3bf910c7ae4c.js:793
>> angularInit @ 3bf910c7ae4c.js:789
>> (anonymous) @ 3bf910c7ae4c.js:1846
>> fire @ 3bf910c7ae4c.js:208
>> fireWith @ 3bf910c7ae4c.js:213
>> ready @ 3bf910c7ae4c.js:32
>> completed @ 3bf910c7ae4c.js:14
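The errors.angularjs.org links elided above carry the entire error
message percent-encoded in their query string, so decoding one simply
reproduces the text already shown in the log. A small sketch of how to
read them (the URL below is the short nomod link from the log; the long
modulerr links work the same way, with the nested detail in p1):

from urllib.parse import urlparse, parse_qs

url = ("http://errors.angularjs.org/1.5.8/$injector/nomod"
       "?p0=horizon.dashboard.project.heat_dashboard.template_generator")

# parse_qs percent-decodes the query string; p0 names the missing
# Angular module -- the same one the enabled file is meant to register.
params = parse_qs(urlparse(url).query)
print(params['p0'][0])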
>>
>> I don't know exactly which error I should be searching for.
>>
>> Best Regards,
>> Jaewook.
>>
>> On Mar 12, 2018, at 9:48 PM, Radomir Dopieralski wrote:
>>
>> Do you get any errors in the JavaScript console or in the network tab of
>> the inspector?
>>
>> On Mon, Mar 12, 2018 at 12:11 PM, Jaewook Oh wrote:
>>
>>> [original problem report, quoted in full earlier in this thread, trimmed]
>
> --
> 葛馨霓 Xinni Ge
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kyle.oh95 at gmail.com  Tue Mar 13 00:54:29 2018
From: kyle.oh95 at gmail.com (Jaewook Oh)
Date: Tue, 13 Mar 2018 09:54:29 +0900
Subject: [openstack-dev] [horizon] [devstack] horizon 'network create'
 panel does not distinguished
In-Reply-To:
References: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com>
Message-ID:

Hello Xinni,

Thanks for your kind help and information.
I'll try what you said soon :)

By the way, I reported this issue on the bug tracker; if anybody is
interested in it, please visit
https://bugs.launchpad.net/bugs/1755140

Best Regards,
Jaewook.

================================================
Jaewook Oh (오재욱)
IISTRC - Internet Infra System Technology Research Center
369 Sangdo-ro, Dongjak-gu,
06978, Seoul, Republic of Korea

> On Mar 13, 2018, at 9:45 AM, Xinni Ge wrote:
>
> [Xinni's message and the earlier thread, quoted in full above, trimmed]
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kyle.oh95 at gmail.com  Tue Mar 13 01:34:26 2018
From: kyle.oh95 at gmail.com (Jaewook Oh)
Date: Tue, 13 Mar 2018 10:34:26 +0900
Subject: [openstack-dev] [horizon] [devstack] horizon 'network create'
 panel does not distinguished
In-Reply-To:
References: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com>
Message-ID:

Hello, Xinni.

I followed your description, and it worked properly :)

Could you add your description as a comment on the bug report?
https://bugs.launchpad.net/bugs/1755140
It would be very helpful for me, or for somebody else who doesn't know
how to restart Horizon independently!

Best Regards,
Jaewook.
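For reference, the "restart Horizon independently" recipe Jaewook asks
about boils down to the three commands Xinni quoted earlier. A hedged
sketch of the same regeneration step using Django's management API
instead of the shell, assuming it runs from the horizon checkout with
DJANGO_SETTINGS_MODULE pointing at openstack_dashboard.settings:

from django.core.management import call_command

# Re-collect the static files (including plugin files), clearing out
# anything stale, then rebuild the compressed bundles.
call_command('collectstatic', clear=True, interactive=False)
call_command('compress', force=True)
# Afterwards, restart the web server, e.g. `sudo service apache2 restart`.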
2018-03-13 9:45 GMT+09:00 Xinni Ge wrote:

> [Xinni's description and the earlier thread, quoted in full above, trimmed]
>> js%3A925%3A84%0A%20%20%20%20at%20forEach%20(http%3A%2F% >> 2F192.168.11.187%2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3b >> f910c7ae4c.js%3A703%3A400)%0A%20%20%20%20at%20loadModules% >> 20(http%3A%2F%2F192.168.11.187%2Fdashboard%2Fstatic% >> 2Fdashboard%2Fjs%2F3bf910c7ae4c.js%3A924%3A156) >> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7 >> ae4c.js:699:8 >> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7 >> ae4c.js:927:7 >> at forEach (http://192.168.11.187/dashboa >> rd/static/dashboard/js/3bf910c7ae4c.js:703:400) >> at loadModules (http://192.168.11.187/dashboa >> rd/static/dashboard/js/3bf910c7ae4c.js:924:156) >> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7 >> ae4c.js:925:84 >> at forEach (http://192.168.11.187/dashboa >> rd/static/dashboard/js/3bf910c7ae4c.js:703:400) >> at loadModules (http://192.168.11.187/dashboa >> rd/static/dashboard/js/3bf910c7ae4c.js:924:156) >> at createInjector (http://192.168.11.187/dashboa >> rd/static/dashboard/js/3bf910c7ae4c.js:913:464) >> at doBootstrap (http://192.168.11.187/dashboa >> rd/static/dashboard/js/3bf910c7ae4c.js:792:36) >> at bootstrap (http://192.168.11.187/dashboa >> rd/static/dashboard/js/3bf910c7ae4c.js:793:58) >> http://errors.angularjs.org/1.5.8/$injector/modulerr?p0=hori >> zon.app&p1=Error%3A%20%5B%24injector%3Amodulerr%5D%20Failed% >> 20to%20instantiate%20module%20horizon.dashboard.project. >> heat_dashboard.template_generator%20due%20to%3A% >> 0AError%3A%20%5B%24injector%3Anomod%5D%20Module%20' >> horizon.dashboard.project.heat_dashboard.template_generator'%20is%20not% >> 20available!%20You%20either%20misspelled%20the%20module% >> 20name%20or%20forgot%20to%20load%20it.%20If%20registerin >> g%20a%20module%20ensure%20that%20you%20specify%20the%2 >> 0dependencies%20as%20the%20second%20argument.%0Ahttp%3A%2F% >> 2Ferrors.angularjs.org%2F1.5.8%2F%24injector%2Fnomod%3Fp0% >> 3Dhorizon.dashboard.project.heat_dashboard.template_ >> generator%0A%20%20%20%20at%20http%3A%2F%2F192.168.11.187% >> 2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4 >> c.js%3A699%3A8%0A%20%20%20%20at%20http%3A%2F%2F192.168. >> 11.187%2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4 >> c.js%3A818%3A59%0A%20%20%20%20at%20ensure%20(http%3A%2F% >> 2F192.168.11.187%2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3b >> f910c7ae4c.js%3A816%3A320)%0A%20%20%20%20at%20module%20(http >> %3A%2F%2F192.168.11.187%2Fdashboard%2Fstatic%2Fdashboard% >> 2Fjs%2F3bf910c7ae4c.js%3A818%3A8)%0A%20%20%20%20at%20http% >> 3A%2F%2F192.168.11.187%2Fdashboard%2Fstatic% >> 2Fdashboard%2Fjs%2F3bf910c7ae4c.js%3A925%3A35%0A%20%20%20% >> 20at%20forEach%20(http%3A%2F%2F192.168.11.187%2Fdashboard% >> 2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4c.js%3A703%3A400)%0A% >> 20%20%20%20at%20loadModules%20(http%3A%2F%2F192.168.11. >> 187%2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4 >> c.js%3A924%3A156)%0A%20%20%20%20at%20http%3A%2F%2F192.168. >> 11.187%2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4c. 
>> js%3A925%3A84%0A%20%20%20%20at%20forEach%20(http%3A%2F% >> 2F192.168.11.187%2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3b >> f910c7ae4c.js%3A703%3A400)%0A%20%20%20%20at%20loadModules% >> 20(http%3A%2F%2F192.168.11.187%2Fdashboard%2Fstatic% >> 2Fdashboard%2Fjs%2F3bf910c7ae4c.js%3A924%3A156)%0Ahttp%3A% >> 2F%2Ferrors.angularjs.org%2F1.5.8%2F%24injector%2Fmodulerr% >> 3Fp0%3Dhorizon.dashboard.project.heat_dashboard.template_generator%26p1% >> 3DError%253A%2520%255B%2524injector%253Anomod%255D% >> 2520Module%2520'horizon.dashboard.project.heat_ >> dashboard.template_generator'%2520is%2520not%2520available!% >> 2520You%2520either%2520misspelled%2520the%2520module% >> 2520name%2520or%2520forgot%2520to%2520load%2520it.% >> 2520If%2520registering%2520a%2520module%2520ensure%2520that% >> 2520you%2520specify%2520the%2520dependencies%2520as% >> 2520the%2520second%2520argument.%250Ahttp%253A%252F% >> 252Ferrors.angularjs.org%252F1.5.8%252F%2524injector%252Fnom >> od%253Fp0%253Dhorizon.dashboard.project.heat_dashboard. >> template_generator%250A%2520%2520%2520%2520at%2520http% >> 253A%252F%252F192.168.11.187%252Fdashboard%252Fstatic% >> 252Fdashboard%252Fjs%252F3bf910c7ae4c.js%253A699% >> 253A8%250A%2520%2520%2520%2520at%2520http%253A%252F% >> 252F192.168.11.187%252Fdashboard%252Fstatic%252Fdashboard% >> 252Fjs%252F3bf910c7ae4c.js%253A818%253A59%250A%2520%2520% >> 2520%2520at%2520ensure%2520(http%253A%252F%252F192.168.11. >> 187%252Fdashboard%252Fstatic%252Fdashboard%252Fjs%252F3bf910 >> c7ae4c.js%253A816%253A320)%250A%2520%2520%2520%2520at% >> 2520module%2520(http%253A%252F%252F192.168.11.187%252Fda >> shboard%252Fstatic%252Fdashboard%252Fjs%252F3bf910c7ae4c.js% >> 253A818%253A8)%250A%2520%2520%2520%2520at%2520http%253A% >> 252F%252F192.168.11.187%252Fdashboard%252Fstatic%252Fdashboa >> rd%252Fjs%252F3bf910c7ae4c.js%253A925%253A35%250A%2520%2520% >> 2520%2520at%2520forEach%2520(http%253A%252F%252F192.168.11. >> 187%252Fdashboard%252Fstatic%252Fdashboard%252Fjs%252F3bf910 >> c7ae4c.js%253A703%253A400)%250A%2520%2520%2520%2520at% >> 2520loadModules%2520(http%253A%252F%252F192.168.11.187% >> 252Fdashboard%252Fstatic%252Fdashboard%252Fjs%252F3bf910c7ae >> 4c.js%253A924%253A156)%250A%2520%2520%2520%2520at% >> 2520http%253A%252F%252F192.168.11.187%252Fdashboard% >> 252Fstatic%252Fdashboard%252Fjs%252F3bf910c7ae4c.js% >> 253A925%253A84%250A%2520%2520%2520%2520at%2520forEach%2520( >> http%253A%252F%252F192.168.11.187%252Fdashboard%252Fstatic%2 >> 52Fdashboard%252Fjs%252F3bf910c7ae4c.js%253A703%253A400)% >> 250A%2520%2520%2520%2520at%2520loadModules%2520(http% >> 253A%252F%252F192.168.11.187%252Fdashboard%252Fstatic%252Fd >> ashboard%252Fjs%252F3bf910c7ae4c.js%253A924%253A156)%0A%20% >> 20%20%20at%20http%3A%2F%2F192.168.11.187%2Fdashboard% >> 2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4c.js%3A699%3A8%0A%20% >> 20%20%20at%20http%3A%2F%2F192.168.11.187%2Fdashboard%2Fstati >> c%2Fdashboard%2Fjs%2F3bf910c7ae4c.js%3A927%3A7%0A%20%20%20% >> 20at%20forEach%20(http%3A%2F%2F192.168.11.187%2Fdashboard% >> 2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4c.js%3A703%3A400)%0A% >> 20%20%20%20at%20loadModules%20(http%3A%2F%2F192.168.11. >> 187%2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4 >> c.js%3A924%3A156)%0A%20%20%20%20at%20http%3A%2F%2F192.168. >> 11.187%2Fdashboard%2Fstatic%2Fdashboard%2Fjs%2F3bf910c7ae4c. 
>>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
>>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:927:7
>>     at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>>     at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>>     at createInjector (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:913:464)
>>     at doBootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:792:36)
>>     at bootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:793:58)
>>     at angularInit (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:789:556)
>>     at HTMLDocument. (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:1846:1383)
>>     at fire (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:208:299)
>> (anonymous) @ 3bf910c7ae4c.js:699
>> (anonymous) @ 3bf910c7ae4c.js:927
>> forEach @ 3bf910c7ae4c.js:703
>> loadModules @ 3bf910c7ae4c.js:924
>> createInjector @ 3bf910c7ae4c.js:913
>> doBootstrap @ 3bf910c7ae4c.js:792
>> bootstrap @ 3bf910c7ae4c.js:793
>> angularInit @ 3bf910c7ae4c.js:789
>> (anonymous) @ 3bf910c7ae4c.js:1846
>> fire @ 3bf910c7ae4c.js:208
>> fireWith @ 3bf910c7ae4c.js:213
>> ready @ 3bf910c7ae4c.js:32
>> completed @ 3bf910c7ae4c.js:14
>>
>> I don't know exactly which error I have to search for..
>>
>> Best Regards,
>> Jaewook.
>>
>>
>> On 2018. 3. 12. at 9:48 PM, Radomir Dopieralski wrote:
>>
>> Do you get any errors in the JavaScript console or in the network tab of
>> the inspector?
>>
>> On Mon, Mar 12, 2018 at 12:11 PM, Jaewook Oh wrote:
>>
>>> Hello, this is Jaewook from Korea.
>>>
>>> Today I reinstalled devstack, but a weird dashboard was displayed.
>>>
>>> The dashboard shows everything in the panels at once.
>>>
>>> Please look at the image.
>>>
>>> For example, the Create Network panel shows 'Network', 'Subnet', and
>>> 'Subnet Details'.
>>>
>>> *But all of these menus are in the Network tab, not distinguished at
>>> all. And when I click 'Subnet' or 'Subnet Details', nothing happens.*
>>>
>>> Also, when I click a dropdown menu such as 'Select a project', it
>>> shows the projects, but I cannot select one. *Even though I clicked
>>> one, it still shows 'Select a project'.*
>>>
>>> The OpenStack version is 3.14.0, the Queens release.
>>> I installed it with the devstack master version.
>>>
>>> What I suspect is *'heat-dashboard'.*
>>> Before I added 'enable_plugin ~~ heat-dashboard', this didn't happen.
>>> But after adding it, this error happened.
>>>
>>> I have no idea what to do but reinstall it.
>>>
>>> Is this error a known issue already?
>>>
>>> I would very much appreciate it if somebody could help me.
>>>
>>> Best Regards,
>>> Jaewook.
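The percent-encoded errors.angularjs.org URLs in the log above embed the
same error text over and over, which is why the trace looks so large;
decoding any of them recovers the plain message. A quick sketch (Python 3),
using a fragment taken from the log:

    from urllib.parse import unquote

    fragment = ("Error%3A%20%5B%24injector%3Anomod%5D%20Module%20"
                "'horizon.dashboard.project.heat_dashboard."
                "template_generator'%20is%20not%20available!")
    # prints: Error: [$injector:nomod] Module
    # 'horizon.dashboard.project.heat_dashboard.template_generator'
    # is not available!
    print(unquote(fragment))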
>>> ================================================
>>> *Jaewook Oh* (오재욱)
>>> IISTRC - Internet Infra System Technology Research Center
>>> 369 Sangdo-ro, Dongjak-gu,
>>> 06978, Seoul, Republic of Korea
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> 葛馨霓 Xinni Ge
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From xinni.ge1990 at gmail.com  Tue Mar 13 01:40:52 2018
From: xinni.ge1990 at gmail.com (Xinni Ge)
Date: Tue, 13 Mar 2018 10:40:52 +0900
Subject: [openstack-dev] [horizon] [devstack] horizon 'network create'
 panel does not distinguished
In-Reply-To: 
References: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com>
Message-ID: 

Hello Jaewook,

Very glad to know it.
I will add comments to the bug report, and continue to find a better
solution to prevent the issue from happening.

Best Regards!
Xinni

On Tue, Mar 13, 2018 at 10:34 AM, Jaewook Oh wrote:

> Hello, Xinni.
>
> I followed your description, and it worked properly :)
>
> Could you add your description as a comment on the bug report?
>
> https://bugs.launchpad.net/bugs/1755140
>
> It would be very helpful for me or somebody else who doesn't know how
> to restart horizon independently!
>
> Best Regards,
> Jaewook.
>
> 2018-03-13 9:45 GMT+09:00 Xinni Ge :
>
>> Hello, Jaewook and everyone
>>
>> It looks like the error is caused by some angular module of
>> heat-dashboard not being loaded correctly.
>>
>> I tried to reproduce it in my devstack by installing stable/queens
>> Horizon/Heat-dashboard, but couldn't see the same error.
>>
>> Maybe you want to try the following steps to restart the web server
>> and see if the issue can be fixed.
>> Of course you can also remove the troubled panel from heat-dashboard;
>> I will describe how to do that below as well.
>>
>> 1. remove heat-dashboard related settings
>> rm horizon/openstack_dashboard/local/enabled/_16*  # (particularly try
>> to remove _1650_project_template_generator_panel.py to fix it)
>> rm horizon/openstack_dashboard/local/local_settings.d/_1699_orchestration_settings.py*
>> rm horizon/openstack_dashboard/conf/heat_policy.json
>>
>> 2. let horizon re-collect static files, and compress
>> python manage.py collectstatic --clear
>> python manage.py compress
>>
>> 3. restart apache server
>> sudo service apache2 restart
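For context on step 1: the _16xx files under local/enabled are what
register heat-dashboard's panels with horizon, and the $injector:nomod
error in this thread names exactly the kind of Angular module such an
enabled file declares. A rough sketch of the relevant part -- abbreviated
and partly assumed, not the exact heat-dashboard source:

    # _1650_project_template_generator_panel.py (sketch)
    PANEL = 'template_generator'
    PANEL_DASHBOARD = 'project'
    # horizon wires these names in as Angular dependencies of
    # 'horizon.app'; if the JavaScript defining them never lands in the
    # compressed bundle, Angular raises $injector:nomod at bootstrap.
    # That would explain why removing the file, or re-running
    # collectstatic and compress, clears the error.
    ADD_ANGULAR_MODULES = [
        'horizon.dashboard.project.heat_dashboard.template_generator',
    ]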
>> Hope the problem can be solved and everything goes well.
>> And if anybody sees the same error, please share more details about it.
>>
>> Best Regards,
>> Xinni
>>
>> On Mon, Mar 12, 2018 at 9:55 PM, Jaewook Oh wrote:
>>
>>> Thanks for the feedback!
>>>
>>> As you said, I got errors in the JavaScript console.
>>>
>>> Below is the error log:
>>>
>>> 3bf910c7ae4c.js:652 JQMIGRATE: Logging is active
>>> fddd6f634ef8.js:2299 Uncaught TypeError: Cannot read property 'layout'
>>> of undefined
>>>     at Object.25../arrows (fddd6f634ef8.js:2299)
>>>     at s (fddd6f634ef8.js:2252)
>>>     at fddd6f634ef8.js:2252
>>>     at Object.1../lib/dagre (fddd6f634ef8.js:2252)
>>>     at s (fddd6f634ef8.js:2252)
>>>     at e (fddd6f634ef8.js:2252)
>>>     at fddd6f634ef8.js:2252
>>>     at fddd6f634ef8.js:2252
>>>     at fddd6f634ef8.js:2252
>>> 25../arrows @ fddd6f634ef8.js:2299
>>> s @ fddd6f634ef8.js:2252
>>> (anonymous) @ fddd6f634ef8.js:2252
>>> 1../lib/dagre @ fddd6f634ef8.js:2252
>>> s @ fddd6f634ef8.js:2252
>>> e @ fddd6f634ef8.js:2252
>>> (anonymous) @ fddd6f634ef8.js:2252
>>> (anonymous) @ fddd6f634ef8.js:2252
>>> (anonymous) @ fddd6f634ef8.js:2252
>>> 3bf910c7ae4c.js:699 Uncaught Error: [$injector:modulerr] Failed to
>>> instantiate module horizon.app due to:
>>> Error: [$injector:modulerr] Failed to instantiate module
>>> horizon.dashboard.project.heat_dashboard.template_generator due to:
>>> Error: [$injector:nomod] Module
>>> 'horizon.dashboard.project.heat_dashboard.template_generator' is not
>>> available! You either misspelled the module name or forgot to load it.
>>> If registering a module ensure that you specify the dependencies as
>>> the second argument.
>>> http://errors.angularjs.org/1.5.8/$injector/nomod?p0=horizon.dashboard.project.heat_dashboard.template_generator
>>>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
>>>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:818:59
>>>     at ensure (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:816:320)
>>>     at module (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:818:8)
>>>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:925:35
>>>     at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>>>     at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>>>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:925:84
>>>     at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>>>     at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>>> [snip: the same $injector:nomod / $injector:modulerr message and
>>> stack frames repeated several more times, percent-encoded into
>>> errors.angularjs.org URLs]
>>>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
>>>     at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:927:7
>>>     at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>>>     at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>>>     at createInjector (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:913:464)
>>>     at doBootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:792:36)
>>>     at bootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:793:58)
>>>     at angularInit (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:789:556)
>>>     at HTMLDocument. (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:1846:1383)
>>>     at fire (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:208:299)
>>> (anonymous) @ 3bf910c7ae4c.js:699
>>> (anonymous) @ 3bf910c7ae4c.js:927
>>> forEach @ 3bf910c7ae4c.js:703
>>> loadModules @ 3bf910c7ae4c.js:924
>>> createInjector @ 3bf910c7ae4c.js:913
>>> doBootstrap @ 3bf910c7ae4c.js:792
>>> bootstrap @ 3bf910c7ae4c.js:793
>>> angularInit @ 3bf910c7ae4c.js:789
>>> (anonymous) @ 3bf910c7ae4c.js:1846
>>> fire @ 3bf910c7ae4c.js:208
>>> fireWith @ 3bf910c7ae4c.js:213
>>> ready @ 3bf910c7ae4c.js:32
>>> completed @ 3bf910c7ae4c.js:14
>>>
>>> I don't know exactly which error I have to search for..
>>>
>>> Best Regards,
>>> Jaewook.
>>>
>>>
>>> On 2018. 3. 12. at 9:48 PM, Radomir Dopieralski wrote:
>>>
>>> Do you get any errors in the JavaScript console or in the network tab
>>> of the inspector?
>>>
>>> On Mon, Mar 12, 2018 at 12:11 PM, Jaewook Oh
>>> wrote:
>>>
>>>> Hello, this is Jaewook from Korea.
>>>>
>>>> Today I reinstalled devstack, but a weird dashboard was displayed.
>>>>
>>>> The dashboard shows everything in the panels at once.
>>>>
>>>> Please look at the image.
>>>>
>>>> For example, the Create Network panel shows 'Network', 'Subnet', and
>>>> 'Subnet Details'.
>>>>
>>>> *But all of these menus are in the Network tab, not distinguished at
>>>> all. And when I click 'Subnet' or 'Subnet Details', nothing happens.*
>>>>
>>>> Also, when I click a dropdown menu such as 'Select a project', it
>>>> shows the projects, but I cannot select one. *Even though I clicked
>>>> one, it still shows 'Select a project'.*
>>>>
>>>> The OpenStack version is 3.14.0, the Queens release.
>>>> I installed it with the devstack master version.
>>>>
>>>> What I suspect is *'heat-dashboard'.*
>>>> Before I added 'enable_plugin ~~ heat-dashboard', this didn't happen.
>>>> But after adding it, this error happened.
>>>>
>>>> I have no idea what to do but reinstall it.
>>>>
>>>> Is this error a known issue already?
>>>>
>>>> I would very much appreciate it if somebody could help me.
>>>>
>>>> Best Regards,
>>>> Jaewook.
>>>> ================================================
>>>> *Jaewook Oh* (오재욱)
>>>> IISTRC - Internet Infra System Technology Research Center
>>>> 369 Sangdo-ro, Dongjak-gu,
>>>> 06978, Seoul, Republic of Korea
>>>>
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> --
>> 葛馨霓 Xinni Ge
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
葛馨霓 Xinni Ge
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ksnhr.tech at gmail.com  Tue Mar 13 02:48:29 2018
From: ksnhr.tech at gmail.com (Kaz Shinohara)
Date: Tue, 13 Mar 2018 11:48:29 +0900
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings
 for new xstatic modules
In-Reply-To: 
References: 
Message-ID: 

Hi Ivan & Horizon folks,

Now we are submitting a couple of patches to add the new xstatic modules.
Let me ask you to review the following patches.
We need the Horizon PTL's +1 to move these forward.

project-config
https://review.openstack.org/#/c/551978/
governance
https://review.openstack.org/#/c/551980/

Thanks in advance :)

Regards,
Kaz

2018-03-12 20:00 GMT+09:00 Radomir Dopieralski :
> Yes, please do that. We can then discuss the technical details in the
> review.
>
> On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge wrote:
>>
>> Hi, Akihiro
>>
>> Thanks for the quick reply.
>>
>> I agree with your opinion that BASE_XSTATIC_MODULES should not be
>> modified.
>> It is much better to enhance the horizon plugin settings,
>> and I think there could be one option like ADD_XSTATIC_MODULES.
>> This option would add the plugin's xstatic files to STATICFILES_DIRS.
>> I am considering adding a bug report to describe it first, and maybe
>> submitting a patch later.
>> Is that ok with the Horizon team?
>>
>> Best Regards.
>> Xinni
>>
>> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki wrote:
>>>
>>> Hi Xinni,
>>>
>>> 2018-03-09 12:05 GMT+09:00 Xinni Ge :
>>> > Hello Horizon Team,
>>> >
>>> > I would like to hear your opinions about how to add new xstatic
>>> > modules to horizon settings.
>>> >
>>> > As for the Heat-dashboard embedded 3rd-party files issue, thanks for
>>> > your advice at the Dublin PTG; we are now removing them and
>>> > referencing them as new xstatic-* libs.
>>>
>>> Thanks for moving this forward.
>>>
>>> > So we have now installed the new xstatic files (not uploaded as
>>> > openstack official repos yet) in our development environment, but
>>> > hesitate to decide how to add the newly installed xstatic lib paths
>>> > to STATICFILES_DIRS in openstack_dashboard.settings so that the
>>> > static files can be automatically collected by the *collectstatic*
>>> > process.
>>> >
>>> > Currently Horizon defines BASE_XSTATIC_MODULES in
>>> > openstack_dashboard/utils/settings.py and the relevant static files
>>> > are added to STATICFILES_DIRS before it updates any Horizon plugin
>>> > dashboard.
>>> > We may want new plugin setting keywords (something similar to
>>> > ADD_JS_FILES) to update horizon XSTATIC_MODULES (or directly update
>>> > STATICFILES_DIRS).
>>>
>>> IMHO it is better to allow horizon plugins to add xstatic modules
>>> through horizon plugin settings. I don't think it is a good idea to
>>> add a new entry in BASE_XSTATIC_MODULES based on horizon plugin
>>> usages. It makes it difficult to track why and where a xstatic module
>>> in BASE_XSTATIC_MODULES is used.
>>> Multiple horizon plugins can add the same entry, so the horizon code
>>> that handles plugin settings should hopefully merge multiple entries
>>> into a single one.
>>> My vote is to enhance the horizon plugin settings.
>>>
>>> Akihiro
>>>
>>> >
>>> > Looking forward to hearing any suggestions from you guys, and
>>> > Best Regards,
>>> >
>>> > Xinni Ge
>>> >
>>> > __________________________________________________________________________
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> --
>> 葛馨霓 Xinni Ge
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
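To make the ADD_XSTATIC_MODULES idea discussed above concrete: the setting
does not exist in horizon at this point, so both the entry format and the
merge logic below are assumptions, a rough sketch only.

    # In a plugin's enabled file (hypothetical format):
    ADD_XSTATIC_MODULES = [
        # (importable xstatic package, files to pick up from it)
        ('xstatic.pkg.angular_material', ['angular-material.js']),
    ]

    # Roughly what the settings loader could do with the collected
    # entries -- duplicate declarations from several plugins are folded
    # into one, per Akihiro's comment:
    import importlib

    def add_plugin_xstatic_dirs(add_xstatic_modules, staticfiles_dirs):
        seen = set()
        for module_name, _files in add_xstatic_modules:
            if module_name in seen:
                continue
            seen.add(module_name)
            module = importlib.import_module(module_name)
            # xstatic packages expose NAME and BASE_DIR at module level.
            staticfiles_dirs.append(
                ('horizon/lib/' + module.NAME, module.BASE_DIR))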
From tony at bakeyournoodle.com  Tue Mar 13 03:06:27 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 13 Mar 2018 14:06:27 +1100
Subject: [openstack-dev] [requirements][zun][karbor][magnum][tacker][kolla][tripleo][zaqar][networking-odl]
 update websocket-client and kubernetes libraries
In-Reply-To: <20180312234714.2536pwr3u3lujfuc@gentoo.org>
References: <20180312234714.2536pwr3u3lujfuc@gentoo.org>
Message-ID: <20180313030537.GA28855@thor.bakeyournoodle.com>

On Mon, Mar 12, 2018 at 06:47:14PM -0500, Matthew Thode wrote:
> Requirements plans to update both versions, removing the current cap on
> websocket-client. The plan to do so is as follows.
>
> a. Remove the cap on websocket-client

This is being done in: https://review.openstack.org/#/c/549664/

> b. merge the gr-update into python-zunclient
> c. make a release of python-zunclient
> d. Alter the constrained versions of websocket-client and kubernetes
>    - to be co-installable with openstack libs (python-zunclient)
> e. Raise the minimum acceptable version for kubernetes
>    raise the minimum version of websocket-client
>    - raise to above the versions kubernetes had problems with

The process above will impact the following projects:

$ get-all-requirements.py --pkgs kubernetes websocket-client
Package      : kubernetes [kubernetes>=4.0.0] (used by 5 projects)
Included in  : 3 projects
  openstack/karbor [cycle-with-intermediary]
  openstack/magnum [cycle-with-intermediary]
  openstack/tacker [cycle-with-intermediary]
Also affects : 2 projects
  openstack/kolla-kubernetes [None]
  openstack/kuryr-tempest-plugin [None]

Package      : websocket-client [websocket-client<=0.40.0,>=0.33.0] (used by 5 projects)
Re-Release   : 2 projects
  openstack/python-tripleoclient [cycle-trailing]
  openstack/python-zunclient [cycle-with-intermediary]
Included in  : 2 projects
  openstack/zaqar [cycle-with-milestones]
  openstack/zun [cycle-with-intermediary]
Also affects : 1 projects
  openstack/networking-odl [cycle-with-milestones]

So I've updated the subject to reflect that.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 
From tony at bakeyournoodle.com  Tue Mar 13 04:18:10 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Tue, 13 Mar 2018 15:18:10 +1100
Subject: [openstack-dev] [murano][Openstack-stable-maint] Stable check of
 openstack/murano failed
In-Reply-To: <20180309043157.GA4213@thor.bakeyournoodle.com>
References: <20180309043157.GA4213@thor.bakeyournoodle.com>
Message-ID: <20180313041810.GB28855@thor.bakeyournoodle.com>

On Fri, Mar 09, 2018 at 03:32:02PM +1100, Tony Breeds wrote:
> On Thu, Mar 08, 2018 at 06:16:27AM +0000, A mailing list for the OpenStack Stable Branch test reports. wrote:
> > Build failed.
> >
> > - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/murano/stable/pike/build-openstack-sphinx-docs/8b023b7/html/ : SUCCESS in 4m 44s
> > - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/murano/stable/pike/openstack-tox-py27/82d0dae/ : FAILURE in 5m 48s
>
> The job is failing on the periodic-stable pipeline, which indicates that
> all changes on pike will hit this same issue.
>
> There is a fix on master[1] but it's wrong, so rather than backporting
> that to pike it'd be great if someone from the murano team could own
> fixing this properly.
>
> Based on my 5 mins of poking it seems that reading the test yaml file is
> generating a list of unicode values rather than the expected list of
> string_type(). I think the answer is as simple as iterating over the
> list and using six.string_types to massage the values. I don't know what
> else that will break, and I also don't know the details of the contract
> that the allowed pattern is describing.
>
> For example, making it a simple string value would probably also fix it,
> but that isn't a backwards-compatible change.
>
> Yours Tony.
>
> [1] https://review.openstack.org/#/c/523829/4/murano/tests/unit/packages/hot_package/test_hot_package.py@114
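For illustration, the kind of massaging described above might look roughly
like this -- a sketch of the idea only, not the actual murano patch:

    import six

    def to_native_strings(values):
        """Coerce py2 unicode items read from yaml to native str.

        On python 3, six.string_types is (str,), so items pass through
        untouched; on python 2, a u'...' item becomes a utf-8 encoded
        str, matching the native string type the code expects.
        """
        coerced = []
        for value in values:
            if isinstance(value, six.string_types) and not isinstance(value, str):
                value = value.encode('utf-8')
            coerced.append(value)
        return coerced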
I see that this has been fixed on pike but not on master and queens.
I've proposed the forward-ports of the fix. Can you verify that they're
correct and then apply them, as murano is currently violating the stable
policy.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 
From xinni.ge1990 at gmail.com  Tue Mar 13 05:18:26 2018
From: xinni.ge1990 at gmail.com (Xinni Ge)
Date: Tue, 13 Mar 2018 14:18:26 +0900
Subject: [openstack-dev] [horizon] [devstack] horizon 'network create'
 panel does not distinguished
In-Reply-To: 
References: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com>
Message-ID: 

Hello Jaewook and everyone,

I tried to install upstream/master Horizon+Heat-dashboard, but still
could not see the error. Maybe a clean installation of devstack could
fix the issue.

If you want to enable heat-dashboard in the current working environment,
you can try to install it manually. I share my manual installation steps
here; you can also use them to switch to any version or apply a patch.

1. update heat-dashboard
cd ~/heat-dashboard;
# Choose the version wanted
# download a particular patch by:
#   git review -d ;
# or switch to any branch e.g. master by:
#   git checkout master;

2. install heat-dashboard
# not necessarily needed; check whether heat-dashboard is installed with `pip list`
sudo pip install -e .

3. copy heat-dashboard settings to horizon
cp -rv ~/heat-dashboard/heat_dashboard/enabled ~/horizon/openstack_dashboard/local/
cp ~/heat-dashboard/heat_dashboard/local_settings.d/_1699_orchestration_settings.py ~/horizon/openstack_dashboard/local/local_settings.d/
cp ~/heat-dashboard/heat_dashboard/conf/heat_policy.json ~/horizon/openstack_dashboard/conf/

4. let horizon re-collect static files, and compress
python manage.py collectstatic --clear
python manage.py compress

5. restart apache server
sudo service apache2 restart

Best Regards,
Xinni

On Tue, Mar 13, 2018 at 10:40 AM, Xinni Ge wrote:

> Hello Jaewook,
>
> Very glad to know it.
> I will add comments to the bug report, and continue to find a better
> solution to prevent the issue from happening.
>
> Best Regards!
> Xinni
>
> On Tue, Mar 13, 2018 at 10:34 AM, Jaewook Oh wrote:
>
>> Hello, Xinni.
>>
>> I followed your description, and it worked properly :)
>>
>> Could you add your description as a comment on the bug report?
>>
>> https://bugs.launchpad.net/bugs/1755140
>>
>> It would be very helpful for me or somebody else who doesn't know how
>> to restart horizon independently!
>>
>> Best Regards,
>> Jaewook.
>>
>> 2018-03-13 9:45 GMT+09:00 Xinni Ge :
>>
>>> Hello, Jaewook and everyone
>>>
>>> It looks like the error is caused by some angular module of
>>> heat-dashboard not being loaded correctly.
>>>
>>> I tried to reproduce it in my devstack by installing stable/queens
>>> Horizon/Heat-dashboard, but couldn't see the same error.
>>>
>>> Maybe you want to try the following steps to restart the web server
>>> and see if the issue can be fixed.
>>> Of course you can also remove the troubled panel from heat-dashboard;
>>> I will describe how to do that below as well.
>>>
>>> 1. remove heat-dashboard related settings
Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From xinni.ge1990 at gmail.com Tue Mar 13 05:18:26 2018
From: xinni.ge1990 at gmail.com (Xinni Ge)
Date: Tue, 13 Mar 2018 14:18:26 +0900
Subject: [openstack-dev] [horizon] [devstack] horizon 'network create' panel does not distinguished
In-Reply-To:
References: <5F35F817-D1E9-4BC5-91C4-E112FCA8FA86@gmail.com>
Message-ID:

Hello Jaewook and everyone,

I tried to install upstream/master Horizon + Heat-dashboard and still
could not see the error. Maybe a clean installation of devstack could
fix the issue.

If you want to enable heat-dashboard in your current working
environment, you can try to install it manually. I share my manual
installation steps here; you can also use them to switch to any version
or to apply a patch.

1. update heat-dashboard
cd ~/heat-dashboard;
# Choose the version wanted
# download a particular patch by:
# git review -d <change-number>;
# or switch to any branch, e.g. master, by:
# git checkout master;

2. install heat-dashboard
# not necessarily needed; check if heat-dashboard is installed with `pip list`
sudo pip install -e .

3. copy heat-dashboard settings to horizon
cp -rv ~/heat-dashboard/heat_dashboard/enabled ~/horizon/openstack_dashboard/local/
cp ~/heat-dashboard/heat_dashboard/local_settings.d/_1699_orchestration_settings.py ~/horizon/openstack_dashboard/local/local_settings.d/
cp ~/heat-dashboard/heat_dashboard/conf/heat_policy.json ~/horizon/openstack_dashboard/conf/

4. let horizon re-collect static files, and compress
python manage.py collectstatic --clear
python manage.py compress

5. restart apache server
sudo service apache2 restart

Best Regards,
Xinni

On Tue, Mar 13, 2018 at 10:40 AM, Xinni Ge wrote:

> Hello Jaewook,
>
> Very glad to know it.
> I will add comments to the bug report, and continue to look for a better
> solution to prevent the issue from happening.
>
> Best Regards!
> Xinni
>
> On Tue, Mar 13, 2018 at 10:34 AM, Jaewook Oh wrote:
>
>> Hello, Xinni.
>>
>> I followed your description, and it worked properly :)
>>
>> Can you add your description as a comment in the bug report?
>>
>> https://bugs.launchpad.net/bugs/1755140
>>
>> It would be very helpful for me or somebody else who doesn't know how to
>> restart horizon independently!
>>
>> Best Regards,
>> Jaewook.
>>
>> 2018-03-13 9:45 GMT+09:00 Xinni Ge :
>>
>>> Hello, Jaewook and everyone,
>>>
>>> It looks like the error is caused by some angular module of
>>> heat-dashboard not being loaded correctly.
>>>
>>> I tried to reproduce it in my devstack by installing stable/queens
>>> Horizon/Heat-dashboard, but couldn't see the same error.
>>>
>>> Maybe you want to try the following steps to restart the web server
>>> and see if the issue can be fixed.
>>> Of course you can also remove the troubled panel in heat-dashboard;
>>> I describe how to do that as well below.
>>>
>>> 1. remove heat-dashboard related settings
>>> rm horizon/openstack_dashboard/local/enabled/_16*
>>> # (particularly try to remove _1650_project_template_generator_panel.py to fix it)
>>> rm horizon/openstack_dashboard/local/local_settings.d/_1699_orchestration_settings.py*
>>> rm horizon/openstack_dashboard/conf/heat_policy.json
>>>
>>> 2. let horizon re-collect static files, and compress
>>> python manage.py collectstatic --clear
>>> python manage.py compress
>>>
>>> 3. restart apache server
>>> sudo service apache2 restart
>>>
>>> Hope the problem can be solved and everything goes well.
>>> And if anybody sees the same error, please share more details about it.
>>>
>>> Best Regards,
>>> Xinni
>>>
>>> On Mon, Mar 12, 2018 at 9:55 PM, Jaewook Oh wrote:
>>>
>>>> Thanks for the feedback!
>>>>
>>>> As you said, I got errors in the JavaScript console.
>>>>
>>>> Below is the error log:
>>>>
>>>> 3bf910c7ae4c.js:652 JQMIGRATE: Logging is active
>>>> fddd6f634ef8.js:2299 Uncaught TypeError: Cannot read property 'layout' of undefined
>>>> at Object.25../arrows (fddd6f634ef8.js:2299)
>>>> at s (fddd6f634ef8.js:2252)
>>>> at fddd6f634ef8.js:2252
>>>> at Object.1../lib/dagre (fddd6f634ef8.js:2252)
>>>> at s (fddd6f634ef8.js:2252)
>>>> at e (fddd6f634ef8.js:2252)
>>>> at fddd6f634ef8.js:2252
>>>> at fddd6f634ef8.js:2252
>>>> at fddd6f634ef8.js:2252
>>>> 25../arrows @ fddd6f634ef8.js:2299
>>>> s @ fddd6f634ef8.js:2252
>>>> (anonymous) @ fddd6f634ef8.js:2252
>>>> 1../lib/dagre @ fddd6f634ef8.js:2252
>>>> s @ fddd6f634ef8.js:2252
>>>> e @ fddd6f634ef8.js:2252
>>>> (anonymous) @ fddd6f634ef8.js:2252
>>>> (anonymous) @ fddd6f634ef8.js:2252
>>>> (anonymous) @ fddd6f634ef8.js:2252
>>>> 3bf910c7ae4c.js:699 Uncaught Error: [$injector:modulerr] Failed to instantiate module horizon.app due to:
>>>> Error: [$injector:modulerr] Failed to instantiate module horizon.dashboard.project.heat_dashboard.template_generator due to:
>>>> Error: [$injector:nomod] Module 'horizon.dashboard.project.heat_dashboard.template_generator' is not available! You either misspelled the module name or forgot to load it. If registering a module ensure that you specify the dependencies as the second argument.
>>>> http://errors.angularjs.org/1.5.8/$injector/nomod?p0=horizon.dashboard.project.heat_dashboard.template_generator
>>>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
>>>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:818:59
>>>> at ensure (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:816:320)
>>>> at module (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:818:8)
>>>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:925:35
>>>> at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>>>> at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>>>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:925:84
>>>> at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>>>> at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>>>> [the same $injector/modulerr and $injector/nomod errors and their stack traces repeat several more times here, partly URL-encoded; trimmed]
>>>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:699:8
>>>> at http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:927:7
>>>> at forEach (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:703:400)
>>>> at loadModules (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:924:156)
>>>> at createInjector (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:913:464)
>>>> at doBootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:792:36)
>>>> at bootstrap (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:793:58)
>>>> at angularInit (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:789:556)
>>>> at HTMLDocument. (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:1846:1383)
>>>> at fire (http://192.168.11.187/dashboard/static/dashboard/js/3bf910c7ae4c.js:208:299)
>>>> (anonymous) @ 3bf910c7ae4c.js:699
>>>> (anonymous) @ 3bf910c7ae4c.js:927
>>>> forEach @ 3bf910c7ae4c.js:703
>>>> loadModules @ 3bf910c7ae4c.js:924
>>>> createInjector @ 3bf910c7ae4c.js:913
>>>> doBootstrap @ 3bf910c7ae4c.js:792
>>>> bootstrap @ 3bf910c7ae4c.js:793
>>>> angularInit @ 3bf910c7ae4c.js:789
>>>> (anonymous) @ 3bf910c7ae4c.js:1846
>>>> fire @ 3bf910c7ae4c.js:208
>>>> fireWith @ 3bf910c7ae4c.js:213
>>>> ready @ 3bf910c7ae4c.js:32
>>>> completed @ 3bf910c7ae4c.js:14
>>>>
>>>> I don't know exactly which error I have to search for..
>>>>
>>>> Best Regards,
>>>> Jaewook.
>>>>
>>>> On Mar 12, 2018, at 9:48 PM, Radomir Dopieralski wrote:
>>>>
>>>> Do you get any errors in the JavaScript console or in the network tab
>>>> of the inspector?
>>>>
>>>> On Mon, Mar 12, 2018 at 12:11 PM, Jaewook Oh wrote:
>>>>
>>>>> Hello, this is Jaewook from Korea.
>>>>>
>>>>> Today I reinstalled devstack, but a weird dashboard was displayed.
>>>>>
>>>>> The dashboard shows all panels at once.
>>>>>
>>>>> Please look at the image.
>>>>>
>>>>> For example, the Create Network panel shows 'Network', 'Subnet', and
>>>>> 'Subnet Details'.
>>>>>
>>>>> *But all of the menus are in the Network tab, not distinguished at
>>>>> all. And when I click 'Subnet' or 'Subnet Details', nothing happens.*
>>>>>
>>>>> Also, when I click a dropdown menu such as 'Select a project', it
>>>>> shows the projects, but I cannot select one. *Even though I clicked
>>>>> it, it still shows 'Select a project'.*
>>>>>
>>>>> The OpenStack version is 3.14.0 (Queens release).
>>>>> I installed it with the devstack master version.
>>>>>
>>>>> What I suspect is *heat-dashboard*.
>>>>> Before I added 'enable plugin ~~ heat-dashboard', it didn't happen.
>>>>> But after adding it, this error happened.
>>>>>
>>>>> I have no idea what to do other than reinstall it.
>>>>>
>>>>> Is this error already a known issue?
>>>>>
>>>>> I would very much appreciate it if somebody could help me..
>>>>>
>>>>> Best Regards,
>>>>> Jaewook.
>>>>> ================================================
>>>>> *Jaewook Oh* (오재욱)
>>>>> IISTRC - Internet Infra System Technology Research Center
>>>>> 369 Sangdo-ro, Dongjak-gu,
>>>>> 06978, Seoul, Republic of Korea
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>>
>>>> __________________________________________________________________________
>>>> OpenStack Development Mailing List (not for usage questions)
>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> --
>>> 葛馨霓 Xinni Ge
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> 葛馨霓 Xinni Ge

--
葛馨霓 Xinni Ge
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gmann at ghanshyammann.com Tue Mar 13 06:08:07 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 13 Mar 2018 15:08:07 +0900
Subject: [openstack-dev] [QA] [PTG] QA New Office Hours and no more meetings
Message-ID:

Hi All,

During the Dublin PTG, the QA team discussed office hours vs. meetings,
and having a consistent time for them [1].
Due to low attendance in the current meetings, we decided to cancel all
meetings and start office hours at the same time every Thursday in both
time zones (09:00 UTC and 17:00 UTC). We tried bi-weekly office hours in
the Queens cycle and they were much more productive than meetings.

Below are the changes for QA meetings and office hours:

Past:
- Thursday 08:00 UTC bi-weekly meeting on #openstack-meeting - NOW CANCELED
- Thursday 17:00 UTC bi-weekly meeting on #openstack-meeting - NOW CANCELED

Current:
- Every Thursday 09:00 UTC Office Hours on #openstack-qa
- Every Thursday 17:00 UTC Office Hours on #openstack-qa

The agenda for office hours is defined on the wiki page [2], and new
topics can be added in the Open Discussion section.

I updated the QA meeting wiki page [2] and the IRC channel info [3] to
reflect the changes.

..1 https://etherpad.openstack.org/p/qa-rocky-ptg-rocky-priority
..2 https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Weekly_QA_Team_meeting
..3 http://eavesdrop.openstack.org/#QA_Team_Office_hours

-gmann
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhu.bingbing at 99cloud.net Tue Mar 13 08:12:40 2018
From: zhu.bingbing at 99cloud.net (zhubingbing)
Date: Tue, 13 Mar 2018 16:12:40 +0800 (CST)
Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan
In-Reply-To: <16E94906-8710-42CE-90F4-C72DEC804E10@cisco.com>
References: <16E94906-8710-42CE-90F4-C72DEC804E10@cisco.com>
Message-ID: <4b119561.418d.1621e6b2190.Coremail.zhu.bingbing@99cloud.net>

+1

On 2018-03-13 01:05:46, "Steven Dake (stdake)" wrote:

+1

From: Jeffrey Zhang
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Sunday, March 11, 2018 at 7:13 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [kolla][vote] core nomination for caoyuan

sorry for a typo. The vote is open for 7 days until Mar 19th.

On Mon, Mar 12, 2018 at 10:06 AM, Jeffrey Zhang wrote:

Kolla core reviewer team,

It is my pleasure to nominate caoyuan for the kolla core team.

caoyuan's output has been fantastic over the last cycle, and he is the
most active non-core contributor on the Kolla project for the last 180
days [1]. He focuses on configuration optimization and on improving the
pre-checks feature.

Consider this nomination a +1 vote from me.

A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 is
a veto. Voting is open for 7 days until Mar 12th, or until a unanimous
response is reached or a veto vote occurs.

[1] http://stackalytics.com/report/contribution/kolla-group/180

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 935540343 at qq.com Tue Mar 13 09:20:06 2018
From: 935540343 at qq.com (__ mango.)
Date: Tue, 13 Mar 2018 17:20:06 +0800
Subject: [openstack-dev] OpenStack - Install gnocchi error.
Message-ID:

hi,
I followed https://docs.openstack.org/ceilometer/pike/install/install-base-ubuntu.html
to install gnocchi, but I am unable to find the gnocchi-api related
services. Why is this?
Please help answer my question, thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From geguileo at redhat.com Tue Mar 13 09:26:13 2018
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 13 Mar 2018 10:26:13 +0100
Subject: [openstack-dev] [cinder] [manila] Performance concern on new quota system
In-Reply-To: <93200DF9-062E-4B8F-9108-0E46AFBB4127@gmx.com>
References: <20180309121102.by2qgasphnz3lvyq@localhost> <93200DF9-062E-4B8F-9108-0E46AFBB4127@gmx.com>
Message-ID: <20180313092613.gjkf5eozx7wl4npy@localhost>

On 09/03, Sean McGinnis wrote:
> > On Mar 9, 2018, at 07:37, TommyLike Hu wrote:
> >
> > Thanks Gorka,
> > To be clear, I started this discussion not because I reject this
> > feature; instead, I like it, as it's much cleaner and simpler and,
> > compared with the performance impact, it solves several other issues
> > which we hate badly. I wrote this to point out that we may have this
> > issue, and to see whether we could improve it before it's actually
> > landed. Better is better :)
> >
> > > - Using a single query to retrieve both counts and sums instead of 2
> > > queries.
> >
> > For this advice, I think I already combined count and sum into a
> > single query.

Yes, but we would be doing 2 count and sum queries, one for the volumes
and another one for the per volume types. The idea I was proposing is
doing just 1 query for both calculations; that way, even if you increase
the payload of the response from the DB, you are getting rid of a round
trip to the DB as well as a pass through all the volumes for the volume
type.
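To be concrete, this is the kind of query I mean — a rough, untested
sketch where the model and column names (Volume, project_id,
volume_type_id, size) are only illustrative:

    from sqlalchemy import func

    # one round trip: count and sum grouped by volume type; the
    # project-wide totals fall out by summing the rows in Python
    rows = (session.query(models.Volume.volume_type_id,
                          func.count(models.Volume.id),
                          func.sum(models.Volume.size))
                   .filter_by(project_id=project_id)
                   .group_by(models.Volume.volume_type_id)
                   .all())
    per_type = {vt: (count, gigs or 0) for vt, count, gigs in rows}
    total_count = sum(count for count, _ in per_type.values())
    total_gigs = sum(gigs for _, gigs in per_type.values())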
> > - DB triggers to do the actual counting.
>
> Please, no DB triggers. :)
>
> > This seems a good idea, but I am not sure whether it could cover all
> > of the cases we have in our quota system and whether it can be easily
> > integrated into cinder. Can you share more detail on this?
> >
> > Thanks
> > TommyLike

As a general rule I agree with Sean that it's better to not have
triggers, but I wanted to mention them as an alternative in case we
really, really, really have problems with the other alternatives.

Cheers,
Gorka.

From majopela at redhat.com Tue Mar 13 09:26:33 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Tue, 13 Mar 2018 09:26:33 +0000
Subject: [openstack-dev] [Neutron] Dublin PTG Summary
In-Reply-To: <20180313072443.9F95.A24531E9@valinux.co.jp>
References: <20180313072443.9F95.A24531E9@valinux.co.jp>
Message-ID:

Very good summary, thanks for leading the PTG and neutron so well. :)

On Mon, Mar 12, 2018 at 11:25 PM fumihiko kakuma wrote:

> Hi Miguel,
>
> > * As part of the neutron-lib effort, we have found networking projects that
> > are very inactive. Examples are networking-brocade (no updates since May of
> > 2016) and networking-ofagent (no updates since March of 2017). Miguel
> > Lavalle will contact these projects' leads to ascertain their situation. If
> > they are indeed inactive, we will not support them as part of neutron-lib
> > updates and will also try to remove them from code search
>
> networking-ofagent has been removed in the Newton release.
> So it will not be necessary to support it as part of neutron-lib updates.
>
> Thanks
> kakuma.
>
> On Mon, 12 Mar 2018 13:45:27 -0500
> Miguel Lavalle wrote:
>
> > Hi All!
> >
> > First of all, I want to thank the team for the productive week we had
> > in Dublin. Below is a high-level summary of the discussions we had.
> > If there is something I left out, please reply to this email thread to
> > add it. However, if you want to continue the discussion on any of the
> > individual points summarized below, please start a new thread, so we
> > don't have a lot of conversations going on attached to this update.
> >
> > You can find the etherpad we used during the PTG meetings here:
> > https://etherpad.openstack.org/p/neutron-ptg-rocky
> >
> >
> > Retrospective
> > ==========
> >
> > * The team missed one community goal in the Pike cycle
> > (https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html) and
> > one in the Queens cycle
> > (https://governance.openstack.org/tc/goals/queens/policy-in-code.html)
> >
> > - Akihiro Motoki will work on
> > https://governance.openstack.org/tc/goals/queens/policy-in-code.html during Rocky
> >
> > - We need volunteers to complete
> > https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html and the two new goals
> > for the Rocky cycle:
> > https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html and
> > https://governance.openstack.org/tc/goals/rocky/mox_removal.html. Akihiro
> > Motoki will lead the effort for mox removal
> >
> > - We decided to add a section to our weekly meeting agenda where we are
> > going to track the progress towards catching up with the community goals
> > during the Rocky cycle
> >
> > * As part of the neutron-lib effort, we have found networking projects that
> > are very inactive. Examples are networking-brocade (no updates since May of
> > 2016) and networking-ofagent (no updates since March of 2017). Miguel
> > Lavalle will contact these projects' leads to ascertain their situation. If
> > they are indeed inactive, we will not support them as part of neutron-lib
> > updates and will also try to remove them from code search
> >
> > * We will continue our efforts to recruit new contributors and develop core
> > reviewers. During the conversation on this topic, Nikolai de Figueiredo and
> > Pawel Suder announced that they will become active in Neutron. Both of
> > them, along with Hongbin Lu, indicated that they are interested in working
> > towards becoming core reviewers.
> >
> > * The team went through the blueprints in the backlog. Here is the status
> > for those blueprints that are not discussed in other sections of this
> > summary:
> >
> > - Adopt oslo.versionedobjects for database interactions. This is a
> > continuing effort. The contact is Ihar Hrachyshka (ihrachys). Contributors
> > are wanted. There is a weekly meeting led by Ihar where this topic is
> > covered: http://eavesdrop.openstack.org/#Neutron_Upgrades_Meeting
> >
> > - Enable adoption of an existing subnet into a subnetpool. The final
> > patch in the series to implement this feature is:
> > https://review.openstack.org/#/c/348080. Pawel Suder will drive this patch
> > to completion
> >
> > - Neutron in-tree API reference
> > (https://blueprints.launchpad.net/neutron/+spec/neutron-in-tree-api-ref).
> > There are two remaining TODOs to complete this blueprint:
> > https://bugs.launchpad.net/neutron/+bug/1752274
> > and https://bugs.launchpad.net/neutron/+bug/1752275. We need volunteers for
> > these two work items
> >
> > - Add TCP/UDP port forwarding extension to L3. The spec was merged
> > recently:
> > https://specs.openstack.org/openstack/neutron-specs/specs/queens/port-forwarding.html.
Implementation effort is in progress: > > https://review.openstack.org/#/c/533850/ and > https://review.openstack.org/# > > /c/535647/ > > > > - Pure Python driven Linux network configuration ( > > https://bugs.launchpad.net/neutron/+bug/1492714). This effort has been > > going on for several cycles gradually adopting pyroute2. Slawek Kaplonski > > is continuing it with https://review.openstack.org/#/c/545355 and > > https://review.openstack.org/#/c/548267 > > > > > > Port behind port API proposal > > ====================== > > > > * Omer Anson proposed to extend the Trunk Port API to generalize the > > support for port behind port use cases such as containers nested as > > MACVLANs within a VM or HA proxy port behind amphora VM port: > > https://bugs.launchpad.net/bugs/1730845 > > > > - After discussing the proposed use cases, the agreement was to > develop > > a specification making sure input is provided by the Kuryr and Octavia > teams > > > > > > ML2 and Mechanism drivers > > ===================== > > > > * Hongbin Lu presented a proposal (https://bugs.launchpad.net/ne > > utron/+bug/1722720) to add a new value "auto" to the port attribute > > admin_state_up. > > > > - This is to support SR-IOV ports, where admin_state_up == "auto" > would > > mean that the VF link state follows that of the PF. This may be useful > when > > VMs use the link as a trigger for its own HA mechanism > > - The agreement was not to overload the admin_state_up attribute with > > more values, since it reflects the desired administrative state of the > port > > and add a new attribute for the intended purpose > > > > * Zhang Yanxian presented a specification (https://review.openstack.org/ > > 506066) to support SR-IOV bonds whereby a Neutron port is associated with > > two VFs in separate PFs. This is useful in NFV scenarios, where link > > redundancy is necessary. > > > > - Nikolai de Figueiredo agreed to help to drive this effort forward, > > starting with the specification both in the Neutron and the Nova sides > > - Sam Betts indicated this type of bond is also of interest for > Ironic. > > He requested to be kept in the loop > > > > * Ruijing Guo proposed to support VLAN transparency in Neutron OVS agent. > > > > - There is a previous incomplete effort to provide this support: > > https://bugs.launchpad.net/neutron/+bug/1705719. Patches are here: > > > https://review.openstack.org/#/q/project:openstack/neutron+topic:bug/1705719 > > - Agreement was for Ruijing to look at the existing patches to > re-start > > the effort. Thomas Morin may provide help for this > > - While on this topic, the conversation temporarily forked to the use > of > > registers instead of ovsdb port tags in L2 agent br-int and possibly > remove > > br-tun. Thomas Morin committed to draft a RFE for this. > > > > * Mike Kolesnik, Omer Anson, Irena Berezovsky, Takashi Yamamoto, Lucas > > Alvares, Ricardo Noriega, Miguel Ajo, Isaku Yamahata presented the > proposal > > to implement a common mechanism to achieve synchronization between > > Neutron's DB and the DBs of sub-projects / SDN frameworks > > > > - Currently each sub-project / SDN framework has its own solution for > > this problem. 
The group thinks that a common solution can be achieved > > - The agreement was to create a specification where the common > solution > > can be fleshed out > > - The synchronization mechanism will exist in Neutron > > > > * Mike Kolesnik (networking-odl) requested feedback from members of other > > Neutron sub-projects about the value of inheriting ML2 Neutron's unit > tests > > to get "free testing" for mechanism drivers > > > > - The conclusion was that there is no value in that practice for the > > sub-rpojects > > - Sam Betts and Miguel Lavalle will explore moving unit tests utils to > > neutron-lib to enable subprojects to create their own base classes > > - Mike Kolesnik will document a guideline for sub-projects not to > > inherit unit tests from Neutron > > > > > > API topics > > ======== > > > > * Isaku Yamahata presented a proposal of a new API for cloud admins to > > retrieve the physical networks configured in compute hosts > > > > - This information is currently stored in configuration files. In > > agent-less environments it is difficult to retrieve > > - The agreement was to extend the agent API to expose the physnet as a > > standard attribute. This will be fed by a pseudo-agent > > > > * Isaku Yamahata presented a proposal of a new API to report mechanism > > drivers health > > > > - The overall idea is to report mechanism driver status, similar to > the > > agents API which reports agent health. In the case of mechanism drivers > > API, it would report connectivity to backend SDN controller or MQ server > > and report its health/config periodically > > - Thomas Morin pointed out that this is relevant not only for ML2 > > mechanism drivers but also for all drivers of different services > > - The agreement was to start with a specification where we scope the > > proposal into something manageable for implementation > > > > * Yushiro Furukawa proposed to add support of 'snat' as a loggable > resource > > type: https://bugs.launchpad.net/neutron/+bug/1752290 > > > > - The agreement was to implement it in Rocky > > - Brian Haley agreed to be the approver > > > > * Hongbin Lu indicated that If users provide different kinds of invalid > > query parameters, the behavior of the Neutron API looks unpredictable ( > > https://bugs.launchpad.net/neutron/+bug/1749820) > > > > - The proposal is to improve the predictability of the Neutron API by > > handling invalid query parameters consistently > > - The proposal was accepted. It will need to provide API > discoverability > > when behavior changes on filter parameter validation > > - It was also recommended to discuss this with the API SIG to get > their > > guidance. The discussion already started in the mailing list: > > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128021.html > > > > > > Openflow Manager and Common Classification Framework > > ========================================== > > > > * The Openflow manager implementation needs reviews to continue making > > progress > > > > - The approved spec is here: https://specs.openstack.org/op > > enstack/neutron-specs/specs/backlog/pike/l2-extension-ovs-fl > > ow-management.html > > - The code is here: https://review.openstack.org/323963 > > - Thomas Morin, David Shaughnessy and Miguel Lavalle discussed and > > reviewed the implementation during the last day of the PTG. The result of > > that conversation was reflected in the patch. 
Thomas and Miguel committed > > to continue reviewing the patch > > > > * The Common Classification Framework (https://specs.openstack.org/o > > penstack/neutron-specs/specs/pike/common-classification-framework.html) > > needs to be adopted by its potential consumers: QoS, SFC, FWaaS > > > > - David Shaughnessy and Miguel Lavalle met with Slawek Kaplonski over > > IRC the last day of the PTG (http://eavesdrop.openstack.or > > g/irclogs/%23openstack-neutron/%23openstack-neutron.2018-03- > > 02.log.html#t2018-03-02T12:00:34) to discuss the adoption of the > framework > > in QoS code. The agreement was to have a PoC for the DSCP marking rule, > > since it uses OpenFlow and wouldn't involve big backend changes > > > > - David Shaughnessy and Yushiro Furukawa are going to meet to discuss > > adoption of the framework in FWaaS > > > > > > Neutron to Neutron interconnection > > ========================= > > > > * Thomas Morin walked the team through an overview of his proposal ( > > https://review.openstack.org/#/c/545826) for Neutron to Neutron > > interconnection, whereby the following requirements are satisfied: > > > > - Interconnection is consumable on-demand, without admin intervention > > - Have network isolation and allow the use of private IP addressing > end > > to end > > - Avoid the overhead of packet encryption > > > > * Feedback was positive and the agreement is to continue developing and > > reviewing the specification > > > > > > L3 and L3 flavors > > ============ > > > > * Isaku Yamahata shared with the team that the implementation of routers > > using the L3 flavors framework gives rise to the need of specifying the > > order in which callbacks are executed in response to events > > > > - Over the past couple of months several alternatives have been > > considered: callback cascading among resources, SQLAlchemy events, > > assigning priorities to callbacks responding to the same event > > - The agreement was an approach based on assigning a priority > structure > > to callbacks in neutron-lib: https://review.openstack.org/#/c/541766 > > > > * Isaku Yamahata shared with the team the progress made with the PoC for > an > > Openflow based DVR: https://review.openstack.org/#/c/472289/ and > > https://review.openstack.org/#/c/528336/ > > > > - There was a discussion on whether we need to ask the OVS community > to > > do ipv6 modification to support this PoC. The conclusion was that the > > feature already exists > > - There was also an agreement for David Chou add Tempest testing for > the > > scenario of mixed agents > > > > > > neutron-lib > > ======== > > > > * The team reviewed two neutron-lib specs, providing feedback through > > Gerrit: > > > > - A spec to rehome db api and utils into neutron-lib: > > https://review.openstack.org/#/c/473531. > > - A spec to decouple neutron db models and ovo for neutron-lib: > > https://review.openstack.org/#/c/509564/. There is agreement from Ihar > > Ihrachys that OVO base classes should go into neutron-lib. But he asked > not > > to move yet neutron.objects.db.api since it's still in flux > > > > * Manjeet Singh Bhatia proposed making payload consistent for all the > > callbacks so all the operations of an object get same type of payload. 
( > > https://bugs.launchpad.net/neutron/+bug/1747747) > > > > - The agreement was for Manjeet to document all the instances in the > > code where this is happening so he and others can work on making the > > payloads consistent > > > > > > Proposal to migrate neutronclient python bindings to OpenStack SDK > > ================================================== > > > > * Akihiro Motoki proposed to change the first priority of neutron-related > > python binding to OpenStack SDK rather than neutronclient python > bindings, > > given that OpenStack SDK became official in Queens ( > > > http://lists.openstack.org/pipermail/openstack-dev/2018-February/127726.html > > ) > > > > - The proposal is to implement all Neutron features in OpenStack SDK > as > > the first citizen and the neutronclient OSC plugin consumes corresponding > > OpenStack SDK APIs > > - New features should be supported in OpenStack SDK and > > OSC/neutronclient OSC plugin as the first priority > > - If a new feature depends on neutronclient python bindings, it can be > > implemented in neutornclient python bindings first and they are ported as > > part of existing feature transition > > - Existing features only supported in neutronclient python bindings > are > > ported into OpenStack SDK, and neutronclient OSC plugin will consume them > > once they are implemented in OpenStack SDK > > - There is no plan to drop the neutronclient python bindings since > not a > > small number of projects consumes it. It will be maintained as-is > > - Projects like Nova that consume a small set of neutron features can > > continue using neutronclient python bindings. Projects like Horizon or > Heat > > that would like to support a wide range of features might be better off > > switching to OpenStack SDK > > - Proposal was accepted > > > > > > Cross project planning with Nova > > ======================== > > > > * Minimum bandwidth support in the Nova scheduler. The summary of the > > outcome of the discussion and further work done after the PTG is the > > following: > > > > - Minimum bandwidth support guarantees a port minimum bandwidth. > Strict > > minimum bandwidth support requires cooperation with the Nova scheduler, > to > > avoid physical interfaces bandwidth overcommitment > > - Neutron will create in each host networking RPs (Resource Providers) > > under the compute RP with proper traits and then will report resource > > inventories based on the discovered and / or configured resource > inventory > > in the host > > - The hostname will be used by Neutron to find the compute RP created > by > > Nova for the compute host. This convention can create ambiguity in > > deployments with multiple cells, where hostnames may not be unique. > However > > this problem is not exclusive to this effort, so its solution will be > > considered out of scope > > - Two new standard Resource Classes will be defined to represent the > > bandwidth in each direction, named as `NET_BANDWIDTH_INGRESS_BITS_SEC` > and > > `NET_BANDWIDTH_EGRESS_BITS_SEC > > - New traits will be defined to distinguish a network back-end agent: > > `NET_AGENT_SRIOV`, `NET_AGENT_OVS`. 
Also new traits will be used to > > indicate which physical network a given Network RP is connected to > > - Neutron will express a port's bandwidth needs through the port API > in > > a new attribute named "resource_request" that will include ingress > > bandwidth, egress bandwidth, the physical net and the agent type > > - The first implementation of this feature will support server create > > with pre-created Neutron ports having QoS policy with minimum bandwidth > > rules. Server create with networks having QoS policy minimum bandwidth > rule > > will be out of scope of the first implementation, because currently, in > > this case, the corresponding port creations happen after the scheduling > > decision has been made > > - For the first implementation, Neutron should reject a QoS minimum > > bandwidth policy rule created on a bound port > > - The following cases don't involve any interaction in Nova and as a > > consequence, Neutron will have to adjust the resource allocations: QoS > > policy rule bandwidth amount change on a bound port and QoS aware sub > port > > create under a bound parent port > > - For more detailed discussion, please go to the following specs: > > https://review.openstack.org/#/c/502306 and > https://review.openstack.org/# > > /c/508149 > > > > * Provide Port Binding Information for Nova Live Migration ( > > https://specs.openstack.org/openstack/neutron-specs/specs/ > > backlog/pike/portbinding_information_for_nova.html and > > https://specs.openstack.org/openstack/nova-specs/specs/ > > queens/approved/neutron-new-port-binding-api.html). > > > > - There was no discussion around this topic > > - There was only an update to both teams about the solid progress that > > has been made on both sides: https://review.openstack.org/#/c/414251/ > and > > https://review.openstack.org/#/q/status:open+project: > > openstack/nova+branch:master+topic:bp/neutron-new-port-binding-api > > - The plan is to finish this in Rocky > > > > * NUMA aware switches https://review.openstack.org/#/c/541290/ > > > > - The agreement on this topic was to do this during Rocky entirely in > > Nova using a config option which is a list of JSON blobs > > > > * Miguel Lavalle and Hongbin Lu proposed to add device_id of the > associated > > port to the floating IP resource > > > > - The use case is to allow Nova to filter instances by floating IPs > > - The agreement was that this would be adding an entirely new contract > > to Nova with new query parameters. This will not be implemented in Nova, > > especially since the use case can already be fulfilled by making 3 API > > calls in a client: find floating IP via filter (Neutron), use that to > > filter port to get the device_id (Neutron), use that to get the server > > (Nova) > > > > > > Team photos > > ========= > > > > * Thanks to Kendall Nelson, the official PTG team photos can be found > here: > > https://www.dropbox.com/sh/dtei3ovfi7z74vo/AABT7UR5el6iXRx5WihkbOB3a/ > > Neutron?dl=0 > > > > * Thanks to Nikolai de Figueiredo for sharing with us pictures of our > team > > dinner. Please find a couple of them attached to this message > > -- > fumihiko kakuma > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From e0ne at e0ne.info Tue Mar 13 09:41:04 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Tue, 13 Mar 2018 11:41:04 +0200
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To:
References:
Message-ID:

Hi Kaz,

Thanks for cleaning this up. I put +1 on both of these patches.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara wrote:

> Hi Ivan & Horizon folks,
>
> Now we are submitting a couple of patches to add the new xstatic modules.
> Let me ask you to review the following patches.
> We need the Horizon PTL's +1 to move these forward.
>
> project-config
> https://review.openstack.org/#/c/551978/
>
> governance
> https://review.openstack.org/#/c/551980/
>
> Thanks in advance :)
>
> Regards,
> Kaz
>
> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski :
> > Yes, please do that. We can then discuss the technical details in the
> > review.
> >
> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge wrote:
> >>
> >> Hi, Akihiro
> >>
> >> Thanks for the quick reply.
> >>
> >> I agree with your opinion that BASE_XSTATIC_MODULES should not be
> >> modified.
> >> It is much better to enhance the horizon plugin settings,
> >> and I think there could be one option like ADD_XSTATIC_MODULES.
> >> This option would add the plugin's xstatic files to STATICFILES_DIRS.
> >> I am considering adding a bug report to describe it first, and maybe
> >> providing a patch later.
> >> Is that ok with the Horizon team?
> >>
> >> Best Regards.
> >> Xinni
> >>
> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki wrote:
> >>>
> >>> Hi Xinni,
> >>>
> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge :
> >>> > Hello Horizon Team,
> >>> >
> >>> > I would like to hear your opinions about how to add new xstatic
> >>> > modules to horizon settings.
> >>> >
> >>> > As for the Heat-dashboard project's embedded 3rd-party files issue,
> >>> > thanks for your advice at the Dublin PTG, we are now removing them
> >>> > and referencing them as new xstatic-* libs.
> >>>
> >>> Thanks for moving this forward.
> >>>
> >>> > So we have installed the new xstatic files (not uploaded as openstack
> >>> > official repos yet) in our development environment now, but hesitate
> >>> > to decide how to add the newly installed xstatic lib paths to
> >>> > STATICFILES_DIRS in openstack_dashboard.settings so that the static
> >>> > files can be automatically collected by the *collectstatic* process.
> >>> >
> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in
> >>> > openstack_dashboard/utils/settings.py and the relevant static files
> >>> > are added to STATICFILES_DIRS before it updates any Horizon plugin
> >>> > dashboard.
> >>> > We may want new plugin setting keywords (something similar to
> >>> > ADD_JS_FILES) to update horizon XSTATIC_MODULES (or directly update
> >>> > STATICFILES_DIRS).
> >>>
> >>> IMHO it is better to allow horizon plugins to add xstatic modules
> >>> through horizon plugin settings. I don't think it is a good idea to
> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon plugin
> >>> usages. It makes it difficult to track why and where a xstatic module
> >>> in BASE_XSTATIC_MODULES is used.
> >>> Multiple horizon plugins can add the same entry, so the horizon code
> >>> that handles plugin settings should hopefully merge multiple entries
> >>> into a single one.
> >>> My vote is to enhance the horizon plugin settings.
> >>> > >>> Akihiro > >>> > >>> > > >>> > Looking forward to hearing any suggestions from you guys, and > >>> > Best Regards, > >>> > > >>> > Xinni Ge > >>> > > >>> > > >>> > ____________________________________________________________ ______________ > >>> > OpenStack Development Mailing List (not for usage questions) > >>> > Unsubscribe: > >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > > >>> > >>> > >>> ____________________________________________________________ ______________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> > >> -- > >> 葛馨霓 Xinni Ge > >> > >> ____________________________________________________________ ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > ____________________________________________________________ ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From julien at danjou.info Tue Mar 13 09:55:57 2018 From: julien at danjou.info (Julien Danjou) Date: Tue, 13 Mar 2018 10:55:57 +0100 Subject: [openstack-dev] OpenStack - Install gnocchi error. In-Reply-To: (mango.'s message of "Tue, 13 Mar 2018 17:20:06 +0800") References: Message-ID: On Tue, Mar 13 2018, __ mango. wrote: > hi, > I referred to https://docs.openstack.org/ceilometer/pike/install/install-base-ubuntu.html to install gnocchi, > but I am unable to find the gnocchi API related services. Is this why it fails? > Please help answer my question, thank you! Can you provide more details as to what your problem is? Thank you! -- Julien Danjou // Free Software hacker // https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From holkina at selectel.ru Tue Mar 13 10:54:22 2018 From: holkina at selectel.ru (Татьяна Холкина) Date: Tue, 13 Mar 2018 13:54:22 +0300 Subject: [openstack-dev] [neutron] Prevent ARP spoofing Message-ID: Hi, I'm using an Ocata release of OpenStack where the option prevent_arp_spoofing can be managed via conf. But later, in Pike, it was removed and it was decided to prevent spoofing by default. There are cases where security features should be disabled. As I can see, we can now use the port_security option for these cases. But this option should be set for a particular port or network on creation. The default value is set to True [1] and it is impossible to change it. I'd like to suggest getting the default value for port_security [2] from a config option.
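As a rough sketch of the idea (the option name and its wiring below are my assumptions, not existing Neutron code):

    from oslo_config import cfg

    port_security_opts = [
        cfg.BoolOpt('port_security_enabled_default',
                    default=True,
                    help='Default value of port_security_enabled applied '
                         'to networks and ports created without an '
                         'explicit value.'),
    ]
    cfg.CONF.register_opts(port_security_opts)

    def default_port_security_enabled():
        # would replace the hard-coded True referenced in [1] and [2]
        return cfg.CONF.port_security_enabled_default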
It would be nice to know your opinion. [1] https://github.com/openstack/neutron-lib/blob/stable/queens/neutron_lib/api/definitions/port_security.py#L21 [2] https://github.com/openstack/neutron/blob/stable/queens/neutron/objects/extensions/port_security.py#L24 Best regards, Tatiana -------------- next part -------------- An HTML attachment was scrubbed... URL: From cbelu at cloudbasesolutions.com Tue Mar 13 12:10:52 2018 From: cbelu at cloudbasesolutions.com (Claudiu Belu) Date: Tue, 13 Mar 2018 12:10:52 +0000 Subject: [openstack-dev] [neutron] Prevent ARP spoofing In-Reply-To: References: Message-ID: Hi, Indeed ARP spoofing is prevented by default, but AFAIK, if you want it enabled for a port / network, you can simply disable the security groups on that neutron network / port. Best regards, Claudiu Belu ________________________________ From: Татьяна Холкина [holkina at selectel.ru] Sent: Tuesday, March 13, 2018 12:54 PM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [neutron] Prevent ARP spoofing Hi, I'm using an ocata release of OpenStack where the option prevent_arp_spoofing can be managed via conf. But later in pike it was removed and it was decided to prevent spoofing by default. There are cases where security features should be disabled. As I can see now we can use a port_security option for these cases. But this option should be set for a particular port or network on create. The default value is set to True [1] and itt is impossible to change it. I'd like to suggest to get default value for port_security [2] from config option. It would be nice to know your opinion. [1] https://github.com/openstack/neutron-lib/blob/stable/queens/neutron_lib/api/definitions/port_security.py#L21 [2] https://github.com/openstack/neutron/blob/stable/queens/neutron/objects/extensions/port_security.py#L24 Best regards, Tatiana -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Tue Mar 13 12:23:39 2018 From: james.slagle at gmail.com (James Slagle) Date: Tue, 13 Mar 2018 08:23:39 -0400 Subject: [openstack-dev] [TripleO] ansible/config-download PTG session recap Message-ID: During the PTG, TripleO held a session about moving forward with config-download as the default deployment mechanism during Rocky. We captured our notes in this etherpad: https://etherpad.openstack.org/p/tripleo-ptg-config-download There was wide agreement to continue moving forward with this implementation. While os-collect-config and friends have served us well for many successful releases, it seemed there was a lot of desire to remove that polling based architecture in favor of a more pure ansible based solution. During the session we also talked about relying on more standalone native ansible roles. We agreed to adopt the approach of creating a new git repo per ansible role. While this may create more busy work upfront, the advantages of being able to version and release each role individually outweigh the disadvantages. There was also some discussion about developing a script/tool to one-time create standalone per-service ansible roles from the existing tripleo-heat-templates service templates. Once the roles were created they would become the source of truth moving forward for service configuration. The service templates from tripleo-heat-templates would then consume those roles directly. 
This has the advantage of removing the inlined tasks in the templates and would give us the ability to test the roles in a standalone fashion more easily outside of Heat. It also aligns better with future work around k8s/apb support. The goal is to make config-download the default by the Rocky-1 milestone (April 16 - April 20), and I feel we're still on track to do that. If you'd like to help with this effort we're coordinating our work with this etherpad: https://etherpad.openstack.org/p/tripleo-config-download-squad-status -- -- James Slagle -- From tobias at citynetwork.se Tue Mar 13 12:25:52 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Tue, 13 Mar 2018 13:25:52 +0100 Subject: [openstack-dev] [publiccloud-wg] Poll new meeting time and bi-weekly meeting Message-ID: <4ccfe58f-4e22-90e9-83f8-24aa6398552e@citynetwork.se> Hi folks, For some time we have had requests to change the current time of our bi-weekly meetings. Not very many suggestions for new time slots have ended up in my inbox, so I have added a few suggestions myself. The plan is to have this set and do the final voting at tomorrow's meeting. Reply to this email if you have other suggestions and I can add those as well. Please mark the alternatives that work for you no later than tomorrow 1400 UTC. Doodle link: https://doodle.com/poll/2kv4h79xypmathac Tomorrow's meeting will be held as planned in #openstack-meeting-3 at 1400 UTC. Agenda can be found at: https://etherpad.openstack.org/p/publiccloud-wg S -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From balazs.gibizer at ericsson.com Tue Mar 13 12:37:17 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Tue, 13 Mar 2018 13:37:17 +0100 Subject: [openstack-dev] [nova][notification] Full traceback in ExceptionPayload In-Reply-To: <1520611692.7809.8@smtp.office365.com> References: <1520598388.7809.6@smtp.office365.com> <1520611692.7809.8@smtp.office365.com> Message-ID: <1520944637.5767.7@smtp.office365.com> On Fri, Mar 9, 2018 at 5:08 PM, Balázs Gibizer wrote: > > > On Fri, Mar 9, 2018 at 3:46 PM, Matt Riedemann > wrote: >> On 3/9/2018 6:26 AM, Balázs Gibizer wrote: >>> The instance-action REST API already provides the traceback to >>> the user (to the admin by default) and the notifications are >>> also admin-only things as they are emitted to the message bus by >>> default. So I assume that security is not a bigger concern for >>> the notification than for the REST API. So I think the only >>> issue we have to accept is that the traceback object in the >>> ExceptionPayload will not be a well-defined field but a simple >>> string containing a serialized traceback. >>> >>> If there is no objection then Kevin or I can file a specless bp to >>> extend the ExceptionPayload. >> >> I think that's probably fine. As you said, if we already provide >> tracebacks in instance action event details (and faults), then the >> serialized traceback in the error notification payload also seems >> fine, and is what the legacy notifications did so it's not like >> there wasn't precedent. >> >> I don't think we need a blueprint for this, it's just a bug.
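For context, the change under discussion boils down to carrying the traceback as one serialized string in the payload. A minimal sketch of that serialization (the field names only approximate the ExceptionPayload fields mentioned in the spec; this is not nova's actual code):

    import traceback

    def build_fault_payload(exc):
        # Call from an except block. The traceback is reduced to one plain
        # string -- the representation the legacy notifications used --
        # rather than a structured, well-defined field.
        return {
            'exception': exc.__class__.__name__,
            'exception_message': str(exc),
            'traceback': traceback.format_exc(),
        }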
> > I thought about a bp because it was explicitly defined in the > original spec not to have a traceback, so to me it does not feel like a > bug. I filed the bp https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications > > Cheers, > gibi > >> >> -- >> >> Thanks, >> >> Matt >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zigo at debian.org Tue Mar 13 12:55:14 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 13 Mar 2018 13:55:14 +0100 Subject: [openstack-dev] [keystone] Keystone failing with error 104 (connection reset by peer) if using uwsgi In-Reply-To: References: <824a18a9-be00-53c2-5929-82026f973224@debian.org> Message-ID: <3843cfb1-2b51-5fcd-ee53-ff78710042c4@debian.org> On 03/11/2018 08:12 PM, Lance Bragstad wrote: > Hey Thomas,  > > Outside of the uwsgi config, are you following a specific guide for your > install? I'd like to try and recreate the issue. > > Do you happen to have any more logging information? > > Thanks Hi Lance, Thanks for your offer to help diagnose the issue. Here's the Debian package: http://stretch-queens.infomaniak.ch/keystone/ (it's 13.0.0-6, but that's really a backport for Stretch...) To use that version of Keystone, you will need this Queens repository: deb http://stretch-queens.infomaniak.ch/debian \ stretch-queens-backports main deb-src http://stretch-queens.infomaniak.ch/debian \ stretch-queens-backports main deb http://stretch-queens.infomaniak.ch/debian \ stretch-queens-backports-nochange main deb-src http://stretch-queens.infomaniak.ch/debian \ stretch-queens-backports-nochange main (sorry for the email client that is wrapping this...) This repository contains a full Queens backport for Stretch, btw, and also holds a version of keystone (with Apache), so make sure you're using the correct uwsgi version from above. Cheers, Thomas Goirand (zigo) From thierry at openstack.org Tue Mar 13 12:55:56 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 13 Mar 2018 13:55:56 +0100 Subject: [openstack-dev] [release] Relaxed rules for cycle-trailing projects Message-ID: <8ec06f3a-5a2d-ab2d-9921-5cb75624a793@openstack.org> Hi! The cycle-trailing release model is designed for OpenStack packaging / deployment / lifecycle management tools. Since those package / help deploy OpenStack, they are generally released after the OpenStack coordinated release. The rule so far was that such projects should release exactly two weeks after the coordinated release. Feature integration and validation testing can take a lot more (or less?) time though (depending on your team staffing), and there is little value for our ecosystem in such a short and fixed deadline. The release team therefore agreed to relax the rules. Such cycle-trailing deliverables can be released any time in the 3 months following the coordinated release. That should be plenty of time to complete the work, and if the work is not ready by then, you should probably be focusing on supporting the upcoming release anyway.
In addition to that, we have a rule that to be included in the upcoming release, new deliverables need to be added before the milestone-2 of that release cycle. The idea is to protect packaging / deployment / lifecycle management tools teams against last-minute additions that would jeopardize their ability to deliver proper support for the upcoming release. That rule is very useful for the coordinated release components, but it is useless for the packaging/deployment tools themselves. So we agreed to waive that rule for cycle-trailing deliverables. Since that work happens downstream from the coordinated release (and nobody else upstream is depending on it), it's OK for packaging teams added after milestone-2 to publish deliverables for that release. The end result of these changes is that it's OK for the recently-added OpenStack-Helm and LOCI teams to publish a release of their packaging recipes for Queens. We encourage them to release as soon as possible, but as long as it happens before the end of May, it would still be accepted by the release team as their "Queens" release. Please contact the release team (on the list or on #openstack-release) if you have questions about this change. -- Thierry Carrez (ttx) From gong.yongsheng at 99cloud.net Tue Mar 13 13:12:00 2018 From: gong.yongsheng at 99cloud.net (Gong Yongsheng) Date: Tue, 13 Mar 2018 21:12:00 +0800 (CST) Subject: [openstack-dev] [tacker][ptg] tacker vPTG will be held tomorrow Message-ID: <4fabb851.4dbf.1621f7d2d38.Coremail.gong.yongsheng at 99cloud.net> https://etherpad.openstack.org/p/Tacker-PTG-Rocky gongys tacker 99cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue Mar 13 13:58:48 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 13 Mar 2018 07:58:48 -0600 Subject: [openstack-dev] [tripleo] Blueprints for Rocky Message-ID: Hey everyone, So we currently have 63 blueprints targeted for Rocky [0]. Please make sure that any blueprints you are interested in delivering have an assignee set and have been approved. I would like the ones we plan on delivering for Rocky to be updated by April 3, 2018. Any blueprints that have not been updated will be moved out to the next cycle after this date. Thanks, -Alex [0] https://blueprints.launchpad.net/tripleo/rocky From kgiusti at gmail.com Tue Mar 13 14:18:49 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Tue, 13 Mar 2018 10:18:49 -0400 Subject: [openstack-dev] [oslo] Oslo PTG Summary In-Reply-To: References: <64db6f20-a994-1555-5ed5-cdfe0f628436@nemebean.com> <5AA1A2AD.3050204@fastmail.com> Message-ID: Hi Doug, Andy updated the etherpad [0] with a new link [1]. Holler if it's still broken... [0] https://etherpad.openstack.org/p/oslo-ptg-rocky [1] https://docs.google.com/presentation/d/1PWJAGQohAvlwod4gMTp6u1jtZT1cuaE-whRmnV8uiMM/edit?usp=sharing On Mon, Mar 12, 2018 at 11:54 AM, Doug Hellmann wrote: > I can’t see > > https://docs.google.com/presentation/d/e/2PACX-1vQpaSSm7Amk9q4sBEAUi_IpyJ4l07qd3t5T_BPZkdLWfYbtSpSmF7obSB1qRGA65wjiiq2Sb7H2ylJo/pub?start=false&loop=false&delayms=3000&slide=id.p > > > > On Mar 12, 2018, at 11:39 AM, Ken Giusti wrote: > > Hi Josh - I'm able to view all of them, but I probably have special > google powers ;) > > Which links are broken for you? > > thanks, > > On Thu, Mar 8, 2018 at 3:53 PM, Joshua Harlow wrote: > > > Can we get some of those doc links opened. > > 'You need permission to access this published document.'
I am getting for a > few of them :( > > > Ben Nemec wrote: > > > Hi, > > Here's my summary of the discussions we had in the Oslo room at the PTG. > Please feel free to reply with any additions if I missed something or > correct anything I've misrepresented. > > oslo.config drivers for secret management > ----------------------------------------- > > The oslo.config implementation is in progress, while the Castellan > driver still needs to be written. We want to land this early in Rocky as > it is a significant change in architecture for oslo.config and we want > it to be well-exercised before release. > > There are discussions with the TripleO team around adding support for > this feature to its deployment tooling and there will be a functional > test job for the Castellan driver with Custodia. > > There is a weekly meeting in #openstack-meeting-3 on Tuesdays at 1600 > UTC for discussion of this feature. > > oslo.config driver implementation: https://review.openstack.org/#/c/513844 > spec: > > https://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html > > Custodia key management support for Castellan: > https://review.openstack.org/#/c/515190/ > > "stable" libraries > ------------------ > > Some of the Oslo libraries are in a mature state where there are very > few, if any, meaningful changes to them. With the removal of the > requirements sync process in Rocky, we may need to change the release > process for these libraries. My understanding was that there were no > immediate action items for this, but it was something we need to be > aware of. > > dropping support for mox3 > ------------------------- > > There was some concern that no one from the Oslo team is actually in a > position to support mox3 if something were to break (such as happened in > some libraries with Python 3.6). Since there is a community goal to > remove mox from all OpenStack projects in Rocky this will hopefully not > be a long-term problem, but there was some discussion that if projects > needed to keep mox for some reason that they would be asked to provide a > maintainer for mox3. This topic is kind of on hold pending the outcome > of the community goal this cycle. > > automatic configuration migration on upgrade > -------------------------------------------- > > There is a desire for oslo.config to provide a mechanism to > automatically migrate deprecated options to their new location on > version upgrades. This is a fairly complex topic that I can't cover > adequately in a summary email, but there is a spec proposed at > https://review.openstack.org/#/c/520043/ and POC changes at > https://review.openstack.org/#/c/526314/ and > https://review.openstack.org/#/c/526261/ > > One outcome of the discussion was that in the initial version we would > not try to handle complex migrations, such as the one that happened when > we combined all of the separate rabbit connection opts into a single > connection string. To start with we will just raise a warning to the > user that they need to handle those manually, but a templated or > hook-based method of automating those migrations could be added as a > follow-up if there is sufficient demand. > > oslo.messaging plans > -------------------- > > There was quite a bit discussed under this topic. I'm going to break it > down into sub-topics for clarity. > > oslo.messaging heartbeats > ========================= > > Everyone seemed to be in favor of this feature, so we anticipate > development moving forward in Rocky. 
There is an initial patch proposed > at https://review.openstack.org/546763 > > We felt that it should be possible to opt in and out of the feature, and > that the configuration should be done at the application level. This > should _not_ be an operator decision as they do not have the knowledge > to make it sanely. > > There was also a desire to have a TTL for messages. > > bug cleanup > =========== > > There are quite a few launchpad bugs open against oslo.messaging that > were reported against old, now unsupported versions. Since we have the > launchpad bug expirer enabled in Oslo the action item proposed for such > bugs was to mark them incomplete and ask the reporter to confirm that > they still occur against a supported version. This way bugs that don't > reproduce or where the reporter has lost interest will eventually be > closed automatically, but bugs that do still exist can be updated with > more current information. > > deprecations > ============ > > The Pika driver will be deprecated in Rocky. To our knowledge, no one > has ever used it and there are no known benefits over the existing > Rabbit driver. > > Once again, the ZeroMQ driver was proposed for deprecation as well. The > CI jobs for ZMQ have been broken for a while, and there doesn't seem to > be much interest in maintaining them. Furthermore, the breakage seems to > be a fundamental problem with the driver that would require non-trivial > work to fix. > > Given that ZMQ has been a consistent pain point in oslo.messaging over > the past few years, it was proposed that if someone does step forward > and want to maintain it going forward then we should split the driver > off into its own library which could then have its own core team and > iterate independently of oslo.messaging. However, at this time the plan > is to propose the deprecation and start that discussion first. > > CI > == > > Need to migrate oslo.messaging to zuulv3 native jobs. The > openstackclient library was proposed as a good example of how to do so. > > We also want to have voting hybrid messaging jobs (where the > notification and rpc messages are sent via different backends). We will > define a devstack job variant that other projects can turn on if desired. > > We also want to add amqp1 support to pifpaf for functional testing. > > Low level messaging API > ======================= > > A proposal for a new oslo.messaging API to expose lower level messaging > functionality was proposed. There is a presentation at > > https://docs.google.com/presentation/d/1mCOGwROmpJvsBgCTFKo4PnK6s8DkDVCp1qnRnoKL_Yo/edit?usp=sharing > > > This seemed to generally be well-received by the room, and dragonflow > and neutron reviewers were suggested for the spec. > > Kafka > ===== > > Andy Smith gave an update on the status of the Kafka driver. Currently > it is still experimental, and intended to be used for notifications > only. There is a presentation with more details in > > https://docs.google.com/presentation/d/e/2PACX-1vQpaSSm7Amk9q4sBEAUi_IpyJ4l07qd3t5T_BPZkdLWfYbtSpSmF7obSB1qRGA65wjiiq2Sb7H2ylJo/pub?start=false&loop=false&delayms=3000&slide=id.p > > > testing for Edge/FEMDC use cases > ================================ > > Matthieu Simonin gave a presentation about the testing he has done > related to messaging in the Edge/FEMDC scenario where messaging targets > might be widely distributed. 
The slides can be found at > > https://docs.google.com/presentation/d/1LcF8WcihRDOGmOPIU1aUlkFd1XkHXEnaxIoLmRN4iXw/edit#slide=id.p3 > > > In short, there is a desire to build clouds that have widely distributed > nodes such that content can be delivered to users from a location as > close as possible. This puts a lot of pressure on the messaging layer as > compute nodes (for example) could be halfway around the world from the > control nodes, which is problematic for a broker-based system such as > Rabbit. There is some very interesting data comparing Rabbit with a more > distributed AMQP1 system based on qpid-dispatch-router. In short, the > distributed system performed much better for this use case, although > there was still some concern raised about the memory usage on the client > side with both drivers. Some followup is needed on the oslo.messaging > side to make sure we aren't leaking/wasting resources in some messaging > scenarios. > > For further details I suggest taking a look at the presentation. > > mutable configuration > --------------------- > > This is also a community goal for Rocky, and Chang Bo is driving its > adoption. There was some discussion of how to test it, and also that we > should provide an example of turning on mutability for the debug option > since that is the target of the community goal. The cinder patch can be > found here: https://review.openstack.org/#/c/464028/ Turns out it's > really simple! > > Nova is also using this functionality for more complex options related > to upgrades, so that would be a good place to look for more advanced use > cases. > > Full documentation for the mutable config options is at > https://docs.openstack.org/oslo.config/latest/reference/mutable.html > > The goal status is being tracked in > https://storyboard.openstack.org/#!/story/2001545 > > Chang Bo was also going to talk to Lance about possibly coming up with a > burndown chart like the one he had for the policy in code work. > > oslo healthcheck middleware > --------------------------- > > As this ended up being the only major topic for the afternoon, the > session was unfortunately lightly attended. However, the self-healing > SIG was talking about related topics at the same time so we ended up > moving to that room and had a good discussion. > > Overall the feature seemed to be well-received. There is some security > concern with exposing service information over an unauthenticated > endpoint, but because there is no authentication supported by the health > checking functionality in things like Kubernetes or HAProxy this is > unavoidable. The feature won't be mandatory, so if this exposure is > unacceptable it can be turned off (with a corresponding loss of > functionality, of course). > > There was also some discussion of dropping the asynchronous nature of > the checks in the initial version in order to keep the complexity to a > minimum. Asynchronous testing can always be added later if it proves > necessary. > > The full spec is at https://review.openstack.org/#/c/531456 > > oslo.config strict validation > ----------------------------- > > I actually had discussions with multiple people about this during the > week. In both cases, they were just looking for a minimal amount of > validation that would catch an error such as "devug=True". Such a > validation might be fairly simple to write now that we have the > YAML-based sample config with (ideally) information about all the > options available to set in a project.
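Concretely, the comparison described in the next paragraph could be as small as the sketch below. The YAML layout here is a guess on my part; the actual machine-readable sample format may differ:

    import configparser

    import yaml

    def unknown_options(config_file, sample_yaml):
        # Known options per section, from the generated sample data.
        with open(sample_yaml) as f:
            known = {s: set(opts) for s, opts in yaml.safe_load(f).items()}
        cfg = configparser.ConfigParser()
        cfg.read(config_file)
        # Anything set in the file but absent from the sample is suspect,
        # e.g. a misspelled "devug" in [DEFAULT].
        sections = ['DEFAULT'] + cfg.sections()
        return [(s, o) for s in sections for o in cfg[s]
                if o not in known.get(s, set())]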
It should be possible to compare > the options set in the config file with the ones listed in the sample > config and raise warnings for any that don't exist. > > There is also a more complete validation spec at > > http://specs.openstack.org/openstack/oslo-specs/specs/ocata/oslo-validator.html > and a patch proposed at https://review.openstack.org/#/c/384559/ > > Unfortunately there has been little movement on that as of late, so it > might be worthwhile to implement something more minimalist initially and > then build from there. The existing patch is quite significant and > difficult to review. > > Conclusion > ---------- > > I feel like there were a lot of good discussions at the PTG and we have > plenty of work to keep the small Oslo team busy for the Rocky cycle. :-) > > Thanks to everyone who participated and I look forward to seeing how > much progress we've made at the next Summit and PTG. > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Ken Giusti (kgiusti at gmail.com) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Ken Giusti (kgiusti at gmail.com) From pabelanger at redhat.com Tue Mar 13 14:54:26 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 10:54:26 -0400 Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27 In-Reply-To: <20180305234513.GA26473@localhost.localdomain> References: <20180305234513.GA26473@localhost.localdomain> Message-ID: <20180313145426.GA14285@localhost.localdomain> On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote: > Greetings, > > A quick search of git shows your projects are using fedora-26 nodes for testing. > Please take a moment to look at gerrit[1] and help land patches. We'd like to > remove fedora-26 nodes in the next week and to avoid broken jobs you'll need to > approve these patches. > > If your jobs are failing under fedora-27, please take the time to fix any issues > or update said patches to make them non-voting. > > We (openstack-infra) aim to only keep the latest fedora image online, which > changes approx. every 6 months. > > Thanks for your help and understanding, > Paul > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open > Greetings, This is a friendly reminder about moving jobs to fedora-27. I'd like to remove our fedora-26 images next week and if jobs haven't been migrated you may start to see NODE_FAILURE messages while running jobs.
Please take a moment to merge the open changes or update them to be non-voting while you work on fixes. Thanks again, Paul From cdent+os at anticdent.org Tue Mar 13 15:10:02 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 13 Mar 2018 15:10:02 +0000 (GMT) Subject: [openstack-dev] [tc] [all] TC Report 18-11 Message-ID: html: https://anticdent.org/tc-report-18-11.html Much of the activity of the TC in the past week has been devoted to discussing and sometimes arguing about two pending resolutions: Location of Interop Tests and Extended Maintenance (links below). While there has been a lot of IRC chat, and back and forth on the gerrit reviews, it has resulted in things moving forward. Since I like to do this: a theme I would identify from this week's discussions is continued exploration of what the TC feels it can and should assert when making resolutions. This is especially apparent in the discussions surrounding Interop Tests. The various options run the gamut from describing and enforcing many details, through providing a limited (but relatively clear) set of options, to letting someone else decide. I've always wanted the TC to provide enabling but not overly limiting guidance that actively acknowledges concerns. # Location of Interop Tests There are three reviews related to the location of the Interop Tests (aka Trademark Tests): * [A detailed one, based on PTG discussion](https://review.openstack.org/#/c/521602/) * [A middle of the road one, simplifying the first](https://review.openstack.org/#/c/550571/) * [A (too) simple one](https://review.openstack.org/#/c/550863/) It's looking like the middle one has the most support now, but that is after a lot of discussion. On [Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T19:07:37) I introduced the middle of the road version to make sure the previous discussion was represented in a relatively clear way. Then on [Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-08.log.html#t2018-03-08T15:20:31), a version which effectively moves responsibility to the InteropWG. Throughout this process there have been [hidden goals](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-08.log.html#t2018-03-08T08:41:13) whereby this minor(?) crisis in policy is being used to attempt to address shortcomings in the bigger picture. It's great to be working on the bigger picture, but hidden doesn't sound like the right approach. # Tracking TC Goals One of the outcomes from the PTG was an awareness that some granular and/or middle-distance TC goals tend to get lost. The TC is [going to try to use StoryBoard](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-08.log.html#t2018-03-08T15:06:21) to track these sort of things. The hope is that this will result in more active and visible progress. # Extended Maintenance A proposal to [leave branches open for patches](https://review.openstack.org/#/c/548916/) for longer has received at least as much attention as the Interop discussions. Some talk [starts on Thursday afternoon](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-08.log.html#t2018-03-08T16:17:03) and then carries on intermittently for the rest of time^w^w^w^w^wthrough today. The review has a great deal of interaction as well. 
There's general agreement on the principal ("let's not limit people from being able to patch branches for longer") but reaching consensus on the details has been more challenging. Different people have different goals. # What's a SIG for? [Discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T09:16:57) about [renaming](https://review.openstack.org/#/c/551413/) the recently [named PowerStackers group](https://review.openstack.org/#/c/540165/) eventually migrated into talking about what [SIGs are for or mean](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T09:31:56). There seems to be a few different interpretations, with some overlap: * [Upstream and downstream concern](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T09:41:17) or "breadth of potential paricipants". * [Not focused on producing code](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T09:56:04). * [Different people in the same "room"](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T09:43:40). * [In problem space rather than solution space](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T13:34:10). None of these are really complete. I think of SIGs as a way to break down boundaries, provide forums for discussion and make progress without worrying too much about bureaucracy. We probably don't need to define them to death. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Tue Mar 13 15:43:10 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 13 Mar 2018 10:43:10 -0500 Subject: [openstack-dev] [nova][notification] Full traceback in ExceptionPayload In-Reply-To: <1520944637.5767.7@smtp.office365.com> References: <1520598388.7809.6@smtp.office365.com> <1520611692.7809.8@smtp.office365.com> <1520944637.5767.7@smtp.office365.com> Message-ID: <52641b8a-7c65-b791-b2ba-f537060bf9b2@gmail.com> On 3/13/2018 7:37 AM, Balázs Gibizer wrote: > I filed the bp > https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications I added it to the weekly meeting agenda. I know you likely won't attend the meeting this week, but I can probably be your proxy on this one. -- Thanks, Matt From scheuran at linux.vnet.ibm.com Tue Mar 13 16:14:59 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Tue, 13 Mar 2018 17:14:59 +0100 Subject: [openstack-dev] [nova][ThirdParty-CI] Nova s390x CI currently broken Message-ID: Hello, the s390x CI for nova is currently broken again. The reason seems to be a recent change that merged in neutron. I’m looking into it... --- Andreas Scheuring (andreas_s) -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam47priya at gmail.com Tue Mar 13 16:49:00 2018 From: sam47priya at gmail.com (Sam P) Date: Wed, 14 Mar 2018 01:49:00 +0900 Subject: [openstack-dev] [masakari] Masakari Project mascot ideas Message-ID: Hi All, We started this discussion on IRC meeting few weeks ago and still no progress..;) ​(aspiers: thanks for the reminder!) Need mascot proposals for Masakari, see FAQ [1] for more info Current ideas: Origin of "Masakari" is related to hero from Japanese folklore [2]. 
Considering that relationship and to start the process, here are a few ideas: (1) Asiatic black bear (2) Gecko: geckos are able to regrow their tail when it is lost. [1] https://www.openstack.org/project-mascots/ [2] https://en.wikipedia.org/wiki/Kintar%C5%8D --- Regards, Sampath -------------- next part -------------- An HTML attachment was scrubbed... URL: From coolsvap at gmail.com Tue Mar 13 16:50:19 2018 From: coolsvap at gmail.com (Swapnil Kulkarni) Date: Tue, 13 Mar 2018 22:20:19 +0530 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan In-Reply-To: References: Message-ID: On Mon, Mar 12, 2018 at 7:36 AM, Jeffrey Zhang wrote: > Kolla core reviewer team, > > It is my pleasure to nominate caoyuan for the kolla core team. > > caoyuan's output has been fantastic over the last cycle, and he is the most > active non-core contributor on the Kolla project for the last 180 days [1]. He > focuses on optimizing configuration and improving the pre-checks feature. > > Consider this nomination a +1 vote from me. > > A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 > is a veto. Voting is open for 7 days until Mar 12th, or until a unanimous > response is reached or a veto vote occurs. > > [1] http://stackalytics.com/report/contribution/kolla-group/180 > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > +1 From ekcs.openstack at gmail.com Tue Mar 13 18:51:55 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 13 Mar 2018 10:51:55 -0800 Subject: [openstack-dev] [mistral][tempest][congress] import or retain mistral tempest service client Message-ID: Hi Mistral folks and others, I'm working on Congress tempest tests [1] for integration with Mistral. In the tests, we use a Mistral service client to call Mistral APIs and compare results against those obtained by the Mistral driver for Congress. Regarding the service client, Congress can either import it directly from the Mistral tempest plugin [2] or maintain its own copy within the Congress tempest plugin. I'm not sure whether the Mistral team expects the service client to be internal use only, so I hope to hear folks' thoughts on which approach is preferred. Thanks very much! Eric [1] https://review.openstack.org/#/c/538336/ [2] https://github.com/openstack/mistral-tempest-plugin/blob/master/mistral_tempest_tests/services/v2/mistral_client.py From chris at openstack.org Tue Mar 13 17:54:49 2018 From: chris at openstack.org (Chris Hoge) Date: Tue, 13 Mar 2018 10:54:49 -0700 Subject: [openstack-dev] [k8s] Hosting location for OpenStack Kubernetes Provider Message-ID: At the PTG in Dublin, SIG-K8s started working towards migrating the external Kubernetes OpenStack cloud provider[1] work to be an OpenStack project. Coincident with that, an upstream patch[2] was proposed by WG-Cloud-Provider to create upstream Kubernetes repositories for the various cloud providers. I want to begin a conversation about where we want this provider code to live and how we want to manage it. Three main options are to: 1) Host the provider code within the OpenStack ecosystem. The advantages are that we can follow OpenStack community development practices, and we have a good list of people signed up to help maintain it. We would also have easier access to infra test resources.
The downside is we pull the code further away from the Kubernetes community, possibly making it more difficult for end users to find and use in a way that is consistent with other external providers. 2) Host the provider code within the Kubernetes ecosystem. The advantage is that the code will be in a well-defined and well-known place, and members of the Kubernetes community who want to participate will be able to continue to use the community practices. We would still be able to take advantage of infra resources, but it would require more setup to trigger and report on jobs. 3) Host in OpenStack, and mirror in a Kubernetes repository. We would need to work with the K8s team to make sure this is an acceptable option, but it would allow for a hybrid development model that could satisfy the needs of members of both communities. This would require a commitment from the K8s-SIG-OpenStack/OpenStack-SIG-K8s team to handling tickets and pull requests that come in to the Kubernetes hosted repository. My personal opinion is that we should take advantage of the Kubernetes hosting, and migrate the project to one of the repositories listed in the WG-Cloud-Provider patch. This wouldn't preclude moving it into OpenStack infra hosting at some point in the future and possibly adopting the hybrid approach down the line after more communication with K8s infrastructure leaders. There is a sense of urgency, as Dims has asked that we relieve him of the responsibility of hosting the external provider work in his personal GitHub repository. Please chime in with your opinions on this here so that we can work out where the appropriate hosting for this project should be. Thanks, Chris Hoge K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead [1] https://github.com/dims/openstack-cloud-controller-manager [2] https://github.com/kubernetes/community/pull/1862 [3] https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg From jeremyfreudberg at gmail.com Tue Mar 13 18:32:24 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Tue, 13 Mar 2018 14:32:24 -0400 Subject: [openstack-dev] [sahara][all] Sahara Rocky Virtual PTG is SCHEDULED Message-ID: Hi again all, We will have a remote session tomorrow, March 14, beginning at 13:30 UTC. Best guess is that it will run two hours. Come and go as you need to. The session will be hosted by Bluejeans: https://bluejeans.com/6304900378 (Take a moment to make sure your browser and peripherals are configured nicely) All interested parties are welcome. I reiterate that all outcomes of the session will be logged to the dev list. Till then, Jeremy From pkovar at redhat.com Tue Mar 13 18:38:32 2018 From: pkovar at redhat.com (Petr Kovar) Date: Tue, 13 Mar 2018 19:38:32 +0100 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: Message-ID: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> On Thu, 8 Mar 2018 12:54:06 -0600 Jay S Bryant wrote: > Good overview.  Thank you! > > One additional goal I want to mention on the list, for awareness, is the > fact that we would like to eventually get some consistency to the pages > that the 'Contributor Guide' lands on for each of the projects.  Needs > to be a page that is friendly to new contributors, makes it easy to > learn about the project and is not overwhelming. > > What exactly that looks like isn't defined yet but I have talked to > Manila about this.  They were interested in working together on this.  
> Cinder and Manila will work together to get something consistent put > together and then we can work on spreading that to other projects once > we have agreement from the SIG that the approach is agreeable. This is a good cross-project goal, I think. We discussed a similar approach in the docs room wrt providing templates to project teams that they can use to design their landing pages for admin, user, configuration docs; that would also include the main index page for project docs. As for the project-specific contributor guides, https://docs.openstack.org/doc-contrib-guide/project-guides.html specifies that any contributor content should go to doc/source/contributor/. This will allow us to use templates to generate lists of links, similarly to what we do for other content areas. Cheers, pk From jungleboyj at gmail.com Tue Mar 13 19:55:22 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 13 Mar 2018 14:55:22 -0500 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> References: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> Message-ID: On 3/13/2018 1:38 PM, Petr Kovar wrote: > On Thu, 8 Mar 2018 12:54:06 -0600 > Jay S Bryant wrote: > >> Good overview.  Thank you! >> >> One additional goal I want to mention on the list, for awareness, is the >> fact that we would like to eventually get some consistency to the pages >> that the 'Contributor Guide' lands on for each of the projects.  Needs >> to be a page that is friendly to new contributors, makes it easy to >> learn about the project and is not overwhelming. >> >> What exactly that looks like isn't defined yet but I have talked to >> Manila about this.  They were interested in working together on this. >> Cinder and Manila will work together to get something consistent put >> together and then we can work on spreading that to other projects once >> we have agreement from the SIG that the approach is agreeable. > This is a good cross-project goal, I think. We discussed a similar approach > in the docs room wrt providing templates to project teams that they can > use to design their landing pages for admin, user, configuration docs; that > would also include the main index page for project docs. > > As for the project-specific contributor guides, > https://docs.openstack.org/doc-contrib-guide/project-guides.html specifies > that any contributor content should go to doc/source/contributor/. This will > allow us to use templates to generate lists of links, similarly to what > we do for other content areas. > > Cheers, > pk > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Petr, Good point.  I was trying to think of how to make a better landing page for new contributors and you may have hit on the answer.  Right now when you click through from here: https://www.openstack.org/community you land at the top level Cinder documentation page which is incredibly overwhelming for a new person:  https://docs.openstack.org/cinder/latest/ If the new contributor page instead lands here: https://docs.openstack.org/cinder/latest/contributor/index.html it would give me a page to craft for new users looking for information to get started. Thoughts on this approach? Kendall and Mike ... 
Does the above approach make sense? Jay From amy at demarco.com Tue Mar 13 20:02:23 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 13 Mar 2018 15:02:23 -0500 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> Message-ID: I think if we're going to have that go to the development contributors section (which makes sense) maybe we should also have ways of getting to the deployment and admin docs as well? Amy (spotz) On Tue, Mar 13, 2018 at 2:55 PM, Jay S Bryant wrote: > > > On 3/13/2018 1:38 PM, Petr Kovar wrote: > >> On Thu, 8 Mar 2018 12:54:06 -0600 >> Jay S Bryant wrote: >> >> Good overview. Thank you! >>> >>> One additional goal I want to mention on the list, for awareness, is the >>> fact that we would like to eventually get some consistency to the pages >>> that the 'Contributor Guide' lands on for each of the projects. Needs >>> to be a page that is friendly to new contributors, makes it easy to >>> learn about the project and is not overwhelming. >>> >>> What exactly that looks like isn't defined yet but I have talked to >>> Manila about this. They were interested in working together on this. >>> Cinder and Manila will work together to get something consistent put >>> together and then we can work on spreading that to other projects once >>> we have agreement from the SIG that the approach is agreeable. >>> >> This is a good cross-project goal, I think. We discussed a similar >> approach >> in the docs room wrt providing templates to project teams that they can >> use to design their landing pages for admin, user, configuration docs; >> that >> would also include the main index page for project docs. >> >> As for the project-specific contributor guides, >> https://docs.openstack.org/doc-contrib-guide/project-guides.html >> specifies >> that any contributor content should go to doc/source/contributor/. This >> will >> allow us to use templates to generate lists of links, similarly to what >> we do for other content areas. >> >> Cheers, >> pk >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > Petr, > > Good point. I was trying to think of how to make a better landing page > for new contributors and you may have hit on the answer. RIght now when > you click through from here: https://www.openstack.org/community You > land at the top level Cinder documentation page which is incredibly > overwhelming for a new person: https://docs.openstack.org/cinder/latest/ > > If the new contributor page instead lands here: > https://docs.openstack.org/cinder/latest/contributor/index.html It would > give me a page to craft for new users looking for information to get > started. > > Thoughts on this approach? > > Kendall and Mike ... Does the above approach make sense? > > Jay > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtrainor at redhat.com Tue Mar 13 20:38:02 2018 From: dtrainor at redhat.com (Dan Trainor) Date: Tue, 13 Mar 2018 14:38:02 -0600 Subject: [openstack-dev] [services][tripleo-validations] Process counts for Undercloud services Message-ID: Hi - tripleo-validations has a validation[0] that attempts to identify services that have more than (in its default setting) 8 processes running when the validation is executed. In the past we've seen times where too many processes for each service had a significant performance impact, particularly on Undercloud systems with a less than optimal amount of memory, but it's been a while since I've seen much discussion on this. By default, the validation will look at the following services, and fail if there are more than 8 processes of each running on the Undercloud: - heat-engine - mistral - ironic-inspector - ironic-conductor - nova-api - nova-scheduler - nova-conductor - nova-compute - glance-api - swift-proxy-server - swift-object-server - swift-container-server - zaqar-server Examples of services that this validation fails on immediately out of the box are: - heat-engine - mistral - nova-api What I'm trying to determine right now is if that (default) maximum number of processes (8) is still applicable, and/or reasonable. If not, what max_process_count value would be appropriate? I understand that this question is highly subjective based on a number of factors that go into the consideration of an OpenStack environment. However, we're looking for some baselines for these values in an effort to make this validation more valuable. Thanks! -dant --- [0] https://github.com/openstack/tripleo-validations/blob/master/validations/undercloud-process-count.yaml -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Tue Mar 13 21:02:49 2018 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 13 Mar 2018 17:02:49 -0400 Subject: [openstack-dev] [manila] [ptg] queens retrospective at the rocky ptg Message-ID: <20180313210248.pyrcd5tnqcljwfs5@barron.net> At the Dublin PTG the manila team did a retrospective on the Queens release -- raw etherpad here [1]. We summarize it here to separate it from the otherwise forward-looking manila PTG summary (coming soon). 
# Keep Doing # - queens bug smashes, especially the Wuhan bug smash [2] + (mostly new) contributors from 5+ companies including 99cloud, fibrehome, chinamobile, H3c, easystack, Huawei - bug czar role & meeting slot [3] - trying to tag even earlier than deadlines # Do less of / Stop Doing # - blind rechecks + reviewers need to pay attention to recheck history and push back on merges that ignore intermittent issues + even if the issue is unrelated to the patch we should have an associated bug - letting reviews languish, especially for contributors outside US timezones + need more systematic approach (see suggestions below) to review backlog # Do More Of # - Root cause analysis and fixing of failing tests in gate - Groom bug lists, especially before bug smashes - Make contributor's guide with review checklists - Use mail-list more for asynch communication across timezones - Help people use IRC bouncers - Keep etherpad/dashboards for pending reviews - Improve docs for new driver contributors # Action Items # - Tom will develop etherpad for priority reviews - Ben will share past review etherpads / gerrit dashboards (Done: see raw etherpad [3] but Tom will work to get these unified and findable) - Ganso will create etherpad to collaborate on reviewer/contributor checklists - Tom will check how Cinder fixed log filtering problem -- Tom Barron [1] https://etherpad.openstack.org/p/manila-ptg-rocky-retro [2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Queens-Wuhan [3] https://etherpad.openstack.org/p/manila-bug-triage-pad -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From melwittt at gmail.com Tue Mar 13 21:40:26 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 13 Mar 2018 14:40:26 -0700 Subject: [openstack-dev] [nova] Rocky PTG summary - Queens retrospective Message-ID: <1DD7E678-EBA2-46BF-8361-55B72D8DBF81@gmail.com> Howdy Stackers, I’ve created a summary etherpad [0] of the Queens retrospective session from the PTG and included a plain text export of it on this email. Thank you to all who participated and please do add comments to the etherpad where I might have missed a perspective or interpretation in my attempt to summarize the session. 
Best, -melanie [0] https://etherpad.openstack.org/p/nova-queens-retrospective-summary *Queens Retrospective: Rocky PTG Summary https://etherpad.openstack.org/p/nova-queens-retrospective *Key topics * What went well * Volume multiattach support is now available for libvirt+lvm and with CI testing * Added Takashi Natsume to the python-novaclient core team * osc-placement 1.0.0 is now available, so operators have a CLI to interact with the Placement service (with docs too) * The run up to RC1 was much less stressful/hectic than it was in Ocata/Pike, we had fewer major changes leading up to the feature freeze * We have increased the number of people committing to placement related code * We had fewer approved and greater completed blueprints indicating we have gotten better at doing what we said we were going to do * We have sane defaults on AArch64 architecture (like UEFI being default, proper 'cpu_model') * The websocket proxy security finally landed * Kudos to Eric Fried for his work on nested resource providers in placement * Kudos to Chris Dent for his updates to the dev mailing list on placement * Kudos to Matt Riedemann for serving us as PTL for the past four cycles * What needs to improve * Concern around nova-core team expansion, or rather, lack of expansion and confusion/mystery about how to become a member of the core team * Problems around having dev conversations and gathering outcomes of those conversations * Non-priority spec/blueprint and bug fix code taking very long to merge and not getting visibility * Concern that not enough people participate in providing retrospective comments * Concern that we didn't actually merge things earlier in the cycle as decided during the Pike retrospective * Bug triage * Concern around time management and how to quickly get up-to-speed on what to review * We have had some unexpected changes in behavior in Ocata like the Aggregate[Core|Ram|Disk]Filter no longer working, that caused pain for operators *Agreements and decisions * Make sure that people review osc-placement changes (I've added a section for this to the priorities etherpad [1]) * melwitt will rewrite the nova contributor documentation about what the core team looks for in potential new core team members to be clear and concise (with some how-to tips), and increase its visibility by making sure it's more directly linked from the docs home page * On dev conversations that happen on IRC or on hangouts, have someone from the conversation write up a summary of the conversation (if there was an outcome) and send it to the dev mailing list with appropriate tags, for example "[nova][placement][...]". People should feel encouraged to use the dev mailing list to have dev conversations and communicate outcomes. 
Conversations needn't be limited to only IRC or hangouts * For the most part, the priorities etherpad [1] can provide visibility for non-priority spec/blueprint work and trivial bug fix code (there are sections for both of these: "Non-priority approved blueprints" and "Trivial bug subteam") * melwitt to write nova contributor documentation explaining the use of the priorities etherpad and link * We need a way to quickly point contributors at the documentation ^, suggestions include a bot that adds a comment to a new contributor's patch that points them to the documentation or a NEW_CONTRIBUTOR_README.rst in the root directory of the nova project that points to the documentation * Discuss the design and implement "runways" for blueprints where we focus review attention on selected blueprints for a specified time-box with the goal being to avoid approving new non-priority work until we've merged old non-priority work * Commit to fewer blueprints and complete a higher percentage of them. This is about fulfilling expectations and serving contributors better as opposed to setting goals too high and letting people down On the other items: * At first, not many people added comments to the retrospective etherpad, but more people added comments as we got closer to the PTG, which was good. From what I can tell, most of the problems we have (including lack of participation in the retrospective etherpad) stem from a lack of communication, visibility, and transparency. It is my hope that diligent use of the priorities etherpad, runways, clear and concise documentation, and weekly reminders/summaries of the priorities etherpad will result in contributors seeing progress in their work and becoming more engaged. * The concern about how we decided we would merge things earlier in the cycle during the Pike retrospective but didn't actually follow through might be related to the item about difficulty around having dev conversations. This cycle, we've agreed to feel free to use the dev mailing list for dev conversations and communication of outcomes, so I hope that will result in obstacles being cleared more quickly and giving way to merging things earlier. * On bug triage, I'm going to spruce up documentation around the process and make sure it's linked near the nova dev documentation home page (currently I think it takes many clicks to find it). And I'm also thinking of doing weekly summaries and reminders to help people get engaged with bug triage. * On concern about time management and how to quickly find what to review, it is my hope that the priorities etherpad will be reviewers' "home page" for finding reviews and that a weekly summary/reminder will help keep it top-of-mind for everyone. * On the unexpected changes in Ocata causing operator pain, the Aggregate[Core|Ram|Disk]Filter behavior change was one that wasn't caught by our test coverage and was not intended to land without release notes/documentation or another solution. It was suggested that we could add test coverage where compute nodes are at capacity to catch issues with allocation ratios. [1] https://etherpad.openstack.org/p/rocky-nova-priorities-tracking From emilien at redhat.com Tue Mar 13 21:52:55 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 13 Mar 2018 22:52:55 +0100 Subject: [openstack-dev] [tripleo] The Weekly Owl - 12th Edition Message-ID: Note: this is the twelfth edition of a weekly update of what happens in TripleO. 
The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-March/127966.html +---------------------------------+ | General announcements | +---------------------------------+ +--> We're releasing the final version of TripleO Queens this week and are now preparing Rocky milestone 1. +--> PTG summaries are posted on the mailing-list or via blog post, see http://www.mariosandreou.com/tripleo/2018/03/07/openstack-rocky-ptg-dublin.html . +------------------------------+ | Continuous Integration | +------------------------------+ +--> Rover is John and ruck is Matt. Please let them know any new CI issue. +--> RDO Cloud had some downtime over the weekend but things should be stable now. +--> Master promotion is 8 days, Queens is 8 days, Pike is 21 days and Ocata is 5 days. +--> Proposal to change devmode to re-use reproduce bits (see thread). +--> Focus is still on TripleO CI infrastructure hardening, see https://trello.com/c/abar9eup/542-tripleo-ci-infrastructure-hardening +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP +-------------+ | Upgrades | +-------------+ +--> Very good progress on Pike to Queens upgrades workflows, patches under review. Same for FFU +--> Introduction of pre_upgrade_rolling_tasks interface +--> Excellent progress on CI jobs (undercloud upgrade progress, etc). +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> Containerized undercloud is the major ongoing effort in the squad. Focus is on making OVB working and also upgrades. Target is rocky-m1. +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> ceph-ansible support in progress +--> starting to look at process to create a new git repo per role for standalone ansible roles +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status +--------------+ | Integration | +--------------+ +--> Team is working on config-download integration for ceph and multi-cluster support. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Starting work on the network configuration wizard +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Investigations around containerizing OVS +--> Collaboration with UI squad for network config integration +--> lot and lot of planning for Rocky and beyond, see the etherpad for exciting future! +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> First meeting last week, discussions around CI, SElinux everywhere, Security Hardening, TripleO secrets management, Public TLS by default and more. +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Owls can rotate their necks 270 degrees. 
A blood-pooling system collects blood to power their brains and eyes when neck movement cuts off circulation. Source: http://www.audubon.org/news/11-fun-facts-about-owls Stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Mar 13 22:29:59 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 13 Mar 2018 17:29:59 -0500 Subject: [openstack-dev] [Openstack-sigs] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: Message-ID: Just one comment on this section before I forget again: #IRC Channels# We want to get rid of #openstack-101 and begin using #openstack-dev instead. The 101 channel isn't watched closely enough anymore and it makes more sense to move onboarding activities (like in OpenStack Upstream Institute) to a channel where there are people that can answer questions rather than asking those to move to a new channel. For those concerned about noise, OUI is run the weekend before the summit when most people are traveling to the Summit anyway. I would recommend sending folks to #openstack vs #openstack-dev by default. Amy (spotz) On Mon, Mar 5, 2018 at 2:00 PM, Kendall Nelson wrote: > Hello Everyone :) > > It was wonderful to see and talk with so many of you last week! For those > that couldn't attend our whole day of chats or those that couldn't attend > at all, I thought I would put forth a summary of our discussions which were > mostly noted in the etherpad[1] > > #Contributor Guide# > > - Walkthrough: We walked through every section of what exists and came up > with a variety of improvements on what is there. Most of these items have > been added to our StoryBoard project[2]. This came up again Tuesday in docs > sessions and I have added those items to StoryBoard as well. > > - Google Analytics: It was discussed we should do something about getting > the contributor portal[3] to appear higher in Google searches about > onboarding. Not sure what all this entails. NEEDS AN OWNER IF ANYONE WANTS > TO VOLUNTEER. > > #Mission Statement# > > We updated our mission statement[4]! It now states: > > To provide a place for new contributors to come for information and > advice. This group will also analyze and document successful contribution > models while seeking out and providing information to new members of the > community. > > #Weekly Meeting# > > We discussed beginning a weekly meeting- optimized for APAC/Europe and > settled on 800 UTC in #openstack-meeting on Wednesdays. Proposed here[5]. > For now I added a section to our wiki for agenda organization[6]. The two > main items we want to cover on a weekly basis are new contributor patches > in gerrit and if anything has come up on ask.openstack.org about > contributors so those will be standing agenda items. > > #Forum Session# > > We discussed proposing some forum sessions in order to get more > involvement from operators. Currently, our activities focus on development > activities and we would like to diversify. When this SIG was first proposed > we wanted to have two chairs- one to represent developers and one to > represent operators. We will propose a session or two when the call for > forum proposals go out (should be today). > > #IRC Channels# > We want to get rid of #openstack-101 and begin using #openstack-dev > instead. 
The 101 channel isn't watched closely enough anymore and it makes > more sense to move onboarding activities (like in OpenStack Upstream > Institute) to a channel where there are people that can answer questions > rather than asking those to move to a new channel. For those concerned > about noise, OUI is run the weekend before the summit when most people are > traveling to the Summit anyway. > > #Ongoing Onboarding Efforts# > > - GSOC: Unfortunately we didn't get accepted this year. We will try again > next year. > > - Outreachy: Applications for the next round of interns are due March > 22nd, 2018 [7]. Decisions will be made by April and then internships run > May to August. > > - WoO Mentoring: The format of mentoring is changing from 1x1 to cohorts > focused on a single goal. If you are interested in helping out, please > contact me! I NEED HELP :) > > - Contributor guide: Please see the above section. > > - OpenStack Upstream Institute: It will be run, as usual, the weekend > before the Summit in Vancouver. Depending on how much progress is made on > the contributor guide, we will make use of it as opposed to slides like > previous renditions. There have also been a number of OpenStack Days > requesting we run it there as well. More details of those to come. > > #Project Liaisons# > > The list is filling out nicely, but we still need more coverage. If you > know someone from a project not listed that might be willing to help, > please reach out to them and get them added to our list [8]. > > I thiiiiiink that is just about everything. Hopefully I at least covered > everything important :) > > Thanks Everyone! > > - Kendall Nelson (diablo_rojo) > > [1] PTG Etherpad https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG > [2] StoryBoard Tracker https://storyboard.openstack.org/#!/project/913 > [3] Contributor Portal https://www.openstack.org/community/ > [4] Mission Statement Update https://review.openstack.org/#/c/548054/ > [5] Meeting Slot Proposal https://review.openstack.org/#/c/549849/ > [6] Meeting Agenda https://wiki.openstack.org/wiki/First_Contact_SIG# > Meeting_Agenda > [7] Outreachy https://www.outreachy.org/apply/ > [8] Project Liaisons https://wiki.openstack.org/wiki/First_Contact_SIG# > Project_Liaisons > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Tue Mar 13 23:10:45 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 14 Mar 2018 10:10:45 +1100 Subject: [openstack-dev] [tripleo] Blueprints for Rocky In-Reply-To: References: Message-ID: <20180313231044.GA31208@thor.bakeyournoodle.com> On Tue, Mar 13, 2018 at 07:58:48AM -0600, Alex Schultz wrote: > Hey everyone, > > So we currently have 63 blueprints for currently targeted for > Rocky[0]. Please make sure that any blueprints you are interested in > delivering have an assignee set and have been approved. I would like > to have the ones we plan on delivering for Rocky to be updated by > April 3, 2018. Any blueprints that have not been updated will be > moved out to the next cycle after this date. My BP: https://blueprints.launchpad.net/tripleo/+spec/multiarch-support doesn't look like it needs an update but just in case I missed something it's still very much targeted at Rocky-1, and the ball is in my court :) Yours Tony. 
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From kennelson11 at gmail.com Tue Mar 13 23:13:02 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 13 Mar 2018 23:13:02 +0000 Subject: [openstack-dev] [First Contact][SIG] Weekly Meeting Message-ID: Hello! [1] has been merged and we have an agenda [2] so we are full steam ahead for the upcoming meeting! Our inaugural First Contact SIG meeting will be in #openstack-meeting at 0800 UTC Wednesday! Hope to see you all in ~9 hours! -Kendall (diablo_rojo) [1]https://review.openstack.org/#/c/549849/ [2] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Tue Mar 13 23:57:24 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 13 Mar 2018 18:57:24 -0500 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> Message-ID: <72033a1f-77c4-2ae0-e674-0b8463f34983@gmail.com> Amy, The top level page for projects is referenced under documentation from here:  https://docs.openstack.org/queens/projects.html So, I think we have that one covered for people who are just looking for the top level documentation. Jay On 3/13/2018 3:02 PM, Amy Marrich wrote: > I think if we're going to have that go to the development contributors > section (which makes sense) maybe we should also have ways of getting > to the deployment and admin docs as well? > > Amy (spotz) > > On Tue, Mar 13, 2018 at 2:55 PM, Jay S Bryant > wrote: > > > > On 3/13/2018 1:38 PM, Petr Kovar wrote: > > On Thu, 8 Mar 2018 12:54:06 -0600 > Jay S Bryant > wrote: > > Good overview.  Thank you! > > One additional goal I want to mention on the list, for > awareness, is the > fact that we would like to eventually get some consistency > to the pages > that the 'Contributor Guide' lands on for each of the > projects.  Needs > to be a page that is friendly to new contributors, makes > it easy to > learn about the project and is not overwhelming. > > What exactly that looks like isn't defined yet but I have > talked to > Manila about this.  They were interested in working > together on this. > Cinder and Manila will work together to get something > consistent put > together and then we can work on spreading that to other > projects once > we have agreement from the SIG that the approach is agreeable. > > This is a good cross-project goal, I think. We discussed a > similar approach > in the docs room wrt providing templates to project teams that > they can > use to design their landing pages for admin, user, > configuration docs; that > would also include the main index page for project docs. > > As for the project-specific contributor guides, > https://docs.openstack.org/doc-contrib-guide/project-guides.html > > specifies > that any contributor content should go to > doc/source/contributor/. This will > allow us to use templates to generate lists of links, > similarly to what > we do for other content areas. > > Cheers, > pk > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > Petr, > > Good point.  
I was trying to think of how to make a better landing > page for new contributors and you may have hit on the answer. > Right now when you click through from here: > https://www.openstack.org/community > You land at the top level > Cinder documentation page which is incredibly overwhelming for a > new person: https://docs.openstack.org/cinder/latest/ > > > If the new contributor page instead lands here: > https://docs.openstack.org/cinder/latest/contributor/index.html > > It would give me a page to craft for new users looking for > information to get started. > > Thoughts on this approach? > > Kendall and Mike ... Does the above approach make sense? > > Jay > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From pabelanger at redhat.com Tue Mar 13 23:58:59 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 13 Mar 2018 19:58:59 -0400 Subject: [openstack-dev] Poll: S Release Naming Message-ID: <20180313235859.GA14573@localhost.localdomain>

Greetings all,

It is time again to cast your vote for the naming of the S Release. This time is a little different as we've decided to use a public polling option over per user private URLs for voting. This means, everybody should proceed to use the following URL to cast their vote:

https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3

Because this is a public poll, results will currently be only viewable by myself until the poll closes. Once closed, I'll post the URL making the results viewable to everybody. This was done to avoid everybody seeing the results while the public poll is running.

The poll will officially end on 2018-03-21 23:59:59[1], and results will be posted shortly after.

[1] http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst

---

According to the Release Naming Process, this poll is to determine the community preferences for the name of the S release of OpenStack. It is possible that the top choice is not viable for legal reasons, so the second or later community preference could wind up being the name.

Release Name Criteria

Each release name must start with the letter of the ISO basic Latin alphabet following the initial letter of the previous release, starting with the initial release of "Austin". After "Z", the next name should start with "A" again.

The name must be composed only of the 26 characters of the ISO basic Latin alphabet. Names which can be transliterated into this character set are also acceptable.

The name must refer to the physical or human geography of the region encompassing the location of the OpenStack design summit for the corresponding release. The exact boundaries of the geographic region under consideration must be declared before the opening of nominations, as part of the initiation of the selection process.

The name must be a single word with a maximum of 10 characters. Words that describe the feature should not be included, so "Foo City" or "Foo Peak" would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool should be added to a separate section of the wiki page and the TC may make an exception for one or more of them to be considered in the Condorcet poll.
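For anyone who has not voted in one of these before: in a Condorcet poll every voter ranks the candidates, and the winner is the candidate who beats each rival in head-to-head pairwise comparisons, not necessarily the one with the most first-place votes. A minimal illustrative sketch in Python (this is not the CIVS implementation, and the sample ballots are made up):

    from itertools import combinations

    def condorcet_winner(ballots):
        """Return the candidate who beats every rival head-to-head, or None.

        ballots: a list of complete rankings, most-preferred name first.
        """
        candidates = set(ballots[0])
        pairwise_wins = {c: 0 for c in candidates}
        for a, b in combinations(candidates, 2):
            # Count how many voters rank a above b, and vice versa.
            prefer_a = sum(1 for rank in ballots if rank.index(a) < rank.index(b))
            prefer_b = len(ballots) - prefer_a
            if prefer_a > prefer_b:
                pairwise_wins[a] += 1
            elif prefer_b > prefer_a:
                pairwise_wins[b] += 1
        for c, wins in pairwise_wins.items():
            if wins == len(candidates) - 1:
                return c
        return None  # a tie or preference cycle: no Condorcet winner

    # Made-up ballots: "Stein" beats each rival in pairwise comparisons.
    print(condorcet_winner([
        ["Stein", "Spree", "Solar"],
        ["Spree", "Stein", "Solar"],
        ["Stein", "Solar", "Spree"],
    ]))

CIVS additionally publishes the full pairwise matrix and applies tie/cycle resolution rules when no outright winner exists, but the head-to-head comparison above is the core of how the ranking works.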
The naming official is responsible for presenting the list of exceptional names for consideration to the TC before the poll opens.

Exact Geographic Region

The Geographic Region from where names for the S release will come is Berlin.

Proposed Names

Spree (a river that flows through the Saxony, Brandenburg and Berlin states of Germany)
SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)
Spandau (One of the twelve boroughs of Berlin)
Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently abbreviated as 🍺)
Steglitz (a locality in the south-western part of the city)
Springer (Berlin is the headquarters of the Axel Springer publishing house)
Staaken (a locality within the Spandau borough)
Schoenholz (A zone in the Niederschönhausen district of Berlin)
Shellhaus (A famous office building)
Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)
Schiller (A park in the Mitte borough)
Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood) (The adjective form, Saatwinkler, is also a really cool bridge but that form is too long)
Sonne (Sonnenallee is the name of a large street in Berlin crossing the former wall, also translates as "sun")
Savigny (Common place in City-West)
Soorstreet (Street in the Berlin district of Charlottenburg)
Solar (Skybar in Berlin)
See (Seestraße or "See Street" in Berlin)

Thanks, Paul

From Dinesh.Bhor at nttdata.com Wed Mar 14 01:16:29 2018 From: Dinesh.Bhor at nttdata.com (Bhor, Dinesh) Date: Wed, 14 Mar 2018 01:16:29 +0000 Subject: [openstack-dev] [masakari] Masakari Project mascot ideas In-Reply-To: References: Message-ID:

Hi Sampath San, There is one more option which we discussed in yesterday's masakari meeting [1]: St. Bernard (dog) [2].

[1] http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-03-13-04.01.log.html#l-38
[2] https://en.wikipedia.org/wiki/St._Bernard_(dog)

Thank you, Dinesh Bhor

________________________________ From: Sam P Sent: 13 March 2018 22:19:00 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] Masakari Project mascot ideas

Hi All, We started this discussion on IRC meeting few weeks ago and still no progress..;) (aspiers: thanks for the reminder!) Need mascot proposals for Masakari, see FAQ [1] for more info Current ideas: Origin of "Masakari" is related to hero from Japanese folklore [2]. Considering that relationship and to start the process, here are few ideas,
(1) Asiatic black bear
(2) Gekko : Geckos is able to regrow it's tail when the tail is lost.

[1] https://www.openstack.org/project-mascots/ [2] https://en.wikipedia.org/wiki/Kintar%C5%8D --- Regards, Sampath

______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From cr_hui at 126.com Wed Mar 14 01:52:28 2018 From: cr_hui at 126.com (crh) Date: Wed, 14 Mar 2018 09:52:28 +0800 (CST) Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team In-Reply-To: <5E7A3D1BF5FD014E86E5F971CF446EFF565770EC@DGGEML501-MBS.china.huawei.com> References: <5E7A3D1BF5FD014E86E5F971CF446EFF565770EC@DGGEML501-MBS.china.huawei.com> Message-ID: <71735f01.1f6d.162223569fa.Coremail.cr_hui@126.com>

+1. It is so cool to see the new core reviewer. -- Best regards, Ronghui Cao, Ph.D. Candidate College of Information Science and Engineering Hunan University, Changsha 410082, Hunan, China

At 2018-03-12 09:12:48, "joehuang" wrote: +1. Baisen has contributed lots of patches in Tricircle. Best Regards Chaoyi Huang (joehuang) From: Vega Cai [luckyvega.g at gmail.com] Sent: 12 March 2018 9:04 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team Hi team, I would like to nominate Baisen Song (songbaisen) for tricircle core reviewer. Baisen has actively joined the discussion of feature development and has contributed important patches since Queens, like resource deletion reliability and openstack-sdk new version adaption. I really think his experience will help us substantially improve tricircle. BR Zhiyuan -- BR Zhiyuan

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From 905382874 at qq.com Wed Mar 14 02:04:04 2018 From: 905382874 at qq.com (=?gb18030?B?x7PMsiAgvqrMzg==?=) Date: Wed, 14 Mar 2018 10:04:04 +0800 Subject: [openstack-dev] +1: Fw: [tricircle] Nominate change in tricircle coreteam In-Reply-To: <70842537.3455.162223d9b21.Coremail.linghucongsong@163.com> References: <70842537.3455.162223d9b21.Coremail.linghucongsong@163.com> Message-ID:

+1

-------- Forwarding messages -------- From: "Vega Cai" Date: 2018-03-12 09:04:41 To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team Hi team, I would like to nominate Baisen Song (songbaisen) for tricircle core reviewer. Baisen has actively joined the discussion of feature development and has contributed important patches since Queens, like resource deletion reliability and openstack-sdk new version adaption. I really think his experience will help us substantially improve tricircle. BR Zhiyuan -- BR Zhiyuan

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From shzhzichen at gmail.com Wed Mar 14 02:10:14 2018 From: shzhzichen at gmail.com (陈亚光) Date: Wed, 14 Mar 2018 10:10:14 +0800 Subject: [openstack-dev] +1: Fw: [tricircle] Nominate change in tricircle coreteam Message-ID:

+1

-------- Forwarding messages -------- From: "Vega Cai" Date: 2018-03-12 09:04:41 To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team Hi team, I would like to nominate Baisen Song (songbaisen) for tricircle core reviewer. Baisen has actively joined the discussion of feature development and has contributed important patches since Queens, like resource deletion reliability and openstack-sdk new version adaption. I really think his experience will help us substantially improve tricircle.
BR Zhiyuan -- BR Zhiyuan

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sundar.nadathur at intel.com Wed Mar 14 02:11:16 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 14 Mar 2018 02:11:16 +0000 Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.03.14 Agenda (No Time Change For US) In-Reply-To: References: Message-ID: <1CC272501B5BC543A05DB90AA509DED5E8B5EF@fmsmsx122.amr.corp.intel.com>

Hi Howard, Can we discuss the possibility of using a filter/weigher that invokes the Cyborg API, as we discussed during the Cyborg/Nova discussion at the PTG? This is line 56 in https://etherpad.openstack.org/p/cyborg-ptg-rocky-nova-cyborg-interaction .

Regards, Sundar

From: Zhipeng Huang [mailto:zhipengh512 at gmail.com] Sent: Monday, March 12, 2018 1:28 AM To: OpenStack Development Mailing List (not for usage questions) Cc: Konstantinos Samaras-Tsakiris ; Dutch Althoff Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.03.14 Agenda (No Time Change For US)

Hi Team, We will resume the team meeting this week. The meeting starting time is still ET 10:00am/PT 7:00am, whereas in China it is moved one hour earlier to 10:00pm. For Europe please refer to UTC 14:00 as the baseline. This week we will have a special 2-hour meeting. In the first hour we will have Shaohe demo the PoC the Intel dev team conducted, and in the second half we will confirm the tasks and milestones for Rocky based upon the PTG discussion (summary sent out last Friday). ZOOM link will be provided before the meeting :) If there are any other topics anyone would like to propose, feel free to reply to this email thread. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zhipengh512 at gmail.com Wed Mar 14 02:14:43 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 14 Mar 2018 10:14:43 +0800 Subject: Re: [openstack-dev] [cyborg]Weekly Team Meeting 2018.03.14 Agenda (No Time Change For US) In-Reply-To: <1CC272501B5BC543A05DB90AA509DED5E8B5EF@fmsmsx122.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED5E8B5EF@fmsmsx122.amr.corp.intel.com> Message-ID:

Yes, that would be one of the issues we need to discuss after the PoC demo :)

On Wed, Mar 14, 2018 at 10:11 AM, Nadathur, Sundar < sundar.nadathur at intel.com> wrote: > Hi Howard, > > Can we discuss the possibility of using a filter/weigher that invokes > the Cyborg API, as we discussed during the Cyborg/Nova discussion at the PTG? > > This is line 56 in https://etherpad.openstack.org/p/cyborg-ptg-rocky-nova-cyborg-interaction . > > Regards, > > Sundar > > *From:* Zhipeng Huang [mailto:zhipengh512 at gmail.com] > *Sent:* Monday, March 12, 2018 1:28 AM > *To:* OpenStack Development Mailing List (not for usage questions) < > openstack-dev at lists.openstack.org> > *Cc:* Konstantinos Samaras-Tsakiris ; > Dutch Althoff > *Subject:* [openstack-dev] [cyborg]Weekly Team Meeting 2018.03.14 Agenda > (No Time Change For US) > > Hi Team, > > We will resume the team meeting this week.
The meeting starting time is > still ET 10:00am/PT 7:00am, whereas in China it is moved one hour earlier to > 10:00pm. For Europe please refer to UTC 14:00 as the baseline. > > This week we will have a special 2-hour meeting. In the first hour we > will have Shaohe demo the PoC the Intel dev team conducted, and in the > second half we will confirm the tasks and milestones for Rocky based upon > the PTG discussion (summary sent out last Friday). > > ZOOM link will be provided before the meeting :) > > If there are any other topics anyone would like to propose, feel free to > reply to this email thread. > > -- > > Zhipeng (Howard) Huang > > Standard Engineer > > IT Standard & Patent/IT Product Line > > Huawei Technologies Co., Ltd > > Email: huangzhipeng at huawei.com > > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > > Research Assistant > > Mobile Ad-Hoc Network Lab, Calit2 > > University of California, Irvine > > Email: zhipengh at uci.edu > > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From 935540343 at qq.com Wed Mar 14 02:17:52 2018 From: 935540343 at qq.com (__ mango.) Date: Wed, 14 Mar 2018 10:17:52 +0800 Subject: [openstack-dev] OpenStack - Install gnocchi error. In-Reply-To: References: Message-ID:

After the installation (apt-get install gnocchi-api gnocchi-metricd python-gnocchiclient), when I tried to launch gnocchi-api, I got the message: Failed to start gnocchi-api.service: Unit gnocchi-api.service not found. I checked /etc/init.d and there is no gnocchi-api script (although gnocchi-metricd is, and it's working properly). Why is that? Please help me to solve it. Thank you.

------------------ Original ------------------ From: "Julien Danjou"; Date: Tue, Mar 13, 2018 05:55 PM To: "__ mango."<935540343 at qq.com>; Cc: "openstack-dev"; Subject: Re: [openstack-dev] OpenStack - Install gnocchi error.

On Tue, Mar 13 2018, __ mango. wrote: > hi, > I referred to https://docs.openstack.org/ceilometer/pike/install/install-base-ubuntu.html to install gnocchi, > but I am unable to find the gnocchi-api related services. Why is this? > Please help answer my question, thank you!

Can you provide more details as to what your problem is? Thank you! -- Julien Danjou // Free Software hacker // https://julien.danjou.info

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From zmj1981123 at 163.com Wed Mar 14 02:18:12 2018 From: zmj1981123 at 163.com (zmj1981123) Date: Wed, 14 Mar 2018 10:18:12 +0800 (CST) Subject: [openstack-dev] +1 //Subject: [tricircle] Nominate change in tricircle core team Message-ID: <34515b1a.31ec.162224cfa75.Coremail.zmj1981123@163.com>

+1, thanks for Baisen's work on tricircle!
-------- Forwarding messages -------- From: "Vega Cai" Date: 2018-03-12 09:04:41 To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team

Hi team, I would like to nominate Baisen Song (songbaisen) for tricircle core reviewer. Baisen has actively joined the discussion of feature development and has contributed important patches since Queens, like resource deletion reliability and openstack-sdk new version adaption. I really think his experience will help us substantially improve tricircle. BR Zhiyuan -- BR Zhiyuan

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Tushar.Patil at nttdata.com Wed Mar 14 02:22:54 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Wed, 14 Mar 2018 02:22:54 +0000 Subject: [openstack-dev] [masakari] Masakari Project mascot ideas In-Reply-To: References: , Message-ID:

Hi, In total, 4 people attended the last IRC meeting and all of them voted for the St. Bernard dog. If anyone has not voted yet, please vote for the mascot now.

Options:
1) Asiatic black bear
2) Gekko: a gecko is able to regrow its tail when the tail is lost.
3) St. Bernard: the St. Bernard is famous as a rescue dog (Masakari rescues VM instances)

Thank you. Regards, Tushar Patil

________________________________ From: Bhor, Dinesh Sent: Wednesday, March 14, 2018 10:16:29 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] Masakari Project mascot ideas

Hi Sampath San, There is one more option which we discussed in yesterday's masakari meeting [1]: St. Bernard (dog) [2]. [1] http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-03-13-04.01.log.html#l-38 [2] https://en.wikipedia.org/wiki/St._Bernard_(dog) Thank you, Dinesh Bhor

________________________________ From: Sam P Sent: 13 March 2018 22:19:00 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] Masakari Project mascot ideas

Hi All, We started this discussion on IRC meeting few weeks ago and still no progress..;) (aspiers: thanks for the reminder!) Need mascot proposals for Masakari, see FAQ [1] for more info Current ideas: Origin of "Masakari" is related to hero from Japanese folklore [2]. Considering that relationship and to start the process, here are few ideas, (1) Asiatic black bear (2) Gekko : Geckos is able to regrow it's tail when the tail is lost. [1] https://www.openstack.org/project-mascots/ [2] https://en.wikipedia.org/wiki/Kintar%C5%8D --- Regards, Sampath

______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From tdecacqu at redhat.com Wed Mar 14 03:10:32 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 14 Mar 2018 03:10:32 +0000 Subject: [openstack-dev] [infra][all] New Zuul Depends-On syntax In-Reply-To: References: <87efmfz05l.fsf@meyer.lemoncheese.net> <005844f0-ea22-3e0b-fd2e-c1c889d9293b@inaugust.com> <2b04ea61-2b2c-d740-37e7-47215b10e3a0@nemebean.com> <87zi51v5uu.fsf@meyer.lemoncheese.net> <7bea8147-4d21-bbb3-7a28-a179a4a132af@redhat.com> <871si4czfe.fsf@meyer.lemoncheese.net> <20180219150341.676l7dxwskwu3uej@yuggoth.org> Message-ID: <1520996547.3ovlpswfr7.tristanC@fedora>

On February 20, 2018 1:35 am, Emilien Macchi wrote: > On Mon, Feb 19, 2018 at 7:03 AM, Jeremy Stanley wrote: > [...] > >> This is hopefully only a temporary measure? I think I've heard it >> mentioned that planning is underway to switch that CI system to Zuul >> v3 (perhaps after 3.0.0 officially releases soon). >> > > Adding Tristan and Fabien in copy, they know better about the roadmap. > --

Hi, We are indeed waiting for the official Zuul 3.0.0 release to ship the next version of Software Factory and deploy it for rdoproject.org. -Tristan

-------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL:

From namnh at vn.fujitsu.com Wed Mar 14 03:53:59 2018 From: namnh at vn.fujitsu.com (namnh at vn.fujitsu.com) Date: Wed, 14 Mar 2018 03:53:59 +0000 Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27 In-Reply-To: <20180313145426.GA14285@localhost.localdomain> References: <20180305234513.GA26473@localhost.localdomain> <20180313145426.GA14285@localhost.localdomain> Message-ID: <859c08e739614d2b89ac44087e6df8fa@G07SGEXCMSGPS06.g07.fujitsu.local>

Hello Paul, I am Nam from the Barbican team. I would like to report a problem with fedora-27. Currently, fedora-27 ships mariadb 10.2.12, and there is a bug in this version which causes Barbican database upgrades to fail [1]; the bug was fixed in 10.2.13 [2]. Would you mind updating the mariadb version before removing fedora-26?

[1] https://bugs.launchpad.net/barbican/+bug/1734329
[2] https://jira.mariadb.org/browse/MDEV-13508

Thanks, Nam

> -----Original Message----- > From: Paul Belanger [mailto:pabelanger at redhat.com] > Sent: Tuesday, March 13, 2018 9:54 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from > fedora-26 to fedora-27 > > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote: > > Greetings, > > > > A quick search of git shows your projects are using fedora-26 nodes for > testing. > > Please take a moment to look at gerrit[1] and help land patches. We'd > > like to remove fedora-26 nodes in the next week and to avoid broken > > jobs you'll need to approve these patches.
> > > > If your jobs are failing under fedora-27, please take the time to fix > > any issue or update said patches to make them non-voting. > > > > We (openstack-infra) aim to only keep the latest fedora image online, > > which changes approx. every 6 months. > > > > Thanks for your help and understanding, Paul > > > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open > > > Greetings, > > This is a friendly reminder about moving jobs to fedora-27. I'd like to remove > our fedora-26 images next week and if jobs haven't been migrated you may > start to see NODE_FAILURE messages while running jobs. Please take a > moment to merge the open changes or update them to be non-voting while > you work on fixes. > > Thanks again, > Paul > > ______________________________________________________________ > ____________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dong.wenjuan at zte.com.cn Wed Mar 14 05:57:11 2018 From: dong.wenjuan at zte.com.cn (dong.wenjuan at zte.com.cn) Date: Wed, 14 Mar 2018 13:57:11 +0800 (CST) Subject: [openstack-dev] Re: [tripleo] Blueprints for Rocky In-Reply-To: References: CAFsb3b7K3_q_3DgFoUnBy+9tG-AwJ7TLC1zyAR_gJw6xnEsXhA@mail.gmail.com Message-ID: <201803141357110506682@zte.com.cn>

Hi all, I proposed a BP about the integration with Vitrage: https://blueprints.launchpad.net/tripleo/+spec/tripleo-vitrage-integration And I posted a spec patch to gerrit: https://review.openstack.org/#/c/552425/ Please help to review, any comments are welcome. Thanks~ BR, dwj

Original Mail From: AlexSchultz To: OpenStack Development Mailing List (not for usage questions) Date: 2018-03-13 22:02 Subject: [openstack-dev] [tripleo] Blueprints for Rocky

Hey everyone, So we currently have 63 blueprints for currently targeted for Rocky[0]. Please make sure that any blueprints you are interested in delivering have an assignee set and have been approved. I would like to have the ones we plan on delivering for Rocky to be updated by April 3, 2018. Any blueprints that have not been updated will be moved out to the next cycle after this date. Thanks, -Alex [0] https://blueprints.launchpad.net/tripleo/rocky __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jaosorior at gmail.com Wed Mar 14 06:03:06 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Wed, 14 Mar 2018 08:03:06 +0200 Subject: [openstack-dev] [tripleo] TLS by default Message-ID:

Hello, As part of the changes proposed by the Security Squad [1], we'd like the deployment to use TLS by default. The first target is to get the undercloud to use it, so a patch has been proposed recently [2] [3]. So, just wanted to give a heads up to people. This should be just fine from a quickstart/testing point of view, since we explicitly set the value for autogenerating certificates in the undercloud [4] [5]. Note that there are also plans to change these defaults for the containerized undercloud and the overcloud.
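For those who want to try this out ahead of the default change, it essentially comes down to a couple of undercloud.conf knobs that quickstart already sets, per the templates linked in [4] and [5] below. A minimal sketch (illustrative only, not a complete undercloud.conf; check the TripleO undercloud docs for the authoritative option set):

    [DEFAULT]
    # Autogenerate a certificate for the undercloud public endpoints,
    # signed by a local (certmonger-managed) CA:
    generate_service_certificate = true
    certificate_generation_ca = local

With these set, the undercloud's public endpoints come up with TLS without the operator having to provide a certificate by hand.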
BR [1] https://etherpad.openstack.org/p/tripleo-security-squad [2] https://review.openstack.org/#/c/552382/ [3] https://review.openstack.org/552781 [4] https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/extras-common/defaults/main.yml#L15 [5] https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/undercloud-deploy/templates/undercloud.conf.j2#L117 -- Juan Antonio Osorio R. e-mail: jaosorior at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From xinni.ge1990 at gmail.com Wed Mar 14 06:12:28 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Wed, 14 Mar 2018 15:12:28 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi Horizon Team, I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin option, and submitted a patch for it. Could you please help to review the patch. https://bugs.launchpad.net/horizon/+bug/1755339 https://review.openstack.org/#/c/552259/ Thank you very much. Best Regards, Xinni On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny wrote: > Hi Kaz, > > Thanks for cleaning this up. I put +1 on both of these patches > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara > wrote: > >> Hi Ivan & Horizon folks, >> >> >> Now we are submitting a couple of patches to have the new xstatic modules. >> Let me request you to have review the following patches. >> We need Horizon PTL's +1 to move these forward. >> >> project-config >> https://review.openstack.org/#/c/551978/ >> >> governance >> https://review.openstack.org/#/c/551980/ >> >> Thanks in advance:) >> >> Regards, >> Kaz >> >> >> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski : >> > Yes, please do that. We can then discuss in the review about technical >> > details. >> > >> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >> wrote: >> >> >> >> Hi, Akihiro >> >> >> >> Thanks for the quick reply. >> >> >> >> I agree with your opinion that BASE_XSTATIC_MODULES should not be >> >> modified. >> >> It is much better to enhance horizon plugin settings, >> >> and I think maybe there could be one option like ADD_XSTATIC_MODULES. >> >> This option adds the plugin's xstatic files in STATICFILES_DIRS. >> >> I am considering to add a bug report to describe it at first, and give >> a >> >> patch later maybe. >> >> Is that ok with the Horizon team? >> >> >> >> Best Regards. >> >> Xinni >> >> >> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >> wrote: >> >>> >> >>> Hi Xinni, >> >>> >> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge : >> >>> > Hello Horizon Team, >> >>> > >> >>> > I would like to hear about your opinions about how to add new >> xstatic >> >>> > modules to horizon settings. >> >>> > >> >>> > As for Heat-dashboard project embedded 3rd-party files issue, thanks >> >>> > for >> >>> > your advices in Dublin PTG, we are now removing them and >> referencing as >> >>> > new >> >>> > xstatic-* libs. >> >>> >> >>> Thanks for moving this forward. >> >>> >> >>> > So we installed the new xstatic files (not uploaded as openstack >> >>> > official >> >>> > repos yet) in our development environment now, but hesitate to >> decide >> >>> > how to >> >>> > add the new installed xstatic lib path to STATICFILES_DIRS in >> >>> > openstack_dashboard.settings so that the static files could be >> >>> > automatically >> >>> > collected by *collectstatic* process. 
>> >>> > >> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >> >>> > openstack_dashboard/utils/settings.py and the relevant static fils >> are >> >>> > added >> >>> > to STATICFILES_DIRS before it updates any Horizon plugin dashboard. >> >>> > We may want new plugin setting keywords ( something similar to >> >>> > ADD_JS_FILES) >> >>> > to update horizon XSTATIC_MODULES (or directly update >> >>> > STATICFILES_DIRS). >> >>> >> >>> IMHO it is better to allow horizon plugins to add xstatic modules >> >>> through horizon plugin settings. I don't think it is a good idea to >> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon plugin >> >>> usages. It makes difficult to track why and where a xstatic module in >> >>> BASE_XSTATIC_MODULES is used. >> >>> Multiple horizon plugins can add a same entry, so horizon code to >> >>> handle plugin settings should merge multiple entries to a single one >> >>> hopefully. >> >>> My vote is to enhance the horizon plugin settings. >> >>> >> >>> Akihiro >> >>> >> >>> > >> >>> > Looking forward to hearing any suggestions from you guys, and >> >>> > Best Regards, >> >>> > >> >>> > Xinni Ge >> >>> > >> >>> > >> >>> > ____________________________________________________________ >> ______________ >> >>> > OpenStack Development Mailing List (not for usage questions) >> >>> > Unsubscribe: >> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >>> > >> >>> >> >>> >> >>> ____________________________________________________________ >> ______________ >> >>> OpenStack Development Mailing List (not for usage questions) >> >>> Unsubscribe: >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> >> >> >> >> -- >> >> 葛馨霓 Xinni Ge >> >> >> >> ____________________________________________________________ >> ______________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > >> > >> > ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- 葛馨霓 Xinni Ge -------------- next part -------------- An HTML attachment was scrubbed... URL: From 935540343 at qq.com Wed Mar 14 06:15:37 2018 From: 935540343 at qq.com (=?gb18030?B?X18gbWFuZ28u?=) Date: Wed, 14 Mar 2018 14:15:37 +0800 Subject: [openstack-dev] [ceilometer][gnocchi]-gnocchi installation failed. 
Message-ID: hi, I refer to: https://docs.openstack.org/ceilometer/pike/install/install-base-ubuntu.html to install gnocchi, Operation (apt-get installed gnocchi-api gnocchi-gnocchiclient), When I tried to launch the gnocc-api, I got the message. No gnocchi- API starts. Service :gnocchi-api unit. The service was not found. I checked /etc/init.d and no script gnocchi-api(although gnocchi-metricd is, and it works). Why is that? Please help me to solve this problem. Thank you very much. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam47priya at gmail.com Wed Mar 14 06:34:24 2018 From: sam47priya at gmail.com (Sam P) Date: Wed, 14 Mar 2018 15:34:24 +0900 Subject: [openstack-dev] [masakari] Masakari Project mascot ideas In-Reply-To: References: Message-ID: Nice idea. Thanks.. ​> 3) St. > Bernard: St. Bernard is famous as rescue dog (Masakari rescues VM instances) ​+1​ ​ I will confirm in advance whether we can use this as our mascot. --- Regards, Sampath On Wed, Mar 14, 2018 at 11:22 AM, Patil, Tushar wrote: > Hi, > > > Total 4 people attended last IRC meeting and all of them have voted for > St.Bernard Dog. > > > If someone has missed to vote, please vote for mascot now. > > > Options: > 1) Asiatic black bear > > 2) Gekko : Geckos is able to regrow it's tail when the tail is lost. > ​​ > 3) St. Bernard: St. Bernard is famous as rescue dog (Masakari rescues VM > instances) > > Thank you. > > > Regards, > > Tushar Patil > > > > ------------------------------ > *From:* Bhor, Dinesh > *Sent:* Wednesday, March 14, 2018 10:16:29 AM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [masakari] Masakari Project mascot ideas > > > Hi Sampath San, > > > There is one more option which we discussed in yesterdays masakari meeting > [1]: > > St. Bernard(Dog) [2]. > > > [1] http://eavesdrop.openstack.org/meetings/masakari/2018/ > masakari.2018-03-13-04.01.log.html#l-38 > > > [2] https://en.wikipedia.org/wiki/St._Bernard_(dog) > > > Thank you, > > Dinesh Bhor > > > ------------------------------ > *From:* Sam P > *Sent:* 13 March 2018 22:19:00 > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* [openstack-dev] [masakari] Masakari Project mascot ideas > > Hi All, > > We started this discussion on IRC meeting few weeks ago and still no > progress..;) > (aspiers: thanks for the reminder!) > > Need mascot proposals for Masakari, see FAQ [1] for more info > > Current ideas: Origin of "Masakari" is related to hero from Japanese > folklore [2]. > Considering that relationship and to start the process, here are few > ideas, > (1) Asiatic black bear > > (2) Gekko : Geckos is able to regrow it's tail when the tail is lost. > > [1] https://www.openstack.org/project-mascots/ > > Project Mascots - OpenStack is open source software for ... > > www.openstack.org > We are OpenStack. We’re also passionately developing more than 60 projects > within OpenStack. To support each project’s unique identity and visually > demonstrate ... > > [2] https://en.wikipedia.org/wiki/Kintar%C5%8D > > --- Regards, > Sampath > > > ______________________________________________________________________ > Disclaimer: This email and any attachments are sent in strictest confidence > for the sole use of the addressee and may contain legally privileged, > confidential, and proprietary data. 
If you are not the intended recipient, > please advise the sender by replying promptly to this email and then delete > and destroy this email and any attachments without any further use, copying > or forwarding. > > ______________________________________________________________________ > Disclaimer: This email and any attachments are sent in strictest confidence > for the sole use of the addressee and may contain legally privileged, > confidential, and proprietary data. If you are not the intended recipient, > please advise the sender by replying promptly to this email and then delete > and destroy this email and any attachments without any further use, copying > or forwarding. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Mar 14 07:09:20 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 14 Mar 2018 12:39:20 +0530 Subject: [openstack-dev] [masakari] Masakari Project mascot ideas In-Reply-To: References: Message-ID: +1 for St. Bernard Thanks, Abhishek On Wed, Mar 14, 2018 at 12:04 PM, Sam P wrote: > Nice idea. Thanks.. > ​> > 3) St. > Bernard: St. Bernard is famous as rescue dog (Masakari rescues > VM instances) > ​+1​ > ​ I will confirm in advance whether we can use this as our mascot. > > --- Regards, > Sampath > > > On Wed, Mar 14, 2018 at 11:22 AM, Patil, Tushar > wrote: > >> Hi, >> >> >> Total 4 people attended last IRC meeting and all of them have voted for >> St.Bernard Dog. >> >> >> If someone has missed to vote, please vote for mascot now. >> >> >> Options: >> 1) Asiatic black bear >> >> 2) Gekko : Geckos is able to regrow it's tail when the tail is lost. >> ​​ >> 3) St. Bernard: St. Bernard is famous as rescue dog (Masakari rescues VM >> instances) >> >> Thank you. >> >> >> Regards, >> >> Tushar Patil >> >> >> >> ------------------------------ >> *From:* Bhor, Dinesh >> *Sent:* Wednesday, March 14, 2018 10:16:29 AM >> *To:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* Re: [openstack-dev] [masakari] Masakari Project mascot ideas >> >> >> Hi Sampath San, >> >> >> There is one more option which we discussed in yesterdays masakari >> meeting [1]: >> >> St. Bernard(Dog) [2]. >> >> >> [1] http://eavesdrop.openstack.org/meetings/masakari/2018/ma >> sakari.2018-03-13-04.01.log.html#l-38 >> >> >> [2] https://en.wikipedia.org/wiki/St._Bernard_(dog) >> >> >> Thank you, >> >> Dinesh Bhor >> >> >> ------------------------------ >> *From:* Sam P >> *Sent:* 13 March 2018 22:19:00 >> *To:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* [openstack-dev] [masakari] Masakari Project mascot ideas >> >> Hi All, >> >> We started this discussion on IRC meeting few weeks ago and still no >> progress..;) >> (aspiers: thanks for the reminder!) >> >> Need mascot proposals for Masakari, see FAQ [1] for more info >> >> Current ideas: Origin of "Masakari" is related to hero from Japanese >> folklore [2]. >> Considering that relationship and to start the process, here are few >> ideas, >> (1) Asiatic black bear >> >> (2) Gekko : Geckos is able to regrow it's tail when the tail is lost. 
>> >> [1] https://www.openstack.org/project-mascots/
>>
>> [2] https://en.wikipedia.org/wiki/Kintar%C5%8D
>>
>> --- Regards,
>> Sampath
>>
>> ______________________________________________________________________
>> Disclaimer: This email and any attachments are sent in strictest
>> confidence
>> for the sole use of the addressee and may contain legally privileged,
>> confidential, and proprietary data. If you are not the intended recipient,
>> please advise the sender by replying promptly to this email and then
>> delete
>> and destroy this email and any attachments without any further use,
>> copying
>> or forwarding.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From holkina at selectel.ru Wed Mar 14 07:26:36 2018 From: holkina at selectel.ru (Tatiana Kholkina) Date: Wed, 14 Mar 2018 10:26:36 +0300 Subject: [openstack-dev] [neutron] Prevent ARP spoofing In-Reply-To: References: Message-ID: Sure, there is a way to enable ARP spoofing for a particular port or network, but it is impossible to make it enabled by default for all ports. That looks a bit complicated to me, and I think it would be better to be able to set the default port security via a config file. Best regards, Tatiana

2018-03-13 15:10 GMT+03:00 Claudiu Belu :

> Hi,
>
> Indeed ARP spoofing is prevented by default, but AFAIK, if you want it
> enabled for a port / network, you can simply disable the security groups on
> that neutron network / port.
>
> Best regards,
>
> Claudiu Belu
>
> ------------------------------
> *From:* Татьяна Холкина [holkina at selectel.ru]
> *Sent:* Tuesday, March 13, 2018 12:54 PM
> *To:* openstack-dev at lists.openstack.org
> *Subject:* [openstack-dev] [neutron] Prevent ARP spoofing
>
> Hi,
> I'm using the Ocata release of OpenStack, where the option
> prevent_arp_spoofing can be managed via conf. But later, in Pike, it was
> removed and it was decided to prevent spoofing by default.
> There are cases where security features should be disabled. As I can see,
> we can now use the port_security option for these cases. But this option
> should be set for a particular port or network on create. The default value
> is set to True [1] and it is impossible to change it. I'd like to
> suggest getting the default value for port_security [2] from a config option.
> It would be nice to know your opinion.
>
> [1] https://github.com/openstack/neutron-lib/blob/stable/queens/neutron_lib/api/definitions/port_security.py#L21
> [2] https://github.com/openstack/neutron/blob/stable/queens/neutron/objects/extensions/port_security.py#L24
>
> Best regards,
> Tatiana
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
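For reference, the knob suggested in the thread above would look like a standard oslo.config option. A minimal sketch, assuming a hypothetical option name and location (illustrative only, not an accepted neutron change):

    # Sketch: a config-driven default for port_security_enabled.
    # The option name and its home in neutron's config are assumptions.
    from oslo_config import cfg

    port_security_opts = [
        cfg.BoolOpt('port_security_enabled_default',
                    default=True,
                    help='Default value of port_security_enabled applied '
                         'to newly created networks and ports when the '
                         'API request does not specify one.'),
    ]

    def register_port_security_opts(conf=cfg.CONF):
        conf.register_opts(port_security_opts)

Deployments that want today's behaviour would simply leave the default at True.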
From julien at danjou.info Wed Mar 14 07:47:55 2018 From: julien at danjou.info (Julien Danjou) Date: Wed, 14 Mar 2018 08:47:55 +0100 Subject: [openstack-dev] OpenStack - Install gnocchi error. In-Reply-To: (mango.'s message of "Wed, 14 Mar 2018 10:17:52 +0800") References: Message-ID: On Wed, Mar 14 2018, __ mango. wrote:

> After the installation (apt-get install gnocchi-api gnocchi-gnocchiclient),
> when I tried to launch gnocchi-api, I got the message:
> Failed to start gnocchi-api.service: Unit gnocchi-api.service not found.
> I checked /etc/init.d and there is no gnocchi-api script (although
> gnocchi-metricd is there, and it's working properly).
>
> Why is that? Please help me to solve it. Thank you.

Sounds like an Ubuntu packaging issue, potentially. Did you try reading the documentation? https://gnocchi.xyz/operating.html#running-api-as-a-wsgi-application -- Julien Danjou ;; Free Software hacker ;; https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: 
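As the linked page explains, gnocchi-api is shipped as a WSGI application rather than a system service, which is why no init script is installed. A quick-test sketch of serving it directly (the module path of the WSGI application below is an assumption; check your gnocchi version, and use uwsgi or mod_wsgi for real deployments as the documentation describes):

    # Quick test only; production should front this with uwsgi or
    # mod_wsgi per the gnocchi operating documentation.
    from wsgiref.simple_server import make_server

    # Assumption: the WSGI entry point is exposed at gnocchi.rest.wsgi.
    from gnocchi.rest.wsgi import application

    if __name__ == '__main__':
        make_server('127.0.0.1', 8041, application).serve_forever()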
From slawek at kaplonski.pl Wed Mar 14 08:21:20 2018 From: slawek at kaplonski.pl (Sławomir Kapłoński) Date: Wed, 14 Mar 2018 09:21:20 +0100 Subject: [openstack-dev] Poll: S Release Naming In-Reply-To: <20180313235859.GA14573@localhost.localdomain> References: <20180313235859.GA14573@localhost.localdomain> Message-ID: <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> Hi, Are you sure this link is good? I just tried it and I got info that "Already voted", which isn't true in fact :) — Best regards Slawek Kaplonski slawek at kaplonski.pl

> Message written by Paul Belanger on 14.03.2018 at 00:58:
>
> Greetings all,
>
> It is time again to cast your vote for the naming of the S Release. This time
> is a little different, as we've decided to use a public polling option over
> per-user private URLs for voting. This means everybody should proceed to use
> the following URL to cast their vote:
>
> https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3
>
> Because this is a public poll, results will currently be viewable only by myself
> until the poll closes. Once closed, I'll post the URL making the results
> viewable to everybody. This was done to avoid everybody seeing the results while
> the public poll is running.
>
> The poll will officially end on 2018-03-21 23:59:59 [1], and results will be
> posted shortly after.
>
> [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst
> ---
>
> According to the Release Naming Process, this poll is to determine the
> community preferences for the name of the S release of OpenStack. It is
> possible that the top choice is not viable for legal reasons, so the second or
> later community preference could wind up being the name.
>
> Release Name Criteria
>
> Each release name must start with the letter of the ISO basic Latin alphabet
> following the initial letter of the previous release, starting with the
> initial release of "Austin". After "Z", the next name should start with
> "A" again.
>
> The name must be composed only of the 26 characters of the ISO basic Latin
> alphabet. Names which can be transliterated into this character set are also
> acceptable.
>
> The name must refer to the physical or human geography of the region
> encompassing the location of the OpenStack design summit for the
> corresponding release. The exact boundaries of the geographic region under
> consideration must be declared before the opening of nominations, as part of
> the initiation of the selection process.
>
> The name must be a single word with a maximum of 10 characters. Words that
> describe the feature should not be included, so "Foo City" or "Foo Peak"
> would both be eligible as "Foo".
>
> Names which do not meet these criteria but otherwise sound really cool
> should be added to a separate section of the wiki page, and the TC may make
> an exception for one or more of them to be considered in the Condorcet poll.
> The naming official is responsible for presenting the list of exceptional
> names for consideration to the TC before the poll opens.
>
> Exact Geographic Region
>
> The Geographic Region from where names for the S release will come is Berlin
>
> Proposed Names
>
> Spree (a river that flows through the Saxony, Brandenburg and Berlin states of
> Germany)
>
> SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin)
>
> Spandau (One of the twelve boroughs of Berlin)
>
> Stein (Steinstraße or "Stein Street" in Berlin; can also be conveniently
> abbreviated as 🍺)
>
> Steglitz (a locality in the south-western part of the city)
>
> Springer (Berlin is the headquarters of the Axel Springer publishing house)
>
> Staaken (a locality within the Spandau borough)
>
> Schoenholz (A zone in the Niederschönhausen district of Berlin)
>
> Shellhaus (A famous office building)
>
> Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg)
>
> Schiller (A park in the Mitte borough)
>
> Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood)
> (The adjective form, Saatwinkler, is also a really cool bridge, but
> that form is too long)
>
> Sonne (Sonnenallee is the name of a large street in Berlin crossing the former
> wall; also translates as "sun")
>
> Savigny (Common place in City-West)
>
> Soorstreet (Street in the Berlin district of Charlottenburg)
>
> Solar (Skybar in Berlin)
>
> See (Seestraße or "See Street" in Berlin)
>
> Thanks,
> Paul

From j.harbott at x-ion.de Wed Mar 14 08:34:07 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Wed, 14 Mar 2018 09:34:07 +0100 Subject: [openstack-dev] Poll: S Release Naming In-Reply-To:
<7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> References: <20180313235859.GA14573@localhost.localdomain> <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> Message-ID: 2018-03-14 9:21 GMT+01:00 Sławomir Kapłoński :
> Hi,
>
> Are you sure this link is good? I just tried it and I got info that "Already voted", which isn't true in fact :)

Comparing with previous polls, these should be personalized links that need to be sent out to each voter individually, so I agree that this looks like a mistake.

>> Message written by Paul Belanger on 14.03.2018 at 00:58:
>>
>> Greetings all,
>>
>> It is time again to cast your vote for the naming of the S Release.
>> [The rest of the quoted announcement is identical to the message above and is snipped here.]
From lijie at unitedstack.com Wed Mar 14 08:34:54 2018 From: lijie at unitedstack.com (Li Jie) Date: Wed, 14 Mar 2018 16:34:54 +0800 Subject: [openstack-dev] [nova] about rebuild instance booted from volume Message-ID: Hi all, This is the spec about backing up an instance booted from volume; anyone who is interested in boot-from-volume can help to review it. Any suggestion is welcome. The link is here: the backup spec: https://review.openstack.org/#/c/530214/ Best Regards Lijie -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lijie at unitedstack.com Wed Mar 14 08:42:24 2018 From: lijie at unitedstack.com (Li Jie) Date: Wed, 14 Mar 2018 16:42:24 +0800 Subject: [openstack-dev] [nova] about rebuild instance booted from volume Message-ID: Hi all, This is the spec about rebuilding an instance booted from volume. In the spec, there is a question about whether we should delete the old root volume. Anyone who is interested in boot-from-volume can help to review it. Any suggestion is welcome. Thank you! The link is here: the rebuild spec: https://review.openstack.org/#/c/532407/ Best Regards Lijie -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Wed Mar 14 09:05:34 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 14 Mar 2018 10:05:34 +0100 Subject: [openstack-dev] Poll: S Release Naming In-Reply-To: References: <20180313235859.GA14573@localhost.localdomain> <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> Message-ID: <49541945-e517-83ee-bec8-216ad669fea3@openstack.org> Jens Harbott wrote:
> 2018-03-14 9:21 GMT+01:00 Sławomir Kapłoński :
>> Hi,
>>
>> Are you sure this link is good? I just tried it and I got info that "Already voted", which isn't true in fact :)
>
> Comparing with previous polls, these should be personalized links that
> need to be sent out to each voter individually, so I agree that this
> looks like a mistake.

We crashed CIVS for the last naming with a private poll sent to all the Foundation membership, so the TC decided to use public (open) polling this time around. Anyone with the link can vote; nothing was sent to each of the voters individually.

The "Already voted" error might be due to CIVS limiting public polling to one entry per IP, and a colleague of yours already voted... Maybe try from another IP address?

-- Thierry Carrez (ttx)

From slawek at kaplonski.pl Wed Mar 14 09:16:30 2018 From: slawek at kaplonski.pl (Sławomir Kapłoński) Date: Wed, 14 Mar 2018 10:16:30 +0100 Subject: [openstack-dev] Poll: S Release Naming In-Reply-To: <49541945-e517-83ee-bec8-216ad669fea3@openstack.org> References: <20180313235859.GA14573@localhost.localdomain> <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> <49541945-e517-83ee-bec8-216ad669fea3@openstack.org> Message-ID: <88B4EEE3-8058-48AA-AB7E-5A77E6D932A3@kaplonski.pl> Indeed. I now tried from a different IP address and I was able to vote. Thx a lot for the help. — Best regards Slawek Kaplonski slawek at kaplonski.pl

> Message written by Thierry Carrez on 14.03.2018 at 10:05:
> [The message above, quoted in full, is snipped here.]
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zhipengh512 at gmail.com Wed Mar 14 09:31:44 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 14 Mar 2018 17:31:44 +0800 Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.03.14 Agenda (No Time Change For US) In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED5E8B5EF@fmsmsx122.amr.corp.intel.com> Message-ID: Update: 1. ZOOM link: - Part 1: https://zoom.us/j/511883630 (starting UTC 1400) - Part 2: https://zoom.us/j/741227793 2.
Agenda: - PoC demo from Shaohe - Rocky Work Assignment: (nova-cyborg interaction, programmability, multi-tenancy/quota, more drivers, metadata, GPU, documentation, testing)

On Wed, Mar 14, 2018 at 10:14 AM, Zhipeng Huang wrote:

> Yes, that would be one of the issues we need to discuss after the PoC demo :)
>
> On Wed, Mar 14, 2018 at 10:11 AM, Nadathur, Sundar <
> sundar.nadathur at intel.com> wrote:
>
>> Hi Howard,
>>
>> Can we discuss the possibility of using a filter/weigher that invokes the
>> Cyborg API, as we discussed during the Cyborg/Nova discussion at the PTG?
>>
>> This is line 56 in https://etherpad.openstack.org/p/cyborg-ptg-rocky-nova-cyborg-interaction .
>>
>> Regards,
>>
>> Sundar
>>
>> *From:* Zhipeng Huang [mailto:zhipengh512 at gmail.com]
>> *Sent:* Monday, March 12, 2018 1:28 AM
>> *To:* OpenStack Development Mailing List (not for usage questions) <
>> openstack-dev at lists.openstack.org>
>> *Cc:* Konstantinos Samaras-Tsakiris; Dutch Althoff
>> *Subject:* [openstack-dev] [cyborg]Weekly Team Meeting 2018.03.14 Agenda
>> (No Time Change For US)
>>
>> Hi Team,
>>
>> We will resume the team meeting this week. The meeting starting time is
>> still ET 10:00am/PT 7:00am, whereas in China it is moved one hour earlier
>> to 10:00pm. For Europe please refer to UTC 1400 as the baseline.
>>
>> This week we will have a special 2-hour meeting. In the first hour we
>> will have Shaohe demo the PoC the Intel dev team has built, and in the
>> second half we will confirm the tasks and milestones for Rocky based upon
>> the PTG discussion (summary sent out last Friday).
>>
>> ZOOM link will be provided before the meeting :)
>>
>> If there are any other topics anyone would like to propose, feel free to
>> reply to this email thread.
>>
>> --
>> Zhipeng (Howard) Huang
>>
>> Standard Engineer
>> IT Standard & Patent/IT Product Line
>> Huawei Technologies Co., Ltd
>> Email: huangzhipeng at huawei.com
>> Office: Huawei Industrial Base, Longgang, Shenzhen
>>
>> (Previous)
>> Research Assistant
>> Mobile Ad-Hoc Network Lab, Calit2
>> University of California, Irvine
>> Email: zhipengh at uci.edu
>> Office: Calit2 Building Room 2402
>>
>> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhipeng at huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipengh at uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

-- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co.,
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From newypei at gmail.com Wed Mar 14 09:36:38 2018 From: newypei at gmail.com (Yipei Niu) Date: Wed, 14 Mar 2018 17:36:38 +0800 Subject: [openstack-dev] [tricircle] Nominate change in tricircle core team (crh) Message-ID: +1. Best regards, Yipei -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bhagyashri.Shewale at nttdata.com Wed Mar 14 09:42:19 2018 From: Bhagyashri.Shewale at nttdata.com (Shewale, Bhagyashri) Date: Wed, 14 Mar 2018 09:42:19 +0000 Subject: [openstack-dev] [masakari] Masakari Project mascot ideas In-Reply-To: References: , , Message-ID: +1 for St. Bernard Regards, Bhagyashri Shewale ________________________________________ From: Patil, Tushar Sent: Wednesday, March 14, 2018 7:52:54 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] Masakari Project mascot ideas Hi, Total 4 people attended last IRC meeting and all of them have voted for St.Bernard Dog. If someone has missed to vote, please vote for mascot now. Options: 1) Asiatic black bear 2) Gekko : Geckos is able to regrow it's tail when the tail is lost. 3) St. Bernard: St. Bernard is famous as rescue dog (Masakari rescues VM instances) Thank you. Regards, Tushar Patil ________________________________ From: Bhor, Dinesh Sent: Wednesday, March 14, 2018 10:16:29 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [masakari] Masakari Project mascot ideas Hi Sampath San, There is one more option which we discussed in yesterdays masakari meeting [1]: St. Bernard(Dog) [2]. [1] http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-03-13-04.01.log.html#l-38 [2] https://en.wikipedia.org/wiki/St._Bernard_(dog) Thank you, Dinesh Bhor ________________________________ From: Sam P Sent: 13 March 2018 22:19:00 To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [masakari] Masakari Project mascot ideas Hi All, We started this discussion on IRC meeting few weeks ago and still no progress..;) (aspiers: thanks for the reminder!) Need mascot proposals for Masakari, see FAQ [1] for more info Current ideas: Origin of "Masakari" is related to hero from Japanese folklore [2]. Considering that relationship and to start the process, here are few ideas, (1) Asiatic black bear (2) Gekko : Geckos is able to regrow it's tail when the tail is lost. [1] https://www.openstack.org/project-mascots/ [http://www.openstack.org/themes/openstack/images/openstack-logo-full.png] Project Mascots - OpenStack is open source software for ... www.openstack.org We are OpenStack. We’re also passionately developing more than 60 projects within OpenStack. To support each project’s unique identity and visually demonstrate ... [2] https://en.wikipedia.org/wiki/Kintar%C5%8D --- Regards, Sampath ______________________________________________________________________ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. 
If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

From shaohe.feng at intel.com Wed Mar 14 09:50:49 2018 From: shaohe.feng at intel.com (Feng, Shaohe) Date: Wed, 14 Mar 2018 09:50:49 +0000 Subject: [openstack-dev] [cyborg][glance][nova]cyborg FPGA management flow discussion. In-Reply-To: References: <7B5303F69BB16B41BB853647B3E5BD7054026FB2@SHSMSX101.ccr.corp.intel.com> <7B5303F69BB16B41BB853647B3E5BD7054027D8E@SHSMSX101.ccr.corp.intel.com> Message-ID: <7B5303F69BB16B41BB853647B3E5BD70540347A2@SHSMSX101.ccr.corp.intel.com> Hi all. This is the agenda for today's discussion: https://etherpad.openstack.org/p/cyborg-nova-poc BR Feng, Shaohe

From: Zhipeng Huang [mailto:zhipengh512 at gmail.com] Sent: 8 March 2018 12:08 To: Feng, Shaohe Cc: openstack-dev at lists.openstack.org; openstack-operators at lists.openstack.org; Du, Dolpher; Ding, Jian-feng; Sun, Yih Leong; Nadathur, Sundar; Dutch; Rushil Chugh; Nguyen Hung Phuong; Justin Kilpatrick; Ranganathan, Shobha; zhuli; bao.yumeng at zte.com.cn; Li Liu; xiaodongpan at tencent.com; kong.wei2 at zte.com.cn; li.xiang2 at zte.com.cn Subject: Re: [openstack-dev][cyborg][glance][nova]cyborg FPGA management flow discussion.

Thanks Shaohe, Let's schedule a video conf session next week.

On Thu, Mar 8, 2018 at 11:41 AM, Feng, Shaohe wrote: Hi All: The POC is here: https://github.com/shaohef/cyborg BR Shaohe Feng

_____________________________________________ From: Feng, Shaohe Sent: 12 February 2018 15:06 To: openstack-dev at lists.openstack.org; openstack-operators at lists.openstack.org Cc: Du, Dolpher; Zhipeng Huang; Ding, Jian-feng; Sun, Yih Leong; Nadathur, Sundar; Dutch; Rushil Chugh; Nguyen Hung Phuong; Justin Kilpatrick; Ranganathan, Shobha; zhuli; bao.yumeng at zte.com.cn; xiaodongpan at tencent.com; kong.wei2 at zte.com.cn; li.xiang2 at zte.com.cn; Feng, Shaohe Subject: [openstack-dev][cyborg][glance][nova]cyborg FPGA management flow discussion.

Now I am working on an FPGA management POC with Dolpher. We have finished some code and have had discussions with Li Liu and some cyborg developers. Here are some of the discussions:

image management
1. The user should upload the FPGA image to glance and set the tags as follows. There are two suggestions to upload an FPGA image:
A. Use the raw glance API, like:
$ openstack image create --file mypath/FPGA.img fpga.img
$ openstack image set --tag FPGA --property vendor=intel --property type=crypto 58b813db-1fb7-43ec-b85c-3b771c685d22
The image must have the "FPGA" tag and an accelerator type (such as type=crypto).
B. cyborg supports a new API to upload an image. This API will wrap the glance API, include the above steps, and also make an image record in its local DB.
2. The cyborg agent/conductor gets the FPGA image info from glance. There are also two suggestions to get the FPGA image info:
A. Use the raw glance API. Cyborg will get the images by FPGA tag and timestamp periodically and store them in its local cache. It will use the image tags and properties to form placement traits and resource_class names.
B. Store the information when cyborg's new upload API is called.
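A minimal sketch of what the periodic query in option 2.A could look like with python-glanceclient (the session handling, cache shape, and property names here are assumptions for illustration, not actual cyborg code):

    # Sketch: fetch FPGA bitstream images by tag and refresh a local cache.
    from glanceclient import client as glance_client

    def fetch_fpga_images(session, last_sync):
        glance = glance_client.Client('2', session=session)
        # Glance v2 supports filtering the image list by tag.
        for image in glance.images.list(filters={'tag': ['FPGA']}):
            # ISO 8601 timestamps compare chronologically as strings,
            # so only refresh entries changed since the last sync.
            if image.get('updated_at', '') > last_sync:
                yield {'id': image['id'],
                       'vendor': image.get('vendor'),
                       'type': image.get('type')}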
3. Image download: call the glance image download API to a local file, and make a corresponding md5 file for checksum.

GAP in image management: the related glance image client is missing in cyborg.

resource report management for the scheduler:
1. The cyborg agent/conductor needs to synthesize all useful information from the FPGA driver and the image information.
The traits will be like: CUSTOM_FPGA, CUSTOM_ACCELERATOR_CRYPTO.
The resource_class will be like: CUSTOM_FPGA_INTEL_PF, CUSTOM_FPGA_INTEL_VF
{"inventories": {"CUSTOM_FPGA_INTEL_PF": { "allocation_ratio": 1.0, "max_unit": 4, "min_unit": 1, "reserved": 0, "step_size": 1, "total": 4 } } }

Accelerator claim and release:
1. Cyborg will support the related APIs for accelerator claim and release. They can take the following parameters:
nodename: which host the accelerator is located on; it is required.
type: the accelerator type; cyborg can get the image uuid from it. It is optional.
image uuid: the uuid of the FPGA bitstream image. It is optional.
traits: the traits info that cyborg reports to placement.
resource_class: the resource_class name that is reported to placement.
The call returns the address of the accelerator; at present, this is the PCIE_ADDRESS.
2. When claiming an accelerator, if type and image are None, cyborg will not program the FPGA for the user.

FPGA accelerator program API: We still need to support an independent program API for some specific scenarios. For example, as an FPGA developer, I will change my Verilog logic frequently and need to do verification on my guest. I upload my new bitstream image to glance, and call cyborg to program my FPGA accelerator.

End user operations follow:
1. Upload a bitstream image to glance if necessary and set its tags (at least FPGA is required) and properties, such as: --tag FPGA --property vendor=intel --property type=crypto
2. List the FPGA-related traits and resource_class names via the placement API, such as the "CUSTOM_FPGA_INTEL_PF" resource_class name and the "CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO" traits.
3. Create a new flavor with the expected traits and resource_class as extra specs, such as: "resources<n>:CUSTOM_FPGA_INTEL_PF=2" (where n is an integer or an empty string) and "required:CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO".
4. Create the VM with this flavor.
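A sketch of steps 3 and 4 with python-novaclient; the exact extra-spec syntax for required traits was still under discussion at the time, so the trait key below (and the microversion) are assumptions rather than the settled design:

    # Sketch: create a flavor requesting two Intel FPGA PFs and boot with it.
    from novaclient import client as nova_client

    def boot_fpga_server(session, image_id, net_id):
        nova = nova_client.Client('2.60', session=session)  # version assumed
        flavor = nova.flavors.create(name='fpga.crypto',
                                     ram=4096, vcpus=4, disk=40)
        # Placement-style resource request; the trait key is an assumption.
        flavor.set_keys({'resources:CUSTOM_FPGA_INTEL_PF': '2',
                         'trait:CUSTOM_HW_CRYPTO': 'required'})
        return nova.servers.create(name='fpga-guest', image=image_id,
                                   flavor=flavor.id,
                                   nics=[{'net-id': net_id}])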
BR Shaohe Feng

-- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cdent+os at anticdent.org Wed Mar 14 10:35:13 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 14 Mar 2018 10:35:13 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] placement update 18-10 In-Reply-To: References: Message-ID: On Mon, 12 Mar 2018, Tetsuro Nakamura wrote:

> # Questions
>> What's the status of shared resource providers? Did we even talk
>> about that in Dublin?
>
> In terms of bug fixes related to allocation candidates, I'll try to answer
> that question :)

Thanks very much for doing this.

> * https://review.openstack.org/#/c/533396
> AllocationCandidates.get_by_filters ignores shared RPs when the RC exists
> in both places

I've just manually rebased the stack that includes the above to account for the move of the resource provider objects, which has caused merge conflicts all over the place.

> Besides these bugs, how we collaborate and merge the existing logic of shared
> resource providers with the nested resource provider logic now being constructed
> remains one of the challenges in Rocky, in my understanding.

Indeed.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent

From tpb at dyncloud.net Wed Mar 14 11:16:48 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 14 Mar 2018 07:16:48 -0400 Subject: [openstack-dev] [manila][ptg] Rocky PTG summary Message-ID: <20180314111648.r3mzqm6obcmztfoa@barron.net> We had a good showing [1] at the Rocky PTG in Dublin. Most of us see each other face-to-face rarely, and we had some (even long-time) contributors come to the PTG for the first time or join manila from other projects! We had a good time together [2], took on some tough subjects, and planned out our approach to Rocky. The following summarizes our main discussions. For the raw discussion topic/log etherpad see [3], or for video of the team in action see [4]. This summary has also been rendered in this etherpad: https://etherpad.openstack.org/p/manila-rocky-ptg-summary Please follow up in the etherpad with corrections or additions, especially where we've missed a perspective or interpretation.

== Queens Retrospective ==
Summary [5] shows focus on maintaining quality and integrity of the project while at the same time seeking ways to encourage developer participation, new driver engagement, and adoption of manila in real deployments.

== Rocky Schedule ==
- We'll keep the same project-specific deadlines as Queens:
* Spec freeze at Rocky-1 milestone
* New Driver Submission Freeze at Rocky-2 milestone
* Feature Proposal Freeze at release-7 week (two weeks before Rocky-3 milestone)

== Cross Project Goals ==
- Manila met the Queens goals (policy in code [6] and the split of tempest plugins into their own repos [7]).
- For the Rocky mox removal goal [8] we have no direct usage of mox anymore but need to track the transitive dependency of the manila-ui plugin on mox via horizon [9]
- We have already met the minimum Rocky mutable configuration goal [10] in that we have general support for toggling debug logging without a restart. We agreed that additional mutable configuration options should be proposed on a case-by-case basis, with use cases and supporting arguments to the effect that they are indeed safe to be treated as mutable.

== Documentation Gaps ==
- amito's experience introducing the new Infinidat driver in Queens shows significant gaps in our doc for new drivers
- jungleboyj proposed that cinder will clean up its onboarding doc, including its wiki for how to contribute a driver [11]
== Documentation Gaps == - amito's experience introducing the new Infinidat driver in Queens shows significant gaps in our doc for new drivers - jungleboyj proposed that cinder will clean up its onboarding doc including its wiki for how to contribute a driver [11] - amito will work with the manila community to port over this information and identify any remaining gaps - patrickeast will be adding a Pure back end in Rocky and can help identify gaps - we agreed to work with cinder to drive consistency in 'Contributor Guide' format and subject matter. == Python 3 == - Distros are dropping support for python 2, completely, between now and 2020 so OpenStack projects we need to start getting ready now [12] - Our main exposure is in manila-ui where we still run unit tests with python 2 only - Also need to add a good set of python 3 tempest tests for manila proper - CentOS jobs will need to be replaced with stable Fedora jobs - vkmc will drive this; overall goal may take more than one release == NFSExportsHelper == - bswartz has a better implementation - in discussion he developed a preliminary plan for migrating users from the old to the new implementation - impacts generic and lvm drivers, arguably reference only - bswartz will communicate any impact to openstack-dev, openstack-operators, and openstack-users mailing lists == Quota Resource Usage Tracking == - we inherited our reservation/commit/rollback system from Cinder who in turn took theirs from Nova - it is buggy, making reservations in one service and doing commit/rollback in scattered places in another service. Customer bugs with quotas are painful and confidence that they are actually fixed is low. - melwitt and dansmith explained how Nova has now abandoned this system in favor of actual resource counting in the api service - we intend to explore the possibility of implementing a similar system as the new Nova approach; cinder is exploring this as well - can be implemented as bug fixes if it's clean and easy to understand == Replacing rootwrap with privsep == - What's in it for manila? - Nova says it improves performance; Cinder says it harms performance :) - It serializes operations so the performance impact depends on how long the elevated privilege operations run. - We need to study our codebase more to understand impact; not a Rocky goal for us to implement this. == Huawei proposal to support more access rule attributes == * access levels like all_squash / no all_squash Most but not all vendors can support these. We agreed that although opaque metadata on access rules _could_ be used to allow manila forks to implement such support opaquely to manila proper, this is a generally useful characteristic, not something only useful for Huawei private cloud. So it should be implemented using new public extra specs and back end capability checking in the scheduler in order to avoid error cases with back ends that cannot support the capabilities in question. * ordering semantics for access rules to dis-ambiguate rule sets where incompatible access modes (like r/w and r/o) are applied to the same range of addresses - We recognized that there may be cases where a cloud or distribution may need to extend the upstream manila with features that are not supported upstream and observed that in general wsgi extensions would not be sufficient to meet these needs. Metatdata fields that "mean something" to the forked distribution but which are opaque to manila proper could be used to address these needs. 
== Replacing rootwrap with privsep ==
- What's in it for manila?
- Nova says it improves performance; Cinder says it harms performance :)
- It serializes operations, so the performance impact depends on how long the elevated-privilege operations run.
- We need to study our codebase more to understand the impact; not a Rocky goal for us to implement this.

== Huawei proposal to support more access rule attributes ==
* access levels like all_squash / no all_squash
Most but not all vendors can support these. We agreed that although opaque metadata on access rules _could_ be used to allow manila forks to implement such support opaquely to manila proper, this is a generally useful characteristic, not something only useful for Huawei private cloud. So it should be implemented using new public extra specs and back end capability checking in the scheduler, in order to avoid error cases with back ends that cannot support the capabilities in question (see the sketch at the end of this section).
* ordering semantics for access rules, to disambiguate rule sets where incompatible access modes (like r/w and r/o) are applied to the same range of addresses
- We recognized that there may be cases where a cloud or distribution may need to extend the upstream manila with features that are not supported upstream, and observed that in general wsgi extensions would not be sufficient to meet these needs. Metadata fields that "mean something" to the forked distribution but which are opaque to manila proper could be used to address these needs.
- Huawei will submit specs for these access rule attributes, as well as for metadata for access rules (though the latter will not be used for *these* features), and we will prioritize their review.
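A hypothetical sketch of the extra-spec/capability matching described above (the extra-spec and capability names are invented for illustration; real manila capability extra specs use the 'capabilities:' scope with '<is> True'-style matching, and the final spec may differ):

    # Sketch: reject hosts whose back ends do not report a required
    # access-rule capability advertised via share-type extra specs.
    def host_supports_all_squash(host_capabilities, share_type):
        wanted = share_type.get('extra_specs', {}).get(
            'capabilities:all_squash_support')
        if wanted is None:
            return True  # the share type does not require the capability
        reported = host_capabilities.get('all_squash_support', False)
        # Extra-spec values arrive as strings, so normalize both sides.
        return str(reported).lower() == str(wanted).lower()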
- Spec deadline does not apply to this test-only work. == Openstack Client Integration == - We are behind other projects in having no OSC support. - May be able to get an outreachy intern to work on this. - Technical issues: * need some basic compatability with manila microversions * need to be able to run manila standalone as well - Can pursue this opportunistically, outside of release waterfall spec approval cadence. == Shade Support == - tbarron, patrickeast, others find shade and ansible roles built on shade super useful but manila has not shade support - We agreed to support efforts to provide shade / ansible support for manila share service opportunistically, outside normal release cadence == Races == - We are seeing random CI failures in the dummy driver - These may due to races in the manager since no back end drivers are exercised. - Let's file bugs on these and investigate. == Manila UI and django requirements bump [15] == - Last time we bumped minimum django requirements it didn't go smoothly in manila ui - Mostly just a headsup == Testing multisegment binding in the gate [16] == - Challenging - Community thinks gate testing is probably not a great use of community resources but third parties with customers using this for their drivers should be motivated to do so. == Support for non-nova consumers of file shares == - Hot area, increases the value of vendors and back ends investing in manila drivers if they can be used outside OpenStack. - Options: * OpenStack / K8s side-by-side: manila as part of full OpenStack provides shares to k8s, perhaps using kuryr to extend neutron networks into k8s * standalone manila / cinder as software defined storage appliance running with mysql and rabbitmq but without keystone (NOAUTH) and the rest of OpenStack * manilalib (like cinderlib) inside a persistence service (for model updates, driver private data) that can be used by a stateless CSI driver * other ... * use of any of the above with OpenSDS [17]. Manila / Cinder core xing-yang is now working full time on SDS. == Mount Automation == - Pursued nova support in mitaka, were blocked, never pursued further. - We may get this "for free" when providing mounts to container workloads (as in kubernetes where mounts are done by the hosts and containers get automated bind mounts to the shares). - For traditional nova workloads heat / ansible may be our best available option. == IPv6 Fulfillment == - In Queens EMC and NetApp and lvm back ends added IPv6 support. - In Rocky Huawei, Pure, Ceph Native, and Ceph NFS expect to add IPv6 support. == MOD_WSGI == - We discussed when to use it in CI jobs and agreed that we need to cover both MOD_WSGI and non MOD_WSGI cases, and we are currently doing that. == Migration to StoryBoard [18] == - launchpad is up to ~1.75 m bugs now; storyboard starts numbering at 2 m; import of our launchpad bugs into storyboard will be easier if we act sooner rather than later - We will look at the storyboard sandbox [19] and see if it can meet our needs. - diablorojo will do a test migration and when she reports back we can consider next steps == Zuul v3 migration == - we need to add changes related to jobs config in manila-tempest-plugin - Can maked progress incrementally - no need to e.g. break 3rd party jobs by forcing everyone to change at once. 
- Start with infra based jobs == Priorities for the Rocky Release Cycle == - Get manila-ui ready for python 3 and start converting tempest jobs - Get parity with cinder on documentation and forge agreement with them on remaining gaps and plan of action. - Explore nova style quota usage system; can implement opportunistically after spec deadlines if we have agreement on a solution. - Review NFSExportsHelper and migration proposal, merge if possible. - Review Huawei extra access rule attribute specs, merge if possible. - Improve testing, especially scenario tests - OSC client, shade / ansible (pursue opportunistically) - new IPv6 driver support - investigate StoryBoard migration, move ahead if feasible - pursue path to production quality open source DHSS=True back end == Action Items == - tbarron will develop etherpad for priority reviews and summary dashboard incorporating the past review etherpads / gerrit dashboards that bswartz supplied - ganso will create an ehterpad for collaboration on development of reviewer/contributor checklists - tbarron will check how cinder fixed log filtering issue - tbarron will update releases.openstack.org with manila-specific schedule - amito, patrickeast will help us identify gaps in new driver doc - vkmc will track removal of mox in Horizon - bswartz will communicate impact of new NFSExportsHelper to email lists - anyone: explore new-style quota usage system; implement it opportunistically - vkmc will see if we can get an outreachy intern to work on OSC for manila - diablorojo will do a test migration of manila projects to storyboard and report back == Footnotes == [1] 10+ people in the room at almost all times, sometimes almost double. [2] For pictures see: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128099.html [3] https://etherpad.openstack.org/p/manila-rocky-ptg [4] https://youtu.be/HEX9znj4-wM [5] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128232.html [6] https://governance.openstack.org/tc/goals/queens/policy-in-code.html [7] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html [8] https://governance.openstack.org/tc/goals/rocky/mox_removal.html [9] https://etherpad.openstack.org/p/horizon-unittest-mock-migration [10] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html [11] https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver [12] https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects [13] https://review.openstack.org/#/c/504987/ [14] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128064.html [15] http://lists.openstack.org/pipermail/openstack-dev/2018-February/127421.html [16] https://bugs.launchpad.net/manila/+bug/1747695 [17] https://docs.google.com/presentation/d/1zix__I4bUyZQpGe31Wlmv0pyvBOXacULmVbQlHaNQvo/edit?usp=sharing [18] https://docs.openstack.org/infra/storyboard/migration.html [19] https://storyboard-dev.openstack.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dtantsur at redhat.com Wed Mar 14 11:50:06 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 14 Mar 2018 12:50:06 +0100 Subject: [openstack-dev] Poll: S Release Naming In-Reply-To: <20180313235859.GA14573@localhost.localdomain> References: <20180313235859.GA14573@localhost.localdomain> Message-ID: Hi, I suspect that S-Bahn may be a protected (copyright, trademark, whatever) name. 
Did you have a chance to check it?

On 03/14/2018 12:58 AM, Paul Belanger wrote:
> Greetings all,
>
> It is time again to cast your vote for the naming of the S Release.
> [The rest of the quoted announcement is identical to the message above and is snipped here.]
From dtantsur at redhat.com Wed Mar 14 11:51:45 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 14 Mar 2018 12:51:45 +0100 Subject: [openstack-dev] Poll: S Release Naming In-Reply-To: <49541945-e517-83ee-bec8-216ad669fea3@openstack.org> References: <20180313235859.GA14573@localhost.localdomain> <7E7A7CA7-7A5D-4428-95CF-6E47F31F96F3@kaplonski.pl> <49541945-e517-83ee-bec8-216ad669fea3@openstack.org> Message-ID: <2af68aff-1172-5d30-e9ba-1c30300ce96d@redhat.com> On 03/14/2018 10:05 AM, Thierry Carrez wrote:
> Jens Harbott wrote:
>> 2018-03-14 9:21 GMT+01:00 Sławomir Kapłoński :
>>> Hi,
>>>
>>> Are you sure this link is good? I just tried it and I got info that "Already voted", which isn't true in fact :)
>>
>> Comparing with previous polls, these should be personalized links that
>> need to be sent out to each voter individually, so I agree that this
>> looks like a mistake.
>
> We crashed CIVS for the last naming with a private poll sent to all the
> Foundation membership, so the TC decided to use public (open) polling
> this time around. Anyone with the link can vote, nothing was sent to
> each of the voters individually.
>
> The "Already voted" error might be due to CIVS limiting public polling
> to one entry per IP, and a colleague of yours already voted... Maybe try
> from another IP address ?

I don't think every small company has an unlimited pool of IP addresses. Neither do people working from home with a big internet provider.

From dtantsur at redhat.com Wed Mar 14 11:52:41 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 14 Mar 2018 12:52:41 +0100 Subject: [openstack-dev] [tripleo] TLS by default In-Reply-To: References: Message-ID: <5ecd3cd3-6732-8ad2-c29f-915f9b86c7f1@redhat.com> Just to clarify: only for public endpoints, right? I don't think e.g. ironic-python-agent can talk to self-signed certificates yet.
On 03/14/2018 07:03 AM, Juan Antonio Osorio wrote:
> Hello,
>
> As part of the proposed changes by the Security Squad [1], we'd like the
> deployment to use TLS by default.
>
> The first target is to get the undercloud to use it, so a patch has been
> proposed recently [2] [3]. So, just wanted to give a heads up to people.
>
> This should be just fine from a quickstart/testing point of view, since we
> explicitly set the value for autogenerating certificates in the undercloud
> [4] [5].
>
> Note that there are also plans to change these defaults for the
> containerized undercloud and the overcloud.
>
> BR
>
> [1] https://etherpad.openstack.org/p/tripleo-security-squad
> [2] https://review.openstack.org/#/c/552382/
> [3] https://review.openstack.org/552781
> [4] https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/extras-common/defaults/main.yml#L15
> [5] https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/undercloud-deploy/templates/undercloud.conf.j2#L117
> --
> Juan Antonio Osorio R.
> e-mail: jaosorior at gmail.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From jean-philippe at evrard.me  Wed Mar 14 12:01:46 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Wed, 14 Mar 2018 12:01:46 +0000
Subject: [openstack-dev] [openstack-ansible] Meetings change (PTG discussion
 follow-up)
In-Reply-To: References: Message-ID:

Hello,

We discussed the miscommunication problem at the PTG, and we agreed that the
focus-of-the-week summary solves much of it in terms of clarity. I am not
sure we need to send an ML summary if everything is recorded in the meeting
each week: people can just browse the meeting logs for this info.

I have no strong opinion about office hours:
- they are more formal than our ad-hoc discussions (more than necessary?);
- they help with timezones: we can have different office hours
  (US based/Europe based) for discussing things;
- they can be reported on during Tuesday's meeting.

Let's discuss this in the channel and validate next Tuesday, I guess :)

Because there were no strong objections to the meeting time change/format,
we should go ahead with the change to the community meetings. So from now
on, until daylight saving applies to Europe, we should have the meetings on
Tuesday at 5PM UTC.

Thanks everyone!

On 6 March 2018 at 16:08, Amy Marrich wrote:
> JP,
>
> When the community meeting was moved to once a month there was a lot of
> miscommunication as a result. If a weekly review of the channel
> discussions is going to be sent to the mailing list, I think that's a good
> alternative, but the conversations still need to take place, with as many
> people involved as possible. What about having office hours?
>
> Amy (spotz)
>
> On Tue, Mar 6, 2018 at 9:51 AM, Jean-Philippe Evrard
> wrote:
>>
>> Hello,
>>
>> During the PTG, we discussed changing our meetings.
>> I'd like to have a written record in our mailing lists, showing what
>> we discussed, and what we proposed to change. I propose we validate
>> those changes if they get no opposition in the next 7 days (deadline:
>> 13 March).
>>
>> What we discussed was:
>> - Should the meetings be rescheduled, and at what time;
>> - Should the meetings alternate between US- and Europe-friendly
>>   timezones;
>> - What is the purpose/expected outcome of those meetings;
>> - What is the reason the attendance is low.
>>
>> The summary is the following:
>> - The expected outcome of bug triage is currently (drumroll....)
>>   actively triaging bugs, which produces better deliverables (what a
>>   surprise!).
>> - The expected outcome of the community meeting is to discuss what we
>>   actively need to work on together, but we already have these kinds of
>>   conversations ad hoc in the channel. So if we summarize things on a
>>   regular basis to make sure everyone is aware of the conversations, we
>>   should be good.
>> - The timezone-friendly alternation won't improve attendance.
>> - Right now, the Europe meetings can be postponed by one hour, but the
>>   decision should be re-discussed when daylight saving changes.
>> - A lot of people have meetings at 4PM UTC right now.
>>
>> As such, here is the PTG proposed change:
>> - Moving the bug triage meeting to 5PM UTC until the next daylight
>>   saving change.
>> - Keep the "Focus of the week" section of the bug triage, to list what
>>   we discussed in the week (if more conversations have to happen, they
>>   can happen just after the bug triage).
>> - Removing the community meeting.
>>
>> Any opposition there? If we are all okay, I will update our procedures
>> next week.
>>
>> Best regards,
>> JP
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From jaosorior at gmail.com  Wed Mar 14 12:07:52 2018
From: jaosorior at gmail.com (Juan Antonio Osorio)
Date: Wed, 14 Mar 2018 14:07:52 +0200
Subject: [openstack-dev] [tripleo] TLS by default
In-Reply-To: <5ecd3cd3-6732-8ad2-c29f-915f9b86c7f1@redhat.com>
References: <5ecd3cd3-6732-8ad2-c29f-915f9b86c7f1@redhat.com>
Message-ID:

Correct, only public endpoints.

On Wed, Mar 14, 2018 at 1:52 PM, Dmitry Tantsur wrote:

> Just to clarify: only for public endpoints, right? I don't think e.g.
> ironic-python-agent can talk to endpoints with self-signed certificates
> yet.
>
> On 03/14/2018 07:03 AM, Juan Antonio Osorio wrote:
>
>> [snip - quoted in full above]
--
Juan Antonio Osorio R.
e-mail: jaosorior at gmail.com

From dtantsur at redhat.com  Wed Mar 14 12:11:18 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 14 Mar 2018 13:11:18 +0100
Subject: [openstack-dev] [tripleo] Blueprints for Rocky
In-Reply-To: References: Message-ID: <8cfe456a-3be3-4d84-4bc7-7c84ab94b50e@redhat.com>

Hi Alex,

I have two small ironic-related blueprints pending approval:
https://blueprints.launchpad.net/tripleo/+spec/ironic-rescue
https://blueprints.launchpad.net/tripleo/+spec/networking-generic-switch

and one larger:
https://blueprints.launchpad.net/tripleo/+spec/ironic-inspector-overcloud

Could you please check them?

I would also like to talk about the possibility of enabling cleaning by
default in the undercloud, but I guess it deserves a separate thread.

On 03/13/2018 02:58 PM, Alex Schultz wrote:
> Hey everyone,
>
> So we currently have 63 blueprints targeted for Rocky [0]. Please make
> sure that any blueprints you are interested in delivering have an
> assignee set and have been approved. I would like to have the ones we
> plan on delivering for Rocky to be updated by April 3, 2018. Any
> blueprints that have not been updated will be moved out to the next
> cycle after this date.
>
> Thanks,
> -Alex
>
> [0] https://blueprints.launchpad.net/tripleo/rocky
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From jeremyfreudberg at gmail.com  Wed Mar 14 12:14:38 2018
From: jeremyfreudberg at gmail.com (Jeremy Freudberg)
Date: Wed, 14 Mar 2018 08:14:38 -0400
Subject: [openstack-dev] Poll: S Release Naming
In-Reply-To: References: <20180313235859.GA14573@localhost.localdomain>
Message-ID:

Hi Dmitry,

According to Wikipedia [0], the trademark was removed. The citation [1] is
actually inaccurate; it was not the final ruling. Regardless, [2] seems to
reflect the final result, which is that the trademark is cancelled.

Hope this helps.
[0] https://en.wikipedia.org/wiki/S-train#Germany,_Austria_and_Switzerland
    (end of the paragraph)
[1] http://juris.bundespatentgericht.de/cgi-bin/rechtsprechung/document.py?Gericht=bpatg&Art=en&Datum=Aktuell&nr=23159&pos=1&anz=323&Blank=1.pdf
[2] http://www.eurailpress.de/news/bahnbetrieb/single-view/news/bundesgerichtshof-db-verliert-marke-s-bahn.html

On Wed, Mar 14, 2018 at 7:50 AM, Dmitry Tantsur wrote:

> Hi,
>
> I suspect that S-Bahn may be a protected (copyright, trademark, whatever)
> name. Did you have a chance to check it?
>
> On 03/14/2018 12:58 AM, Paul Belanger wrote:
>
>> Greetings all,
>>
>> [snip - full poll text quoted above]
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mdulko at redhat.com  Wed Mar 14 12:27:55 2018
From: mdulko at redhat.com (Michał Dulko)
Date: Wed, 14 Mar 2018 13:27:55 +0100
Subject: [openstack-dev] [kuryr] [os-vif] kuryr-kubernetes gates unblocked
Message-ID: <1521030475.12217.9.camel@redhat.com>

Hi,

kuryr-kubernetes gates were broken by a recent attempt to switch from
neutron-legacy DevStack to plain neutron [1]. That change also modified
DevStack jobs we were relying on and introduced another failure for us. Now
the neutron-legacy change has been reverted [2] and the fix for the second
issue [3] is getting merged. Once it's in, kuryr-kubernetes gates in both
the kuryr and os-vif repos should be working again.

I apologize that it took so long, but it was a multi-level issue and it
required a lot of debugging from us.

Thanks,
Michal

[1] https://github.com/openstack-dev/devstack/commit/d9c1275c5df55e822a7df6880a9a1430ab4f24a0
[2] https://github.com/openstack-dev/devstack/commit/9f50f541385c929262a2e9c05093881960fe7d8f
[3] https://review.openstack.org/#/c/552701/

From eumel at arcor.de  Wed Mar 14 12:33:41 2018
From: eumel at arcor.de (Frank Kloeker)
Date: Wed, 14 Mar 2018 13:33:41 +0100
Subject: [openstack-dev] Poll: S Release Naming
In-Reply-To: References: <20180313235859.GA14573@localhost.localdomain>
Message-ID:

Hi,

it's critical, I would say. They canceled the registration just today [1],
but there are still copyrights for name parts like "S-Bahn Halle-Leipzig".
I would remove it from the voting list to keep us out of trouble. S-Bahn is
not really a location in Berlin anyway. If it is, the S-Bahn is broken ;-)

kind regards

Frank (from Berlin)

[1] https://register.dpma.de/DPMAregister/marke/registerHABM?AKZ=007392194

Am 2018-03-14 13:14, schrieb Jeremy Freudberg:
> Hi Dmitry,
>
> According to Wikipedia [0], the trademark was removed. The citation [1]
> is actually inaccurate; it was not the final ruling. Regardless, [2]
> seems to reflect the final result, which is that the trademark is
> cancelled.
>
> Hope this helps.
>
> [0] https://en.wikipedia.org/wiki/S-train#Germany,_Austria_and_Switzerland
>     (end of the paragraph)
> [1] http://juris.bundespatentgericht.de/cgi-bin/rechtsprechung/document.py?Gericht=bpatg&Art=en&Datum=Aktuell&nr=23159&pos=1&anz=323&Blank=1.pdf
> [2] http://www.eurailpress.de/news/bahnbetrieb/single-view/news/bundesgerichtshof-db-verliert-marke-s-bahn.html
>
> On Wed, Mar 14, 2018 at 7:50 AM, Dmitry Tantsur wrote:
>
>> Hi,
>>
>> I suspect that S-Bahn may be a protected (copyright, trademark,
>> whatever) name. Did you have a chance to check it?
>>
>> On 03/14/2018 12:58 AM, Paul Belanger wrote:
>>
>>> Greetings all,
>>>
>>> [snip - full poll text quoted above]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dtantsur at redhat.com  Wed Mar 14 12:39:42 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 14 Mar 2018 13:39:42 +0100
Subject: [openstack-dev] Poll: S Release Naming
In-Reply-To: References: <20180313235859.GA14573@localhost.localdomain>
Message-ID: <7c9f78be-84d5-9ccf-849f-13e50c047ee0@redhat.com>

On 03/14/2018 01:33 PM, Frank Kloeker wrote:
> Hi,
>
> it's critical, I would say. They canceled the registration just today [1],
> but there are still copyrights for name parts like "S-Bahn Halle-Leipzig".
> I would remove it from the voting list to keep us out of trouble. S-Bahn
> is not really a location in Berlin anyway. If it is, the S-Bahn is broken
> ;-)

And it often is. /me looks at the S42 schedule :D

Actually, I agree: a means of transport is not quite a location.

> [snip - the rest of Frank's message, and the poll text it quotes, appear
> in full above]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jeremyfreudberg at gmail.com  Wed Mar 14 12:49:39 2018
From: jeremyfreudberg at gmail.com (Jeremy Freudberg)
Date: Wed, 14 Mar 2018 08:49:39 -0400
Subject: [openstack-dev] Poll: S Release Naming
In-Reply-To: References: <20180313235859.GA14573@localhost.localdomain>
Message-ID:

My apologies... Frank paints a more accurate picture of the trademark
situation than I have done. Let's defer to Frank on this one.

On Mar 14, 2018 8:33 AM, "Frank Kloeker" wrote:

[snip - Frank's message is quoted in full above]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From thierry at openstack.org  Wed Mar 14 13:02:00 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 14 Mar 2018 14:02:00 +0100
Subject: [openstack-dev] Poll: S Release Naming
In-Reply-To: References: <20180313235859.GA14573@localhost.localdomain>
Message-ID:

Dmitry Tantsur wrote:
> I suspect that S-Bahn may be a protected (copyright, trademark,
> whatever) name. Did you have a chance to check it?

If you look at the release naming process, trademark vetting is done once
the ranking of preferred names is established (to limit the cost of name
vetting):
https://governance.openstack.org/tc/reference/release-naming.html

--
Thierry Carrez (ttx)

From doug at doughellmann.com  Wed Mar 14 13:08:32 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 14 Mar 2018 09:08:32 -0400
Subject: [openstack-dev] [keystone] [oslo] new unified limit library
In-Reply-To: References: <5AA0D066.1070600@fastmail.com>
Message-ID: <1521032873-sup-8793@lrrr.local>

Excerpts from Lance Bragstad's message of 2018-03-12 11:45:28 -0500:
> I missed the document describing the process for this sort of thing [0].
> So I'm backtracking a bit to go through a more formal process.
>
> [0]
> http://specs.openstack.org/openstack/oslo-specs/specs/policy/new-libraries.html
>
> # Proposed new library oslo.limit
>
> This is a proposal to create a new library dedicated to enabling more
> consistent quota and limit enforcement across OpenStack.
>
> ## Proposed library mission
>
> Enforcing quotas and limits across OpenStack has traditionally been a
> tough problem to solve. Determining enforcement requires quota knowledge
> from the service along with information about the project owning the
> resource. Up until the Queens release, quota calculation and enforcement
> has been left to the services to implement, forcing them to understand
> complexities of keystone project structure. During the Pike and Queens
> PTGs, there were several productive discussions towards redesigning the
> current approach to quota enforcement. Because keystone is the authority
> on project structure, it makes sense to allow keystone to hold the
> association between a resource limit and a project. This means services
> still need to calculate quota and usage, but the problem should be
> easier for services to implement since developers shouldn't need to
> re-implement possible hierarchies of projects and their associated
> limits. Instead, we can offload some of that work to a common library
> for services to consume that handles enforcing quota calculation based
> on limits associated with projects in keystone. This proposal is to have
> a new library called oslo.limit that fills that need.
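(To make the proposed pattern concrete, a minimal sketch of what
service-side enforcement could look like. Every name below -- module,
class, and method -- is an assumption for illustration; the library does
not exist yet, so this is not its actual API:)

    # Hypothetical API throughout -- oslo.limit has not been written yet.
    from oslo_limit import limit  # assumed module layout, mirroring oslo.policy

    def usage_callback(project_id, resource_names):
        # The service stays responsible for counting its own usage; a real
        # implementation would query the service's database here.
        return {name: 0 for name in resource_names}

    enforcer = limit.Enforcer(usage_callback)  # assumed constructor

    def create_widget(project_id, count):
        # Assumed to raise an exception if current usage plus the requested
        # delta exceeds the limit keystone holds for this project, walking
        # the project hierarchy as needed.
        enforcer.enforce(project_id, {'widgets': count})
        # ... proceed with creation only if enforcement passed

The appeal of this shape is the same as oslo.policy's: the service supplies
only its own usage counts, while limit lookup and project-hierarchy handling
stay behind the library.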
>
> ## Consuming projects
>
> The services consuming this work will be any service that currently
> implements a quota system, or plans to implement one. Since keystone
> already supports unified limits and association of limits with projects,
> the implementation for consuming projects is easier. Instead of having
> to re-write that implementation, developers need to ensure quota
> calculation is passed to the oslo.limit library somewhere in the API's
> validation layer. The pattern described here is very similar to the
> pattern currently used by services that leverage oslo.policy for
> authorization decisions.
>
> ## Alternative libraries
>
> It looks like there was an existing library that attempted to solve some
> of these problems, called delimiter [1]. It looks like delimiter could
> be used to talk to keystone about quota enforcement, whereas the
> existing approach with oslo.limit would be to use keystone directly.
> Someone more familiar with the library (harlowja?) can probably shed
> more light on its intended uses (I couldn't find much documentation),
> but the presentation linked in a previous note was helpful.
>
> [1] https://github.com/openstack/delimiter
>
> ## Proposed adoption model/plan
>
> The unified limit API [2] in keystone is currently marked as
> experimental, but the keystone team is actively collecting and
> addressing feedback that will result in stabilizing the API.
> Stabilization changes that affect the oslo.limit library will also be
> addressed before version 1.0.0 is released. From there, we can look to
> incorporate the library into various services that either have an
> existing quota implementation, or services that have a quota requirement
> but no implementation.
>
> This should help us refine the interfaces between services and
> oslo.limit, while providing a facade to handle complexities of project
> hierarchies. This should enable adoption by simplifying the process and
> making it easier for quota to be implemented in a consistent way across
> services.
>
> [2]
> https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html
>
> ## Reviewer activity
>
> At first thought, it makes sense to model the reviewer structure after
> the oslo.policy library, where the core team consists of people not only
> interested in limits and quota, but also people familiar with the
> keystone implementation of the unified limits API.
>
> ## Implementation
>
> ### Primary Authors:
>
>   Lance Bragstad (lbragstad at gmail.com) lbragstad
>   You?
>
> ### Other contributors:
>
>   You?
>
> ## Work Items
>
> * Create a new library called oslo.limit
> * Create a core group for the project
> * Define the minimum we need to enforce quota calculations in oslo.limit
> * Propose an implementation that allows services to test out quota
>   enforcement via unified limits
>
> ## References
>
> Rocky PTG Etherpad for unified limits:
> https://etherpad.openstack.org/p/unified-limits-rocky-ptg
>
> ## Revision History
>
> Introduced in Rocky
>
> On 03/07/2018 11:55 PM, Joshua Harlow wrote:
> > So the following was a prior effort:
> >
> > https://github.com/openstack/delimiter
> >
> > Maybe just continue down the path of that and/or take that whole repo
> > over and iterate (or adjust the prior code, or ...)? Or if not, that's
> > ok too, ya'll get to decide.
> >
> > https://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal
> >
> > Lance Bragstad wrote:
> >> Hi all,
> >>
> >> Per the identity-integration track at the PTG [0], I proposed a new oslo
> >> library for services to use for hierarchical quota enforcement [1]. Let
> >> me know if you have any questions or concerns about the library. If the
> >> oslo team would like, I can add an agenda item for next week's oslo
> >> meeting to discuss.
> >>
> >> Thanks,
> >>
> >> Lance
> >>
> >> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
> >> [1] https://review.openstack.org/#/c/550491/

+1 to the plan. It would be good to have this added to the oslo-specs
repo so it's easier to find in the future.

Doug

From alexandre.van-kempen at inria.fr  Wed Mar 14 13:23:14 2018
From: alexandre.van-kempen at inria.fr (avankemp)
Date: Wed, 14 Mar 2018 14:23:14 +0100
Subject: [openstack-dev] [FEMDC] Wed. 14 Mar. - IRC Meeting 15:00 UTC
Message-ID: <111A9C87-59A7-45DF-8609-F8C3A3EEF359@inria.fr>

Dear all,

A gentle reminder for our meeting today at 15:00 UTC. A draft of the agenda
is available at line 260 of the etherpad; you are very welcome to add any
items.

https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018

Best,

Alex

From ansmith at redhat.com  Wed Mar 14 13:44:40 2018
From: ansmith at redhat.com (Andy Smith)
Date: Wed, 14 Mar 2018 09:44:40 -0400
Subject: [openstack-dev] [tripleo] Blueprints for Rocky
In-Reply-To: <8cfe456a-3be3-4d84-4bc7-7c84ab94b50e@redhat.com>
References: <8cfe456a-3be3-4d84-4bc7-7c84ab94b50e@redhat.com>
Message-ID:

Hi Alex,

The tripleo-messaging blueprint is pending approval:
https://blueprints.launchpad.net/tripleo/+spec/tripleo-messaging

Good progress has been made, and we are working towards being ready for
Rocky-1.

Thanks,
Andy

On Wed, Mar 14, 2018 at 8:11 AM, Dmitry Tantsur wrote:

> [snip - Dmitry's message (and the message it quotes) appears in full above]
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mriedemos at gmail.com  Wed Mar 14 13:46:41 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 14 Mar 2018 08:46:41 -0500
Subject: [openstack-dev] [nova] about rebuild instance booted from volume
In-Reply-To: References: Message-ID:

On 3/14/2018 3:42 AM, 李杰 wrote:
>
> This is the spec about rebuilding an instance booted from volume. In
> the spec, there is a question about whether we should delete the old
> root volume. Anyone who is interested in boot-from-volume can help to
> review this. Any suggestion is welcome. Thank you!
> The link is here.
> Re: the rebuild spec: https://review.openstack.org/#/c/532407/

Copying the operators list and giving some more context.

This spec is proposing to add support for rebuild with a new image for
volume-backed servers, which today is just a 400 failure in the API
since the compute doesn't support that scenario.

With the proposed solution, the backing root volume would be deleted and
a new volume would be created from the new image, similar to how boot
from volume works.

The question raised in the spec is whether or not nova should delete the
root volume even if its delete_on_termination flag is set to False. The
semantics get a bit weird here since that flag was not meant for this
scenario; it's meant to be used when deleting the server to which the
volume is attached. Rebuilding a server is not deleting it, but we would
need to replace the root volume, so what do we do with the volume we're
replacing?

Do we say that delete_on_termination only applies to deleting a server
and not rebuild, and therefore nova can delete the root volume during a
rebuild?

If we don't delete the volume during rebuild, we could end up leaving a
lot of volumes lying around that the user then has to clean up,
otherwise they'll eventually go over quota.

We need user (and operator) feedback on this issue and what they would
expect to happen.

--

Thanks,

Matt

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Tim.Bell at CERN.ch  Wed Mar 14 14:10:33 2018
From: Tim.Bell at CERN.ch (Tim Bell)
Date: Wed, 14 Mar 2018 14:10:33 +0000
Subject: [openstack-dev] [nova] about rebuild instance booted from volume
In-Reply-To: References: Message-ID: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch>

Matt,

To add another scenario and make things even more difficult (sorry :-)), if
the original volume has snapshots, I don't think you can delete it.
> > Re:the rebuild spec:https://review.openstack.org/#/c/532407/ > > Copying the operators list and giving some more context. > > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API > since the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and > a new volume would be created from the new image, similar to how boot > from volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False. The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server > and not rebuild and therefore nova can delete the root volume during a > rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, > otherwise they'll eventually go over quota. > > We need user (and operator) feedback on this issue and what they would > expect to happen. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From blair.bethwaite at gmail.com Wed Mar 14 14:47:40 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Wed, 14 Mar 2018 14:47:40 +0000 Subject: [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: Please do not default to deleting it, otherwise someone will eventually be back here asking why an irate user has just lost data. The better scenario is that the rebuild will fail (early - before impact to the running instance) with a quota error. Cheers, On Thu., 15 Mar. 2018, 00:46 Matt Riedemann, wrote: > On 3/14/2018 3:42 AM, 李杰 wrote: > > > > This is the spec about rebuild a instance booted from > > volume.In the spec,there is a > > question about if we should delete the old root_volume.Anyone who > > is interested in > > booted from volume can help to review this. Any suggestion is > > welcome.Thank you! > > The link is here. > > Re:the rebuild spec:https://review.openstack.org/#/c/532407/ > > Copying the operators list and giving some more context. > > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API > since the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and > a new volume would be created from the new image, similar to how boot > from volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False. 
The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server > and not rebuild and therefore nova can delete the root volume during a > rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, > otherwise they'll eventually go over quota. > > We need user (and operator) feedback on this issue and what they would > expect to happen. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Mar 14 14:51:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 14 Mar 2018 10:51:59 -0400 Subject: [openstack-dev] [reno] moved to storyboard Message-ID: <1521039106-sup-1670@lrrr.local> The bug tracker for reno has moved to storyboard: https://storyboard.openstack.org/#!/project/933 Doug From pkovar at redhat.com Wed Mar 14 15:04:41 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 14 Mar 2018 16:04:41 +0100 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: <72033a1f-77c4-2ae0-e674-0b8463f34983@gmail.com> References: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> <72033a1f-77c4-2ae0-e674-0b8463f34983@gmail.com> Message-ID: <20180314160441.fc454fc44f70a4fefc4e6d88@redhat.com> On Tue, 13 Mar 2018 18:57:24 -0500 Jay S Bryant wrote: > Amy, > > The top level page for projects is referenced under documentation from > here:  https://docs.openstack.org/queens/projects.html > > So, I think we have that one covered for people who are just looking for > the top level documentation. Yes, we have that covered. Just to clarify this a bit further, we also have project lists like https://docs.openstack.org/queens/install/, https://docs.openstack.org/queens/admin/ and https://docs.openstack.org/queens/configuration/, what's missing is https://docs.openstack.org/queens/contributor/. Cheers, pk > On 3/13/2018 3:02 PM, Amy Marrich wrote: > > I think if we're going to have that go to the development contributors > > section (which makes sense) maybe we should also have ways of getting > > to the deployment and admin docs as well? > > > > Amy (spotz) > > > > On Tue, Mar 13, 2018 at 2:55 PM, Jay S Bryant > > wrote: > > > > > > > > On 3/13/2018 1:38 PM, Petr Kovar wrote: > > > > On Thu, 8 Mar 2018 12:54:06 -0600 > > Jay S Bryant > > wrote: > > > > Good overview.  Thank you! > > > > One additional goal I want to mention on the list, for > > awareness, is the > > fact that we would like to eventually get some consistency > > to the pages > > that the 'Contributor Guide' lands on for each of the > > projects.  Needs > > to be a page that is friendly to new contributors, makes > > it easy to > > learn about the project and is not overwhelming. > > > > What exactly that looks like isn't defined yet but I have > > talked to > > Manila about this.  
They were interested in working > > together on this. > > Cinder and Manila will work together to get something > > consistent put > > together and then we can work on spreading that to other > > projects once > > we have agreement from the SIG that the approach is agreeable. > > > > This is a good cross-project goal, I think. We discussed a > > similar approach > > in the docs room wrt providing templates to project teams that > > they can > > use to design their landing pages for admin, user, > > configuration docs; that > > would also include the main index page for project docs. > > > > As for the project-specific contributor guides, > > https://docs.openstack.org/doc-contrib-guide/project-guides.html > > > > specifies > > that any contributor content should go to > > doc/source/contributor/. This will > > allow us to use templates to generate lists of links, > > similarly to what > > we do for other content areas. > > > > Cheers, > > pk > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > Petr, > > > > Good point.  I was trying to think of how to make a better landing > > page for new contributors and you may have hit on the answer.  > > RIght now when you click through from  here: > > https://www.openstack.org/community > > You land at the top level > > Cinder documentation page which is incredibly overwhelming for a > > new person: https://docs.openstack.org/cinder/latest/ > > > > > > If the new contributor page instead lands here: > > https://docs.openstack.org/cinder/latest/contributor/index.html > > > > It would give me a page to craft for new users looking for > > information to get started. > > > > Thoughts on this approach? > > > > Kendall and Mike ... Does the above approach make sense? > > > > Jay > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > -- Petr Kovar Sr. Technical Writer | Customer Content Services Red Hat Czech, Brno From jeremyfreudberg at gmail.com Wed Mar 14 15:17:09 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Wed, 14 Mar 2018 11:17:09 -0400 Subject: [openstack-dev] [sahara][all] Outcomes of Sahara Rocky virtual PTG session Message-ID: Hi again all, As promised, the outcomes of our recent virtual PTG session are made public. Please view the Etherpad containing those outcomes here: https://etherpad.openstack.org/p/sahara-rocky-VIRTUAL-ptg (The Dublin outcomes are on this Etherpad: https://etherpad.openstack.org/p/sahara-rocky-ptg ) Thanks to all who participated. I do think that it was a productive enough session that we do not need to schedule another one. If, however, someone has further points to raise, please let the team know. 
Looking forward to an excellent cycle, Jeremy From mbooth at redhat.com Wed Mar 14 15:35:22 2018 From: mbooth at redhat.com (Matthew Booth) Date: Wed, 14 Mar 2018 15:35:22 +0000 Subject: [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: On 14 March 2018 at 13:46, Matt Riedemann wrote: > On 3/14/2018 3:42 AM, 李杰 wrote: > >> >> This is the spec about rebuild a instance booted from >> volume.In the spec,there is a >> question about if we should delete the old root_volume.Anyone who >> is interested in >> booted from volume can help to review this. Any suggestion is >> welcome.Thank you! >> The link is here. >> Re:the rebuild spec:https://review.openstack.org/#/c/532407/ >> > > Copying the operators list and giving some more context. > > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API since > the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and a > new volume would be created from the new image, similar to how boot from > volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False. The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server and > not rebuild and therefore nova can delete the root volume during a rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, otherwise > they'll eventually go over quota. > > We need user (and operator) feedback on this issue and what they would > expect to happen. > My 2c was to overwrite, not delete the volume[1]. I believe this preserves both sets of semantics: the server is rebuilt, and the volume is not deleted. The user will still lose their data, of course, but that's implied by the rebuild they explicitly requested. The volume id will remain the same. [1] I suspect this would require new functionality in cinder to re-initialize from image. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Mar 14 15:38:37 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 14 Mar 2018 10:38:37 -0500 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: <20180314160441.fc454fc44f70a4fefc4e6d88@redhat.com> References: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> <72033a1f-77c4-2ae0-e674-0b8463f34983@gmail.com> <20180314160441.fc454fc44f70a4fefc4e6d88@redhat.com> Message-ID: On 3/14/2018 10:04 AM, Petr Kovar wrote: > On Tue, 13 Mar 2018 18:57:24 -0500 > Jay S Bryant wrote: > >> Amy, >> >> The top level page for projects is referenced under documentation from >> here:  https://docs.openstack.org/queens/projects.html >> >> So, I think we have that one covered for people who are just looking for >> the top level documentation. > Yes, we have that covered. 
Just to clarify this a bit further, we also have > project lists like https://docs.openstack.org/queens/install/, > https://docs.openstack.org/queens/admin/ and > https://docs.openstack.org/queens/configuration/, what's missing is > https://docs.openstack.org/queens/contributor/. > > Cheers, > pk > Petr, Do we need a contributor link per-release?  I thought in past discussions that the contributor info should always go to latest and that was why this is slightly different. Jay > >> On 3/13/2018 3:02 PM, Amy Marrich wrote: >>> I think if we're going to have that go to the development contributors >>> section (which makes sense) maybe we should also have ways of getting >>> to the deployment and admin docs as well? >>> >>> Amy (spotz) >>> >>> On Tue, Mar 13, 2018 at 2:55 PM, Jay S Bryant >> > wrote: >>> >>> >>> >>> On 3/13/2018 1:38 PM, Petr Kovar wrote: >>> >>> On Thu, 8 Mar 2018 12:54:06 -0600 >>> Jay S Bryant >> > wrote: >>> >>> Good overview.  Thank you! >>> >>> One additional goal I want to mention on the list, for >>> awareness, is the >>> fact that we would like to eventually get some consistency >>> to the pages >>> that the 'Contributor Guide' lands on for each of the >>> projects.  Needs >>> to be a page that is friendly to new contributors, makes >>> it easy to >>> learn about the project and is not overwhelming. >>> >>> What exactly that looks like isn't defined yet but I have >>> talked to >>> Manila about this.  They were interested in working >>> together on this. >>> Cinder and Manila will work together to get something >>> consistent put >>> together and then we can work on spreading that to other >>> projects once >>> we have agreement from the SIG that the approach is agreeable. >>> >>> This is a good cross-project goal, I think. We discussed a >>> similar approach >>> in the docs room wrt providing templates to project teams that >>> they can >>> use to design their landing pages for admin, user, >>> configuration docs; that >>> would also include the main index page for project docs. >>> >>> As for the project-specific contributor guides, >>> https://docs.openstack.org/doc-contrib-guide/project-guides.html >>> >>> specifies >>> that any contributor content should go to >>> doc/source/contributor/. This will >>> allow us to use templates to generate lists of links, >>> similarly to what >>> we do for other content areas. >>> >>> Cheers, >>> pk >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> Petr, >>> >>> Good point.  I was trying to think of how to make a better landing >>> page for new contributors and you may have hit on the answer. >>> RIght now when you click through from  here: >>> https://www.openstack.org/community >>> You land at the top level >>> Cinder documentation page which is incredibly overwhelming for a >>> new person: https://docs.openstack.org/cinder/latest/ >>> >>> >>> If the new contributor page instead lands here: >>> https://docs.openstack.org/cinder/latest/contributor/index.html >>> >>> It would give me a page to craft for new users looking for >>> information to get started. >>> >>> Thoughts on this approach? >>> >>> Kendall and Mike ... Does the above approach make sense? 
>>> >>> Jay >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> > From vondra at homeatcloud.cz Wed Mar 14 15:43:37 2018 From: vondra at homeatcloud.cz (=?UTF-8?Q?Tom=C3=A1=C5=A1_Vondra?=) Date: Wed, 14 Mar 2018 16:43:37 +0100 Subject: [openstack-dev] [Openstack-operators] [nova] about rebuild instance booted from volume In-Reply-To: References: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch> Message-ID: <024e01d3bbab$38f44290$aadcc7b0$@homeatcloud.cz> Hi! I say delete! Delete them all! Really, it's called delete_on_termination and should be ignored on Rebuild. We have a VPS service implemented on top of OpenStack and do throw the old contents away on Rebuild. When the user has paid for the Backup service, they can restore a snapshot. Backup is implemented as volume snapshot, then clone volume, then upload to image (glance is on a different disk array). I also sometimes multi-attach a volume manually to a service node and just dd an image onto it. If it were implemented this way, then there would be no deleting a volume with delete_on_termination, just overwriting. But the effect is the same. IMHO you can have snapshots of volumes that have been deleted. Just some backends like our 3PAR don't allow it, but it's not disallowed in the API contract. Tomas from Homeatcloud -----Original Message----- From: Saverio Proto [mailto:zioproto at gmail.com] Sent: Wednesday, March 14, 2018 3:19 PM To: Tim Bell; Matt Riedemann Cc: OpenStack Development Mailing List (not for usage questions); openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume My idea is that if the delete_on_termination flag is set to False, the volume should never be deleted by Nova. my 2 cents Saverio 2018-03-14 15:10 GMT+01:00 Tim Bell : > Matt, > > To add another scenario and make things even more difficult (sorry (), if the original volume has snapshots, I don't think you can delete it. > > Tim > > > -----Original Message----- > From: Matt Riedemann > Reply-To: "OpenStack Development Mailing List (not for usage > questions)" > Date: Wednesday, 14 March 2018 at 14:55 > To: "openstack-dev at lists.openstack.org" > , openstack-operators > > Subject: Re: [openstack-dev] [nova] about rebuild instance booted from > volume > > On 3/14/2018 3:42 AM, 李杰 wrote: > > > > This is the spec about rebuild a instance booted from > > volume.In the spec,there is a > > question about if we should delete the old root_volume.Anyone who > > is interested in > > booted from volume can help to review this. Any suggestion is > > welcome.Thank you! > > The link is here. > > Re:the rebuild spec:https://review.openstack.org/#/c/532407/ > > Copying the operators list and giving some more context. > > This spec is proposing to add support for rebuild with a new image for > volume-backed servers, which today is just a 400 failure in the API > since the compute doesn't support that scenario. > > With the proposed solution, the backing root volume would be deleted and > a new volume would be created from the new image, similar to how boot > from volume works. > > The question raised in the spec is whether or not nova should delete the > root volume even if its delete_on_termination flag is set to False.
The > semantics get a bit weird here since that flag was not meant for this > scenario, it's meant to be used when deleting the server to which the > volume is attached. Rebuilding a server is not deleting it, but we would > need to replace the root volume, so what do we do with the volume we're > replacing? > > Do we say that delete_on_termination only applies to deleting a server > and not rebuild and therefore nova can delete the root volume during a > rebuild? > > If we don't delete the volume during rebuild, we could end up leaving a > lot of volumes lying around that the user then has to clean up, > otherwise they'll eventually go over quota. > > We need user (and operator) feedback on this issue and what they would > expect to happen. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operator > s _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From mbayer at redhat.com Wed Mar 14 15:53:32 2018 From: mbayer at redhat.com (Michael Bayer) Date: Wed, 14 Mar 2018 11:53:32 -0400 Subject: [openstack-dev] [oslo.db] upcoming warnings in MySQL 5.6, 5.7 for BLOB columns Message-ID: hey all - Just looking to see if we think this will impact openstack. MySQL 5.6 and 5.7, but not yet MariaDB, now emits an erroneous warning when you try to send a binary value to the database, because it sees the client connection is supposed to use the utf8 or utf8mb4 charsets, assumes all data must be in that charset, then warns because the binary data does not necessarily conform to utf8 (which it has no need to, it's binary). Sounds weird, right, to make it easier the demo looks just like this: import pymysql import uuid conn = pymysql.connect( user="scott", passwd="tiger", host="mysql56", db="test", charset="utf8mb4") cursor = conn.cursor() cursor.execute(""" CREATE TABLE IF NOT EXISTS `profiles` ( `id` varchar(32) COLLATE utf8mb4_unicode_ci NOT NULL, `city` blob NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci """) cursor.execute( "INSERT INTO profiles (id, city) VALUES (%(id)s, %(city)s)", { 'id': uuid.uuid4().hex, 'city': pymysql.Binary( b'z\xf9\x87jS?\xd4i\xa5\xa3\r\xa7\x1e\xed\x16\xe0\xb5\x05R\xa4\xec\x16\x8f\x06\xb5\xea+\xaf<\x00\\\x94I9A\xe0\x82\xa7\x13\x0c\x8c' ) } ) when using PyMySQL 0.8.0 (not 0.7.1) you then get a warning Warning: (1300, "Invalid utf8mb4 character string: 'F9876A'"). So, Oracle upstream clearly is never going to fix this if you look at the typically dismal discussion at [1]. I poked the PyMySQL project at [2] to see what we can do. Long term is that SQLAlchemy will add the special "_binary" prefix to binary-bound parameter tokens to avoid the warning, however right now PyMySQL supports a flag "binary_prefix" that will do it for us on the driver side. For Openstack, i need to know if we are in fact passing binary data to databases in some project or another. 
What we can do is add the supply of this flag to oslo.db so that it is present automatically for the PyMySQL driver, as well as checking the PyMySQL version for compatibility. If folks are seeing this warning already or are using BLOB / binary columns in their project please ping me and we will get this added to oslo.db. From mbayer at redhat.com Wed Mar 14 15:54:17 2018 From: mbayer at redhat.com (Michael Bayer) Date: Wed, 14 Mar 2018 11:54:17 -0400 Subject: [openstack-dev] [oslo.db] upcoming warnings in MySQL 5.6, 5.7 for BLOB columns In-Reply-To: References: Message-ID: Forgot the links: [1] https://bugs.mysql.com/bug.php?id=79317 [2] https://github.com/PyMySQL/PyMySQL/issues/644 On Wed, Mar 14, 2018 at 11:53 AM, Michael Bayer wrote: > hey all - > > Just looking to see if we think this will impact openstack. MySQL 5.6 > and 5.7, but not yet MariaDB, now emits an erroneous warning when you > try to send a binary value to the database, because it sees the client > connection is supposed to use the utf8 or utf8mb4 charsets, assumes > all data must be in that charset, then warns because the binary data > does not necessarily conform to utf8 (which it has no need to, it's > binary). > > Sounds weird, right, to make it easier the demo looks just like this: > > import pymysql > import uuid > > conn = pymysql.connect( > user="scott", passwd="tiger", host="mysql56", > db="test", charset="utf8mb4") > cursor = conn.cursor() > cursor.execute(""" > CREATE TABLE IF NOT EXISTS `profiles` ( > `id` varchar(32) COLLATE utf8mb4_unicode_ci NOT NULL, > `city` blob NOT NULL > ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci > """) > cursor.execute( > "INSERT INTO profiles (id, city) VALUES (%(id)s, %(city)s)", > { > 'id': uuid.uuid4().hex, > 'city': pymysql.Binary( > b'z\xf9\x87jS?\xd4i\xa5\xa3\r\xa7\x1e\xed\x16\xe0\xb5\x05R\xa4\xec\x16\x8f\x06\xb5\xea+\xaf<\x00\\\x94I9A\xe0\x82\xa7\x13\x0c\x8c' > ) > } > ) > > when using PyMySQL 0.8.0 (not 0.7.1) you then get a warning > > Warning: (1300, "Invalid utf8mb4 character string: 'F9876A'"). > > > So, Oracle upstream clearly is never going to fix this if you look at > the typically dismal discussion at [1]. I poked the PyMySQL project > at [2] to see what we can do. Long term is that SQLAlchemy will add > the special "_binary" prefix to binary-bound parameter tokens to avoid > the warning, however right now PyMySQL supports a flag "binary_prefix" > that will do it for us on the driver side. > > For Openstack, i need to know if we are in fact passing binary data to > databases in some project or another. What we can do is add the > supply of this flag to oslo.db so that it is present automatically for > the PyMySQL driver, as well as checking the PyMySQL version for > compatibility. > > If folks are seeing this warning already or are using BLOB / binary > columns in their project please ping me and we will get this added to > oslo.db. 
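For anyone who wants to try the driver-side workaround Mike describes before oslo.db grows a switch for it, a minimal sketch (this assumes a PyMySQL release that accepts the binary_prefix connect flag, e.g. the 0.8.0 mentioned above; it is an illustration, not the eventual oslo.db wiring):

import pymysql

# Same connection as the demo above, plus binary_prefix=True so PyMySQL
# renders Binary parameters as _binary'...' literals and the server no
# longer tries to validate them as utf8mb4 text.
conn = pymysql.connect(
    user="scott", passwd="tiger", host="mysql56",
    db="test", charset="utf8mb4",
    binary_prefix=True)

Under SQLAlchemy the same flag can be passed straight through to the DBAPI, which is roughly what an oslo.db-level option would end up doing:

from sqlalchemy import create_engine

# connect_args is handed as-is to pymysql.connect()
engine = create_engine(
    "mysql+pymysql://scott:tiger@mysql56/test?charset=utf8mb4",
    connect_args={"binary_prefix": True})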
From doug at doughellmann.com Wed Mar 14 15:57:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 14 Mar 2018 11:57:39 -0400 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> <72033a1f-77c4-2ae0-e674-0b8463f34983@gmail.com> <20180314160441.fc454fc44f70a4fefc4e6d88@redhat.com> Message-ID: <1521043010-sup-2656@lrrr.local> Excerpts from Jay S Bryant's message of 2018-03-14 10:38:37 -0500: > > On 3/14/2018 10:04 AM, Petr Kovar wrote: > > On Tue, 13 Mar 2018 18:57:24 -0500 > > Jay S Bryant wrote: > > > >> Amy, > >> > >> The top level page for projects is referenced under documentation from > >> here:  https://docs.openstack.org/queens/projects.html > >> > >> So, I think we have that one covered for people who are just looking for > >> the top level documentation. > > Yes, we have that covered. Just to clarify this a bit further, we also have > > project lists like https://docs.openstack.org/queens/install/, > > https://docs.openstack.org/queens/admin/ and > > https://docs.openstack.org/queens/configuration/, what's missing is > > https://docs.openstack.org/queens/contributor/. > > > > Cheers, > > pk > > > Petr, > > Do we need a contributor link per-release?  I thought in past > discussions that the contributor info should always go to latest and > that was why this is slightly different. > > Jay We can have a per-series page that lists all projects and links to their "latest" contributor doc page. Doug From pkovar at redhat.com Wed Mar 14 15:58:31 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 14 Mar 2018 16:58:31 +0100 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> <72033a1f-77c4-2ae0-e674-0b8463f34983@gmail.com> <20180314160441.fc454fc44f70a4fefc4e6d88@redhat.com> Message-ID: <20180314165831.aae52a8ea6b8fa80f09cfe58@redhat.com> On Wed, 14 Mar 2018 10:38:37 -0500 Jay S Bryant wrote: > > > On 3/14/2018 10:04 AM, Petr Kovar wrote: > > On Tue, 13 Mar 2018 18:57:24 -0500 > > Jay S Bryant wrote: > > > >> Amy, > >> > >> The top level page for projects is referenced under documentation from > >> here:  https://docs.openstack.org/queens/projects.html > >> > >> So, I think we have that one covered for people who are just looking for > >> the top level documentation. > > Yes, we have that covered. Just to clarify this a bit further, we also have > > project lists like https://docs.openstack.org/queens/install/, > > https://docs.openstack.org/queens/admin/ and > > https://docs.openstack.org/queens/configuration/, what's missing is > > https://docs.openstack.org/queens/contributor/. > > > > Cheers, > > pk > > > Petr, > > Do we need a contributor link per-release?  I thought in past > discussions that the contributor info should always go to latest and > that was why this is slightly different. Right, that's a good point! I guess we should really just have https://docs.openstack.org/latest/contributor/ then, perhaps https://docs.openstack.org/contributor/ would be even better (with a redirect). This means we need to treat templates for contributor project docs as a special case and just point to /latest/contributor from all release-specific docs.o.o landing pages. 
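For illustration, on an Apache-served site the redirect itself could be a single mod_alias rule along these lines (just a sketch; the actual docs.openstack.org redirect machinery may differ):

    # hypothetical .htaccess rule
    RedirectMatch 301 ^/contributor/?$ /latest/contributor/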
Cheers, pk > >> On 3/13/2018 3:02 PM, Amy Marrich wrote: > >>> I think if we're going to have that go to the development contributors > >>> section (which makes sense) maybe we should also have ways of getting > >>> to the deployment and admin docs as well? > >>> > >>> Amy (spotz) > >>> > >>> On Tue, Mar 13, 2018 at 2:55 PM, Jay S Bryant >>> > wrote: > >>> > >>> > >>> > >>> On 3/13/2018 1:38 PM, Petr Kovar wrote: > >>> > >>> On Thu, 8 Mar 2018 12:54:06 -0600 > >>> Jay S Bryant >>> > wrote: > >>> > >>> Good overview.  Thank you! > >>> > >>> One additional goal I want to mention on the list, for > >>> awareness, is the > >>> fact that we would like to eventually get some consistency > >>> to the pages > >>> that the 'Contributor Guide' lands on for each of the > >>> projects.  Needs > >>> to be a page that is friendly to new contributors, makes > >>> it easy to > >>> learn about the project and is not overwhelming. > >>> > >>> What exactly that looks like isn't defined yet but I have > >>> talked to > >>> Manila about this.  They were interested in working > >>> together on this. > >>> Cinder and Manila will work together to get something > >>> consistent put > >>> together and then we can work on spreading that to other > >>> projects once > >>> we have agreement from the SIG that the approach is agreeable. > >>> > >>> This is a good cross-project goal, I think. We discussed a > >>> similar approach > >>> in the docs room wrt providing templates to project teams that > >>> they can > >>> use to design their landing pages for admin, user, > >>> configuration docs; that > >>> would also include the main index page for project docs. > >>> > >>> As for the project-specific contributor guides, > >>> https://docs.openstack.org/doc-contrib-guide/project-guides.html > >>> > >>> specifies > >>> that any contributor content should go to > >>> doc/source/contributor/. This will > >>> allow us to use templates to generate lists of links, > >>> similarly to what > >>> we do for other content areas. > >>> > >>> Cheers, > >>> pk > >>> > >>> > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > >>> Petr, > >>> > >>> Good point.  I was trying to think of how to make a better landing > >>> page for new contributors and you may have hit on the answer. > >>> RIght now when you click through from  here: > >>> https://www.openstack.org/community > >>> You land at the top level > >>> Cinder documentation page which is incredibly overwhelming for a > >>> new person: https://docs.openstack.org/cinder/latest/ > >>> > >>> > >>> If the new contributor page instead lands here: > >>> https://docs.openstack.org/cinder/latest/contributor/index.html > >>> > >>> It would give me a page to craft for new users looking for > >>> information to get started. > >>> > >>> Thoughts on this approach? > >>> > >>> Kendall and Mike ... Does the above approach make sense? > >>> > >>> Jay > >>> > >>> > >>> > >>> __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > >>> > > > -- Petr Kovar Sr. 
Technical Writer | Customer Content Services Red Hat Czech, Brno From jaypipes at gmail.com Wed Mar 14 16:02:50 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 14 Mar 2018 12:02:50 -0400 Subject: [openstack-dev] [oslo.db] upcoming warnings in MySQL 5.6, 5.7 for BLOB columns In-Reply-To: References: Message-ID: <1a01db18-52db-a8c0-ea0e-cb199e5ea639@gmail.com> Neither nova nor placement use any BLOB columns. Best, -jay On 03/14/2018 11:53 AM, Michael Bayer wrote: > hey all - > > Just looking to see if we think this will impact openstack. MySQL 5.6 > and 5.7, but not yet MariaDB, now emits an erroneous warning when you > try to send a binary value to the database, because it sees the client > connection is supposed to use the utf8 or utf8mb4 charsets, assumes > all data must be in that charset, then warns because the binary data > does not necessarily conform to utf8 (which it has no need to, it's > binary). > > Sounds weird, right, to make it easier the demo looks just like this: > > import pymysql > import uuid > > conn = pymysql.connect( > user="scott", passwd="tiger", host="mysql56", > db="test", charset="utf8mb4") > cursor = conn.cursor() > cursor.execute(""" > CREATE TABLE IF NOT EXISTS `profiles` ( > `id` varchar(32) COLLATE utf8mb4_unicode_ci NOT NULL, > `city` blob NOT NULL > ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci > """) > cursor.execute( > "INSERT INTO profiles (id, city) VALUES (%(id)s, %(city)s)", > { > 'id': uuid.uuid4().hex, > 'city': pymysql.Binary( > b'z\xf9\x87jS?\xd4i\xa5\xa3\r\xa7\x1e\xed\x16\xe0\xb5\x05R\xa4\xec\x16\x8f\x06\xb5\xea+\xaf<\x00\\\x94I9A\xe0\x82\xa7\x13\x0c\x8c' > ) > } > ) > > when using PyMySQL 0.8.0 (not 0.7.1) you then get a warning > > Warning: (1300, "Invalid utf8mb4 character string: 'F9876A'"). > > > So, Oracle upstream clearly is never going to fix this if you look at > the typically dismal discussion at [1]. I poked the PyMySQL project > at [2] to see what we can do. Long term is that SQLAlchemy will add > the special "_binary" prefix to binary-bound parameter tokens to avoid > the warning, however right now PyMySQL supports a flag "binary_prefix" > that will do it for us on the driver side. > > For Openstack, i need to know if we are in fact passing binary data to > databases in some project or another. What we can do is add the > supply of this flag to oslo.db so that it is present automatically for > the PyMySQL driver, as well as checking the PyMySQL version for > compatibility. > > If folks are seeing this warning already or are using BLOB / binary > columns in their project please ping me and we will get this added to > oslo.db. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From scheuran at linux.vnet.ibm.com Wed Mar 14 16:29:16 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Wed, 14 Mar 2018 17:29:16 +0100 Subject: [openstack-dev] [nova][ThirdParty-CI] Nova s390x CI currently broken In-Reply-To: References: Message-ID: <7DDB6DAF-C75A-4AC4-8AC3-C4994311912F@linux.vnet.ibm.com> A brief update: The root cause is that Neutron patch [1] broke Neutron DHCP and L3 agent on s390x (both use pyroute2 for network namespace management now). The issue needs to get fixed in pyroute2 itself. I opened a PR [2]. Ideally a new version gets released soon. 
[1] https://github.com/openstack/neutron/commit/c4d4336 [2] https://github.com/svinota/pyroute2/pull/469 --- Andreas Scheuring (andreas_s) On 13. Mar 2018, at 17:14, Andreas Scheuring wrote: Hello, the s390x CI for nova is currently broken again. The reason seems to be a recent change that merged in neutron. I'm looking into it... --- Andreas Scheuring (andreas_s) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Wed Mar 14 17:05:59 2018 From: ayoung at redhat.com (Adam Young) Date: Wed, 14 Mar 2018 13:05:59 -0400 Subject: [openstack-dev] Replacing Keystone Admin Accounts Message-ID: As we attempt to close the gap on Bug 968696, we have to make sure we are headed forward on a path that won't get us stuck. It seems that many people use Admin-every accounts for many things that they are not really meant for, such as performing operations that should be scoped to a project, like creating networks in Neutron or block devices in Cinder. With the service scoping of role assignments, we have both the opportunity and responsibility to rework how these operations are authorized. Back when we were discussing and engineering Hierarchical Multi-tenancy (HMT), the operators told us that they did not want to have to rescope tokens in order to provide help for their users. I remember getting this both verbally and in writing, although I cannot find the message now. If we created basic policy rules that allowed a Nova service account to list all servers (for example) but not to change those servers without getting a token scoped to that specific project, would it break a lot of tooling? The other use case we've found is the need to clean up project-scoped resources. Once a project has been deleted in Keystone, it is impossible to get a project-scoped token to delete the resources in cinder, glance, and so on. It seems like these operations need to be on a per-system (service? endpoint) basis for the foreseeable future. Is this acceptable? Are there any alternatives that people would rather see implemented? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Mar 14 17:11:30 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 14 Mar 2018 12:11:30 -0500 Subject: [openstack-dev] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: <1521043010-sup-2656@lrrr.local> References: <20180313193832.b6d422c986c9bbee5c2d4d73@redhat.com> <72033a1f-77c4-2ae0-e674-0b8463f34983@gmail.com> <20180314160441.fc454fc44f70a4fefc4e6d88@redhat.com> <1521043010-sup-2656@lrrr.local> Message-ID: On 3/14/2018 10:57 AM, Doug Hellmann wrote: > Excerpts from Jay S Bryant's message of 2018-03-14 10:38:37 -0500: >> On 3/14/2018 10:04 AM, Petr Kovar wrote: >>> On Tue, 13 Mar 2018 18:57:24 -0500 >>> Jay S Bryant wrote: >>> >>>> Amy, >>>> >>>> The top level page for projects is referenced under documentation from >>>> here:  https://docs.openstack.org/queens/projects.html >>>> >>>> So, I think we have that one covered for people who are just looking for >>>> the top level documentation. >>> Yes, we have that covered.
Just to clarify this a bit further, we also have >>> project lists like https://docs.openstack.org/queens/install/, >>> https://docs.openstack.org/queens/admin/ and >>> https://docs.openstack.org/queens/configuration/, what's missing is >>> https://docs.openstack.org/queens/contributor/. >>> >>> Cheers, >>> pk >>> >> Petr, >> >> Do we need a contributor link per-release?  I thought in past >> discussions that the contributor info should always go to latest and >> that was why this is slightly different. >> >> Jay > We can have a per-series page that lists all projects and links to their > "latest" contributor doc page. Agreed.  That would make the most sense. > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Wed Mar 14 18:07:27 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 14 Mar 2018 11:07:27 -0700 Subject: [openstack-dev] [nova] Rocky PTG summary - cells Message-ID: <6C2A4F0E-BDEC-4036-B1A1-CCEA8385AEF9@gmail.com> Hi everyone, I’ve created a summary etherpad [0] for the nova cells session from the PTG and included a plain text export of it on this email. Thanks, -melanie [0] https://etherpad.openstack.org/p/nova-ptg-rocky-cells-summary *Cells: Rocky PTG Summary https://etherpad.openstack.org/p/nova-ptg-rocky L11 *Key topics * How to handle a "down" cell * How to handle each cell having a separate ceph cluster * How do we plan to progress on removing "upcalls" *Agreements and decisions * In order to list instances even when we can't connect to a cell database, we'll construct something minimal from the API database and we'll add a column to the instance_mappings table such as "queued_for_delete" to determine which are the non-deleted instances and then list them. * tssurya will write a spec for the new column. * We're not going to pursue the approach of having backup URLs for cell databases to fall back on when a cell is "down". * An attempt to delete an instance in a "down" cell should result in a 500 or 503 error. * An attempt to create an instance should be blocked if the project has instances in a "down" cell (the instance_mappings table has a "project_id" column) because we cannot count instances in "down" cells for the quota check. * At this time, we won't pursue the idea of adding an allocation "type" concept to placement (which could be leveraged for counting cores/ram resource usage for quotas). * The topic of each cell having a separate ceph cluster and having each cell cache images in the imagebackend led to the topic of the "cinder imagebackend" again. * Implementing a cinder imagebackend in nova would be an enormous undertaking that realistically isn't going to happen. * A pragmatic solution was suggested to make boot-from-volume a first class citizen and make automatic boot-from-volume work well, so that we let cinder handle the caching of images in this scenario (and of course handle all of the other use cases for cinder imagebackend). This would eventually lead to the deprecation of the ceph imagebackend. Further discussion is required on this. * On removing upcalls, progress in placement will help address the remaining upcalls. * dansmith will work on filtering compute hosts using the volume availability zone to address the cinder/cross_az_attach issue. mriedem and bauzas will review. 
* For the xenapi host aggregate upcall, the xenapi subteam will remove it as a patch on top of their live-migration support patch series. * For the server group late affinity check up-call for server create and evacuate, the plan is to handle it race-free with placement/scheduler. However, affinity modeling in placement isn't slated for work in Rocky, so the late affinity check upcall will have to be removed in S, at the earliest. From cdent+os at anticdent.org Wed Mar 14 18:26:07 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 14 Mar 2018 18:26:07 +0000 (GMT) Subject: [openstack-dev] [nova] Rocky PTG summary - cells In-Reply-To: <6C2A4F0E-BDEC-4036-B1A1-CCEA8385AEF9@gmail.com> References: <6C2A4F0E-BDEC-4036-B1A1-CCEA8385AEF9@gmail.com> Message-ID: On Wed, 14 Mar 2018, melanie witt wrote: > I’ve created a summary etherpad [0] for the nova cells session from the PTG and included a plain text export of it on this email. Nice summary. Apparently I wasn't there or paying attention when something was decided: > * An attempt to delete an instance in a "down" cell should result in a 500 or 503 error. Depending on how we look at it, this doesn't really align with what 500 or 503 are supposed to be used for. They are supposed to indicate that the web server is broken in some fashion: 500 being an unexpected and uncaught exception in the web server, 503 that the web server is either overloaded or down for maintenance. So, you could argue that 409 is the right thing here (as seems to always happen when we discuss these things). You send a DELETE to kill the instance, but the current state of the instance is "on a cell that can't be reached" which is in "conflict" with the state required to do a DELETE. If a 5xx is really necessary, for whatever reason, then 503 is a better choice than 500 because it at least signals that the broken thing is sort of "over there somewhere" rather than the web server having an error (which is what 500 is supposed to mean). -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From lhinds at redhat.com Wed Mar 14 18:35:50 2018 From: lhinds at redhat.com (Luke Hinds) Date: Wed, 14 Mar 2018 18:35:50 +0000 Subject: [openstack-dev] [security] Tomorrow's meeting and LCOO Message-ID: Hello, Something has come up that means I won't be able to attend the meeting tomorrow and, more importantly, chair it. However I would not want to be a bottleneck to good progress underway. If someone would like to step up and chair for just this meeting, the agenda is below: https://etherpad.openstack.org/p/security-agenda Also keep in mind, we now meet in #openstack-meeting at 15:00, instead of 17:00. If not, we will defer and meet the week after. Last point, someone called eeiden pinged me on IRC, but has since logged out. The LCOO has an interest in working with the security SIG, which is most welcome. If anyone knows eeiden, can you ask him / her to contact us on this list and we can get initial discussions going and hopefully bring them into the meeting too. Cheers, Luke -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Wed Mar 14 18:51:45 2018 From: gagehugo at gmail.com (Gage Hugo) Date: Wed, 14 Mar 2018 13:51:45 -0500 Subject: [openstack-dev] [security] Tomorrow's meeting and LCOO In-Reply-To: References: Message-ID: Hey Luke, I can chair the meeting tomorrow if that works. I will also ping eeiden about getting some LCOO discussion going as well.
On Wed, Mar 14, 2018 at 1:35 PM, Luke Hinds wrote: > Hello, > > Something has come up that determines I won't be able to attend the > meeting tomorrow and more importantly chair it. > > However I would not want to be a bottleneck to good progress underway. > > If someone would like to step up and chair for just this meeting, the > agenda is below: > > https://etherpad.openstack.org/p/security-agenda > > Also keep in mind, we now meet in #openstack-meeting at 15:00, instead of > 17:00. > > If not, we will defer and meet the week after. > > Last point, someone called eeiden pinged me on IRC, but have since logged > out. They LCOO has an interest in working with the security SIG, which is > most welcome. If anyone knows eeiden, can you ask him / her to contact us > on this list and we can get initial discussions going and hopefully bring > them into the meeting too. > > Cheers, > > Luke > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Wed Mar 14 18:58:04 2018 From: aspiers at suse.com (Adam Spiers) Date: Wed, 14 Mar 2018 18:58:04 +0000 Subject: [openstack-dev] [self-healing] Dublin PTG summary, and request for feedback Message-ID: <20180314185804.yqccn2jqyk26lnxk@pacific.linksys.moosehall> Hi all, I just posted a summary of the Self-healing SIG session at the Dublin PTG: http://lists.openstack.org/pipermail/openstack-sigs/2018-March/000317.html If you are interested in the topic of self-healing within OpenStack, you are warmly invited to subscribe to the openstack-sigs mailing list: http://lists.openstack.org/pipermail/openstack-sigs/ and/or join the #openstack-self-healing channel on Freenode IRC. We are actively gathering feedback to help steer the SIG's focus in the right direction, so all thoughts are very welcome, especially from operators, since the primary goal of the SIG is to make life easier for operators. I have also just created an etherpad for brainstorming topics for the Forum in Vancouver: https://etherpad.openstack.org/p/YVR-self-healing-brainstorming Feel free to put braindumps in there :-) Thanks! Adam From pabelanger at redhat.com Wed Mar 14 19:20:40 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 14 Mar 2018 15:20:40 -0400 Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27 In-Reply-To: <859c08e739614d2b89ac44087e6df8fa@G07SGEXCMSGPS06.g07.fujitsu.local> References: <20180305234513.GA26473@localhost.localdomain> <20180313145426.GA14285@localhost.localdomain> <859c08e739614d2b89ac44087e6df8fa@G07SGEXCMSGPS06.g07.fujitsu.local> Message-ID: <20180314192040.GA22694@localhost.localdomain> On Wed, Mar 14, 2018 at 03:53:59AM +0000, namnh at vn.fujitsu.com wrote: > Hello Paul, > > I am Nam from Barbican team. I would like to notify a problem when using fedora-27. > > Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in this version and it is the main reason for failure Barbican database upgrading [1], the bug was fixed at 10.2.13 [2]. Would you mind updating the version of mariadb before removing fedora-26. > > [1] https://bugs.launchpad.net/barbican/+bug/1734329 > [2] https://jira.mariadb.org/browse/MDEV-13508 > Looking at https://apps.fedoraproject.org/packages/mariadb seems 10.2.13 has already been updated. Let me recheck the patch and see if it will use the newer version. 
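Once the mirror has re-synced, something like this on a fedora-27 node should confirm the fix is in (a sketch; the package name mariadb-server is assumed):

    # check what the repo now offers
    sudo dnf --refresh info mariadb-server | grep -i '^Version'
    # or ask a running server directly
    mysql -N -e 'SELECT VERSION();'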
> Thanks, > Nam > > > -----Original Message----- > > From: Paul Belanger [mailto:pabelanger at redhat.com] > > Sent: Tuesday, March 13, 2018 9:54 PM > > To: openstack-dev at lists.openstack.org > > Subject: Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from > > fedora-26 to fedora-27 > > > > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote: > > > Greetings, > > > > > > A quick search of git shows your projects are using fedora-26 nodes for > > testing. > > > Please take a moment to look at gerrit[1] and help land patches. We'd > > > like to remove fedora-26 nodes in the next week and to avoid broken > > > jobs you'll need to approve these patches. > > > > > > If you jobs are failing under fedora-27, please take the time to fix > > > any issue or update said patches to make them non-voting. > > > > > > We (openstack-infra) aim to only keep the latest fedora image online, > > > which changes aprox every 6 months. > > > > > > Thanks for your help and understanding, Paul > > > > > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open > > > > > Greetings, > > > > This is a friendly reminder, about moving jobs to fedora-27. I'd like to remove > > our fedora-26 images next week and if jobs haven't been migrated you may > > start to see NODE_FAILURE messages while running jobs. Please take a > > moment to merge the open changes or update them to be non-voting while > > you work on fixes. > > > > Thanks again, > > Paul > > > > ______________________________________________________________ > > ____________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev- > > request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aj at suse.com Wed Mar 14 19:39:19 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 14 Mar 2018 20:39:19 +0100 Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints Message-ID: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> We now have neutron and horizon in global-requirements and do not need to install them anymore with tools/tox_install.sh. This allows to simplify our jobs and testing. Unfortunately, the merging caused now the projects that install neutron and horizon via tools/tox_install to break with constraints. I'm currently pushing up changes for these using topic tox-siblings [1]. Please merge those - and if you're pushing up changes yourself, let's use the same topic. Sorry for the breakage ;( Andreas [1] Links https://review.openstack.org/#/q/topic:tox-siblings+(status:open+OR+status:merged) -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From aj at suse.com Wed Mar 14 19:46:43 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 14 Mar 2018 20:46:43 +0100 Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints In-Reply-To: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> Message-ID: <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> On 2018-03-14 20:39, Andreas Jaeger wrote: > We now have neutron and horizon in global-requirements and do not need > to install them anymore with tools/tox_install.sh. > > This allows to simplify our jobs and testing. > > Unfortunately, the merging caused now the projects that install neutron > and horizon via tools/tox_install to break with constraints. > > I'm currently pushing up changes for these using topic tox-siblings [1]. > > Please merge those - and if you're pushing up changes yourself, let's > use the same topic. > > Sorry for the breakage ;( > Andreas > > [1] Links > https://review.openstack.org/#/q/topic:tox-siblings+(status:open+OR+status:merged) > Note that thanks to the tox-siblings feature, we really continue to install neutron and horizon from git - and not use the versions in the global-requirements constraints file, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From pabelanger at redhat.com Wed Mar 14 20:44:07 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 14 Mar 2018 16:44:07 -0400 Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27 In-Reply-To: <20180314192040.GA22694@localhost.localdomain> References: <20180305234513.GA26473@localhost.localdomain> <20180313145426.GA14285@localhost.localdomain> <859c08e739614d2b89ac44087e6df8fa@G07SGEXCMSGPS06.g07.fujitsu.local> <20180314192040.GA22694@localhost.localdomain> Message-ID: <20180314204407.GA31026@localhost.localdomain> On Wed, Mar 14, 2018 at 03:20:40PM -0400, Paul Belanger wrote: > On Wed, Mar 14, 2018 at 03:53:59AM +0000, namnh at vn.fujitsu.com wrote: > > Hello Paul, > > > > I am Nam from Barbican team. I would like to notify a problem when using fedora-27. > > > > Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in this version and it is the main reason for failure Barbican database upgrading [1], the bug was fixed at 10.2.13 [2]. Would you mind updating the version of mariadb before removing fedora-26. > > > > [1] https://bugs.launchpad.net/barbican/+bug/1734329 > > [2] https://jira.mariadb.org/browse/MDEV-13508 > > > Looking at https://apps.fedoraproject.org/packages/mariadb seems 10.2.13 has > already been updated. Let me recheck the patch and see if it will use the newer > version. > Okay, it looks like our AFS mirrors for fedora our out of sync, I've proposed a patch to fix that[3]. Once landed, I'll recheck the job. 
[3] https://review.openstack.org/553052 > > Thanks, > > Nam > > > > > -----Original Message----- > > > From: Paul Belanger [mailto:pabelanger at redhat.com] > > > Sent: Tuesday, March 13, 2018 9:54 PM > > > To: openstack-dev at lists.openstack.org > > > Subject: Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from > > > fedora-26 to fedora-27 > > > > > > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote: > > > > Greetings, > > > > > > > > A quick search of git shows your projects are using fedora-26 nodes for > > > testing. > > > > Please take a moment to look at gerrit[1] and help land patches. We'd > > > > like to remove fedora-26 nodes in the next week and to avoid broken > > > > jobs you'll need to approve these patches. > > > > > > > > If you jobs are failing under fedora-27, please take the time to fix > > > > any issue or update said patches to make them non-voting. > > > > > > > > We (openstack-infra) aim to only keep the latest fedora image online, > > > > which changes aprox every 6 months. > > > > > > > > Thanks for your help and understanding, Paul > > > > > > > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open > > > > > > > Greetings, > > > > > > This is a friendly reminder, about moving jobs to fedora-27. I'd like to remove > > > our fedora-26 images next week and if jobs haven't been migrated you may > > > start to see NODE_FAILURE messages while running jobs. Please take a > > > moment to merge the open changes or update them to be non-voting while > > > you work on fixes. > > > > > > Thanks again, > > > Paul > > > > > > ______________________________________________________________ > > > ____________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev- > > > request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tony at bakeyournoodle.com Wed Mar 14 21:20:03 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 15 Mar 2018 08:20:03 +1100 Subject: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol Message-ID: <20180314212003.GC25428@thor.bakeyournoodle.com> Hi all, JP has asked me to to work with infra to tag the newton branches of the following repos as EOL: openstack/ansible-hardening openstack/openstack-ansible-apt_package_pinning openstack/openstack-ansible-ceph_client openstack/openstack-ansible-galera_client openstack/openstack-ansible-galera_server openstack/openstack-ansible-haproxy_server openstack/openstack-ansible-lxc_container_create openstack/openstack-ansible-lxc_hosts openstack/openstack-ansible-memcached_server openstack/openstack-ansible-nspawn_container_create openstack/openstack-ansible-nspawn_hosts openstack/openstack-ansible-openstack_hosts openstack/openstack-ansible-openstack_openrc openstack/openstack-ansible-os_aodh openstack/openstack-ansible-os_barbican openstack/openstack-ansible-os_ceilometer 
openstack/openstack-ansible-os_cinder openstack/openstack-ansible-os_designate openstack/openstack-ansible-os_glance openstack/openstack-ansible-os_gnocchi openstack/openstack-ansible-os_heat openstack/openstack-ansible-os_horizon openstack/openstack-ansible-os_ironic openstack/openstack-ansible-os_keystone openstack/openstack-ansible-os_magnum openstack/openstack-ansible-os_molteniron openstack/openstack-ansible-os_neutron openstack/openstack-ansible-os_nova openstack/openstack-ansible-os_octavia openstack/openstack-ansible-os_panko openstack/openstack-ansible-os_rally openstack/openstack-ansible-os_sahara openstack/openstack-ansible-os_swift openstack/openstack-ansible-os_tacker openstack/openstack-ansible-os_tempest openstack/openstack-ansible-os_trove openstack/openstack-ansible-pip_install openstack/openstack-ansible-pip_lock_down openstack/openstack-ansible-plugins openstack/openstack-ansible-rabbitmq_server openstack/openstack-ansible-repo_build openstack/openstack-ansible-repo_server openstack/openstack-ansible-rsyslog_client openstack/openstack-ansible-rsyslog_server openstack/openstack-ansible-security openstack/openstack-ansible-ops openstack/openstack-ansible-os_almanach openstack/openstack-ansible-os_cloudkitty openstack/openstack-ansible-os_congress openstack/openstack-ansible-os_freezer openstack/openstack-ansible-os_karbor openstack/openstack-ansible-os_monasca openstack/openstack-ansible-os_monasca-agent openstack/openstack-ansible-os_monasca-ui openstack/openstack-ansible-os_searchlight openstack/openstack-ansible-os_watcher openstack/openstack-ansible-os_zaqar openstack/openstack-ansible-specs openstack/openstack-ansible-tests I'll process that this week after getting an ACK from JP Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From corvus at inaugust.com Wed Mar 14 21:36:08 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 14 Mar 2018 14:36:08 -0700 Subject: [openstack-dev] Zuul flaw in json logging Message-ID: <87r2oml4qf.fsf@meyer.lemoncheese.net> Hi, If your project is using secrets in Zuul v3, please see the attached message to determine whether they may have been disclosed. OpenStack's Zuul is now running with the referenced fix in place, and we have verified that the secrets used in the project-config repo (eg, to upload logs and artifacts) were not subject to disclosure. -Jim -------------- next part -------------- An embedded message was scrubbed... From: Subject: Zuul flaw in json logging Date: Wed, 14 Mar 2018 18:44:36 +0000 Size: 12190 URL: From jean-philippe at evrard.me Wed Mar 14 21:40:33 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 14 Mar 2018 21:40:33 +0000 Subject: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol In-Reply-To: <20180314212003.GC25428@thor.bakeyournoodle.com> References: <20180314212003.GC25428@thor.bakeyournoodle.com> Message-ID: Hello folks, The list is almost perfect: you can do all of those except openstack/openstack-ansible-tests. I'd like to phase out openstack/openstack-ansible-tests and openstack/openstack-ansible later. 
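For readers who have not watched an EOL run before, the per-repository mechanics boil down to something like this (a sketch of the usual procedure, not the literal eol_branch.sh that infra runs):

    # Tag the tip of the stable branch so its history stays reachable
    # after the branch itself is removed.
    git fetch origin stable/newton
    git tag -s newton-eol -m "EOL of stable/newton" FETCH_HEAD
    git push gerrit newton-eol

    # Afterwards a Gerrit admin deletes the stable/newton branch and
    # abandons any reviews still open against it.

Nothing about the tag blocks a later revival of the branch; it just marks the last supported commit.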
JP On 14 March 2018 at 21:20, Tony Breeds wrote: > Hi all, > JP has asked me to to work with infra to tag the newton branches of > the following repos as EOL: > > > openstack/ansible-hardening > openstack/openstack-ansible-apt_package_pinning > openstack/openstack-ansible-ceph_client > openstack/openstack-ansible-galera_client > openstack/openstack-ansible-galera_server > openstack/openstack-ansible-haproxy_server > openstack/openstack-ansible-lxc_container_create > openstack/openstack-ansible-lxc_hosts > openstack/openstack-ansible-memcached_server > openstack/openstack-ansible-nspawn_container_create > openstack/openstack-ansible-nspawn_hosts > openstack/openstack-ansible-openstack_hosts > openstack/openstack-ansible-openstack_openrc > openstack/openstack-ansible-os_aodh > openstack/openstack-ansible-os_barbican > openstack/openstack-ansible-os_ceilometer > openstack/openstack-ansible-os_cinder > openstack/openstack-ansible-os_designate > openstack/openstack-ansible-os_glance > openstack/openstack-ansible-os_gnocchi > openstack/openstack-ansible-os_heat > openstack/openstack-ansible-os_horizon > openstack/openstack-ansible-os_ironic > openstack/openstack-ansible-os_keystone > openstack/openstack-ansible-os_magnum > openstack/openstack-ansible-os_molteniron > openstack/openstack-ansible-os_neutron > openstack/openstack-ansible-os_nova > openstack/openstack-ansible-os_octavia > openstack/openstack-ansible-os_panko > openstack/openstack-ansible-os_rally > openstack/openstack-ansible-os_sahara > openstack/openstack-ansible-os_swift > openstack/openstack-ansible-os_tacker > openstack/openstack-ansible-os_tempest > openstack/openstack-ansible-os_trove > openstack/openstack-ansible-pip_install > openstack/openstack-ansible-pip_lock_down > openstack/openstack-ansible-plugins > openstack/openstack-ansible-rabbitmq_server > openstack/openstack-ansible-repo_build > openstack/openstack-ansible-repo_server > openstack/openstack-ansible-rsyslog_client > openstack/openstack-ansible-rsyslog_server > openstack/openstack-ansible-security > openstack/openstack-ansible-ops > openstack/openstack-ansible-os_almanach > openstack/openstack-ansible-os_cloudkitty > openstack/openstack-ansible-os_congress > openstack/openstack-ansible-os_freezer > openstack/openstack-ansible-os_karbor > openstack/openstack-ansible-os_monasca > openstack/openstack-ansible-os_monasca-agent > openstack/openstack-ansible-os_monasca-ui > openstack/openstack-ansible-os_searchlight > openstack/openstack-ansible-os_watcher > openstack/openstack-ansible-os_zaqar > openstack/openstack-ansible-specs > openstack/openstack-ansible-tests > > I'll process that this week after getting an ACK from JP > > > Yours Tony. 
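(As an aside, a quick way to confirm whether a given repo still carries the branch at all, purely illustrative and not part of the official tooling:

    # Prints a ref line only if the remote still has stable/newton.
    git ls-remote --heads \
        https://git.openstack.org/openstack/openstack-ansible-os_cinder \
        stable/newton

Empty output means there is nothing left to tag.)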
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From aj at suse.com Wed Mar 14 21:56:08 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 14 Mar 2018 22:56:08 +0100 Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints In-Reply-To: <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> Message-ID: <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> On 2018-03-14 20:46, Andreas Jaeger wrote: > On 2018-03-14 20:39, Andreas Jaeger wrote: >> We now have neutron and horizon in global-requirements and do not need >> to install them anymore with tools/tox_install.sh. >> >> This allows to simplify our jobs and testing. >> >> Unfortunately, the merging caused now the projects that install neutron >> and horizon via tools/tox_install to break with constraints. >> >> I'm currently pushing up changes for these using topic tox-siblings [1]. >> >> Please merge those - and if you're pushing up changes yourself, let's >> use the same topic. >> >> Sorry for the breakage ;( >> Andreas >> >> [1] Links >> https://review.openstack.org/#/q/topic:tox-siblings+(status:open+OR+status:merged) >> > > Note that thanks to the tox-siblings feature, we really continue to > install neutron and horizon from git - and not use the versions in the > global-requirements constraints file, Btw. this work is part of what Monty proposed in http://lists.openstack.org/pipermail/openstack-dev/2017-November/124676.html Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From lhinds at redhat.com Wed Mar 14 21:56:33 2018 From: lhinds at redhat.com (Luke Hinds) Date: Wed, 14 Mar 2018 21:56:33 +0000 Subject: [openstack-dev] [security] Tomorrow's meeting and LCOO In-Reply-To: References: Message-ID: Sounds great, thanks Gage! I will try to catch up with you on Friday! On Wed, Mar 14, 2018 at 6:51 PM, Gage Hugo wrote: > Hey Luke, > > I can chair the meeting tomorrow if that works. > > I will also ping eeiden about getting some LCOO discussion going as well. > > On Wed, Mar 14, 2018 at 1:35 PM, Luke Hinds wrote: > >> Hello, >> >> Something has come up that determines I won't be able to attend the >> meeting tomorrow and more importantly chair it. >> >> However I would not want to be a bottleneck to good progress underway. >> >> If someone would like to step up and chair for just this meeting, the >> agenda is below: >> >> https://etherpad.openstack.org/p/security-agenda >> >> Also keep in mind, we now meet in #openstack-meeting at 15:00, instead of >> 17:00. >> >> If not, we will defer and meet the week after. >> >> Last point, someone called eeiden pinged me on IRC, but have since logged >> out. They LCOO has an interest in working with the security SIG, which is >> most welcome. If anyone knows eeiden, can you ask him / her to contact us >> on this list and we can get initial discussions going and hopefully bring >> them into the meeting too. 
>> >> Cheers, >> >> Luke >> > > -- Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Wed Mar 14 22:16:11 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 15 Mar 2018 07:16:11 +0900 Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints In-Reply-To: <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> Message-ID: The current version of proposed patches which drops tox_install.sh works in our CI. Even if we have neutron>=12.0.0 (queens) or horizon>=13.0.0 (queens), if we have "required-projects" in zuul v3 config, tox-sibling role ensures to install the latest master of neutron/horizon. It is okay in our CI. On the other hand, this change brings a couple of problems. I think it is worth discussed broadly here. (1) it makes difficult to run tests in local environment We have only released version of neutron/horizon on PyPI. It means PyPI version (i.e. queens) is installed when we run tox in our local development. Most neutron stadium projects and horizon plugins depends on the latest master. Test run in local environment will be broken. We need to install the latest neutron/horizon manually. This confuses most developers. We need to ensure that tox can run successfully in a same manner in our CI and local environments. (2) neutron/horizon version in requirements.txt is confusing In the cycle-with-milestone model, a new version of neutron/horizon will be released only when a release is shipped. The code depends on the latest master, but requirements.txt says it depends on queens or later. It sounds confusing. Thanks, Akihiro 2018-03-15 6:56 GMT+09:00 Andreas Jaeger : > On 2018-03-14 20:46, Andreas Jaeger wrote: >> On 2018-03-14 20:39, Andreas Jaeger wrote: >>> We now have neutron and horizon in global-requirements and do not need >>> to install them anymore with tools/tox_install.sh. >>> >>> This allows to simplify our jobs and testing. >>> >>> Unfortunately, the merging caused now the projects that install neutron >>> and horizon via tools/tox_install to break with constraints. >>> >>> I'm currently pushing up changes for these using topic tox-siblings [1]. >>> >>> Please merge those - and if you're pushing up changes yourself, let's >>> use the same topic. >>> >>> Sorry for the breakage ;( >>> Andreas >>> >>> [1] Links >>> https://review.openstack.org/#/q/topic:tox-siblings+(status:open+OR+status:merged) >>> >> >> Note that thanks to the tox-siblings feature, we really continue to >> install neutron and horizon from git - and not use the versions in the >> global-requirements constraints file, > > Btw. this work is part of what Monty proposed in > http://lists.openstack.org/pipermail/openstack-dev/2017-November/124676.html > > Andreas > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Wed Mar 14 22:25:49 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 14 Mar 2018 22:25:49 +0000 (GMT) Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints In-Reply-To: References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> Message-ID: On Thu, 15 Mar 2018, Akihiro Motoki wrote: > (1) it makes difficult to run tests in local environment > We have only released version of neutron/horizon on PyPI. It means > PyPI version (i.e. queens) is installed when we run tox in our local > development. Most neutron stadium projects and horizon plugins depends > on the latest master. Test run in local environment will be broken. We > need to install the latest neutron/horizon manually. This confuses > most developers. We need to ensure that tox can run successfully in a > same manner in our CI and local environments. Assuming that ^ is actually the case then: This sounds like a really critical issue. We need to be really careful about automating the human out of the equation to the point where people are submitting broken code just so they can get a good test run. That's not great if we'd like to encourage various forms of TDD and the like and we also happen to have a limited supply of CI resources. (Which is not to say that tox-siblings isn't an awesome feature. I hadn't really known about it until today and it's a great thing.) -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From pabelanger at redhat.com Wed Mar 14 22:36:00 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 14 Mar 2018 18:36:00 -0400 Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27 In-Reply-To: <20180314204407.GA31026@localhost.localdomain> References: <20180305234513.GA26473@localhost.localdomain> <20180313145426.GA14285@localhost.localdomain> <859c08e739614d2b89ac44087e6df8fa@G07SGEXCMSGPS06.g07.fujitsu.local> <20180314192040.GA22694@localhost.localdomain> <20180314204407.GA31026@localhost.localdomain> Message-ID: <20180314223600.GA7757@localhost.localdomain> On Wed, Mar 14, 2018 at 04:44:07PM -0400, Paul Belanger wrote: > On Wed, Mar 14, 2018 at 03:20:40PM -0400, Paul Belanger wrote: > > On Wed, Mar 14, 2018 at 03:53:59AM +0000, namnh at vn.fujitsu.com wrote: > > > Hello Paul, > > > > > > I am Nam from Barbican team. I would like to notify a problem when using fedora-27. > > > > > > Currently, fedora-27 is using mariadb at 10.2.12. But there is a bug in this version and it is the main reason for failure Barbican database upgrading [1], the bug was fixed at 10.2.13 [2]. Would you mind updating the version of mariadb before removing fedora-26. > > > > > > [1] https://bugs.launchpad.net/barbican/+bug/1734329 > > > [2] https://jira.mariadb.org/browse/MDEV-13508 > > > > > Looking at https://apps.fedoraproject.org/packages/mariadb seems 10.2.13 has > > already been updated. 
Let me recheck the patch and see if it will use the newer > > version. > > > Okay, it looks like our AFS mirrors for fedora are out of sync, I've proposed a > patch to fix that[3]. Once landed, I'll recheck the job. > Okay, database looks to be fixed, but there are tests failing[4]. I'll defer back to you to continue work on the migration. [4] http://logs.openstack.org/20/547120/2/check/barbican-dogtag-devstack-functional-fedora-27/4cd64e0/job-output.txt.gz#_2018-03-14_22_29_49_400822 From pabelanger at redhat.com Wed Mar 14 22:41:31 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 14 Mar 2018 18:41:31 -0400 Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints In-Reply-To: References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> Message-ID: <20180314224131.GA7990@localhost.localdomain> On Wed, Mar 14, 2018 at 10:25:49PM +0000, Chris Dent wrote: > On Thu, 15 Mar 2018, Akihiro Motoki wrote: > > > (1) it makes difficult to run tests in local environment > > We have only released version of neutron/horizon on PyPI. It means > > PyPI version (i.e. queens) is installed when we run tox in our local > > development. Most neutron stadium projects and horizon plugins depends > > on the latest master. Test run in local environment will be broken. We > > need to install the latest neutron/horizon manually. This confuses > > most developers. We need to ensure that tox can run successfully in a > > same manner in our CI and local environments. > > Assuming that ^ is actually the case then: > > This sounds like a really critical issue. We need to be really > careful about automating the human out of the equation to the point > where people are submitting broken code just so they can get a good > test run. That's not great if we'd like to encourage various forms > of TDD and the like and we also happen to have a limited supply of > CI resources. > > (Which is not to say that tox-siblings isn't an awesome feature. I > hadn't really known about it until today and it's a great thing.) > If ansible is our interface for developers to use, it shouldn't be difficult to reproduce the environments locally to get master. This does mean changing the developer workflow to use ansible, which I can understand might not be what people want to do. The main reason for removing tox_install.sh is to remove zuul-cloner from our DIB images, as zuulv3 no longer includes this command. Even running that locally would no longer work against git.o.o. I agree, we should see how to make the migration for local developer environments better. Paul From iwienand at redhat.com Wed Mar 14 22:50:09 2018 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 15 Mar 2018 09:50:09 +1100 Subject: [openstack-dev] [infra][all] Anyone using our ubuntu-mariadb mirror? Message-ID: Hello, We discovered an issue with our mariadb package mirroring that suggests it hasn't been updating for some time. This would be packages from http://mirror.X.Y.openstack.org/ubuntu-mariadb/10.<1|2> This was originally added in [1]. AFAICT from codesearch, it is currently unused. We export the top-level directory in the mirror config scripts as NODEPOOL_MARIADB_MIRROR, which is not referenced in any jobs [2], and I couldn't find anything setting up apt repos pointing to it. Thus since it's not updating and nothing seems to reference it, I am going to assume it is unused and remove it next week.
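For anyone who wants to repeat the same check locally, it is roughly the following (the checkout paths here are illustrative; codesearch.openstack.org runs the equivalent query across all repos):

    # Any job definition or script still exporting/consuming the variable?
    grep -r "NODEPOOL_MARIADB_MIRROR" ~/src/project-config ~/src/devstack

    # Anything pointing apt at the mirror path directly?
    grep -r "ubuntu-mariadb" ~/src/project-config

If both come back empty, the mirror really is orphaned.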
If not, please respond and we can organise a fix. -i [1] https://review.openstack.org/#/c/307831/ [2] http://codesearch.openstack.org/?q=NODEPOOL_MARIADB_MIRROR&i=nope&files=&repos= From kennelson11 at gmail.com Wed Mar 14 22:55:15 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 14 Mar 2018 22:55:15 +0000 Subject: [openstack-dev] [reno] moved to storyboard In-Reply-To: <1521039106-sup-1670@lrrr.local> References: <1521039106-sup-1670@lrrr.local> Message-ID: Woot woot! -Kendall (diablo_rojo) On Wed, Mar 14, 2018 at 7:51 AM Doug Hellmann wrote: > The bug tracker for reno has moved to storyboard: > https://storyboard.openstack.org/#!/project/933 > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Mar 14 23:39:27 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 14 Mar 2018 16:39:27 -0700 Subject: [openstack-dev] [nova] Rocky PTG summary - nova/cinder Message-ID: Hello all, Here’s the PTG summary etherpad [0] for the nova/cinder session from the PTG, also included as a plain text export on this email. Cheers, -melanie [0] https://etherpad.openstack.org/p/nova-ptg-rocky-cinder-summary *Nova/Cinder: Rocky PTG Summary https://etherpad.openstack.org/p/nova-ptg-rocky L63 *Key topics * New attach flow fixes and multi-attach * Attach mode * Swap volume with two read/write attachments * SHELVED_OFFLOADED and 'in-use' state in old attach flow * Server multi-create with attaching to the same volume fails * Data migration for old-style attachments * Volume replication for in-use volumes * Object-ifying os-brick connection_info * Formatting blank encrypted volumes during creation on the cinder side * Volume detail show reveals the attached compute hostname for non-admins * Bulk volume create/attach *Agreements and decisions * To handle attach mode for a multi-attach volume to several instances, we will change the compute API to allow the user to pass the attach mode so we can pass it through to cinder * The second attachment is going to be read/write by default and if the user wants read-only, they have to specify it * Spec: https://review.openstack.org/#/c/552078/ * Swap volume with two read/write attachments could definitely corrupt data. However, the cinder API doesn't allow retype/migration of in-use multi-attach volumes, so this isn't a problem right now * It would be reasonable to fix SHELVED_OFFLOADED to leave the volume in 'reserved' state instead of 'in-use', but it's low priority * The bug with server multi-create and multi-attach will be fixed on the cinder side and we'll add a new compute API microversion to leverage the cinder fix * Spec: https://review.openstack.org/#/c/552078/ * We'll migrate old-style attachments on-the-fly when a change is made to a volume, such as a migration. For the rest, we'll migrate old-style attachments on compute startup to new-style attachments * Compute startup data migration patch: https://review.openstack.org/#/c/549130/ * For volume replication of in-use volumes, on the cinder side, we'll need a prototype and spec, and drivers will need to indicate the type of replication and what recovery on the nova side needs to be. 
On the nova side, we'll need a new API microversion for the os-server-external-events change (like extended volume) * Owner: jgriffith * On the possibility of object-ifying connection_info in os-brick, it would be best to defer it until nova/neutron have worked out vif negotiation using os-vif * lyarwood asked to restore https://review.openstack.org/#/c/269867/ * On formatting blank encrypted volumes during creation, it sounded like we had agreement to fix it on the cinder side as they already have code for it. Need to double-check with the cinder team to make sure * For volume detail show revealing the attached compute hostname for non-admins, cinder will make a change to add a policy to not display the compute hostname for non-admins * Note: this doesn't impact nova, but it might impact glance. * On bulk volume create/attach, it will be up to cinder to decide whether they will want to implement bulk create. In nova, we are not going to support bulk attach as that's a job better done by an orchestration system like Heat * Note: Cinder team agreed to not support bulk create: https://wiki.openstack.org/wiki/CinderRockyPTGSummary#Bulk_Volume_Create.2FAttach From juliaashleykreger at gmail.com Wed Mar 14 23:51:14 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 14 Mar 2018 16:51:14 -0700 Subject: [openstack-dev] [tripleo] TLS by default In-Reply-To: <5ecd3cd3-6732-8ad2-c29f-915f9b86c7f1@redhat.com> References: <5ecd3cd3-6732-8ad2-c29f-915f9b86c7f1@redhat.com> Message-ID: On Wed, Mar 14, 2018 at 4:52 AM, Dmitry Tantsur wrote: > Just to clarify: only for public endpoints, right? I don't think e.g. > ironic-python-agent can talk to self-signed certificates yet. > > For what it is worth, it is possible for IPA to speak to a self signed certificate, although it requires injecting the signing private CA certificate into the ramdisk or iso image that is being used. There are a few other options that can be implemented, but those may also lower overall security posture. From kennelson11 at gmail.com Thu Mar 15 00:28:13 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 15 Mar 2018 00:28:13 +0000 Subject: [openstack-dev] [Openstack-sigs] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: Message-ID: We talked about this more during the meeting last night. Most people were pretty neutral on the topic. I personally feel like #openstack-dev is the ideal place to direct people once they get set up on irc and are interested in contributing, but maybe that is because my perception of the chats in #openstack isn't correct. Looking at the definition in the IRC channel list[1] and the channel topic[2] I feel like it more for support questions on running openstack? I dunno. I honestly didn't spend time in the channel really until I joined last night so maybe my perception just needs an update. I am all ears for your reasoning behind #openstack instead of #openstack-dev though too. Please enlighten me :) -Kendall (diablo_rojo) [1] https://wiki.openstack.org/wiki/IRC [2] Openstack Support Channel, Development in #openstack-dev | Wiki: http://wiki.openstack.org/ | Docs: http://docs.openstack.org/ | Answers: https://ask.openstack.org | Logs: http://eavesdrop.openstack.org/irclogs/ | Paste: http://paste.openstack.org/ On Tue, Mar 13, 2018 at 3:30 PM Amy Marrich wrote: > Just one comment on this section before I forget again: > #IRC Channels# > We want to get rid of #openstack-101 and begin using #openstack-dev > instead. 
The 101 channel isn't watched closely enough anymore and it makes > more sense to move onboarding activities (like in OpenStack Upstream > Institute) to a channel where there are people that can answer questions > rather than asking those to move to a new channel. For those concerned > about noise, OUI is run the weekend before the summit when most people are > traveling to the Summit anyway. > > I would recommend sending folks to #openstack vs #openstack-dev by default. > > Amy (spotz) > > On Mon, Mar 5, 2018 at 2:00 PM, Kendall Nelson > wrote: > >> Hello Everyone :) >> >> It was wonderful to see and talk with so many of you last week! For those >> that couldn't attend our whole day of chats or those that couldn't attend >> at all, I thought I would put forth a summary of our discussions which were >> mostly noted in the etherpad[1] >> >> #Contributor Guide# >> >> - Walkthrough: We walked through every section of what exists and came up >> with a variety of improvements on what is there. Most of these items have >> been added to our StoryBoard project[2]. This came up again Tuesday in docs >> sessions and I have added those items to StoryBoard as well. >> >> - Google Analytics: It was discussed we should do something about getting >> the contributor portal[3] to appear higher in Google searches about >> onboarding. Not sure what all this entails. NEEDS AN OWNER IF ANYONE WANTS >> TO VOLUNTEER. >> >> #Mission Statement# >> >> We updated our mission statement[4]! It now states: >> >> To provide a place for new contributors to come for information and >> advice. This group will also analyze and document successful contribution >> models while seeking out and providing information to new members of the >> community. >> >> #Weekly Meeting# >> >> We discussed beginning a weekly meeting- optimized for APAC/Europe and >> settled on 800 UTC in #openstack-meeting on Wednesdays. Proposed here[5]. >> For now I added a section to our wiki for agenda organization[6]. The two >> main items we want to cover on a weekly basis are new contributor patches >> in gerrit and if anything has come up on ask.openstack.org about >> contributors so those will be standing agenda items. >> >> #Forum Session# >> >> We discussed proposing some forum sessions in order to get more >> involvement from operators. Currently, our activities focus on development >> activities and we would like to diversify. When this SIG was first proposed >> we wanted to have two chairs- one to represent developers and one to >> represent operators. We will propose a session or two when the call for >> forum proposals go out (should be today). >> >> #IRC Channels# >> We want to get rid of #openstack-101 and begin using #openstack-dev >> instead. The 101 channel isn't watched closely enough anymore and it makes >> more sense to move onboarding activities (like in OpenStack Upstream >> Institute) to a channel where there are people that can answer questions >> rather than asking those to move to a new channel. For those concerned >> about noise, OUI is run the weekend before the summit when most people are >> traveling to the Summit anyway. >> >> #Ongoing Onboarding Efforts# >> >> - GSOC: Unfortunately we didn't get accepted this year. We will try again >> next year. >> >> - Outreachy: Applications for the next round of interns are due March >> 22nd, 2018 [7]. Decisions will be made by April and then internships run >> May to August. >> >> - WoO Mentoring: The format of mentoring is changing from 1x1 to cohorts >> focused on a single goal. 
If you are interested in helping out, please >> contact me! I NEED HELP :) >> >> - Contributor guide: Please see the above section. >> >> - OpenStack Upstream Institute: It will be run, as usual, the weekend >> before the Summit in Vancouver. Depending on how much progress is made on >> the contributor guide, we will make use of it as opposed to slides like >> previous renditions. There have also been a number of OpenStack Days >> requesting we run it there as well. More details of those to come. >> >> #Project Liaisons# >> >> The list is filling out nicely, but we still need more coverage. If you >> know someone from a project not listed that might be willing to help, >> please reach out to them and get them added to our list [8]. >> >> I thiiiiiink that is just about everything. Hopefully I at least covered >> everything important :) >> >> Thanks Everyone! >> >> - Kendall Nelson (diablo_rojo) >> >> [1] PTG Etherpad https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG >> [2] StoryBoard Tracker https://storyboard.openstack.org/#!/project/913 >> [3] Contributor Portal https://www.openstack.org/community/ >> [4] Mission Statement Update https://review.openstack.org/#/c/548054/ >> [5] Meeting Slot Proposal https://review.openstack.org/#/c/549849/ >> [6] Meeting Agenda >> https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda >> [7] Outreachy https://www.outreachy.org/apply/ >> [8] Project Liaisons >> https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> >> > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Mar 15 00:37:35 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 15 Mar 2018 08:37:35 +0800 Subject: [openstack-dev] [Openstack-sigs] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: Message-ID: Hi folks, Sorry was not be able to make it to the First Contact SIG discussions in Dublin. The only suggestion I have is to have as many project as possible to have a dedicated FC SIG wiki page for localize onboarding support. For example like what we do in Cyborg: https://wiki.openstack.org/wiki/Cyborg/FirstContact I've captured all the discussions of the China Dev Group via a Chinese streaming site and also put the wiki link in the wechat group description, so that new developer know where to go to for a localize support. For communciation tools, per irc I agree converge to #openstack-dev is better than maintaining some independent channels. But also from my experience video conf in native lang is more effective. On Thu, Mar 15, 2018 at 8:28 AM, Kendall Nelson wrote: > We talked about this more during the meeting last night. Most people were > pretty neutral on the topic. > > I personally feel like #openstack-dev is the ideal place to direct people > once they get set up on irc and are interested in contributing, but maybe > that is because my perception of the chats in #openstack isn't correct. > Looking at the definition in the IRC channel list[1] and the channel > topic[2] I feel like it more for support questions on running openstack? I > dunno. 
I honestly didn't spend time in the channel really until I joined > last night so maybe my perception just needs an update. I am all ears for > your reasoning behind #openstack instead of #openstack-dev though too. > Please enlighten me :) > > -Kendall (diablo_rojo) > > [1] https://wiki.openstack.org/wiki/IRC > [2] Openstack Support Channel, Development in #openstack-dev | Wiki: > http://wiki.openstack.org/ | Docs: http://docs.openstack.org/ | Answers: > https://ask.openstack.org | Logs: http://eavesdrop.openstack.org/irclogs/ > | Paste: http://paste.openstack.org/ > > > On Tue, Mar 13, 2018 at 3:30 PM Amy Marrich wrote: > >> Just one comment on this section before I forget again: >> #IRC Channels# >> We want to get rid of #openstack-101 and begin using #openstack-dev >> instead. The 101 channel isn't watched closely enough anymore and it makes >> more sense to move onboarding activities (like in OpenStack Upstream >> Institute) to a channel where there are people that can answer questions >> rather than asking those to move to a new channel. For those concerned >> about noise, OUI is run the weekend before the summit when most people are >> traveling to the Summit anyway. >> >> I would recommend sending folks to #openstack vs #openstack-dev by >> default. >> >> Amy (spotz) >> >> On Mon, Mar 5, 2018 at 2:00 PM, Kendall Nelson >> wrote: >> >>> Hello Everyone :) >>> >>> It was wonderful to see and talk with so many of you last week! For >>> those that couldn't attend our whole day of chats or those that couldn't >>> attend at all, I thought I would put forth a summary of our discussions >>> which were mostly noted in the etherpad[1] >>> >>> #Contributor Guide# >>> >>> - Walkthrough: We walked through every section of what exists and came >>> up with a variety of improvements on what is there. Most of these items >>> have been added to our StoryBoard project[2]. This came up again Tuesday in >>> docs sessions and I have added those items to StoryBoard as well. >>> >>> - Google Analytics: It was discussed we should do something about >>> getting the contributor portal[3] to appear higher in Google searches about >>> onboarding. Not sure what all this entails. NEEDS AN OWNER IF ANYONE WANTS >>> TO VOLUNTEER. >>> >>> #Mission Statement# >>> >>> We updated our mission statement[4]! It now states: >>> >>> To provide a place for new contributors to come for information and >>> advice. This group will also analyze and document successful contribution >>> models while seeking out and providing information to new members of the >>> community. >>> >>> #Weekly Meeting# >>> >>> We discussed beginning a weekly meeting- optimized for APAC/Europe and >>> settled on 800 UTC in #openstack-meeting on Wednesdays. Proposed here[5]. >>> For now I added a section to our wiki for agenda organization[6]. The two >>> main items we want to cover on a weekly basis are new contributor patches >>> in gerrit and if anything has come up on ask.openstack.org about >>> contributors so those will be standing agenda items. >>> >>> #Forum Session# >>> >>> We discussed proposing some forum sessions in order to get more >>> involvement from operators. Currently, our activities focus on development >>> activities and we would like to diversify. When this SIG was first proposed >>> we wanted to have two chairs- one to represent developers and one to >>> represent operators. We will propose a session or two when the call for >>> forum proposals go out (should be today). 
>>> >>> #IRC Channels# >>> We want to get rid of #openstack-101 and begin using #openstack-dev >>> instead. The 101 channel isn't watched closely enough anymore and it makes >>> more sense to move onboarding activities (like in OpenStack Upstream >>> Institute) to a channel where there are people that can answer questions >>> rather than asking those to move to a new channel. For those concerned >>> about noise, OUI is run the weekend before the summit when most people are >>> traveling to the Summit anyway. >>> >>> #Ongoing Onboarding Efforts# >>> >>> - GSOC: Unfortunately we didn't get accepted this year. We will try >>> again next year. >>> >>> - Outreachy: Applications for the next round of interns are due March >>> 22nd, 2018 [7]. Decisions will be made by April and then internships run >>> May to August. >>> >>> - WoO Mentoring: The format of mentoring is changing from 1x1 to cohorts >>> focused on a single goal. If you are interested in helping out, please >>> contact me! I NEED HELP :) >>> >>> - Contributor guide: Please see the above section. >>> >>> - OpenStack Upstream Institute: It will be run, as usual, the weekend >>> before the Summit in Vancouver. Depending on how much progress is made on >>> the contributor guide, we will make use of it as opposed to slides like >>> previous renditions. There have also been a number of OpenStack Days >>> requesting we run it there as well. More details of those to come. >>> >>> #Project Liaisons# >>> >>> The list is filling out nicely, but we still need more coverage. If you >>> know someone from a project not listed that might be willing to help, >>> please reach out to them and get them added to our list [8]. >>> >>> I thiiiiiink that is just about everything. Hopefully I at least covered >>> everything important :) >>> >>> Thanks Everyone! >>> >>> - Kendall Nelson (diablo_rojo) >>> >>> [1] PTG Etherpad https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG >>> [2] StoryBoard Tracker https://storyboard.openstack.org/#!/project/913 >>> [3] Contributor Portal https://www.openstack.org/community/ >>> [4] Mission Statement Update https://review.openstack.org/#/c/548054/ >>> [5] Meeting Slot Proposal https://review.openstack.org/#/c/549849/ >>> [6] Meeting Agenda https://wiki.openstack.org/wiki/First_Contact_SIG# >>> Meeting_Agenda >>> [7] Outreachy https://www.outreachy.org/apply/ >>> [8] Project Liaisons https://wiki.openstack.org/wiki/First_Contact_SIG# >>> Project_Liaisons >>> >>> _______________________________________________ >>> openstack-sigs mailing list >>> openstack-sigs at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>> >>> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Thu Mar 15 00:59:00 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 15 Mar 2018 11:59:00 +1100 Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints In-Reply-To: References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> Message-ID: <20180315005859.GE25428@thor.bakeyournoodle.com> On Thu, Mar 15, 2018 at 07:16:11AM +0900, Akihiro Motoki wrote: > The current version of proposed patches which drops tox_install.sh > works in our CI. Even if we have neutron>=12.0.0 (queens) or > horizon>=13.0.0 (queens), if we have "required-projects" in zuul v3 > config, tox-sibling role ensures to install the latest master of > neutron/horizon. It is okay in our CI. > > On the other hand, this change brings a couple of problems. I think it > is worth discussed broadly here. > > (1) it makes difficult to run tests in local environment > We have only released version of neutron/horizon on PyPI. It means > PyPI version (i.e. queens) is installed when we run tox in our local > development. Most neutron stadium projects and horizon plugins depends > on the latest master. Test run in local environment will be broken. We > need to install the latest neutron/horizon manually. This confuses > most developers. We need to ensure that tox can run successfully in a > same manner in our CI and local environments. This is an issue I agree and one we need to think about but it will be somewhat mitigated for local development by pbr siblings[1] In the short term, developers can do something like: for env in pep8,py35,py27 ; do tox -e $env --notest .tox/$env/bin/pip install -e /path/to/{horizon,neutron} tox -e $env done Which is far from ideal but gives as a little breathing room to decide if we need to revert and try again in a while or persist with the plan as it stands. pbr siblings wont fix all the issues we have and still makes consumption of neutron and horizon (and plugins / stadium projects) difficult outside of test. > (2) neutron/horizon version in requirements.txt is confusing > In the cycle-with-milestone model, a new version of neutron/horizon > will be released only when a release is shipped. > The code depends on the latest master, but requirements.txt says it > depends on queens or later. It sounds confusing. Yes we either need to create a new release-model or switch neutron/horizon to the cycle-with-intermediary model and encourage appropriate releases. I'd really like to avoid publishing daily to pypi. Yours Tony. [1] https://review.openstack.org/#/q/status:open+project:openstack-dev/pbr+branch:master+topic:fix-siblings-entrypoints -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Mar 15 00:57:46 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 15 Mar 2018 09:57:46 +0900 Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream In-Reply-To: References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com> Message-ID: Thanks all for starting the collaboration on this which is long pending things and we all want to have some start on this. Myself and SamP talked about it during OPS meetup in Tokyo and we talked about below draft plan- - Update the Spec - https://review.openstack.org/#/c/443504/. which is almost ready as per SamP and his team is working on that. - Start the technical debate on tooling we can use/reuse like Yardstick etc, which is more this mailing thread. - Accept the new repo for Eris under QA and start at least something in Rocky cycle. I am in for having meeting on this which is really good idea. non-IRC meeting is totally fine here. Do we have meeting place and time setup ? -gmann On Fri, Mar 9, 2018 at 8:16 PM, Bogdan Dobrelya wrote: > On 3/8/18 6:44 PM, Raoul Scarazzini wrote: > >> On 08/03/2018 17:03, Adam Spiers wrote: >> [...] >> >>> Yes agreed again, this is a strong case for collaboration between the >>> self-healing and QA SIGs. In Dublin we also discussed the idea of the >>> self-healing and API SIGs collaborating on the related topic of health >>> check APIs. >>> >> >> Guys, thanks a ton for your involvement in the topic, I am +1 to any >> kind of meeting we can have to discuss this (like it was proposed by >> > > Please count me in as well. I can't stop dreaming of Jepsen's Nemesis [0] > hammering openstack to make it stronger :D > Jokes off, let's do the best to consolidate on frameworks and tools and > ditching NIH syndrome! > > [0] https://github.com/jepsen-io/jepsen/blob/master/jepsen/src/j > epsen/nemesis.clj > > Adam) so I'll offer my bluejeans channel for whatever kind of meeting we >> want to organize. >> About the best practices part Georg was mentioning I'm 100% in >> agreement, the testing methodologies are the first thing we need to care >> about, starting from what we want to achieve. >> That said, I'll keep studying Yardstick. >> >> Hope to hear from you soon, and thanks again! >> >> > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Thu Mar 15 01:11:43 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 15 Mar 2018 12:11:43 +1100 Subject: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol In-Reply-To: References: <20180314212003.GC25428@thor.bakeyournoodle.com> Message-ID: <20180315011132.GF25428@thor.bakeyournoodle.com> On Wed, Mar 14, 2018 at 09:40:33PM +0000, Jean-Philippe Evrard wrote: > Hello folks, > > The list is almost perfect: you can do all of those except > openstack/openstack-ansible-tests. > I'd like to phase out openstack/openstack-ansible-tests and > openstack/openstack-ansible later. Okay excluding the 2 repos above and filtering out projects that don't have newton branches we came down to: # EOL repos belonging to OpenStackAnsible eol_branch.sh -- stable/newton newton-eol \ openstack/ansible-hardening \ openstack/openstack-ansible-apt_package_pinning \ openstack/openstack-ansible-ceph_client \ openstack/openstack-ansible-galera_client \ openstack/openstack-ansible-galera_server \ openstack/openstack-ansible-haproxy_server \ openstack/openstack-ansible-lxc_container_create \ openstack/openstack-ansible-lxc_hosts \ openstack/openstack-ansible-memcached_server \ openstack/openstack-ansible-openstack_hosts \ openstack/openstack-ansible-openstack_openrc \ openstack/openstack-ansible-ops \ openstack/openstack-ansible-os_aodh \ openstack/openstack-ansible-os_ceilometer \ openstack/openstack-ansible-os_cinder \ openstack/openstack-ansible-os_glance \ openstack/openstack-ansible-os_gnocchi \ openstack/openstack-ansible-os_heat \ openstack/openstack-ansible-os_horizon \ openstack/openstack-ansible-os_ironic \ openstack/openstack-ansible-os_keystone \ openstack/openstack-ansible-os_magnum \ openstack/openstack-ansible-os_neutron \ openstack/openstack-ansible-os_nova \ openstack/openstack-ansible-os_rally \ openstack/openstack-ansible-os_sahara \ openstack/openstack-ansible-os_swift \ openstack/openstack-ansible-os_tempest \ openstack/openstack-ansible-pip_install \ openstack/openstack-ansible-plugins \ openstack/openstack-ansible-rabbitmq_server \ openstack/openstack-ansible-repo_build \ openstack/openstack-ansible-repo_server \ openstack/openstack-ansible-rsyslog_client \ openstack/openstack-ansible-rsyslog_server \ openstack/openstack-ansible-security If you confirm I have the list right this time I'll work on this tomorrow Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Mar 15 01:37:29 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 15 Mar 2018 10:37:29 +0900 Subject: [openstack-dev] [Openstack-sigs] [First Contact][SIG] [PTG] Summary of Discussions In-Reply-To: References: Message-ID: Thanks Zhipeng that helps and good idea. If new contributors find that they will have enough information to start. >From FirstContact SIG, we also maintain the Projects Liaisons to have contact person for that project we can redirect the new contributors. idea is to have multiple people from single project to cover all different timezone. I think you already have your name there (+1). ..1 https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons -gmann On Thu, Mar 15, 2018 at 9:37 AM, Zhipeng Huang wrote: > Hi folks, > > Sorry was not be able to make it to the First Contact SIG discussions in > Dublin. 
The only suggestion I have is to have as many project as possible > to have a dedicated FC SIG wiki page for localize onboarding support. > > For example like what we do in Cyborg: https://wiki. > openstack.org/wiki/Cyborg/FirstContact I've captured all the discussions > of the China Dev Group via a Chinese streaming site and also put the wiki > link in the wechat group description, so that new developer know where to > go to for a localize support. > > For communciation tools, per irc I agree converge to #openstack-dev is > better than maintaining some independent channels. But also from my > experience video conf in native lang is more effective. > > On Thu, Mar 15, 2018 at 8:28 AM, Kendall Nelson > wrote: > >> We talked about this more during the meeting last night. Most people were >> pretty neutral on the topic. >> >> I personally feel like #openstack-dev is the ideal place to direct people >> once they get set up on irc and are interested in contributing, but maybe >> that is because my perception of the chats in #openstack isn't correct. >> Looking at the definition in the IRC channel list[1] and the channel >> topic[2] I feel like it more for support questions on running openstack? I >> dunno. I honestly didn't spend time in the channel really until I joined >> last night so maybe my perception just needs an update. I am all ears for >> your reasoning behind #openstack instead of #openstack-dev though too. >> Please enlighten me :) >> >> -Kendall (diablo_rojo) >> >> [1] https://wiki.openstack.org/wiki/IRC >> [2] Openstack Support Channel, Development in #openstack-dev | Wiki: >> http://wiki.openstack.org/ | Docs: http://docs.openstack.org/ | Answers: >> https://ask.openstack.org | Logs: http://eavesdrop.openstack.org/irclogs/ >> | Paste: http://paste.openstack.org/ >> >> >> On Tue, Mar 13, 2018 at 3:30 PM Amy Marrich wrote: >> >>> Just one comment on this section before I forget again: >>> #IRC Channels# >>> We want to get rid of #openstack-101 and begin using #openstack-dev >>> instead. The 101 channel isn't watched closely enough anymore and it makes >>> more sense to move onboarding activities (like in OpenStack Upstream >>> Institute) to a channel where there are people that can answer questions >>> rather than asking those to move to a new channel. For those concerned >>> about noise, OUI is run the weekend before the summit when most people are >>> traveling to the Summit anyway. >>> >>> I would recommend sending folks to #openstack vs #openstack-dev by >>> default. >>> >>> Amy (spotz) >>> >>> On Mon, Mar 5, 2018 at 2:00 PM, Kendall Nelson >>> wrote: >>> >>>> Hello Everyone :) >>>> >>>> It was wonderful to see and talk with so many of you last week! For >>>> those that couldn't attend our whole day of chats or those that couldn't >>>> attend at all, I thought I would put forth a summary of our discussions >>>> which were mostly noted in the etherpad[1] >>>> >>>> #Contributor Guide# >>>> >>>> - Walkthrough: We walked through every section of what exists and came >>>> up with a variety of improvements on what is there. Most of these items >>>> have been added to our StoryBoard project[2]. This came up again Tuesday in >>>> docs sessions and I have added those items to StoryBoard as well. >>>> >>>> - Google Analytics: It was discussed we should do something about >>>> getting the contributor portal[3] to appear higher in Google searches about >>>> onboarding. Not sure what all this entails. NEEDS AN OWNER IF ANYONE WANTS >>>> TO VOLUNTEER. 
>>>> >>>> #Mission Statement# >>>> >>>> We updated our mission statement[4]! It now states: >>>> >>>> To provide a place for new contributors to come for information and >>>> advice. This group will also analyze and document successful contribution >>>> models while seeking out and providing information to new members of the >>>> community. >>>> >>>> #Weekly Meeting# >>>> >>>> We discussed beginning a weekly meeting- optimized for APAC/Europe and >>>> settled on 800 UTC in #openstack-meeting on Wednesdays. Proposed here[5]. >>>> For now I added a section to our wiki for agenda organization[6]. The two >>>> main items we want to cover on a weekly basis are new contributor patches >>>> in gerrit and if anything has come up on ask.openstack.org about >>>> contributors so those will be standing agenda items. >>>> >>>> #Forum Session# >>>> >>>> We discussed proposing some forum sessions in order to get more >>>> involvement from operators. Currently, our activities focus on development >>>> activities and we would like to diversify. When this SIG was first proposed >>>> we wanted to have two chairs- one to represent developers and one to >>>> represent operators. We will propose a session or two when the call for >>>> forum proposals go out (should be today). >>>> >>>> #IRC Channels# >>>> We want to get rid of #openstack-101 and begin using #openstack-dev >>>> instead. The 101 channel isn't watched closely enough anymore and it makes >>>> more sense to move onboarding activities (like in OpenStack Upstream >>>> Institute) to a channel where there are people that can answer questions >>>> rather than asking those to move to a new channel. For those concerned >>>> about noise, OUI is run the weekend before the summit when most people are >>>> traveling to the Summit anyway. >>>> >>>> #Ongoing Onboarding Efforts# >>>> >>>> - GSOC: Unfortunately we didn't get accepted this year. We will try >>>> again next year. >>>> >>>> - Outreachy: Applications for the next round of interns are due March >>>> 22nd, 2018 [7]. Decisions will be made by April and then internships run >>>> May to August. >>>> >>>> - WoO Mentoring: The format of mentoring is changing from 1x1 to >>>> cohorts focused on a single goal. If you are interested in helping out, >>>> please contact me! I NEED HELP :) >>>> >>>> - Contributor guide: Please see the above section. >>>> >>>> - OpenStack Upstream Institute: It will be run, as usual, the weekend >>>> before the Summit in Vancouver. Depending on how much progress is made on >>>> the contributor guide, we will make use of it as opposed to slides like >>>> previous renditions. There have also been a number of OpenStack Days >>>> requesting we run it there as well. More details of those to come. >>>> >>>> #Project Liaisons# >>>> >>>> The list is filling out nicely, but we still need more coverage. If you >>>> know someone from a project not listed that might be willing to help, >>>> please reach out to them and get them added to our list [8]. >>>> >>>> I thiiiiiink that is just about everything. Hopefully I at least >>>> covered everything important :) >>>> >>>> Thanks Everyone! 
>>>> >>>> - Kendall Nelson (diablo_rojo) >>>> >>>> [1] PTG Etherpad https://etherpad.openstack.org/p/FC_SIG_Rocky_PTG >>>> [2] StoryBoard Tracker https://storyboard.openstack.org/#!/project/913 >>>> [3] Contributor Portal https://www.openstack.org/community/ >>>> [4] Mission Statement Update https://review.openstack.org/#/c/548054/ >>>> [5] Meeting Slot Proposal https://review.openstack.org/#/c/549849/ >>>> [6] Meeting Agenda https://wiki.openstack.org/wik >>>> i/First_Contact_SIG#Meeting_Agenda >>>> [7] Outreachy https://www.outreachy.org/apply/ >>>> [8] Project Liaisons https://wiki.openstack.org/wik >>>> i/First_Contact_SIG#Project_Liaisons >>>> >>>> _______________________________________________ >>>> openstack-sigs mailing list >>>> openstack-sigs at lists.openstack.org >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>>> >>>> >>> _______________________________________________ >>> openstack-sigs mailing list >>> openstack-sigs at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >>> >> >> _______________________________________________ >> openstack-sigs mailing list >> openstack-sigs at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs >> >> > > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Thu Mar 15 01:54:59 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Thu, 15 Mar 2018 09:54:59 +0800 Subject: [openstack-dev] [nova] Rocky PTG summary - cells In-Reply-To: References: <6C2A4F0E-BDEC-4036-B1A1-CCEA8385AEF9@gmail.com> Message-ID: Thanks for the recap, got one question for the "block creation": * An attempt to create an instance should be blocked if the project has > instances in a "down" cell (the instance_mappings table has a "project_id" > column) because we cannot count instances in "down" cells for the quota > check. Since users are not aware of any cell information, and the cells are mostly randomly selected, there could be high possibility that users(projects) instances are equally spreaded across cells. The proposed behavior seems can easily cause a lot of users couldn't create instances because one of the cells is down, isn't it too rude? BR, Kevin Zheng On Thu, Mar 15, 2018 at 2:26 AM, Chris Dent wrote: > On Wed, 14 Mar 2018, melanie witt wrote: > > I’ve created a summary etherpad [0] for the nova cells session from the >> PTG and included a plain text export of it on this email. >> > > Nice summary. Apparently I wasn't there or paying attention when > something was decided: > > * An attempt to delete an instance in a "down" cell should result in a >> 500 or 503 error. 
>>
>
> Depending on how we look at it, this doesn't really align with what
> 500 or 503 are supposed to be used for. They are supposed to indicate
> that the web server is broken in some fashion: 500 being an
> unexpected and uncaught exception in the web server, 503 that the
> web server is either overloaded or down for maintenance.
>
> So, you could argue that 409 is the right thing here (as seems to
> always happen when we discuss these things). You send a DELETE to
> kill the instance, but the current state of the instance is "on a
> cell that can't be reached" which is in "conflict" with the state
> required to do a DELETE.
>
> If a 5xx is really necessary, for whatever reason, then 503 is a
> better choice than 500 because it at least signals that the broken
> thing is sort of "over there somewhere" rather than the web server
> having an error (which is what 500 is supposed to mean).
>
> --
> Chris Dent ٩◔̯◔۶ https://anticdent.org/
> freenode: cdent tw: @anticdent
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ed at leafe.com Thu Mar 15 01:57:40 2018
From: ed at leafe.com (Ed Leafe)
Date: Wed, 14 Mar 2018 20:57:40 -0500
Subject: [openstack-dev] Fwd: 2.7 EOL = 2020 January 1
References: Message-ID: <064FE7F5-4A51-47E8-B010-FD061C5C11AE@leafe.com>

Just FYI to all the people who don't prioritize moving our code to Python3:

> Begin forwarded message:
>
> From: Terry Reedy
> Subject: 2.7 EOL = 2020 January 1
> Date: March 13, 2018 at 9:58:42 AM CDT
> To: python-list at python.org
>
> On March 10, on thread "Python 2.7 -- bugfix or security before EOL?", Guido van Rossum wrote
>
> "The way I see the situation for 2.7 is that EOL is January 1st, 2020, and there will be no updates, not even source-only security patches, after that date. Support (from the core devs, the PSF, and python.org) stops completely on that date. If you want support for 2.7 beyond that day you will have to pay a commercial vendor. Of course it's open source so people are also welcome to fork it. But the core devs have toiled long enough, and the 2020 EOL date (an extension from the originally announced 2015 EOL!) was announced with sufficient lead time and fanfare that I don't feel bad about stopping to support it at all."
>
> Two days later, Benjamin Peterson, the 2.7 release manager, replied "Sounds good to me. I've updated the PEP to say 2.7 is completely dead on Jan 1 2020." adding "The final release may not literally be on January 1st".
>
> https://www.python.org/dev/peps/pep-0373/ now says
> "2.7 will receive bugfix support until January 1, 2020."
>
> --
> Terry Jan Reedy
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>

-- Ed Leafe

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From melwittt at gmail.com Thu Mar 15 02:29:33 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 14 Mar 2018 19:29:33 -0700 Subject: [openstack-dev] [nova] Rocky PTG summary - cells In-Reply-To: References: <6C2A4F0E-BDEC-4036-B1A1-CCEA8385AEF9@gmail.com> Message-ID: On Thu, 15 Mar 2018 09:54:59 +0800, Zhenyu Zheng wrote: > Thanks for the recap, got one question for the "block creation": > > * An attempt to create an instance should be blocked if the project > has instances in a "down" cell (the instance_mappings table has a > "project_id" column) because we cannot count instances in "down" > cells for the quota check. > > > Since users are not aware of any cell information, and the cells are > mostly randomly selected, there could be high possibility that > users(projects) instances are equally spreaded across cells. The > proposed behavior seems can > easily cause a lot of users couldn't create instances because one of the > cells is down, isn't it too rude? To be honest, I share your concern. I had planned to change quota checks to use placement instead of reading cell databases ASAP but hit a snag where we won't be able to count instances from placement because we can't determine the "type" of an allocation. Allocations can be instances, or network-related resources, or volume-related resources, etc. Adding the concept of an allocation "type" in placement has been a controversial discussion so far. BUT ... we also said we would add a column like "queued_for_delete" to the instance_mappings table. If we do that, we could count instances from the instance_mappings table in the API database and count cores/ram from placement and no longer rely on reading cell databases for quota checks. Although, there is one more wrinkle: instance_mappings has a project_id column but does not have a user_id column, so we wouldn't be able to get a count by project + user needed for the quota check against user quota. So, if people would not be opposed, we could also add a "user_id" column to instance_mappings to handle that case. I would prefer not to block instance creations because of "down" cells, so maybe there is some possibility to avoid it if we can get "queued_for_delete" and "user_id" columns added to the instance_mappings table. -melanie From doug at doughellmann.com Thu Mar 15 03:42:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 14 Mar 2018 23:42:12 -0400 Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints In-Reply-To: <20180315005859.GE25428@thor.bakeyournoodle.com> References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> <20180315005859.GE25428@thor.bakeyournoodle.com> Message-ID: <1521084457-sup-2477@lrrr.local> Excerpts from Tony Breeds's message of 2018-03-15 11:59:00 +1100: > On Thu, Mar 15, 2018 at 07:16:11AM +0900, Akihiro Motoki wrote: > > The current version of proposed patches which drops tox_install.sh > > works in our CI. Even if we have neutron>=12.0.0 (queens) or > > horizon>=13.0.0 (queens), if we have "required-projects" in zuul v3 > > config, tox-sibling role ensures to install the latest master of > > neutron/horizon. It is okay in our CI. > > > > On the other hand, this change brings a couple of problems. I think it > > is worth discussed broadly here. > > > > (1) it makes difficult to run tests in local environment > > We have only released version of neutron/horizon on PyPI. 
It means
> > PyPI version (i.e. queens) is installed when we run tox in our local
> > development. Most neutron stadium projects and horizon plugins depend
> > on the latest master. Test runs in local environments will be broken. We
> > need to install the latest neutron/horizon manually. This confuses
> > most developers. We need to ensure that tox can run successfully in the
> > same manner in our CI and local environments.

This is an issue, I agree, and one we need to think about, but it will be
somewhat mitigated for local development by pbr siblings[1]

In the short term, developers can do something like:

for env in pep8 py35 py27 ; do
  tox -e $env --notest
  .tox/$env/bin/pip install -e /path/to/{horizon,neutron}
  tox -e $env
done

Which is far from ideal but gives us a little breathing room to decide
if we need to revert and try again in a while or persist with the plan
as it stands.

pbr siblings won't fix all the issues we have and still make consumption of
neutron and horizon (and plugins / stadium projects) difficult outside
of test.

> > (2) neutron/horizon version in requirements.txt is confusing
> > In the cycle-with-milestone model, a new version of neutron/horizon
> > will be released only when a release is shipped.
> > The code depends on the latest master, but requirements.txt says it
> > depends on queens or later. It sounds confusing.

Yes we either need to create a new release-model or switch
neutron/horizon to the cycle-with-intermediary model and encourage
appropriate releases.

I'd really like to avoid publishing daily to pypi.

We keep doing lots of infra-related work to make it "easy" to do
things we shouldn't be doing in the first place when it comes to
managing dependencies. There are three ways to address the issue
with horizon and neutron, and none of them involve adding features
to pbr.

1. Things that are being used like libraries need to release like
libraries. Real releases. With appropriate version numbers. So
that other things that depend on them can express valid
dependencies.

2. Extract the relevant code into libraries and release *those*.

3. Things that are not stable enough to be treated as a library
shouldn't be used that way. Move the things that use the application
code as library code back into the repo with the thing that they
are tied to but that we don't want to (or can't) treat like a
library.

Let's stop making things hard on ourselves and start managing this
code properly.

Doug

From doug at doughellmann.com Thu Mar 15 03:48:28 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 14 Mar 2018 23:48:28 -0400
Subject: [openstack-dev] [reno] moved to storyboard
In-Reply-To: References: <1521039106-sup-1670@lrrr.local>
Message-ID: <1521085633-sup-1365@lrrr.local>

I was remiss in not thanking fungi for his help with the move, and
diablo_rojo for preparing the docs explaining the rest of the steps I
needed to take afterwards. Thank you both!

Excerpts from Kendall Nelson's message of 2018-03-14 22:55:15 +0000:
> Woot woot!
>
> -Kendall (diablo_rojo)
>
> On Wed, Mar 14, 2018 at 7:51 AM Doug Hellmann wrote:
>
> > The bug tracker for reno has moved to storyboard:
> > https://storyboard.openstack.org/#!/project/933
> >
> > Doug
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

From 935540343 at qq.com Thu Mar 15 04:03:45 2018
From: 935540343 at qq.com (=?ISO-8859-1?B?X18gbWFuZ28u?=)
Date: Thu, 15 Mar 2018 12:03:45 +0800
Subject: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.
Message-ID: 

Hi, I have a question about the Keystone validation of gnocchi. I run the
following command, but it is not successful. (With api.auth_mode set to
basic, basic mode does succeed.)

# gnocchi status --debug
REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept: application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12"
Starting new HTTP connection (1): localhost
http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
RESP: [401] Content-Type: application/json Content-Length: 114 WWW-Authenticate: Keystone uri='http://192.168.12.244:5000/v3' Connection: Keep-Alive
RESP BODY: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
The request you have made requires authentication. (HTTP 401)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 113, in run
    column_names, data = self.take_action(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/status_cli.py", line 23, in take_action
    status = utils.get_client(self).status.get()
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/status.py", line 21, in get
    return self._get(self.url + '?details=%s' % details).json()
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/base.py", line 37, in _get
    return self.client.api.get(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/client.py", line 52, in request
    raise exceptions.from_response(resp, method)
Unauthorized: The request you have made requires authentication.
(HTTP 401)
Traceback (most recent call last):
  File "/usr/local/bin/gnocchi", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/shell.py", line 251, in main
    return GnocchiShell().run(args)
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 279, in run
    result = self.run_subcommand(remainder)
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 113, in run
    column_names, data = self.take_action(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/status_cli.py", line 23, in take_action
    status = utils.get_client(self).status.get()
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/status.py", line 21, in get
    return self._get(self.url + '?details=%s' % details).json()
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/base.py", line 37, in _get
    return self.client.api.get(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/client.py", line 52, in request
    raise exceptions.from_response(resp, method)
gnocchiclient.exceptions.Unauthorized: The request you have made requires authentication. (HTTP 401)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aakashkt0 at gmail.com Thu Mar 15 06:04:08 2018
From: aakashkt0 at gmail.com (Aakash Kt)
Date: Thu, 15 Mar 2018 11:34:08 +0530
Subject: [openstack-dev] [openstack][charms] Openstack + OVN
In-Reply-To: References: Message-ID: 

Hi James,

Just a small reminder that I have pushed a patch for review, according to
the changes you suggested :-)

Thanks,
Aakash

On Mon, Mar 12, 2018 at 2:38 PM, James Page wrote:
> Hi Aakash
>
> On Sun, 11 Mar 2018 at 19:01 Aakash Kt wrote:
>
>> Hi,
>>
>> I had previously put in a mail about the development of the openstack-ovn
>> charm. Sorry it took me this long to get back, I was involved in other
>> projects.
>>
>> I have submitted a charm spec for the above charm.
>> Here is the review link: https://review.openstack.org/#/c/551800/
>>
>> Please look into it and we can further discuss how to proceed.
>>
>
> I'll feedback directly on the review.
>
> Thanks!
>
> James
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jichenjc at cn.ibm.com Thu Mar 15 07:50:16 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Thu, 15 Mar 2018 15:50:16 +0800
Subject: [openstack-dev] [nova][libvirt] question on max cpu and memory
Message-ID: 

In order to work on [1] and prove that libvirt is OK to do live-resize, I
want to make the following changes to the XML file of an instance, to set
the maximum memory to 1G with the current memory at 512M, and the current
CPU count to 1 while the maximum CPU count is 2, so that we can hot-resize
through the libvirt interface. The question I have is whether this is OK,
and whether the current CPU count (1 in this case) being inconsistent with
the CPU topology leads to any problem? And are there some
considerations/limitations?

I don't have much experience here, so any comments are appreciated.

    <maxMemory unit='KiB'>1048576</maxMemory>
    <memory unit='KiB'>524288</memory>
    <vcpu current='1'>2</vcpu>

[1] https://blueprints.launchpad.net/nova/+spec/instance-live-resize

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rico.lin.guanyu at gmail.com Thu Mar 15 08:01:51 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Thu, 15 Mar 2018 16:01:51 +0800
Subject: [openstack-dev] [openstack-ops][heat][PTG] Heat PTG Summary
Message-ID: 

Hi Heat devs and ops

It was a great PTG plus SnowpenStack experience. Now Rocky has started. We
really need all kinds of input and effort to make sure we're heading the
right way. Here is what we discussed during the PTG:

- Future strategy for heat-tempest-plugin & functional tests
- Multi-cloud support
- Next plan for Heat Dashboard
- Race conditions for clients updating/deleting stacks
- Swift Template/file object support
- heat dashboard needs of clients
- Resuming after an engine failure
- Moving SyncPoints from DB to DLM
- toggle the debug option at runtime
- remove mox
- Allow partial success in ASG
- Client Plugins and OpenStackSDK
- Global Request Id support
- Heat KeyStone Credential issue
- (How we going to survive on the island)

You can find *all Etherpads links* in
*https://etherpad.openstack.org/p/heat-rocky-ptg *
We tried to document as much as we could (thanks Zane for picking it up),
including discussion and actions. *Will try to target all actions in
Rocky*. If you would like to give input on any topic (or any topic you
think we're missing), *please try to provide inputs to the etherpad* (and
be kind and leave messages in the ML or a meeting so we won't miss it.)

*Use Cases*
If you have any use case for us (what's your use case, what's not working /
what's working well), please help us and add it to*
https://etherpad.openstack.org/p/heat-usecases *

Here are *Team photos* we took:
*https://www.dropbox.com/sh/dtei3ovfi7z74vo/AADX_s3PXFiC3Fod8Yj_RO4na/Heat?dl=0 *

--
May The Force of OpenStack Be With You,
*Rico Lin*irc: ricolin

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sam47priya at gmail.com Thu Mar 15 08:15:30 2018
From: sam47priya at gmail.com (Sam P)
Date: Thu, 15 Mar 2018 17:15:30 +0900
Subject: [openstack-dev] [masakari] Masakari Project mascot ideas
In-Reply-To: References: Message-ID: 

Thanks all. Seems like we are good to go with "St. Bernard". I think the
general image is [1]; let me know if I am wrong. I will inform Anne and
Kendall about our choice.

[1] https://www.min-inuzukan.com/st-bernard.html

--- Regards,
Sampath

On Wed, Mar 14, 2018 at 6:42 PM, Shewale, Bhagyashri <
Bhagyashri.Shewale at nttdata.com> wrote:
> +1 for St. Bernard
>
> Regards,
> Bhagyashri Shewale
>
> ________________________________________
> From: Patil, Tushar
> Sent: Wednesday, March 14, 2018 7:52:54 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [masakari] Masakari Project mascot ideas
>
> Hi,
>
>
> Total 4 people attended last IRC meeting and all of them have voted for
> St.Bernard Dog.
>
>
> If someone has missed to vote, please vote for mascot now.
>
>
> Options:
>
> 1) Asiatic black bear
> 2) Gekko : Geckos is able to regrow it's tail when the tail is lost.
> 3) St. Bernard: St.
Bernard is famous as rescue dog (Masakari rescues VM > instances) > > > Thank you. > > > Regards, > > Tushar Patil > > > > ________________________________ > From: Bhor, Dinesh > Sent: Wednesday, March 14, 2018 10:16:29 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [masakari] Masakari Project mascot ideas > > > Hi Sampath San, > > > There is one more option which we discussed in yesterdays masakari meeting > [1]: > > St. Bernard(Dog) [2]. > > > [1] http://eavesdrop.openstack.org/meetings/masakari/2018/ > masakari.2018-03-13-04.01.log.html#l-38 > > > [2] https://en.wikipedia.org/wiki/St._Bernard_(dog) > > > Thank you, > > Dinesh Bhor > > > ________________________________ > From: Sam P > Sent: 13 March 2018 22:19:00 > To: OpenStack Development Mailing List (not for usage questions) > Subject: [openstack-dev] [masakari] Masakari Project mascot ideas > > Hi All, > > We started this discussion on IRC meeting few weeks ago and still no > progress..;) > (aspiers: thanks for the reminder!) > > Need mascot proposals for Masakari, see FAQ [1] for more info > Current ideas: Origin of "Masakari" is related to hero from Japanese > folklore [2]. > Considering that relationship and to start the process, here are few ideas, > (1) Asiatic black bear > (2) Gekko : Geckos is able to regrow it's tail when the tail is lost. > > [1] https://www.openstack.org/project-mascots/ > [http://www.openstack.org/themes/openstack/images/openstack-logo-full.png > ] > > Project Mascots - OpenStack is open source software for ...< > https://www.openstack.org/project-mascots/> > www.openstack.org > We are OpenStack. We’re also passionately developing more than 60 projects > within OpenStack. To support each project’s unique identity and visually > demonstrate ... > > > [2] https://en.wikipedia.org/wiki/Kintar%C5%8D > > --- Regards, > Sampath > > > ______________________________________________________________________ > Disclaimer: This email and any attachments are sent in strictest confidence > for the sole use of the addressee and may contain legally privileged, > confidential, and proprietary data. If you are not the intended recipient, > please advise the sender by replying promptly to this email and then delete > and destroy this email and any attachments without any further use, copying > or forwarding. > > ______________________________________________________________________ > Disclaimer: This email and any attachments are sent in strictest confidence > for the sole use of the addressee and may contain legally privileged, > confidential, and proprietary data. If you are not the intended recipient, > please advise the sender by replying promptly to this email and then delete > and destroy this email and any attachments without any further use, copying > or forwarding. > > ______________________________________________________________________ > Disclaimer: This email and any attachments are sent in strictest confidence > for the sole use of the addressee and may contain legally privileged, > confidential, and proprietary data. If you are not the intended recipient, > please advise the sender by replying promptly to this email and then delete > and destroy this email and any attachments without any further use, copying > or forwarding. 
> > __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sam47priya at gmail.com Thu Mar 15 08:26:16 2018
From: sam47priya at gmail.com (Sam P)
Date: Thu, 15 Mar 2018 17:26:16 +0900
Subject: [openstack-dev] [masakari] Rocky work items
Message-ID: 

Hi All,

Rocky work items will be discussed in the 3/20 masakari IRC meeting [1].
Current items are listed in the etherpad [2], and new items, comments, and
questions are welcome. If you are not able to join the IRC meeting, then
please add sufficient details to your comment and your contacts (IRC or
email) where we can reach you for further discussion.

[1] http://eavesdrop.openstack.org/#Masakari_Team_Meeting
[2] https://etherpad.openstack.org/p/masakari-rocky-work-items

--- Regards,
Sampath

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ponomarev at selectel.ru Thu Mar 15 08:39:52 2018
From: ponomarev at selectel.ru (=?UTF-8?B?0JLQsNC00LjQvCDQn9C+0L3QvtC80LDRgNC10LI=?=)
Date: Thu, 15 Mar 2018 11:39:52 +0300
Subject: [openstack-dev] [Octavia] Using Octavia without neutron's extensions allowed-address-pairs and security-groups.
Message-ID: 

Hi,

I'm trying to install Octavia (from the master branch) in my OpenStack
installation. In my installation, neutron runs with the
allowed-address-pairs extension and the security-groups extension
disabled. This is done to improve performance. At the moment, I see that
for neutron Octavia supports only the allowed_address_pairs_driver network
driver, but this driver requires those extensions [1]. How can I use
Octavia without the extensions? Or is the only option to write my own
driver?

[1] https://github.com/openstack/octavia/blob/master/octavia/network/drivers/neutron/allowed_address_pairs.py#L57

--
Best regards,
Vadim Ponomarev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From julien at danjou.info Thu Mar 15 08:48:57 2018
From: julien at danjou.info (Julien Danjou)
Date: Thu, 15 Mar 2018 09:48:57 +0100
Subject: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.
In-Reply-To: (mango.'s message of "Thu, 15 Mar 2018 12:03:45 +0800")
References: Message-ID: 

On Thu, Mar 15 2018, __ mango. wrote:

> I have a question about the validation of gnocchi keystone.

There's no question in your message.

> I run the following command, but it is not successful. (With
> api.auth_mode set to basic, basic mode does succeed.)
>
> # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False
> -H "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H
> "Accept: application/json, */*" -H "User-Agent: gnocchi
> keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12"

There's no token in this request, so Keystone auth won't work. You did
not set the OS_* environment variables correctly.

--
Julien Danjou
# Free Software hacker
# https://julien.danjou.info

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 832 bytes
Desc: not available
URL: 

From thomas.morin at orange.com Thu Mar 15 09:05:43 2018
From: thomas.morin at orange.com (Thomas Morin)
Date: Thu, 15 Mar 2018 10:05:43 +0100
Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints
In-Reply-To: <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com>
References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com>
Message-ID: <820d9b3cb693355784938bd301246b26c9a8745f.camel@orange.com>

Hi Andreas, all,

> Note that thanks to the tox-siblings feature, we really continue to
> install neutron and horizon from git - and not use the versions in
> the global-requirements constraints file.

This addresses my main concern, which was that by removing
tools/tox_install.sh we would end up not pulling master from git.

The fact that we do keep pulling from git wasn't explicit AFAIK in any
of the commit messages of the changes I had to look at to understand
what was being modified.

I concur with Akihiro's comment, and would go slightly beyond that:
ideally the solution chosen would not only technically work, but would
reduce the ahah-there-is-magic-behind-the-scene effect, which is a pain
I believe for many: people new to the community face a steeper learning
curve, people inside the community need to spend time adjusting, and
infra folks end up having to document or explain more. In this precise
case, the magic behind the scene (ie. the tox-siblings role) may lead
to confusion for packagers (why what our CI tests as valid is not what
appears in requirements.txt) and perhaps people working in external
communities (e.g. [1]).

Best,

-Thomas

[1] http://docs.opnfv.org/en/latest/submodules/releng-xci/docs/xci-overview.html#xci-overview

Andreas Jaeger, 2018-03-14 20:46:
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thomas.morin at orange.com Thu Mar 15 09:15:38 2018
From: thomas.morin at orange.com (Thomas Morin)
Date: Thu, 15 Mar 2018 10:15:38 +0100
Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints
In-Reply-To: <1521084457-sup-2477@lrrr.local>
References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> <20180315005859.GE25428@thor.bakeyournoodle.com> <1521084457-sup-2477@lrrr.local>
Message-ID: 

Hi Doug,

Doug Hellmann, 2018-03-14 23:42:
> We keep doing lots of infra-related work to make it "easy" to do
> things we shouldn't be doing in the first place when it comes to
> managing dependencies. There are three ways to address the issue
> with horizon and neutron, and none of them involve adding features
> to pbr.
>
> 1. Things that are being used like libraries need to release like
> libraries. Real releases. With appropriate version numbers. So
> that other things that depend on them can express valid
> dependencies.
>
> 2. Extract the relevant code into libraries and release *those*.
>
> 3. Things that are not stable enough to be treated as a library
> shouldn't be used that way. Move the things that use the
> application
> code as library code back into the repo with the thing that they
> are tied to but that we don't want to (or can't) treat like a
> library.

What about the case where there is co-development of features across
repos ?
One specific case I have in mind is the Neutron stadium, where we
sometimes have features in the neutron repo that are worked on as a
pre-requisite for things that will be done in a neutron-* or
networking-* project. Another is a case where, for instance, we need to
add in project X a tempest test to validate the resolution of a bug for
which the fix actually happened in project B (and where B is not a
library).

My intuition is that it is not illegitimate to expect this kind of
development workflow to be feasible; but at the same time I read your
suggestion above as meaning that it belongs to the realm of "things we
shouldn't be doing in the first place". The only way I can reconcile
the two would be to conclude that we should collapse all the modules in
neutron-*/networking-* into neutron, but doing that would have quite a
lot of side effects (yes, this is an understatement).

-Thomas

From 935540343 at qq.com Thu Mar 15 09:16:28 2018
From: 935540343 at qq.com (=?gb18030?B?X18gbWFuZ28u?=)
Date: Thu, 15 Mar 2018 17:16:28 +0800
Subject: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.
In-Reply-To: References: Message-ID: 

Hi, the environment variables that you're talking about have been
configured and the error has not gone away.

This is my first time using OpenStack; can you be more specific? Thank
you very much.

------------------ Original ------------------
From: "Julien Danjou";
Date: Thu, Mar 15, 2018 04:48 PM
To: "__ mango."<935540343 at qq.com>;
Cc: "openstack-dev";
Subject: Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.

On Thu, Mar 15 2018, __ mango. wrote:

> I have a question about the validation of gnocchi keystone.

There's no question in your message.

> I run the following command, but it is not successful. (With
> api.auth_mode set to basic, basic mode does succeed.)
>
> # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False
> -H "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H
> "Accept: application/json, */*" -H "User-Agent: gnocchi
> keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12"

There's no token in this request, so Keystone auth won't work. You did
not set the OS_* environment variables correctly.

--
Julien Danjou
# Free Software hacker
# https://julien.danjou.info

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 2904F20F at C554F336.EC39AA5A.jpg
Type: image/jpeg
Size: 8127 bytes
Desc: not available
URL: 

From aj at suse.com Thu Mar 15 09:21:23 2018
From: aj at suse.com (Andreas Jaeger)
Date: Thu, 15 Mar 2018 10:21:23 +0100
Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints
In-Reply-To: <820d9b3cb693355784938bd301246b26c9a8745f.camel@orange.com>
References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> <820d9b3cb693355784938bd301246b26c9a8745f.camel@orange.com>
Message-ID: <0115ee31-3079-6b64-1706-eab0ff67471b@suse.com>

On 2018-03-15 10:05, Thomas Morin wrote:
> Hi Andreas, all,
>
> Andreas Jaeger, 2018-03-14 20:46:
>> Note that thanks to the tox-siblings feature, we really continue to
>> install neutron and horizon from git - and not use the versions in
>> the global-requirements constraints file.
>
> This addresses my main concern, which was that by removing
> > The fact that we do keep pulling from git wasn't explicit AFAIK in any > of the commit messages of the changes I had to look at to understand > what was being modified. Sorry for not mentioning that. > I concur with Akihiro's comment, and would go slightly beyond that: > ideally the solution chosen would not only technical work, but would > reduce the ahah-there-is-magic-behind-the-scene effect, which is a pain > I believe for many: people new to the community face a steeper learning > curve, people inside the community need to spend time adjust, and infra > folks end up having to document or explain more. In this precise case, > the magic behind the scene (ie. the tox-siblings role) may lead to > confusion for packagers (why our CI tests as valid is not what appears > in requirements.txt) and perhaps people working in external communities > (e.g. [1]). The old way - included some magic as well ;( I agree with Doug - we need to architect our dependencies better to avoid these problems and hacks, Andreas > Best, > > -Thomas > > [1] > http://docs.opnfv.org/en/latest/submodules/releng-xci/docs/xci-overview.html#xci-overview -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From m.andre at redhat.com Thu Mar 15 09:40:34 2018 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Thu, 15 Mar 2018 10:40:34 +0100 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan In-Reply-To: References: Message-ID: +1 On Tue, Mar 13, 2018 at 5:50 PM, Swapnil Kulkarni wrote: > On Mon, Mar 12, 2018 at 7:36 AM, Jeffrey Zhang wrote: >> Kolla core reviewer team, >> >> It is my pleasure to nominate caoyuan for kolla core team. >> >> caoyuan's output is fantastic over the last cycle. And he is the most >> active non-core contributor on Kolla project for last 180 days[1]. He >> focuses on configuration optimize and improve the pre-checks feature. >> >> Consider this nomination a +1 vote from me. >> >> A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 >> is a veto. Voting is open for 7 days until Mar 12th, or a unanimous >> response is reached or a veto vote occurs. >> >> [1] http://stackalytics.com/report/contribution/kolla-group/180 >> -- >> Regards, >> Jeffrey Zhang >> Blog: http://xcodest.me >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > +1 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From j.harbott at x-ion.de Thu Mar 15 10:28:33 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Thu, 15 Mar 2018 10:28:33 +0000 Subject: [openstack-dev] [neutron][stable] New release for Pike is overdue Message-ID: The last neutron release for Pike has been made in November, a lot of bug fixes have made it into the stable/pike branch, can we please get a fresh release for it soon? 
From jean-philippe at evrard.me Thu Mar 15 10:57:58 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 15 Mar 2018 10:57:58 +0000 Subject: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol In-Reply-To: <20180315011132.GF25428@thor.bakeyournoodle.com> References: <20180314212003.GC25428@thor.bakeyournoodle.com> <20180315011132.GF25428@thor.bakeyournoodle.com> Message-ID: Looks good to me. On 15 March 2018 at 01:11, Tony Breeds wrote: > On Wed, Mar 14, 2018 at 09:40:33PM +0000, Jean-Philippe Evrard wrote: >> Hello folks, >> >> The list is almost perfect: you can do all of those except >> openstack/openstack-ansible-tests. >> I'd like to phase out openstack/openstack-ansible-tests and >> openstack/openstack-ansible later. > > Okay excluding the 2 repos above and filtering out projects that don't > have newton branches we came down to: > > # EOL repos belonging to OpenStackAnsible > eol_branch.sh -- stable/newton newton-eol \ > openstack/ansible-hardening \ > openstack/openstack-ansible-apt_package_pinning \ > openstack/openstack-ansible-ceph_client \ > openstack/openstack-ansible-galera_client \ > openstack/openstack-ansible-galera_server \ > openstack/openstack-ansible-haproxy_server \ > openstack/openstack-ansible-lxc_container_create \ > openstack/openstack-ansible-lxc_hosts \ > openstack/openstack-ansible-memcached_server \ > openstack/openstack-ansible-openstack_hosts \ > openstack/openstack-ansible-openstack_openrc \ > openstack/openstack-ansible-ops \ > openstack/openstack-ansible-os_aodh \ > openstack/openstack-ansible-os_ceilometer \ > openstack/openstack-ansible-os_cinder \ > openstack/openstack-ansible-os_glance \ > openstack/openstack-ansible-os_gnocchi \ > openstack/openstack-ansible-os_heat \ > openstack/openstack-ansible-os_horizon \ > openstack/openstack-ansible-os_ironic \ > openstack/openstack-ansible-os_keystone \ > openstack/openstack-ansible-os_magnum \ > openstack/openstack-ansible-os_neutron \ > openstack/openstack-ansible-os_nova \ > openstack/openstack-ansible-os_rally \ > openstack/openstack-ansible-os_sahara \ > openstack/openstack-ansible-os_swift \ > openstack/openstack-ansible-os_tempest \ > openstack/openstack-ansible-pip_install \ > openstack/openstack-ansible-plugins \ > openstack/openstack-ansible-rabbitmq_server \ > openstack/openstack-ansible-repo_build \ > openstack/openstack-ansible-repo_server \ > openstack/openstack-ansible-rsyslog_client \ > openstack/openstack-ansible-rsyslog_server \ > openstack/openstack-ansible-security > > If you confirm I have the list right this time I'll work on this tomorrow > > Yours Tony. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Thu Mar 15 11:03:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 15 Mar 2018 07:03:11 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects Message-ID: <1521110096-sup-3634@lrrr.local> Back in Barcelona for the Ocata summit I presented a rough outline of a plan for us to change the way we manage dependencies across projects so that we can stop syncing them [1]. We've made some progress, and I think it's time to finish the work so I'm volunteering to take some of it up during Rocky. 
This email is meant to rehash and update the proposal, and fill in some
of the missing details.

[1] https://etherpad.openstack.org/p/ocata-requirements-notes

TL;DR
-----

Let's stop copying exact dependency specifications into all our
projects to allow them to reflect the actual versions of things
they depend on. The constraints system in pip makes this change
safe. We still need to maintain some level of compatibility, so the
existing requirements-check job (run for changes to requirements.txt
within each repo) will change a bit rather than going away completely.
We can enable unit test jobs to verify the lower constraint settings
at the same time that we're doing the other work.

Some History
------------

Back in the dark ages of OpenStack development we had a lot of
trouble keeping the dependencies of all of our various projects
configured so they were co-installable. Usually, but not always,
the problems were caused by caps or "exclusions" (version != X) on
dependencies in one project but not in another. Because pip's
dependency resolver does not take into account the versions of
dependencies needed by existing packages, it was quite easy to
install things in the "wrong" order and end up with incompatible
libraries so services wouldn't start or couldn't import plugins.

The first (working) solution to the problem was to develop a
dependency management system based on the openstack/requirements
repository. This system and our policies required projects to copy
exactly the settings for all of their dependencies from a global
list managed by a team of reviewers (first the release team, and
later the requirements team). By copying exactly the same settings
into all projects we ensured that they were "co-installable" without
any dependency conflicts. Having a centralized list of dependencies
with a review team also gave us an opportunity to look for duplicates,
packages using incompatible licenses, and otherwise curate the list
of dependencies. More on that later.

Some time after we had the centralized dependency management system
in place, Robert Collins worked with the PyPA folks to add a feature
to pip to constrain the versions of packages that are actually
installed, while still allowing a range of versions to be specified
in the dependency list. We were then able to create a list of "upper
constraints" -- the highest, or newest, versions -- of all of the
packages we depend on, and set up our test jobs to use that list to
control what is actually installed. This gives us the ability to say
that we need at least version X.Y.Z of a package and to force the
selection of X.Y+1.0 because we want to test with that version.

The constraint feature means that we no longer need to have all of
the dependency specifications match exactly, since we basically
force the installation of a specific version anyway. We've been
running with both constraints and requirements syncing enabled for
a while now, and I think we should stop syncing the settings to
allow projects to let their lower bounds (the minimum versions of
their dependencies) diverge. That divergence is useful to folks
creating packages for just some of the services, especially when
they are going to be deployed in isolation where co-installability
is not required. Skipping the syncs will also mean we end up
releasing fewer versions of stable libraries, because we won't be
raising the minimum supported versions of their dependencies
automatically.
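To make the requirements-plus-constraints mechanism described above
concrete, here is a small sketch (the package name and version numbers
are illustrative, not taken from the real lists): a project declares a
minimum in its own requirements.txt, while the test jobs pin what
actually gets installed using the constraints file.

    # requirements.txt in a project
    oslo.config>=5.1.0

    # upper-constraints.txt in openstack/requirements
    oslo.config===5.2.0

    # what a test job effectively runs
    pip install -c upper-constraints.txt -r requirements.txt
    # -> installs exactly 5.2.0, even though the requirement
    #    alone would allow any newer version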
Our Requirements ---------------- We have three primary requirements for managing the dependency list: 1. Maintain a list of co-installable versions of all of our dependencies. 2. Avoid breaking or deadlocking any of our gate jobs due to dependency conflicts. 3. Continue to review new dependencies for licensing, redundancy, etc. I believe the upper-constraints.txt file in openstack/releases satisfies the first two of these requirements. The third means we need to continue to *have* a global requirements list, but we can change how we manage it. In addition to these hard requirements, it would be nice if we could test the lower bounds of dependencies in projects to detect when a project is using a feature of a newer version of a library than their dependencies indicate. Although that is a bit orthogonal to the syncing issue, I'm going to describe one way we could do that because the original plan of keeping a global list of "lower constraints" breaks our ability to stop syncing the same lower bounds into all of the projects somewhat. What I Want to Do ----------------- 1. Update the requirements-check test job to change the check for an exact match to be a check for compatibility with the upper-constraints.txt value. We would check the value for the dependency from upper-constraints.txt against the range of allowed values in the project. If the constraint version is compatible, the dependency range is OK. This rule means that in order to change the dependency settings for a project in a way that are incompatible with the constraint, the constraint (and probably the global requirements list) would have to be changed first in openstack/requirements. However, if the change to the dependency is still compatible with the constraint, no change would be needed in openstack/requirements. For example, if the global list constraints a library to X.Y.Z and a project lists X.Y.Z-2 as the minimum version but then needs to raise that because it needs a feature in X.Y.Z-1, it can do that with a single patch in-tree. We also need to change requirements-check to look at the exclusions to ensure they all appear in the global-requirements.txt list (the local list needs to be a subset of the global list, but does not have to match it exactly). We can't have one project excluding a version that others do not, because we could then end up with a conflict with the upper constraints list that could wedge the gate as we had happen in the past. We also need to verify that projects do not cap dependencies for the same reason. Caps prevent us from advancing to versions of dependencies that are "too new" and possibly incompatible. We can manage caps in the global requirements list, which would cause that list to calculate the constraints correctly. This change would immediately allow all projects currently following the global requirements lists to specify different lower bounds from that global list, as long as those lower bounds still allow the dependencies to be co-installable. (The upper bounds, managed through the upper-constraints.txt list, would still be built by selecting the newest compatible version because that is how pip's dependency resolver works.) 2. We should stop syncing dependencies by turning off the propose-update-requirements job entirely. Turning off the job will stop the bot from proposing more dependency updates to projects. As part of deleting the job we can also remove the "requirements" case from playbooks/proposal/propose_update.sh, since it won't need that logic any more. 
We can also remove the update-requirements command from the openstack/requirements repository, since that is the tool that generates the updated list and it won't be needed if we aren't proposing updates any more. 3. Remove the minimum specifications from the global requirements list to make clear that the global list is no longer expressing minimums. This clean-up step has been a bit more controversial among the requirements team, but I think it is a key piece. As the minimum versions of dependencies diverge within projects, there will no longer *be* a real global set of minimum values. Tracking a list of "highest minimums", would either require rebuilding the list from the settings in all projects, or requiring two patches to change the minimum version of a dependency within a project. Maintaining a global list of minimums also implies that we consider it OK to run OpenStack as a whole with that list. This message conflicts with the message we've been sending about the upper constraints list since that was established, which is that we have a known good list of versions and deploying all of OpenStack with different versions of those dependencies is untested. After these 3 steps are done, the requirements team will continue to maintain the global-requirements.txt and upper-constraints.txt files, as before. Adding a new dependency to a project will still involve a review step to add it to the global list so we can monitor licensing, duplication, python 3 support, etc. But adjusting the version numbers once that dependency is in the global list will be easier. Testing Lower Bounds of Dependencies ------------------------------------ I don't have any personal interest in us testing against "old" versions of dependencies, but since the requirements team feels at least having a plan for such testing in place is a prerequisite for the other work, here is what I've come up with. We can define a new test job to run the unit tests under python 3 using a tox environment called "lower-constraints" that is configured to install the dependencies for the repo using a file lower-constraints.txt that lives in the project repository. Then, for each repository listed in projects.txt (~325 today), we need to add the job to the zuul configuration within the repo. We don't want to turn the job on voting by default globally for all of those projects because it would break until the tox environment was configured. We don't want to turn it on non-voting because then the infra team would have ~325 patches to review as it was set to be voting for each repository individually. At some point in the future, after all of the projects have it enabled in-repo, we can move that configuration to the project-config repo in 1 patch and make the lower-constraints job part of the check-requirements job template. To configure the job in a given repo we will need to run a few separate steps to prepare a single patch like https://review.openstack.org/#/c/550603/ (that patch is experimental and contains the full job definition, which won't be needed everywhere). 1. Set up a new tox environment called "lower-constraints" with base-python set to "python3" and with the deps setting configured to include a copy of the existing global lower constraints file from the openstack/requirements repo. 2. Run "tox -e lower-constraints —notest" to build a virtualenv using the lower constraints. 3. 
Run ".tox/lower-constraints/bin/pip freeze > lower-constraints.txt" to create the initial version of the lower-constraints.txt file for the current repo. 4. Modify the tox settings for lower-constraints to point to the new file that was generated instead of the global list. 5. Update the zuul configuration to add the new job defined in project-config. The results of those steps can be combined into a single patch and proposed to the project. To avoid overwhelming zuul's job configuration resolver, we need to propose the patches in separate batches of about 10 repos at a time. This is all mostly scriptable, so I will write a script and propose the patches (unless someone else wants to do it all -- we need a single person to keep up with how many patches we're proposing at one time). The point of creating the initial lower-constraints.txt file is not necessarily to be "accurate" with the constraints immediately, but to have something to work from. After the patches are proposed, please either plan to land them or vote -2 indicating that you don't want a job like that on that repo. If you want to change the constraints significantly, please do that in a separate patch. With ~325 of them, I'm not going to be able to keep up with everyone's separate needs and this is all meant to just establish the initial version of the job anyway. For projects that currently only support python 2 we can modify the proposed patches to not set base-python to use python3. You will have noticed that this will only apply to unit test jobs. Projects are free to use the results to add their own functional test jobs using the same lower-constraints.txt files, but that's up to them to do. For the reasons outlined above about why we want divergence, I don't think it makes much sense to run a full integration job with the other projects, since their dependency lists may differ. Sorry for the length of this email, but we don't have a specs repo for the requirements team and I wanted to put all of the details of the proposal down in one place for discussion. Let me know what you think, Doug From rasca at redhat.com Thu Mar 15 11:37:49 2018 From: rasca at redhat.com (Raoul Scarazzini) Date: Thu, 15 Mar 2018 12:37:49 +0100 Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream In-Reply-To: References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com> Message-ID: <35078e57-2acb-f500-59c0-18eebdf9db04@redhat.com> On 15/03/2018 01:57, Ghanshyam Mann wrote: > Thanks all for starting the collaboration on this which is long pending > things and we all want to have some start on this. > Myself and SamP talked about it during OPS meetup in Tokyo and we talked > about below draft plan- > - Update the Spec - https://review.openstack.org/#/c/443504/. which is > almost ready as per SamP and his team is working on that. > - Start the technical debate on tooling we can use/reuse like Yardstick > etc, which is more this mailing thread.  > - Accept the new repo for Eris under QA and start at least something in > Rocky cycle. > I am in for having meeting on this which is really good idea. non-IRC > meeting is totally fine here. Do we have meeting place and time setup ? 
> -gmann Hi Ghanshyam, as I wrote earlier in the thread it's no problem for me to offer my bluejeans channel, let's sort out which timeslice can be good. I've added to the main etherpad [1] my timezone (line 53), let's do all that so that we can create the meeting invite. [1] https://etherpad.openstack.org/p/extreme-testing-contacts -- Raoul Scarazzini rasca at redhat.com From gord at live.ca Thu Mar 15 12:24:05 2018 From: gord at live.ca (gordon chung) Date: Thu, 15 Mar 2018 12:24:05 +0000 Subject: [openstack-dev] [gnocchi] gnocchi-keystone verification failed. In-Reply-To: References: Message-ID: On 2018-03-15 5:16 AM, __ mango. wrote: > hi, > The environment variable that you're talking about has been configured > and the error has not gone away. > > I was on OpenStack for the first time, can you be more specific? Thank > you very much. > https://gnocchi.xyz/gnocchiclient/shell.html#openstack-keystone-authentication you're missing OS_AUTH_TYPE -- gord From lijie at unitedstack.com Thu Mar 15 12:27:54 2018 From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=) Date: Thu, 15 Mar 2018 20:27:54 +0800 Subject: [openstack-dev] [Openstack-operators] [nova] about rebuildinstance booted from volume In-Reply-To: <024e01d3bbab$38f44290$aadcc7b0$@homeatcloud.cz> References: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch> <024e01d3bbab$38f44290$aadcc7b0$@homeatcloud.cz> Message-ID: It seems that we can only delete the snapshots of the original volume firstly,then delete the original volume if the original volume has snapshots. Thanks, lijie ------------------ Original ------------------ From: "Tomáš Vondra"; Date: Wed, Mar 14, 2018 11:43 PM To: "OpenStack Developmen"; "openstack-operators"; Subject: Re: [openstack-dev] [Openstack-operators] [nova] about rebuildinstance booted from volume Hi! I say delete! Delete them all! Really, it's called delete_on_termination and should be ignored on Rebuild. We have a VPS service implemented on top of OpenStack and do throw the old contents away on Rebuild. When the user has the Backup service paid, they can restore a snapshot. Backup is implemented as volume snapshot, then clone volume, then upload to image (glance is on a different disk array). I also sometimes multi-attach a volume manually to a service node and just dd an image onto it. If it was to be implemented this way, then there would be no deleting a volume with delete_on_termination, just overwriting. But the effect is the same. IMHO you can have snapshots of volumes that have been deleted. Just some backends like our 3PAR don't allow it, but it's not disallowed in the API contract. Tomas from Homeatcloud -----Original Message----- From: Saverio Proto [mailto:zioproto at gmail.com] Sent: Wednesday, March 14, 2018 3:19 PM To: Tim Bell; Matt Riedemann Cc: OpenStack Development Mailing List (not for usage questions); openstack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume My idea is that if delete_on_termination flag is set to False the Volume should never be deleted by Nova. my 2 cents Saverio 2018-03-14 15:10 GMT+01:00 Tim Bell : > Matt, > > To add another scenario and make things even more difficult (sorry (), if the original volume has snapshots, I don't think you can delete it. 
> > Tim
> >
> > -----Original Message-----
> > From: Matt Riedemann
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)"
> > Date: Wednesday, 14 March 2018 at 14:55
> > To: "openstack-dev at lists.openstack.org"
> > , openstack-operators
> >
> > Subject: Re: [openstack-dev] [nova] about rebuild instance booted from
> > volume
> >
> > On 3/14/2018 3:42 AM, 李杰 wrote:
> > >
> > > This is the spec about rebuild a instance booted from
> > > volume.In the spec,there is a
> > > question about if we should delete the old root_volume.Anyone who
> > > is interested in
> > > booted from volume can help to review this. Any suggestion is
> > > welcome.Thank you!
> > > The link is here.
> > > Re:the rebuild spec:https://review.openstack.org/#/c/532407/
> >
> > Copying the operators list and giving some more context.
> >
> > This spec is proposing to add support for rebuild with a new image for
> > volume-backed servers, which today is just a 400 failure in the API
> > since the compute doesn't support that scenario.
> >
> > With the proposed solution, the backing root volume would be deleted and
> > a new volume would be created from the new image, similar to how boot
> > from volume works.
> >
> > The question raised in the spec is whether or not nova should delete the
> > root volume even if its delete_on_termination flag is set to False. The
> > semantics get a bit weird here since that flag was not meant for this
> > scenario, it's meant to be used when deleting the server to which the
> > volume is attached. Rebuilding a server is not deleting it, but we would
> > need to replace the root volume, so what do we do with the volume we're
> > replacing?
> >
> > Do we say that delete_on_termination only applies to deleting a server
> > and not rebuild and therefore nova can delete the root volume during a
> > rebuild?
> >
> > If we don't delete the volume during rebuild, we could end up leaving a
> > lot of volumes lying around that the user then has to clean up,
> > otherwise they'll eventually go over quota.
> >
> > We need user (and operator) feedback on this issue and what they would
> > expect to happen.
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > _______________________________________________
> > OpenStack-operators mailing list
> > OpenStack-operators at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhengzhenyulixi at gmail.com Thu Mar 15 12:34:12 2018
From: zhengzhenyulixi at gmail.com (Zhenyu Zheng)
Date: Thu, 15 Mar 2018 20:34:12 +0800
Subject: [openstack-dev] [nova] Rocky PTG summary - cells
In-Reply-To: References: <6C2A4F0E-BDEC-4036-B1A1-CCEA8385AEF9@gmail.com>
Message-ID: 

Thanks for the reply, both solutions look reasonable.
On Thu, Mar 15, 2018 at 10:29 AM, melanie witt wrote:
> On Thu, 15 Mar 2018 09:54:59 +0800, Zhenyu Zheng wrote:
>> Thanks for the recap, got one question for the "block creation":
>>
>> * An attempt to create an instance should be blocked if the project
>> has instances in a "down" cell (the instance_mappings table has a
>> "project_id" column) because we cannot count instances in "down"
>> cells for the quota check.
>>
>> Since users are not aware of any cell information, and the cells are
>> mostly randomly selected, there could be high possibility that
>> users(projects) instances are equally spreaded across cells. The proposed
>> behavior seems can
>> easily cause a lot of users couldn't create instances because one of the
>> cells is down, isn't it too rude?
>
> To be honest, I share your concern. I had planned to change quota checks
> to use placement instead of reading cell databases ASAP but hit a snag
> where we won't be able to count instances from placement because we can't
> determine the "type" of an allocation. Allocations can be instances, or
> network-related resources, or volume-related resources, etc. Adding the
> concept of an allocation "type" in placement has been a controversial
> discussion so far.
>
> BUT ... we also said we would add a column like "queued_for_delete" to the
> instance_mappings table. If we do that, we could count instances from the
> instance_mappings table in the API database and count cores/ram from
> placement and no longer rely on reading cell databases for quota checks.
> Although, there is one more wrinkle: instance_mappings has a project_id
> column but does not have a user_id column, so we wouldn't be able to get a
> count by project + user needed for the quota check against user quota. So,
> if people would not be opposed, we could also add a "user_id" column to
> instance_mappings to handle that case.
>
> I would prefer not to block instance creations because of "down" cells, so
> maybe there is some possibility to avoid it if we can get
> "queued_for_delete" and "user_id" columns added to the instance_mappings
> table.
>
> -melanie
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
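As a rough sketch of the counting melanie describes above -- queued_for_delete and user_id being the proposed additions to instance_mappings, not existing schema -- the instance side of the quota check could reduce to queries like:

  -- instances per project, ignoring those queued for deletion
  SELECT COUNT(*) FROM instance_mappings
   WHERE project_id = :project AND NOT queued_for_delete;

  -- per-user count for the quota check against user quota
  SELECT COUNT(*) FROM instance_mappings
   WHERE project_id = :project AND user_id = :user
     AND NOT queued_for_delete;

with cores and ram counted from placement allocations rather than from the cell databases.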
From aspiers at suse.com Thu Mar 15 12:45:35 2018
From: aspiers at suse.com (Adam Spiers)
Date: Thu, 15 Mar 2018 12:45:35 +0000
Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream
In-Reply-To: <35078e57-2acb-f500-59c0-18eebdf9db04@redhat.com>
References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com> <35078e57-2acb-f500-59c0-18eebdf9db04@redhat.com>
Message-ID: <20180315124535.ekyibc5wowjjjogg@pacific.linksys.moosehall>

Raoul Scarazzini wrote:
>On 15/03/2018 01:57, Ghanshyam Mann wrote:
>> Thanks all for starting the collaboration on this which is long pending
>> things and we all want to have some start on this.
>> Myself and SamP talked about it during OPS meetup in Tokyo and we talked
>> about below draft plan-
>> - Update the Spec - https://review.openstack.org/#/c/443504/. which is
>> almost ready as per SamP and his team is working on that.
>> - Start the technical debate on tooling we can use/reuse like Yardstick
>> etc, which is more this mailing thread.
>> - Accept the new repo for Eris under QA and start at least something in
>> Rocky cycle.
>> I am in for having meeting on this which is really good idea. non-IRC
>> meeting is totally fine here. Do we have meeting place and time setup ?
>> -gmann
>
>Hi Ghanshyam,
>as I wrote earlier in the thread it's no problem for me to offer my
>bluejeans channel, let's sort out which time slot works best. I've
>added my timezone to the main etherpad [1] (line 53), so we can
>create the meeting invite.
>
>[1] https://etherpad.openstack.org/p/extreme-testing-contacts

Good idea! I've added mine. We're still missing replies from several key stakeholders though (lines 62++) - probably worth getting buy-in from a few more people before we organise anything. I'm pinging a few on IRC with reminders about this.

From hjensas at redhat.com Thu Mar 15 13:05:40 2018
From: hjensas at redhat.com (hjensas at redhat.com)
Date: Thu, 15 Mar 2018 14:05:40 +0100
Subject: [openstack-dev] [tripleo] FFE - Feature Freeze Exception request for Routed Spine and Leaf Deployment
In-Reply-To: 
References: <1517570931.6277.15.camel@redhat.com>
Message-ID: <1521119140.4323.46.camel@redhat.com>

Hi,

It has come to my attention that I missed one detail for the routed spine and leaf support. There is an issue with introspection and the filtering used to ensure only specified nodes are introspected. Apparently we are still using the iptables based PXE filtering in ironic-inspector. (I thought the new dnsmasq based filter was the default already.)

The problem: When using iptables to filter on mac addresses we won't be able to filter PXE DHCP requests coming in via the dhcp-relay agent, e.g. the nodes in remote L2 segments will not be filtered. So while introspection works, we have no way to ensure that nodes we do not intend to introspect don't end up being introspected by accident.

The solution: Switch to use the dnsmasq based filter available in ironic-inspector.
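For reference, switching filters is an ironic-inspector configuration change along these lines (option names from memory of the inspector docs -- double-check them against the release actually in use):

  [pxe_filter]
  # default is the iptables driver; the dnsmasq driver keeps
  # per-MAC allow/deny entries in a hosts directory that the
  # dnsmasq process watches, so it also applies to requests
  # arriving via a DHCP relay from remote L2 segments
  driver = dnsmasq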
The question is where do we go from here?
* Do we declare introspection unsupported for Queens when using routed networks?
* Can we continue the feature work, and backport something to stable/queens that uses the dnsmasq based filter? Maybe with a conditional to use the new filtering if, and only if, routed networks support is enabled in the undercloud?

The work to start using the new filtering is on-going in the following patches:
puppet-ironic: https://review.openstack.org/523922
puppet-tripleo: https://review.openstack.org/525203/
instack-undercloud: https://review.openstack.org/523944/
This one is for the overcloud and the containers based undercloud. (This would not be a backport requirement.)
https://review.openstack.org/523909/

Best Regards
Harald Jensås

From doug at doughellmann.com Thu Mar 15 13:28:21 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 15 Mar 2018 09:28:21 -0400
Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints
In-Reply-To: 
References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com> <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com> <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com> <20180315005859.GE25428@thor.bakeyournoodle.com> <1521084457-sup-2477@lrrr.local>
Message-ID: <1521119762-sup-847@lrrr.local>

Excerpts from Thomas Morin's message of 2018-03-15 10:15:38 +0100:
> Hi Doug,
>
> Doug Hellmann, 2018-03-14 23:42:
> > We keep doing lots of infra-related work to make it "easy" to do
> > when it comes to
> > managing dependencies. There are three ways to address the issue
> > with horizon and neutron, and none of them involve adding features
> > to pbr.
> >
> > 1. Things that are being used like libraries need to release like
> > libraries. Real releases. With appropriate version numbers. So
> > that other things that depend on them can express valid
> > dependencies.
> >
> > 2. Extract the relevant code into libraries and release *those*.
> >
> > 3. Things that are not stable enough to be treated as a library
> > shouldn't be used that way. Move the things that use the
> > application
> > code as library code back into the repo with the thing that they
> > are tied to but that we don't want to (or can't) treat like a
> > library.
>
> What about the case where there is co-development of features across
> repos ? One specific case I have in mind is the Neutron stadium where

We do that all the time with the Oslo libraries. It's not as easy as having everything in one repo, but we manage.

> we sometimes have features in neutron repo that are worked on as a pre-
> requisite for things that will be done in a neutron-* or networking-*
> project. Another is a case for instance where we need to add in project
> X a tempest test to validate the resolution of a bug for which the fix
> actually happened in project B (and where B is not a library).

If the tempest test can't live in B because it uses part of X, then I think X and B are really one thing and you're doing more work than you need to be doing to keep them in separate libraries.

> My intuition is that it is not illegitimate to expect this kind of
> development workflow to be feasible; but at the same time I read your
> suggestion above as meaning that it belongs to the realm of "things we
> shouldn't be doing in the first place". The only way I can reconcile

You read me correctly. We install a bunch of components from source for integration tests in devstack-gate because we want the final releases to work together. But those things only interact via REST APIs, and don't import each other.

The cases with neutron and horizon are different. Even the *unit* tests of the add-ons require code from the "parent" app. That indicates a level of coupling that is not being properly addressed by the release model and code management practices for the parent apps.

> the two would be to conclude we should collapse all the module in
> neutron-*/networking-* into neutron, but doing that would have quite a
> lot of side effects (yes, this is an understatement).

That's not the only way to do it. The other way would be to properly decompose the shared code into a library and then provide *stable APIs* so code can be consumed by the add-on modules.
That will make evolving things a little more difficult because of the stability requirement. So it's a trade off. I think the teams involved should make that trade off (in one direction or another), instead of building tools to continue to avoid dealing with it. So let's start by examining the root of the problem: Why are the things that need to import neutron/horizon not part of the neutron/horizon repositories in the first place? Doug From surya.seetharaman9 at gmail.com Thu Mar 15 13:28:31 2018 From: surya.seetharaman9 at gmail.com (Surya Seetharaman) Date: Thu, 15 Mar 2018 14:28:31 +0100 Subject: [openstack-dev] [nova] Rocky PTG summary - cells In-Reply-To: References: <6C2A4F0E-BDEC-4036-B1A1-CCEA8385AEF9@gmail.com> Message-ID: I would also prefer not having to rely on reading all the cell DBs to calculate quotas. On Thu, Mar 15, 2018 at 3:29 AM, melanie witt wrote: > > > I would prefer not to block instance creations because of "down" cells, ​++ ​ > so maybe there is some possibility to avoid it if we can get > "queued_for_delete" and "user_id" columns added to the instance_mappings > table. > > seems reason enough to add them from my perspective. Regards, Surya. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Mar 15 13:34:50 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 15 Mar 2018 14:34:50 +0100 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1521110096-sup-3634@lrrr.local> References: <1521110096-sup-3634@lrrr.local> Message-ID: <09e1597e-0217-7bed-2b89-d6146c8e79ca@openstack.org> Doug Hellmann wrote: > [...] > TL;DR > ----- > > Let's stop copying exact dependency specifications into all our > projects to allow them to reflect the actual versions of things > they depend on. The constraints system in pip makes this change > safe. We still need to maintain some level of compatibility, so the > existing requirements-check job (run for changes to requirements.txt > within each repo) will change a bit rather than going away completely. > We can enable unit test jobs to verify the lower constraint settings > at the same time that we're doing the other work. Thanks for the very detailed plan, Doug. It all makes sense to me, although I have a precision question (see below). > [...] > We also need to change requirements-check to look at the exclusions > to ensure they all appear in the global-requirements.txt list > (the local list needs to be a subset of the global list, but > does not have to match it exactly). We can't have one project > excluding a version that others do not, because we could then > end up with a conflict with the upper constraints list that could > wedge the gate as we had happen in the past. > [...] > 2. We should stop syncing dependencies by turning off the > propose-update-requirements job entirely. > > Turning off the job will stop the bot from proposing more > dependency updates to projects. > [...] > After these 3 steps are done, the requirements team will continue > to maintain the global-requirements.txt and upper-constraints.txt > files, as before. Adding a new dependency to a project will still > involve a review step to add it to the global list so we can monitor > licensing, duplication, python 3 support, etc. But adjusting the > version numbers once that dependency is in the global list will be > easier. How would you set up an exclusion in that new world order ? 
We used to add it to the global-requirements file and the bot would automatically sync it to various consuming projects. Now since any exclusion needs to also appear on the global file, you would push it first in the global-requirements, then to the project itself, is that correct ? In the end the global-requirements file would only contain those exclusions, right ? -- Thierry Carrez (ttx) From mriedemos at gmail.com Thu Mar 15 13:35:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Mar 2018 08:35:46 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] about rebuildinstance booted from volume In-Reply-To: References: <6AC92E2F-2F9D-4B18-8877-361B7877B677@cern.ch> <024e01d3bbab$38f44290$aadcc7b0$@homeatcloud.cz> Message-ID: On 3/15/2018 7:27 AM, 李杰 wrote: > It seems that  we  can  only delete the snapshots of the original volume > firstly,then delete the original volume if the original volume has > snapshots. Nova won't be deleting the volume snapshots just to delete the volume during a rebuild. If we decide to delete the root volume during rebuild (delete_on_termination=True *or* we decide to not consider that flag during rebuild), the rebuild operation will likely have to handle the scenario that the volume has snapshots and can't be deleted. Which opens up another question: if we hit that scenario, what should the rebuild operation do? Log a warning and just detach the volume but not delete it and continue, or fail? -- Thanks, Matt From doug at doughellmann.com Thu Mar 15 13:45:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 15 Mar 2018 09:45:38 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <09e1597e-0217-7bed-2b89-d6146c8e79ca@openstack.org> References: <1521110096-sup-3634@lrrr.local> <09e1597e-0217-7bed-2b89-d6146c8e79ca@openstack.org> Message-ID: <1521121433-sup-7650@lrrr.local> Excerpts from Thierry Carrez's message of 2018-03-15 14:34:50 +0100: > Doug Hellmann wrote: > > [...] > > TL;DR > > ----- > > > > Let's stop copying exact dependency specifications into all our > > projects to allow them to reflect the actual versions of things > > they depend on. The constraints system in pip makes this change > > safe. We still need to maintain some level of compatibility, so the > > existing requirements-check job (run for changes to requirements.txt > > within each repo) will change a bit rather than going away completely. > > We can enable unit test jobs to verify the lower constraint settings > > at the same time that we're doing the other work. > > Thanks for the very detailed plan, Doug. It all makes sense to me, > although I have a precision question (see below). > > > [...] > > We also need to change requirements-check to look at the exclusions > > to ensure they all appear in the global-requirements.txt list > > (the local list needs to be a subset of the global list, but > > does not have to match it exactly). We can't have one project > > excluding a version that others do not, because we could then > > end up with a conflict with the upper constraints list that could > > wedge the gate as we had happen in the past. > > [...] > > 2. We should stop syncing dependencies by turning off the > > propose-update-requirements job entirely. > > > > Turning off the job will stop the bot from proposing more > > dependency updates to projects. > > [...] 
> > After these 3 steps are done, the requirements team will continue > > to maintain the global-requirements.txt and upper-constraints.txt > > files, as before. Adding a new dependency to a project will still > > involve a review step to add it to the global list so we can monitor > > licensing, duplication, python 3 support, etc. But adjusting the > > version numbers once that dependency is in the global list will be > > easier. > > How would you set up an exclusion in that new world order ? We used to > add it to the global-requirements file and the bot would automatically > sync it to various consuming projects. > > Now since any exclusion needs to also appear on the global file, you > would push it first in the global-requirements, then to the project > itself, is that correct ? In the end the global-requirements file would > only contain those exclusions, right ? > The first step would need to be adding it to the global-requirements.txt list. After that, it would depend on how picky we want to be. If the upper-constraints.txt list is successfully updated to avoid the release, we might not need anything in the project. If the project wants to provide detailed guidance about compatibility, then they could add the exclusion. For example, if a version of oslo.config breaks cinder but not nova, we might only put the exclusion in global-requirements.txt and the requirements.txt for cinder. Doug From lebre.adrien at free.fr Thu Mar 15 13:46:37 2018 From: lebre.adrien at free.fr (lebre.adrien at free.fr) Date: Thu, 15 Mar 2018 14:46:37 +0100 (CET) Subject: [openstack-dev] [FEMDC] Brainstorming regarding the Vancouver Forum In-Reply-To: <1771123683.49874717.1521121505525.JavaMail.root@zimbra29-e5> Message-ID: <2139216848.49879210.1521121597993.JavaMail.root@zimbra29-e5> Hi all, I just created an FEMDC etherpad following the Melvin's email regarding the next Forum in Vancouver. Please do not hesitate to propose ideas for sessions at the forum : https://wiki.openstack.org/wiki/Forum/Vancouver2018 ++ Ad_ri3n_ From Greg.Waines at windriver.com Thu Mar 15 14:05:13 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 15 Mar 2018 14:05:13 +0000 Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests In-Reply-To: <5823E872-A563-4684-A124-6E509AFF0F8A@windriver.com> References: <5823E872-A563-4684-A124-6E509AFF0F8A@windriver.com> Message-ID: <746DEED6-D8E8-4125-87D6-936F2F06508A@windriver.com> Re-posting this question to ‘OPENSTACK REFSTACK’, Any guidance on what level of compliance is required to qualify for the OpenStack Logo ( https://www.openstack.org/brand/interop/ ), See questions below. Greg. From: Greg Waines Date: Monday, February 26, 2018 at 6:22 PM To: "openstack-dev at lists.openstack.org" Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests I have a commercial OpenStack product that I would like to claim compliancy with RefStack · Is it sufficient to claim compliance with only the “OpenStack Powered Platform” TESTS ? o i.e. https://refstack.openstack.org/#/guidelines o i.e. the ~350-ish compute + object-storage tests · OR · Should I be using the COMPLETE API Test Set ? o i.e. the > 1,000 tests from various domains that get run if you do not specify a test-list Greg. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From Arkady.Kanevsky at dell.com Thu Mar 15 14:16:30 2018
From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com)
Date: Thu, 15 Mar 2018 14:16:30 +0000
Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests
In-Reply-To: <746DEED6-D8E8-4125-87D6-936F2F06508A@windriver.com>
References: <5823E872-A563-4684-A124-6E509AFF0F8A@windriver.com> <746DEED6-D8E8-4125-87D6-936F2F06508A@windriver.com>
Message-ID: 

Greg,
For compliance it is sufficient to run https://refstack.openstack.org/#/guidelines.
But it is good if you can also submit a full Tempest run. That is used internally by refstack to identify which tests to include in the future. This can be submitted anonymously if you like.
Thanks,
Arkady

From: Waines, Greg [mailto:Greg.Waines at windriver.com]
Sent: Thursday, March 15, 2018 9:05 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

Re-posting this question to 'OPENSTACK REFSTACK',
Any guidance on what level of compliance is required to qualify for the OpenStack Logo ( https://www.openstack.org/brand/interop/ ),
See questions below.
Greg.

From: Greg Waines
Date: Monday, February 26, 2018 at 6:22 PM
To: "openstack-dev at lists.openstack.org"
Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

I have a commercial OpenStack product that I would like to claim compliancy with RefStack
· Is it sufficient to claim compliance with only the "OpenStack Powered Platform" TESTS ?
o i.e. https://refstack.openstack.org/#/guidelines
o i.e. the ~350-ish compute + object-storage tests
· OR
· Should I be using the COMPLETE API Test Set ?
o i.e. the > 1,000 tests from various domains that get run if you do not specify a test-list
Greg.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From openstack at nemebean.com Thu Mar 15 14:27:55 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Thu, 15 Mar 2018 09:27:55 -0500
Subject: [openstack-dev] [nova] about rebuild instance booted from volume
In-Reply-To: References: Message-ID: 

On 03/14/2018 08:46 AM, Matt Riedemann wrote:
> On 3/14/2018 3:42 AM, 李杰 wrote:
>>
>> This is the spec about rebuild a instance booted from
>> volume.In the spec,there is a
>> question about if we should delete the old root_volume.Anyone
>> who is interested in
>> booted from volume can help to review this. Any suggestion is
>> welcome.Thank you!
>> The link is here.
>> Re:the rebuild spec:https://review.openstack.org/#/c/532407/
>
> Copying the operators list and giving some more context.
>
> This spec is proposing to add support for rebuild with a new image for
> volume-backed servers, which today is just a 400 failure in the API
> since the compute doesn't support that scenario.
>
> With the proposed solution, the backing root volume would be deleted and
> a new volume would be created from the new image, similar to how boot
> from volume works.
>
> The question raised in the spec is whether or not nova should delete the
> root volume even if its delete_on_termination flag is set to False. The
> semantics get a bit weird here since that flag was not meant for this
> scenario, it's meant to be used when deleting the server to which the
> volume is attached. Rebuilding a server is not deleting it, but we would
> need to replace the root volume, so what do we do with the volume we're
> replacing?
> Do we say that delete_on_termination only applies to deleting a server
> and not rebuild and therefore nova can delete the root volume during a
> rebuild?
>
> If we don't delete the volume during rebuild, we could end up leaving a
> lot of volumes lying around that the user then has to clean up,
> otherwise they'll eventually go over quota.
>
> We need user (and operator) feedback on this issue and what they would
> expect to happen.

As a developer who would also be a user of this functionality, I don't want the volume left around after rebuild. To me the data loss of the root disk is inherent in the rebuild operation. I guess the one gotcha might be that I always create the root volume as part of the initial instance creation; if someone manually created a volume and then pointed Nova at it, there's probably a better chance they don't want Nova to delete it on them.

Rather than overload delete_on_termination, could another flag like delete_on_rebuild be added?

-Ben
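To put the options in the thread side by side, the choice being debated amounts to something like this (pure pseudocode with invented names, not the actual nova change):

  # What should rebuild do with the old root volume of a
  # volume-backed server?
  def handle_root_volume_on_rebuild(volume):
      if volume.delete_on_termination:
          # nova "owns" it: delete the old root volume and build a
          # new one from the new image, mirroring boot from volume
          replace_root_volume(volume)
      else:
          # the open question: detach and keep (volumes pile up
          # against quota), delete anyway, or refuse the rebuild --
          # a delete_on_rebuild flag would make this explicit
          raise NotImplementedError("policy still under discussion")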
From fungi at yuggoth.org Thu Mar 15 14:28:49 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 15 Mar 2018 14:28:49 +0000
Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects
In-Reply-To: <1521110096-sup-3634@lrrr.local>
References: <1521110096-sup-3634@lrrr.local>
Message-ID: <20180315142848.we3w3lzohjv6yg2s@yuggoth.org>

On 2018-03-15 07:03:11 -0400 (-0400), Doug Hellmann wrote:
[...]
> 1. Update the requirements-check test job to change the check for
> an exact match to be a check for compatibility with the
> upper-constraints.txt value.
[...]

I thought it might be possible to even just do away with this job entirely, but some cursory testing shows that if you supply a required versionspec which excludes your constrained version of the same package, you'll still get the constrained version installed even though you indicated it wasn't in your "supported" range. Might be a nice patch to work on upstream in pip, making it explicitly error on such a mismatch (and _then_ we might be able to stop bothering with this job).

> We also need to change requirements-check to look at the exclusions
> to ensure they all appear in the global-requirements.txt list
> (the local list needs to be a subset of the global list, but
> does not have to match it exactly). We can't have one project
> excluding a version that others do not, because we could then
> end up with a conflict with the upper constraints list that could
> wedge the gate as we had happen in the past.
[...]

At first it seems like this wouldn't end up being necessary; as long as you're not setting an upper bound or excluding the constrained version, there shouldn't be a coinstallability problem right? Though I suppose there are still a couple of potential pitfalls if we don't check exclusions: setting an exclusion for a future version which hasn't been released yet or is otherwise higher than the global upper constraint; situations where we need to roll back a constraint to an earlier version (e.g., we discover a bug in it) and some project has that earlier version excluded. So I suppose there is some merit to centrally coordinating these, making sure we can still pick sane constraints which work for all projects (mental exercise: do we also need to build a tool which can make sure that proposed exclusions don't eliminate all possible version numbers?).

> As the minimum
> versions of dependencies diverge within projects, there will no
> longer *be* a real global set of minimum values. Tracking a list of
> "highest minimums", would either require rebuilding the list from the
> settings in all projects, or requiring two patches to change the
> minimum version of a dependency within a project.
[...]

It's also been suggested in the past that package maintainers for some distributions relied on the ranges in our global requirements list to determine what the minimum acceptable version of a dependency is so they know whether/when it needs updating (fairly critical when you consider that within a given distro some dependencies may be shared by entirely unrelated software outside our ecosystem and may not be compatible with new versions as soon as we are). On the other hand, we never actually _test_ our lower bounds, so this was to some extent a convenient fiction anyway.

> 1. Set up a new tox environment called "lower-constraints" with
> base-python set to "python3" and with the deps setting configured
> to include a copy of the existing global lower constraints file
> from the openstack/requirements repo.
[...]

I didn't realize lower-constraints.txt already existed (looks like it got added a little over a week ago). Reviewing the log it seems to have been updated based on individual projects' declared minimums so far which seems to make it a questionable starting point for a baseline. I suppose the assumption is that projects have been merging requirements proposals which bump their declared lower-bounds, though experience suggests that this doesn't happen consistently in projects receiving g-r updates today (they will either ignore the syncs or amend them to undo the lower-bounds changes before merging). At any rate, I suppose that's a separate conversation to be had, and as you say it's just a place to start from but projects will be able to change it to whatever values they want at that point.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From fungi at yuggoth.org Thu Mar 15 14:40:34 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 15 Mar 2018 14:40:34 +0000
Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests
In-Reply-To: 
References: <5823E872-A563-4684-A124-6E509AFF0F8A@windriver.com> <746DEED6-D8E8-4125-87D6-936F2F06508A@windriver.com>
Message-ID: <20180315144034.3dgtk3f2kvx4vlrp@yuggoth.org>

On 2018-03-15 14:16:30 +0000 (+0000), Arkady.Kanevsky at dell.com wrote:
[...]
> This can be submitted anonymously if you like.

Anonymous submissions got disabled (and the existing set of data from them deleted). See the announcement from a month ago for details:

http://lists.openstack.org/pipermail/openstack-dev/2018-February/127103.html
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From dms at danplanet.com Thu Mar 15 14:46:56 2018
From: dms at danplanet.com (Dan Smith)
Date: Thu, 15 Mar 2018 07:46:56 -0700
Subject: [openstack-dev] [nova] about rebuild instance booted from volume
In-Reply-To: (Ben Nemec's message of "Thu, 15 Mar 2018 09:27:55 -0500")
References: Message-ID: 

> Rather than overload delete_on_termination, could another flag like
> delete_on_rebuild be added?

Isn't delete_on_termination already the field we want? To me, that field means "nova owns this".
If that is true, then we should be able to re-image the volume (in-place is ideal, IMHO) and if not, we just fail. Is that reasonable? --Dan From prometheanfire at gentoo.org Thu Mar 15 15:05:50 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 15 Mar 2018 10:05:50 -0500 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1521121433-sup-7650@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <09e1597e-0217-7bed-2b89-d6146c8e79ca@openstack.org> <1521121433-sup-7650@lrrr.local> Message-ID: <20180315150550.nwznbsj3xeytwa35@gentoo.org> On 18-03-15 09:45:38, Doug Hellmann wrote: > Excerpts from Thierry Carrez's message of 2018-03-15 14:34:50 +0100: > > Doug Hellmann wrote: > > > [...] > > > TL;DR > > > ----- > > > > > > Let's stop copying exact dependency specifications into all our > > > projects to allow them to reflect the actual versions of things > > > they depend on. The constraints system in pip makes this change > > > safe. We still need to maintain some level of compatibility, so the > > > existing requirements-check job (run for changes to requirements.txt > > > within each repo) will change a bit rather than going away completely. > > > We can enable unit test jobs to verify the lower constraint settings > > > at the same time that we're doing the other work. > > > > Thanks for the very detailed plan, Doug. It all makes sense to me, > > although I have a precision question (see below). > > > > > [...] > > > We also need to change requirements-check to look at the exclusions > > > to ensure they all appear in the global-requirements.txt list > > > (the local list needs to be a subset of the global list, but > > > does not have to match it exactly). We can't have one project > > > excluding a version that others do not, because we could then > > > end up with a conflict with the upper constraints list that could > > > wedge the gate as we had happen in the past. > > > [...] > > > 2. We should stop syncing dependencies by turning off the > > > propose-update-requirements job entirely. > > > > > > Turning off the job will stop the bot from proposing more > > > dependency updates to projects. > > > [...] > > > After these 3 steps are done, the requirements team will continue > > > to maintain the global-requirements.txt and upper-constraints.txt > > > files, as before. Adding a new dependency to a project will still > > > involve a review step to add it to the global list so we can monitor > > > licensing, duplication, python 3 support, etc. But adjusting the > > > version numbers once that dependency is in the global list will be > > > easier. > > > > How would you set up an exclusion in that new world order ? We used to > > add it to the global-requirements file and the bot would automatically > > sync it to various consuming projects. > > > > Now since any exclusion needs to also appear on the global file, you > > would push it first in the global-requirements, then to the project > > itself, is that correct ? In the end the global-requirements file would > > only contain those exclusions, right ? > > > > The first step would need to be adding it to the global-requirements.txt > list. After that, it would depend on how picky we want to be. If the > upper-constraints.txt list is successfully updated to avoid the release, > we might not need anything in the project. If the project wants to > provide detailed guidance about compatibility, then they could add the > exclusion. 
For example, if a version of oslo.config breaks cinder but > not nova, we might only put the exclusion in global-requirements.txt and > the requirements.txt for cinder. > I wonder if we'd be able to have projects decide via a flag in their tox or zuul config if they'd like to opt into auto-updating exclusions only. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From prometheanfire at gentoo.org Thu Mar 15 15:24:10 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 15 Mar 2018 10:24:10 -0500 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1521110096-sup-3634@lrrr.local> References: <1521110096-sup-3634@lrrr.local> Message-ID: <20180315152410.rijbohawx6tqq37v@gentoo.org> On 18-03-15 07:03:11, Doug Hellmann wrote: > What I Want to Do > ----------------- > > 1. Update the requirements-check test job to change the check for > an exact match to be a check for compatibility with the > upper-constraints.txt value. > > We would check the value for the dependency from upper-constraints.txt > against the range of allowed values in the project. If the > constraint version is compatible, the dependency range is OK. > > This rule means that in order to change the dependency settings > for a project in a way that are incompatible with the constraint, > the constraint (and probably the global requirements list) would > have to be changed first in openstack/requirements. However, if > the change to the dependency is still compatible with the > constraint, no change would be needed in openstack/requirements. > For example, if the global list constraints a library to X.Y.Z > and a project lists X.Y.Z-2 as the minimum version but then needs > to raise that because it needs a feature in X.Y.Z-1, it can do > that with a single patch in-tree. > I think what may be better is for global-requirements to become a gathering place for projects that requirements watches to have their smallest constrainted installable set defined in. Upper-constraints has a req of foo===2.0.3 Project A has a req of foo>=1.0.0,!=1.6.0 Project B has a req of foo>=1.4.0 Global reqs would be updated with foo>=1.4.0,!=1.6.0 Project C comes along and sets foo>=2.0.0 Global reqs would be updated with foo>=2.0.0 This would make global-reqs descriptive rather than prescriptive for versioning and would represent the 'true' version constraints of openstack. > We also need to change requirements-check to look at the exclusions > to ensure they all appear in the global-requirements.txt list > (the local list needs to be a subset of the global list, but > does not have to match it exactly). We can't have one project > excluding a version that others do not, because we could then > end up with a conflict with the upper constraints list that could > wedge the gate as we had happen in the past. > How would this happen when using constraints? A project is not allowed to have a requirement that masks a constriant (and would be verified via the requirements-check job). There's a failure mode not covered, a project could add a mask (!=) to their requirements before we update constraints. The project that was passing the requirements-check job would then become incompatable. 
> We also need to change requirements-check to look at the exclusions
> to ensure they all appear in the global-requirements.txt list
> (the local list needs to be a subset of the global list, but
> does not have to match it exactly). We can't have one project
> excluding a version that others do not, because we could then
> end up with a conflict with the upper constraints list that could
> wedge the gate as we had happen in the past.

How would this happen when using constraints? A project is not allowed to have a requirement that masks a constraint (and that would be verified via the requirements-check job). There's a failure mode not covered, though: a project could add a mask (!=) to its requirements before we update constraints. The project that was passing the requirements-check job would then become incompatible. This means that the requirements-check job would need to be run for each changeset to catch this as soon as it happens, instead of running only on requirements changes.

> We also need to verify that projects do not cap dependencies for
> the same reason. Caps prevent us from advancing to versions of
> dependencies that are "too new" and possibly incompatible. We
> can manage caps in the global requirements list, which would
> cause that list to calculate the constraints correctly.
>
> This change would immediately allow all projects currently
> following the global requirements lists to specify different
> lower bounds from that global list, as long as those lower bounds
> still allow the dependencies to be co-installable. (The upper
> bounds, managed through the upper-constraints.txt list, would
> still be built by selecting the newest compatible version because
> that is how pip's dependency resolver works.)
>
> 2. We should stop syncing dependencies by turning off the
> propose-update-requirements job entirely.
>
> Turning off the job will stop the bot from proposing more
> dependency updates to projects.
>
> As part of deleting the job we can also remove the "requirements"
> case from playbooks/proposal/propose_update.sh, since it won't
> need that logic any more. We can also remove the update-requirements
> command from the openstack/requirements repository, since that
> is the tool that generates the updated list and it won't be
> needed if we aren't proposing updates any more.
>
> 3. Remove the minimum specifications from the global requirements
> list to make clear that the global list is no longer expressing
> minimums.
>
> This clean-up step has been a bit more controversial among the
> requirements team, but I think it is a key piece. As the minimum
> versions of dependencies diverge within projects, there will no
> longer *be* a real global set of minimum values. Tracking a list of
> "highest minimums", would either require rebuilding the list from the
> settings in all projects, or requiring two patches to change the
> minimum version of a dependency within a project.
>
> Maintaining a global list of minimums also implies that we
> consider it OK to run OpenStack as a whole with that list. This
> message conflicts with the message we've been sending about the
> upper constraints list since that was established, which is that
> we have a known good list of versions and deploying all of
> OpenStack with different versions of those dependencies is
> untested.

As noted above, I think gathering the min versions/maskings from openstack projects would be valuable (especially to packagers, who already use our likely invalid values).

> After these 3 steps are done, the requirements team will continue
> to maintain the global-requirements.txt and upper-constraints.txt
> files, as before. Adding a new dependency to a project will still
> involve a review step to add it to the global list so we can monitor
> licensing, duplication, python 3 support, etc. But adjusting the
> version numbers once that dependency is in the global list will be
> easier.

Thanks for writing this up. I think it looks good in general, but like you mentioned before, there is some discussion to be had about gathering and creating a versionspec from all of openstack for requirements.

--
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Thu Mar 15 15:38:05 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 15 Mar 2018 11:38:05 -0400 Subject: [openstack-dev] [glance] OSSN-0076 brainstorming session Friday 16 March Message-ID: I'm working on a spec to alleviate OSSN-0076 that would follow up the OSSN-0075 proposal [0], but have run into some problems. It would be helpful to lay them out and get some feedback. I'll be in the #openstack-glance channel at 16:30 UTC tomorrow (Friday) to discuss. Will take less than 1/2 hour. cheers, brian [0] https://review.openstack.org/#/c/468179/ From jlibosva at redhat.com Thu Mar 15 15:49:24 2018 From: jlibosva at redhat.com (Jakub Libosvar) Date: Thu, 15 Mar 2018 16:49:24 +0100 Subject: [openstack-dev] [neutron][stable] New release for Pike is overdue In-Reply-To: References: Message-ID: Thanks for notice. I sent a patch to request a new release: https://review.openstack.org/#/c/553447/ Jakub On 15/03/2018 11:28, Jens Harbott wrote: > The last neutron release for Pike has been made in November, a lot of > bug fixes have made it into the stable/pike branch, can we please get > a fresh release for it soon? > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kendall at openstack.org Thu Mar 15 16:05:05 2018 From: kendall at openstack.org (Kendall Waters) Date: Thu, 15 Mar 2018 11:05:05 -0500 Subject: [openstack-dev] Vancouver Summit Schedule now live! Message-ID: <2A3E2A7B-8A2D-4FD5-B193-954AD28EA09D@openstack.org> The schedule is now live for the Vancouver Summit! Check out the 100+ sessions, demos, workshops and tutorials that will be featured at the Vancouver Summit, May 21-24. What’s New? As infrastructure has evolved, so has the Summit. In addition to OpenStack features and operations, you'll find a strong focus on cross-project integration and addressing new use cases like edge computing and machine learning. Sessions will feature user stories from the likes of JPMorgan Chase, Progressive Insurance, Target, Wells Fargo, and more, as well as the integration and use of projects like Kata Containers, Kubernetes, Istio, Ceph, ONAP, Ansible, and many others. The schedule is organized by new tracks according to use cases: private & hybrid cloud, public cloud, container infrastructure, CI / CD, edge computing, HPC / GPUs / AI, and telecom / NFV. You can sort within the schedule to find sessions and speakers around each topic or open source project (with new tags!). Please check out this Superuser article and help us promote it via social media: http://superuser.openstack.org/articles/whats-new-vancouver-summit/ Submit Sessions to the Forum The Technical Committee and User Committee are now collecting sessions for the Forum at the Vancouver Summit. If you have a project-specific session, strategic community-wide discussion or cross-project that you would like to propose, add links to the etherpads found at the Vancouver Forum Wiki (https://wiki.openstack.org/wiki/Forum/Vancouver2018) . Time to Register The early bird deadline is approaching, so please register https://www.openstack.org/summit/vancouver-2018/ before prices increase on April 4 at 11:59pm PT. 
For speakers whose sessions were accepted, look for an email from speakersupport at openstack.org for next steps on registration. ATCs and AUCs should also check their inbox for discount codes. Questions? Email summit at openstack.org Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul-andre.raymond at b-yond.com Thu Mar 15 16:23:58 2018 From: paul-andre.raymond at b-yond.com (Paul-Andre Raymond) Date: Thu, 15 Mar 2018 16:23:58 +0000 Subject: [openstack-dev] [Edge-computing] [FEMDC] Brainstorming regarding the Vancouver Forum In-Reply-To: <2139216848.49879210.1521121597993.JavaMail.root@zimbra29-e5> References: <1771123683.49874717.1521121505525.JavaMail.root@zimbra29-e5> <2139216848.49879210.1521121597993.JavaMail.root@zimbra29-e5> Message-ID: <7F60F414-EFCF-4137-827D-F1E67C4143CA@b-yond.com> I have added a couple of points and links. Paul-André -- On 3/15/18, 9:46 AM, "lebre.adrien at free.fr" wrote: Hi all, I just created an FEMDC etherpad following the Melvin's email regarding the next Forum in Vancouver. Please do not hesitate to propose ideas for sessions at the forum : https://wiki.openstack.org/wiki/Forum/Vancouver2018 ++ Ad_ri3n_ _______________________________________________ Edge-computing mailing list Edge-computing at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing From miguel at mlavalle.com Thu Mar 15 16:24:42 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 15 Mar 2018 11:24:42 -0500 Subject: [openstack-dev] [neutron][stable] New release for Pike is overdue In-Reply-To: References: Message-ID: I just +1ed Kuba's patch On Thu, Mar 15, 2018 at 10:49 AM, Jakub Libosvar wrote: > Thanks for notice. I sent a patch to request a new release: > https://review.openstack.org/#/c/553447/ > > Jakub > > On 15/03/2018 11:28, Jens Harbott wrote: > > The last neutron release for Pike has been made in November, a lot of > > bug fixes have made it into the stable/pike branch, can we please get > > a fresh release for it soon? > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu Mar 15 16:56:47 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 15 Mar 2018 17:56:47 +0100 Subject: [openstack-dev] [tripleo] TLS by default In-Reply-To: References: <5ecd3cd3-6732-8ad2-c29f-915f9b86c7f1@redhat.com> Message-ID: <51dc9040-0086-3453-f50c-8dbccb672155@redhat.com> On 03/15/2018 12:51 AM, Julia Kreger wrote: > On Wed, Mar 14, 2018 at 4:52 AM, Dmitry Tantsur wrote: >> Just to clarify: only for public endpoints, right? I don't think e.g. >> ironic-python-agent can talk to self-signed certificates yet. 
>> >> > > For what it is worth, it is possible for IPA to speak to a self signed > certificate, although it requires injecting the signing private CA > certificate into the ramdisk or iso image that is being used. There > are a few other options that can be implemented, but those may also > lower overall security posture. Yep, that's the problem. We can quite easily make IPA talk to custom https. We cannot securely make IPA expose an https endpoint without using virtual media (not supported by tripleo, vendor-specific). We cannot (IIUC) make iPXE use https with custom certificates without rebuilding the firmware from source. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pkovar at redhat.com Thu Mar 15 17:17:04 2018 From: pkovar at redhat.com (Petr Kovar) Date: Thu, 15 Mar 2018 18:17:04 +0100 Subject: [openstack-dev] [docs] Retiring openstack-doc-specs-core Message-ID: <20180315181704.0ff278c097ff529c27c3959a@redhat.com> Hi all, For historical reasons, the docs team maintains a separate core group for the docs-specs repo. With the new docs processes in place, we agreed at the Rocky PTG to further simplify the docs group setup and retire openstack-doc-specs-core by removing the existing members and adding openstack-doc-core as a group member. That way, we will only have one core group, which better reflects the current status of the team. Would there be any objections to this? The current openstack-doc-specs-core membership can be found here: https://review.openstack.org/#/admin/groups/384,members Thanks, pk From cdent+os at anticdent.org Thu Mar 15 17:30:16 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 15 Mar 2018 17:30:16 +0000 (GMT) Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, A rousing good time at the API-SIG meeting today. We opened with some discussion on what might be missing from the Methods [7] section of the HTTP guidelines. At the PTG we had discussed that perhaps we needed more info on which methods were appropriate when. It turns that what we probably need is better discoverability, so we're going to work on that but at the same time do a general review of that entire page. We then talked about microversions a bit (because it wouldn't be an API-SIG without them). There's an in-progress history of microversions document (linked below) that we need to decide if we'll revive. If you have a strong opinion, let us know. And finally we explored the options for how or if Neutron can cleanly resolve the handling of invalid query parameters. This was raised a while back in an email thread [8]. It's generally a good idea not to break existing client code, but what if that client code is itself broken? The next step will be to make the choice configurable. Neutron doesn't support microversions so "throw another microversion at it" won't work. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. 
* Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None this week. # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None this week. # Guidelines Currently Under Review [3] * Add guidance on needing cache-control headers https://review.openstack.org/550468 * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://specs.openstack.org/openstack/api-wg/guidelines/http.html#http-methods [8] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128023.html Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Thu Mar 15 18:14:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Mar 2018 13:14:58 -0500 Subject: [openstack-dev] Vancouver Summit Schedule now live! In-Reply-To: <2A3E2A7B-8A2D-4FD5-B193-954AD28EA09D@openstack.org> References: <2A3E2A7B-8A2D-4FD5-B193-954AD28EA09D@openstack.org> Message-ID: <1a459ad9-c1a3-e26c-bb7d-6d71e41098a9@gmail.com> On 3/15/2018 11:05 AM, Kendall Waters wrote: > The schedule is organized by new tracks according to use cases: private > & hybrid cloud, public cloud, container infrastructure, CI / CD, edge > computing, HPC / GPUs / AI, and telecom / NFV. You can sort within the > schedule to find sessions and speakers around each topic or open source > project (with new tags!). I've asked about this before, but are the project updates no longer happening at this summit? Maybe those are too silo'ed to fall into the track buckets. I ask because I don't see anything in the schedule about project updates. -- Thanks, Matt From jungleboyj at gmail.com Thu Mar 15 18:20:22 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 15 Mar 2018 13:20:22 -0500 Subject: [openstack-dev] Vancouver Summit Schedule now live! 
In-Reply-To: <1a459ad9-c1a3-e26c-bb7d-6d71e41098a9@gmail.com> References: <2A3E2A7B-8A2D-4FD5-B193-954AD28EA09D@openstack.org> <1a459ad9-c1a3-e26c-bb7d-6d71e41098a9@gmail.com> Message-ID: <967cba48-9762-0573-e143-2276bf598878@gmail.com> On 3/15/2018 1:14 PM, Matt Riedemann wrote: > On 3/15/2018 11:05 AM, Kendall Waters wrote: >> The schedule is organized by new tracks according to use cases: >> private & hybrid cloud, public cloud, container infrastructure, CI / >> CD, edge computing, HPC / GPUs / AI, and telecom / NFV. You can sort >> within the schedule to find sessions and speakers around each topic >> or open source project (with new tags!). > > I've asked about this before, but are the project updates no longer > happening at this summit? Maybe those are too silo'ed to fall into the > track buckets. I ask because I don't see anything in the schedule > about project updates. > Matt, Project updates are happening.  I think it is Anne and Kendall N that are setting those up.  An e-mail went out to PTLs about that a while back asking if they wanted to participate. I found it weird too that the schedule went out without those listed.  I know they are busy, however, trying to coordinate those and the onboarding sessions. Jay From openstack at fried.cc Thu Mar 15 18:30:19 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 15 Mar 2018 13:30:19 -0500 Subject: [openstack-dev] [nova][placement] update_provider_tree design updates Message-ID: <90a9be02-cbba-cc7a-9275-9c7060797c2a@fried.cc> One of the takeaways from the Queens retrospective [1] was that we should be summarizing discussions that happen in person/hangout/IRC/etc. to the appropriate mailing list for the benefit of those who weren't present (or paying attention :P ). This is such a summary. As originally conceived, ComputeDriver.update_provider_tree was intended to be the sole source of truth for traits and aggregates on resource providers under its purview. Then came the idea of reflecting compute driver capabilities as traits [2], which would be done outside of update_provider_tree, but still within the bounds of nova compute. Then Friday discussions at the PTG [3] brought to light the fact that we need to honor traits set by outside agents (operators, other services like neutron, etc.), effectively merging those with whatever the virt driver sets. Concerns were raised about how to reconcile overlaps, and in particular how compute (via update_provider_tree or otherwise) can know if a trait is safe to *remove*. At the PTG, we agreed we need to do this, but deferred the details. ...which we discussed earlier this week in IRC [4][5]. We concluded: - Compute is the source of truth for any and all traits it could ever assign, which will be a subset of what's in os-traits, plus whatever CUSTOM_ traits it stakes a claim to. If an outside agent sets a trait that's in that list, compute can legitimately remove it. If an outside agent removes a trait that's in that list, compute can reassert it. - Anything outside of that list of compute-owned traits is fair game for outside agents to set/unset. Compute won't mess with those, ever. - Compute (and update_provider_tree) will therefore need to know what that list comprises. Furthermore, it must take care to use merging logic such that it only sets/unsets traits it "owns". - To facilitate this on the compute side, ProviderTree will get new methods to add/remove provider traits. 
(Technically, it could all be done via update_traits [6], which replaces the entire set of traits on a provider, but then every update_provider_tree implementation would have to write the same kind of merging logic.) - For operators, we'll need OSC affordance for setting/unsetting provider traits. And finally: - Everything above *also* applies to provider aggregates. NB: Here there be tygers. Unlike traits, the comprehensive list of which can conceivably be known a priori (even including CUSTOM_*s), aggregate UUIDs are by their nature unique and likely generated dynamically. Knowing that you "own" an aggregate UUID is relatively straightforward when you need to set it; but to know you can/must unset it, you need to have kept a record of having set it in the first place. A record that persists e.g. across compute service restarts. Can/should virt drivers write a file? If so, we better make sure it works across upgrades. And so on. Ugh. For the time being, we're kinda punting on this issue until it actually becomes a problem IRL. And now for the moment you've all been awaiting with bated breath: - Delta [7] to the update_provider_tree spec [8]. - Patch for ProviderTree methods to add/remove traits/aggregates [9]. - Patch modifying the update_provider_tree docstring, and adding devref content for update_provider_tree [10]. Please feel free to email or reach out in #openstack-nova if you have any questions. Thanks, efried [1] https://etherpad.openstack.org/p/nova-queens-retrospective (L122 as of this writing) [2] https://review.openstack.org/#/c/538498/ [3] https://etherpad.openstack.org/p/nova-ptg-rocky (L496-502 aotw) [4] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-12.log.html#t2018-03-12T16:02:08 [5] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-12.log.html#t2018-03-12T19:20:23 [6] https://github.com/openstack/nova/blob/5f38500df6a8e1665b968c3e98b804e0fdfefc63/nova/compute/provider_tree.py#L494 [7] https://review.openstack.org/552122 [8] http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/update-provider-tree.html [9] https://review.openstack.org/553475 [10] https://review.openstack.org/553476 From kent.gordon at verizonwireless.com Thu Mar 15 18:31:39 2018 From: kent.gordon at verizonwireless.com (Gordon, Kent S) Date: Thu, 15 Mar 2018 13:31:39 -0500 Subject: [openstack-dev] OpenStack Ansible Disk requirements [docs] [osa] Message-ID: Compute host disk requirements for Openstack Ansible seem high in the documentation. I think I have used smaller compute hosts in the past. Did something change in Queens? https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/overview-requirements.html Compute hosts Disk space requirements depend on the total number of instances running on each host and the amount of disk space allocated to each instance. - Compute hosts must have a minimum of 1 TB of disk space available. -- Kent S. Gordon kent.gordon at verizonwireless.com Work:682-831-3601 Mobile: 817-905-6518 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From richwellum at gmail.com Thu Mar 15 18:33:56 2018 From: richwellum at gmail.com (Richard Wellum) Date: Thu, 15 Mar 2018 18:33:56 +0000 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan In-Reply-To: References: Message-ID: +1 On Thu, Mar 15, 2018 at 5:40 AM Martin André wrote: > +1 > > On Tue, Mar 13, 2018 at 5:50 PM, Swapnil Kulkarni > wrote: > > On Mon, Mar 12, 2018 at 7:36 AM, Jeffrey Zhang > wrote: > >> Kolla core reviewer team, > >> > >> It is my pleasure to nominate caoyuan for kolla core team. > >> > >> caoyuan's output is fantastic over the last cycle. And he is the most > >> active non-core contributor on Kolla project for last 180 days[1]. He > >> focuses on configuration optimize and improve the pre-checks feature. > >> > >> Consider this nomination a +1 vote from me. > >> > >> A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 > >> is a veto. Voting is open for 7 days until Mar 12th, or a unanimous > >> response is reached or a veto vote occurs. > >> > >> [1] http://stackalytics.com/report/contribution/kolla-group/180 > >> -- > >> Regards, > >> Jeffrey Zhang > >> Blog: http://xcodest.me > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > +1 > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Mar 15 19:29:02 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 15 Mar 2018 19:29:02 +0000 (GMT) Subject: [openstack-dev] [nova][placement] update_provider_tree design updates In-Reply-To: <90a9be02-cbba-cc7a-9275-9c7060797c2a@fried.cc> References: <90a9be02-cbba-cc7a-9275-9c7060797c2a@fried.cc> Message-ID: On Thu, 15 Mar 2018, Eric Fried wrote: > One of the takeaways from the Queens retrospective [1] was that we > should be summarizing discussions that happen in person/hangout/IRC/etc. > to the appropriate mailing list for the benefit of those who weren't > present (or paying attention :P ). This is such a summary. Thank you _very_ much for doing this. I've got two questions within. > ...which we discussed earlier this week in IRC [4][5]. We concluded: > > - Compute is the source of truth for any and all traits it could ever > assign, which will be a subset of what's in os-traits, plus whatever > CUSTOM_ traits it stakes a claim to. If an outside agent sets a trait > that's in that list, compute can legitimately remove it. If an outside > agent removes a trait that's in that list, compute can reassert it. Where does that list come from? Or more directly how does Compute stake the claim for "mine"? How does an outside agent know what Compute has claimed? 
Presumably they want to know that so they can avoid wastefully doing something that's going to get clobbered?

--
Chris Dent                 ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From johnsomor at gmail.com  Thu Mar 15 19:37:31 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Thu, 15 Mar 2018 12:37:31 -0700
Subject: [openstack-dev] [Octavia] Using Octavia without neutron's extensions allowed-address-pairs and security-groups.
In-Reply-To:
References:
Message-ID:

Hi Vadim,

Yes, currently the only network driver available for Octavia (called allowed-address-pairs) uses the allowed-address-pairs feature of neutron. This allows active/standby and VIP migration during failover situations.

If you need to run without that feature, a non-allowed-address-pairs driver will need to be created. This driver would not support the active/standby load balancer topology.

Michael

On Thu, Mar 15, 2018 at 1:39 AM, Вадим Пономарев wrote:
> Hi,
>
> I'm trying to install Octavia (from branch master) in my openstack
> installation. In my installation, neutron works with disabled extension
> allowed-address-pairs and disabled extension security-groups. This is done
> to improve performance. At the moment, i see that Octavia supporting for
> neutron only the network_driver allowed_address_pairs_driver, but this
> driver requires the extensions [1]. How can i use Octavia without the
> extensions? Or the only option is to write your own driver?
>
> [1]
> https://github.com/openstack/octavia/blob/master/octavia/network/drivers/neutron/allowed_address_pairs.py#L57
>
> --
> Best regards,
> Vadim Ponomarev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From openstack at nemebean.com  Thu Mar 15 19:40:49 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Thu, 15 Mar 2018 14:40:49 -0500
Subject: [openstack-dev] [nova] about rebuild instance booted from volume
In-Reply-To:
References:
Message-ID:

On 03/15/2018 09:46 AM, Dan Smith wrote:
>> Rather than overload delete_on_termination, could another flag like
>> delete_on_rebuild be added?
>
> Isn't delete_on_termination already the field we want? To me, that field
> means "nova owns this". If that is true, then we should be able to
> re-image the volume (in-place is ideal, IMHO) and if not, we just
> fail. Is that reasonable?

If that's what the flag means then it seems reasonable. I got the impression from the previous discussion that not everyone was seeing it that way though.

From kennelson11 at gmail.com  Thu Mar 15 19:53:16 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Thu, 15 Mar 2018 19:53:16 +0000
Subject: [openstack-dev] Vancouver Summit Schedule now live!
In-Reply-To: <967cba48-9762-0573-e143-2276bf598878@gmail.com>
References: <2A3E2A7B-8A2D-4FD5-B193-954AD28EA09D@openstack.org> <1a459ad9-c1a3-e26c-bb7d-6d71e41098a9@gmail.com> <967cba48-9762-0573-e143-2276bf598878@gmail.com>
Message-ID:

Hey Matt and Jay :)

Jay's correct, the Updates are happening. Project Updates are their own thing separate from the standard submission. Anne Bertucio had sent an email out to all PTLs (shortly before the election so it went to you and not Melanie) saying we had slots and you needed to request one through her. I **thiiink** there are spots left, but I am not positive.
I had sent an email AFTER the election to PTLs about Project Onboarding and needed a response if the team was interested. I don't currently have Nova on the list and I have less than 5 spots left so I need to know ASAP if you want one. We had been holding off on scheduling Onboarding and Updates until we could do them mostly together (this was a request from several people over the last two rounds of Onboarding). Anne, Kendall W and I have a call for starting to puzzle things together into the schedule late tomorrow. Barring any issues, all of that should be live on the schedule sometime next week. -Kendall (diablo_rojo) On Thu, Mar 15, 2018 at 11:20 AM Jay S Bryant wrote: > > > On 3/15/2018 1:14 PM, Matt Riedemann wrote: > > On 3/15/2018 11:05 AM, Kendall Waters wrote: > >> The schedule is organized by new tracks according to use cases: > >> private & hybrid cloud, public cloud, container infrastructure, CI / > >> CD, edge computing, HPC / GPUs / AI, and telecom / NFV. You can sort > >> within the schedule to find sessions and speakers around each topic > >> or open source project (with new tags!). > > > > I've asked about this before, but are the project updates no longer > > happening at this summit? Maybe those are too silo'ed to fall into the > > track buckets. I ask because I don't see anything in the schedule > > about project updates. > > > Matt, > > Project updates are happening. I think it is Anne and Kendall N that > are setting those up. An e-mail went out to PTLs about that a while > back asking if they wanted to participate. > > I found it weird too that the schedule went out without those listed. I > know they are busy, however, trying to coordinate those and the > onboarding sessions. > > Jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Thu Mar 15 19:55:18 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 15 Mar 2018 19:55:18 +0000 Subject: [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> Deleting all snapshots would seem dangerous though... 1. I want to reset my instance to how it was before 2. I'll just do a snapshot in case I need any data in the future 3. rebuild 4. oops Tim -----Original Message----- From: Ben Nemec Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 15 March 2018 at 20:42 To: Dan Smith Cc: "OpenStack Development Mailing List (not for usage questions)" , openstack-operators Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume On 03/15/2018 09:46 AM, Dan Smith wrote: >> Rather than overload delete_on_termination, could another flag like >> delete_on_rebuild be added? > > Isn't delete_on_termination already the field we want? To me, that field > means "nova owns this". If that is true, then we should be able to > re-image the volume (in-place is ideal, IMHO) and if not, we just > fail. Is that reasonable? If that's what the flag means then it seems reasonable. I got the impression from the previous discussion that not everyone was seeing it that way though. 
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From melwittt at gmail.com  Thu Mar 15 20:11:49 2018
From: melwittt at gmail.com (melanie witt)
Date: Thu, 15 Mar 2018 13:11:49 -0700
Subject: [openstack-dev] Vancouver Summit Schedule now live!
In-Reply-To:
References: <2A3E2A7B-8A2D-4FD5-B193-954AD28EA09D@openstack.org> <1a459ad9-c1a3-e26c-bb7d-6d71e41098a9@gmail.com> <967cba48-9762-0573-e143-2276bf598878@gmail.com>
Message-ID: <55fde0cd-33dd-8b92-7d30-589ca7a2e6c0@gmail.com>

On Thu, 15 Mar 2018 19:53:16 +0000, Kendall Nelson wrote:
>
> Jay's correct, the Updates are happening. Project Updates are their own
> thing separate from the standard submission. Anne Bertucio had sent an
> email out to all PTLs (shortly before the election so it went to you and
> not Melanie) saying we had slots and you needed to request one through
> her. I **thiiink** there are spots left, but I am not positive.

Matt did request a project update slot back before the election. I've emailed Anne directly to find out what's going on and whether we can get a project update slot.

> I had sent an email AFTER the election to PTLs about Project Onboarding
> and needed a response if the team was interested. I don't currently have
> Nova on the list and I have less than 5 spots left so I need to know
> ASAP if you want one.

FYI to all, I had held off on requesting an onboarding slot until after asking for volunteers to help me with it in today's Nova meeting so I could have all of the speaker names ready. But I've gone ahead and gotten us a slot with only me as speaker for now and I'll update Kendall when/if I get volunteer speakers to help. :)

-melanie

From melwittt at gmail.com  Thu Mar 15 20:30:00 2018
From: melwittt at gmail.com (melanie witt)
Date: Thu, 15 Mar 2018 13:30:00 -0700
Subject: [openstack-dev] [nova][neutron] Rocky PTG summary - nova/neutron
Message-ID:

Hello Stackers,

I've put together an etherpad [0] for the summary of the nova/neutron session from the PTG in the Croke Park Hotel breakfast area and included it as a plain text export on this email. Please feel free to edit or reply to this thread to add/correct anything I've missed.

Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-ptg-rocky-neutron-summary

*Nova/Neutron: Rocky PTG Summary
https://etherpad.openstack.org/p/nova-ptg-rocky L159

*Key topics
* NUMA-aware vSwitches
* Minimum bandwidth-based scheduling
* New port binding API in Neutron
* Filtering instances by floating IP in Nova
* Nova bug around re-attaching network interfaces on nova-compute restart -- port re-plug and re-create results in loss of some configuration like VLANs
    * https://bugs.launchpad.net/nova/+bug/1670628
* Routed provider networks need to move to placement aggregates

*Agreements and decisions
* For NUMA-aware vSwitches, we'll go forward with a config-based solution for Rocky and deprecate it later when we have support in placement for the necessary inventory reporting bit (which will be implemented as part of the bandwidth-based scheduling work). We'll use dynamic attributes like "physnet_mapping_[name] = nodes" to avoid the JSON blob problem (Cinder and Manila do this) and thus we'll avoid having to deprecate any additional YAML config file or JSON blob based thing when the placement support is available.
    * Spec: https://review.openstack.org/#/c/541290/
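    * As a purely illustrative sketch of that "dynamic attribute" idea (the option prefix, section name, and helper function below are invented for illustration, not the proposed spec -- see the review above for the real details), such options could be gathered by scanning a config section for a common prefix, similar to how Cinder and Manila handle per-backend options:

          # Hypothetical nova.conf fragment, one option per physnet:
          #
          #   [neutron]
          #   physnet_mapping_physnet0 = 0
          #   physnet_mapping_physnet1 = 0,1

          import configparser

          def physnet_numa_mapping(conf_path, prefix='physnet_mapping_'):
              """Return {physnet name: set of NUMA node IDs} (sketch only)."""
              cp = configparser.ConfigParser()
              cp.read(conf_path)
              if not cp.has_section('neutron'):
                  return {}
              return {key[len(prefix):]: {int(n) for n in value.split(',')}
                      for key, value in cp.items('neutron')
                      if key.startswith(prefix)}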
* Spec: https://review.openstack.org/#/c/541290/ * On minimum bandwidth-based scheduling: * Neutron will create the network related RPs under the compute RP in placement * It's reasonable to require unique hostnames (across cells, the internet, the world) and we'll solve the host -- compute uuid issue separately * Neutron will report the bandwidth inventory to placement * On the interaction of Neutron and Nova to communicate the requested bandwidth per port: * The requested minimum bandwidth for a neturon port wil be available in the neutron port API https://review.openstack.org/#/c/396297/7/specs/pike/strict-minimum-bandwidth-support.rst at 68 * The work does not depend on the new neutron port binding API * We'll need not just resources but traits as well on the neutron port and neutron should add the physnet to the port as a trait. We'll assume that the requested resources and traits are from a single provider per port * We don't need to block bandwidth-based scheduling support for doing port creation in conductor (it's not trivial), however, if nova creates a port on a network with a QoS policy, nova is going to have to munge the allocations and update placement (from nova-compute) ... so maybe we should block this on moving port creation to conductor after all * Nova will merge the requested bandwidth into the allocation_candidate request by a new request filter * Nova will create the allocation in placement for bandwidth resources and the allocation uuid will be the instance uuid. Multiple ports with different QoS rules will be distinguishable because they will have allocations from different providers * As PF/VF modeling in placement has not been done yet we can phase this feature to support OVS first and add support for SRIOV after the PF/VF modelling is done * Nova spec: https://review.openstack.org/#/c/502306/ * Neutron spec: https://review.openstack.org/#/c/508149 * On the new port binding API in Neutron, there is solid progress on the Neutron side and the Nova skeleton patches are making progress and depend on the Neutron patch, so some testing will be possible soon (still need to plumb in the libvirt driver changes) * Spec: https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/neutron-new-port-binding-api.html * Neutron patch: https://review.openstack.org/#/c/414251/ * On the Nova bug about re-attaching network interfaces: * There was a bug in OVS back in 2014 for which a workaround was added: https://github.com/openstack/nova/commit/33cc64fb817 * The bug was fixed in OVS in 2015 and is available in OVS 2.6.0 onward: https://github.com/openvswitch/ovs/commit/e21c6643a02c6b446d2fbdfde366ea303b4c2730 * The old workaround in Nova (now in os-vif) was determined to be causing the bug, so a fix to os-vif was made which essentially reverted the workaround: https://review.openstack.org/#/c/546588 * We can close the bug in Nova once we have a os-vif library release and we depend on its version in our requirements.txt * On routed provider networks: * On the Neutron side, this is already done: https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html * Summit videos about routed provider networks: * https://www.openstack.org/videos/barcelona-2016/scaling-up-openstack-networking-with-routed-networks * https://www.openstack.org/videos/sydney-2017/openstack-networking-routed-networks-new-features From mikal at stillhq.com Thu Mar 15 20:42:00 2018 From: mikal at stillhq.com (Michael Still) Date: Fri, 16 Mar 2018 07:42:00 +1100 Subject: [openstack-dev] 
[Tatu][Nova] Handling instance destruction Message-ID: Heya, I've just stumbled across Tatu and the design presentation [1], and I am wondering how you handle cleaning up instances when they are deleted given that nova vendordata doesn't expose a "delete event". Specifically I'm wondering if we should add support for such an event to vendordata somehow, given I can now think of a couple of use cases for it. Thanks, Michael 1: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Mar 15 20:42:43 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 15 Mar 2018 13:42:43 -0700 Subject: [openstack-dev] [nova][cinder] Rocky PTG summary - nova/cinder In-Reply-To: References: Message-ID: I realized I forgot to add the [cinder] tag to the subject line when I sent this originally. Sorry about that. Hello all, Here’s the PTG summary etherpad [0] for the nova/cinder session from the PTG, also included as a plain text export on this email. Cheers, -melanie [0] https://etherpad.openstack.org/p/nova-ptg-rocky-cinder-summary *Nova/Cinder: Rocky PTG Summary https://etherpad.openstack.org/p/nova-ptg-rocky L63 *Key topics * New attach flow fixes and multi-attach * Attach mode * Swap volume with two read/write attachments * SHELVED_OFFLOADED and 'in-use' state in old attach flow * Server multi-create with attaching to the same volume fails * Data migration for old-style attachments * Volume replication for in-use volumes * Object-ifying os-brick connection_info * Formatting blank encrypted volumes during creation on the cinder side * Volume detail show reveals the attached compute hostname for non-admins * Bulk volume create/attach *Agreements and decisions * To handle attach mode for a multi-attach volume to several instances, we will change the compute API to allow the user to pass the attach mode so we can pass it through to cinder * The second attachment is going to be read/write by default and if the user wants read-only, they have to specify it * Spec: https://review.openstack.org/#/c/552078/ * Swap volume with two read/write attachments could definitely corrupt data. However, the cinder API doesn't allow retype/migration of in-use multi-attach volumes, so this isn't a problem right now * It would be reasonable to fix SHELVED_OFFLOADED to leave the volume in 'reserved' state instead of 'in-use', but it's low priority * The bug with server multi-create and multi-attach will be fixed on the cinder side and we'll add a new compute API microversion to leverage the cinder fix * Spec: https://review.openstack.org/#/c/552078/ * We'll migrate old-style attachments on-the-fly when a change is made to a volume, such as a migration. For the rest, we'll migrate old-style attachments on compute startup to new-style attachments * Compute startup data migration patch: https://review.openstack.org/#/c/549130/ * For volume replication of in-use volumes, on the cinder side, we'll need a prototype and spec, and drivers will need to indicate the type of replication and what recovery on the nova side needs to be. 
On the nova side, we'll need a new API microversion for the os-server-external-events change (like extended volume) * Owner: jgriffith * On the possibility of object-ifying connection_info in os-brick, it would be best to defer it until nova/neutron have worked out vif negotiation using os-vif * lyarwood asked to restore https://review.openstack.org/#/c/269867/ * On formatting blank encrypted volumes during creation, it sounded like we had agreement to fix it on the cinder side as they already have code for it. Need to double-check with the cinder team to make sure * For volume detail show revealing the attached compute hostname for non-admins, cinder will make a change to add a policy to not display the compute hostname for non-admins * Note: this doesn't impact nova, but it might impact glance. * On bulk volume create/attach, it will be up to cinder to decide whether they will want to implement bulk create. In nova, we are not going to support bulk attach as that's a job better done by an orchestration system like Heat * Note: Cinder team agreed to not support bulk create: https://wiki.openstack.org/wiki/CinderRockyPTGSummary#Bulk_Volume_Create.2FAttach From openstack at fried.cc Thu Mar 15 21:08:52 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 15 Mar 2018 16:08:52 -0500 Subject: [openstack-dev] [nova][placement] update_provider_tree design updates In-Reply-To: References: <90a9be02-cbba-cc7a-9275-9c7060797c2a@fried.cc> Message-ID: Excellent and astute questions, both of which came up in the discussion, but I neglected to mention. (I had to miss *something*, right?) See inline. On 03/15/2018 02:29 PM, Chris Dent wrote: > On Thu, 15 Mar 2018, Eric Fried wrote: > >> One of the takeaways from the Queens retrospective [1] was that we >> should be summarizing discussions that happen in person/hangout/IRC/etc. >> to the appropriate mailing list for the benefit of those who weren't >> present (or paying attention :P ).  This is such a summary. > > Thank you _very_ much for doing this. I've got two questions within. > >> ...which we discussed earlier this week in IRC [4][5].  We concluded: >> >> - Compute is the source of truth for any and all traits it could ever >> assign, which will be a subset of what's in os-traits, plus whatever >> CUSTOM_ traits it stakes a claim to.  If an outside agent sets a trait >> that's in that list, compute can legitimately remove it.  If an outside >> agent removes a trait that's in that list, compute can reassert it. > > Where does that list come from? Or more directly how does Compute > stake the claim for "mine"? One piece of the list should come from the traits associated with the compute driver capabilities [2]. Likewise anything else in the future that's within compute but outside of virt. In other words, we're declaring that it doesn't make sense for an operator to e.g. set the "has_imagecache" trait on a compute if the compute doesn't do that itself. The message being that you can't turn on a capability by setting a trait. Beyond that, each virt driver is going to be responsible for figuring out its own list. Thinking this through with my PowerVM hat on, it won't actually be as hard as it initially sounded - though it will require more careful accounting. Essentially, the driver is going to ask the platform questions and get responses in its own language; then map those responses to trait names. 
So we'll be writing blocks like:

    if sys_caps.can_modify_io:
        provider_tree.add_trait(nodename, "CUSTOM_LIVE_RESIZE_CAPABLE")
    else:
        provider_tree.remove_trait(nodename, "CUSTOM_LIVE_RESIZE_CAPABLE")

And, for some subset of the "owned" traits, we should be able to maintain a dict such that this works:

    # trait_map: {platform feature: trait name}; iterate the keys so
    # the lookups below line up.
    for feature in trait_map:
        if feature in sys_features:
            provider_tree.add_trait(nodename, trait_map[feature])
        else:
            provider_tree.remove_trait(nodename, trait_map[feature])

BUT what about *dynamic* features? If I have code like (don't kill me):

    # Build a trait name on the fly from whatever hardware shows up.
    vendor_id_trait = 'CUSTOM_DEV_VENDORID_' + slugify(io_device.vendor_id)
    provider_tree.add_trait(io_dev_rp, vendor_id_trait)

...then there's no way I can know ahead of time what all those might be. (In particular, if I want to support new devices without updating my code.) I.e. I *can't* write the corresponding provider_tree.remove_trait(...) condition. Maybe that never becomes a real problem because we'll never need to remove a dynamic trait. Or maybe we can tolerate "leakage". Or maybe we do something clever-but-ugly with namespacing (if trait.startswith('CUSTOM_DEV_VENDORID_')...). We're consciously kicking this can down the road. And note that this "dynamic" problem is likely to be a much larger portion (possibly all) of the domain when we're talking about aggregates.

Then there's ironic, which is currently set up to get its traits blindly from Inspector. So Inspector not only needs to maintain the "owned traits" list (with all the same difficulties as above), but it must also either a) communicate that list to ironic virt so the latter can manage the add/remove logic; or b) own the add/remove logic and communicate the individual traits with a +/- on them so virt knows whether to add or remove them.

> How does an outside agent know what Compute has claimed? Presumably
> they want to know that so they can avoid wastefully doing something
> that's going to get clobbered?

Yup [11]. It was deemed that we don't need an API/CLI to discover those lists (assuming that would even be possible). The reasoning was two-pronged:

- We'll document that there are traits "owned" by nova and attempts to set/unset them will be frustrated. You can't find out which ones they are except when a manually-set/-unset trait magically dis-/re-appears.
- It probably won't be an issue because outside agents will be setting traits based on some specific thing they want to do, and the documentation for that thing will specify traits that are known not to interfere with those in nova's wheelhouse.

> [2] https://review.openstack.org/#/c/538498/

[11] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-12.log.html#t2018-03-12T16:26:29
At the same time, the OpenStack Foundation has identified a number of strategic focus areas related to open infrastructure in which to invest. CI/CD is one of these. The OpenStack project infrastructure team, the Zuul team, and the Foundation staff recently discussed these issues and we feel that establishing Zuul as its own top-level project with the support of the Foundation would benefit everyone. It's too early in the process for me to say what all the implications are, but here are some things I feel confident about: * The folks supporting the Zuul running for OpenStack will continue to do so. We love OpenStack and it's just way too fun running the world's most amazing public CI system to do anything else. * Zuul will be independently promoted as a CI/CD tool. We are establishing our own website and mailing lists to facilitate interacting with folks who aren't otherwise interested in OpenStack. You can expect to hear more about this over the coming months. * We will remain just as open as we have been -- the "four opens" are intrinsic to what we do. As a first step in this process, I have proposed a change[1] to remove Zuul from the list of official OpenStack projects. If you have any questions, please don't hesitate to discuss them here, or privately contact me or the Foundation staff. -Jim [1] https://review.openstack.org/552637 From dms at danplanet.com Thu Mar 15 22:29:13 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 15 Mar 2018 15:29:13 -0700 Subject: [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> (Tim Bell's message of "Thu, 15 Mar 2018 19:55:18 +0000") References: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> Message-ID: > Deleting all snapshots would seem dangerous though... > > 1. I want to reset my instance to how it was before > 2. I'll just do a snapshot in case I need any data in the future > 3. rebuild > 4. oops Yep, for sure. I think if there are snapshots, we have to refuse to do te thing. My comment was about the "does nova have authority to destroy the root volume during a rebuild" and I think it does, if delete_on_termination=True, and if there are no snapshots. --Dan From mriedemos at gmail.com Thu Mar 15 22:35:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Mar 2018 17:35:01 -0500 Subject: [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> Message-ID: <93666d2a-c543-169c-fe07-499e5340622b@gmail.com> On 3/15/2018 5:29 PM, Dan Smith wrote: > Yep, for sure. I think if there are snapshots, we have to refuse to do > te thing. My comment was about the "does nova have authority to destroy > the root volume during a rebuild" and I think it does, if > delete_on_termination=True, and if there are no snapshots. Agree with this. Things do get a bit weird with delete_on_termination and if nova 'owns' the volume. delete_on_termination is False by default, even if you're doing boot from volume with source_type of 'image' or 'snapshot' where nova creates the volume for you. If a user really cared about preserving the volume, they'd probably pre-create it (with their favorite volume type since you can't tell nova the volume type to use) and pass it to nova with delete_on_termination=False explicitly. 
Given the defaults, I'm not sure how many people are going to specify delete_on_termination=True, thinking about the implications, which then means they can't rebuild their volume-backed instance later because nova can't / won't delete the volume. If we can solve this without deleting the volume at all and just re-image it, then it's a non-issue. -- Thanks, Matt From chris at openstack.org Thu Mar 15 22:44:41 2018 From: chris at openstack.org (Chris Hoge) Date: Thu, 15 Mar 2018 15:44:41 -0700 Subject: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s Message-ID: <27126DC0-9C72-4442-9F93-05B6E3745BED@openstack.org> As I've been working more in the Kubernetes community, I've been evaluating the different points of integration between OpenStack services and the Kubernetes application platform. One of the weaker points of integration has been in using the OpenStack LBaaS APIs to create load balancers for Kubernetes applications. Using this as a framing device, I'd like to begin a discussion about the general development, deployment, and usage of the LBaaS API and how different parts of our community can rally around and strengthen the API in the coming year. I'd like to note right from the beginning that this isn't a disparagement of the fantastic work that's being done by the Octavia team, but rather an evaluation of the current state of the API and a call to our rich community of developers, cloud deployers, users, and app developers to help move the API to a place where it is expected to be present and shows the same level of consistency across deployments that we see with the Nova, Cinder, and Neutron core APIs. The seed of this discussion comes from my efforts to enable third-party Kubernetes cloud provider testing, as well as discussions with the Kubernetes-SIG-OpenStack community in the #sig-openstack Slack channel in the Kubernetes organization[0]. As a full disclaimer, my recounting of this discussion represents my own impressions, and although I mention active participants by name I do not represent their views. Any mistakes I make are my own. To set the stage, Kubernetes uses a third-party load-balancer service (either from a Kubernetes hosted application or from a cloud-provider API) to provide high-availability for the applications it manages. The OpenStack provider offers a generic interface to the LBaaSv2, with an option to enable Octavia instead of the Neutron API. The provider is build off of the GopherCloud SDK. In my own efforts to enable testing of this provider, I'm using Terraform to orchestrate the K8s deployment and installation. Since I needed to use a public cloud provider to turn this automated testing over to a third party, I chose Vexxhost, as they have been generous donors in this effort for the the CloudLab efforts in general, and have provided tremendous support in debugging problems I've run in to. The first major issue I ran in to was a race condition in using the Neutron LBaaSv2 API. It turns out that with Terraform, it's possible to tear down resources in a way that causes Neutron to leak administrator-privileged resources that can not be deleted by a non-privileged users. In discussions with the Neutron and Octavia teams, it was strongly recommended that I move away from the Neutron LBaaSv2 API and instead adopt Octavia. Vexxhost graciously installed Octavia and my request and I was able to move past this issue. 
This raises a fundamental issue facing our community with regard to the load balancer APIs: there is little consistency as to which API is deployed, and we have installations that still deploy on the LBaaSv1 API. Indeed, the OpenStack User Survey reported in November of 2017 that only 7% of production installations were running Octavia[1]. Meanwhile, Neutron LBaaSv1 was deprecated in Liberty, and Neutron LBaaSv2 was recently deprecated in the Queens release. The lack of a migration path from v1 to v2 helped to slow adoption, and the additional requirements for installing Octavia have also been a factor in slowing adoption of the supported LBaaSv2 implementation. This highlights the first call to action for our public and private cloud community: encouraging the rapid migration from older, unsupported APIs to Octavia.

Because of this wide range of deployed APIs, I changed my own deployment code to launch a user-space VM and install a non-tls-terminating Nginx load balancer for my Kubernetes control plane[2]. I'm not the only person who has adopted an approach like this. In the #sig-openstack channel, Saverio Proto (zioproto) discussed how he uses the K8s Nginx ingress load balancer[3] in preference to the OpenStack provider load balancer. My takeaway from his description is that it's preferable to use the K8s-based ingress load balancer because:

* The common LBaaSv2 API does not support TLS termination.
* You don't need to provision an additional virtual machine.
* You aren't dependent on an appropriate and supported API being available on your cloud.

German Eichberger (xgerman) and Adam Harwell (rm_you) from the Octavia team were present for the discussion, and presented a strong case for using the Octavia APIs. My takeaway was:

* Octavia does support TLS termination, and it's the dependence on the Neutron API that removes the ability to take advantage of it.
* It provides a lot more than just a "VM with haproxy", and has stability guarantees.

This highlights a second call to action for the SDK and provider developers: recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding support for more advanced Octavia features.

As part of the discussion, we also talked about facilitating adoption by improving the installation experience. My takeaway was that this is an active development goal for the Octavia Rocky release, leading to my third call to action: improving the installation and upgrade experience for the Octavia LBaaS APIs to help with adoption by both deployers and developers.

To quote myself from that discussion:

"As with any open source project, I want users to have a choice in what they want to use. Having provider code that gives a reliable user experience is critical. The Nova, Neutron, and Cinder APIs are very stable and expected to be present in every cloud. It's why the compute parts of the provider work really well. To me the shifting landscape of the LBaaSv2 APIs makes it difficult for users to rely on having a consistent and reliable experience. That doesn't mean we give up on trying to make that experience positive, though. It's an issue where the Octavia devs, public clouds, deployment tools, and provider authors should be collaborating to make the experience consistent and reliable."

For the short term, despite us having an upstream implementation in the OpenStack cloud provider, my plan is to steer users more in the direction of K8s-based solutions, primarily to help them have a consistent experience.
However, I feel that a longer-term goal of the SIG-K8s should be encouraging the adoption of Octavia and improving the provider implementation.

-Chris

[0] https://kubernetes.slack.com/archives/C0LSA3T7C/p1521039818000960
[1] https://www.openstack.org/assets/survey/OpenStack-User-Survey-Nov17.pdf
[2] https://github.com/crosscloudci/cross-cloud/tree/master/openstack/modules/loadbalancer
[3] https://github.com/kubernetes/ingress-nginx
[4] https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
[5] https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation

From matt at oliver.net.au  Thu Mar 15 22:46:18 2018
From: matt at oliver.net.au (Matthew Oliver)
Date: Fri, 16 Mar 2018 09:46:18 +1100
Subject: [openstack-dev] [First Contact][SIG] Weekly Meeting
In-Reply-To:
References:
Message-ID:

Sorry I missed the meeting, I've been off with family-related gastro fun, fun times... First me and then the pregnant wife, which meant spending the whole morning yesterday in hospital re-hydrating the wife, just as a precaution to keep the baby safe. Good news was the toddler wasn't hit as bad as us.

Anyway, I'll be at the next one. My apologies, this week ended up being kind of a write-off for me :(

Matt

On Wed, Mar 14, 2018 at 10:13 AM, Kendall Nelson wrote:
> Hello!
>
> [1] has been merged and we have an agenda [2] so we are full steam ahead
> for the upcoming meeting!
>
> Our inaugural First Contact SIG meeting will be in #openstack-meeting at
> 0800 UTC Wednesday!
>
> Hope to see you all in ~9 hours!
>
> -Kendall (diablo_rojo)
>
> [1]https://review.openstack.org/#/c/549849/
> [2] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mriedemos at gmail.com  Thu Mar 15 23:04:46 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 15 Mar 2018 18:04:46 -0500
Subject: [openstack-dev] [nova][neutron] Rocky PTG summary - nova/neutron
In-Reply-To:
References:
Message-ID: <82358229-b86f-073c-5019-6e762be73722@gmail.com>

On 3/15/2018 3:30 PM, melanie witt wrote:
>     * We don't need to block bandwidth-based scheduling support for
> doing port creation in conductor (it's not trivial); however, if nova
> creates a port on a network with a QoS policy, nova is going to have to
> munge the allocations and update placement (from nova-compute) ... so
> maybe we should block this on moving port creation to conductor after all

This is not the current direction in the spec. The spec is *large* and detailed, and this is one of the things being discussed in there. For the latest on all of it, gonna need to get caught up on the spec. But it won't be updated for a while because Brother Gib is on vacation.

>   * On routed provider networks:
>     * On the Neutron side, this is already done:
Hopefully anyone that's already using this successfully in production, or is thinking about using it, would help out with setting up the CI configuration for testing this out - we could do it in the nova-next job if we wanted to make that multinode. Maybe jroll can be guilted into helping since he was the one asking about this at the PTG I think. -- Thanks, Matt From doug at doughellmann.com Thu Mar 15 23:15:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 15 Mar 2018 19:15:23 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <20180315150550.nwznbsj3xeytwa35@gentoo.org> References: <1521110096-sup-3634@lrrr.local> <09e1597e-0217-7bed-2b89-d6146c8e79ca@openstack.org> <1521121433-sup-7650@lrrr.local> <20180315150550.nwznbsj3xeytwa35@gentoo.org> Message-ID: <1521155696-sup-8614@lrrr.local> Excerpts from Matthew Thode's message of 2018-03-15 10:05:50 -0500: > On 18-03-15 09:45:38, Doug Hellmann wrote: > > Excerpts from Thierry Carrez's message of 2018-03-15 14:34:50 +0100: > > > Doug Hellmann wrote: > > > > [...] > > > > TL;DR > > > > ----- > > > > > > > > Let's stop copying exact dependency specifications into all our > > > > projects to allow them to reflect the actual versions of things > > > > they depend on. The constraints system in pip makes this change > > > > safe. We still need to maintain some level of compatibility, so the > > > > existing requirements-check job (run for changes to requirements.txt > > > > within each repo) will change a bit rather than going away completely. > > > > We can enable unit test jobs to verify the lower constraint settings > > > > at the same time that we're doing the other work. > > > > > > Thanks for the very detailed plan, Doug. It all makes sense to me, > > > although I have a precision question (see below). > > > > > > > [...] > > > > We also need to change requirements-check to look at the exclusions > > > > to ensure they all appear in the global-requirements.txt list > > > > (the local list needs to be a subset of the global list, but > > > > does not have to match it exactly). We can't have one project > > > > excluding a version that others do not, because we could then > > > > end up with a conflict with the upper constraints list that could > > > > wedge the gate as we had happen in the past. > > > > [...] > > > > 2. We should stop syncing dependencies by turning off the > > > > propose-update-requirements job entirely. > > > > > > > > Turning off the job will stop the bot from proposing more > > > > dependency updates to projects. > > > > [...] > > > > After these 3 steps are done, the requirements team will continue > > > > to maintain the global-requirements.txt and upper-constraints.txt > > > > files, as before. Adding a new dependency to a project will still > > > > involve a review step to add it to the global list so we can monitor > > > > licensing, duplication, python 3 support, etc. But adjusting the > > > > version numbers once that dependency is in the global list will be > > > > easier. > > > > > > How would you set up an exclusion in that new world order ? We used to > > > add it to the global-requirements file and the bot would automatically > > > sync it to various consuming projects. > > > > > > Now since any exclusion needs to also appear on the global file, you > > > would push it first in the global-requirements, then to the project > > > itself, is that correct ? 
In the end the global-requirements file would > > > only contain those exclusions, right ? > > > > > > > The first step would need to be adding it to the global-requirements.txt > > list. After that, it would depend on how picky we want to be. If the > > upper-constraints.txt list is successfully updated to avoid the release, > > we might not need anything in the project. If the project wants to > > provide detailed guidance about compatibility, then they could add the > > exclusion. For example, if a version of oslo.config breaks cinder but > > not nova, we might only put the exclusion in global-requirements.txt and > > the requirements.txt for cinder. > > > > I wonder if we'd be able to have projects decide via a flag in their tox > or zuul config if they'd like to opt into auto-updating exclusions only. > We could just change the job that does the sync and use the existing projects.txt file, couldn't we? Doug From doug at doughellmann.com Thu Mar 15 23:22:03 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 15 Mar 2018 19:22:03 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <20180315142848.we3w3lzohjv6yg2s@yuggoth.org> References: <1521110096-sup-3634@lrrr.local> <20180315142848.we3w3lzohjv6yg2s@yuggoth.org> Message-ID: <1521155776-sup-1453@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-03-15 14:28:49 +0000: > On 2018-03-15 07:03:11 -0400 (-0400), Doug Hellmann wrote: > [...] > > 1. Update the requirements-check test job to change the check for > > an exact match to be a check for compatibility with the > > upper-constraints.txt value. > [...] > > I thought it might be possible to even just do away with this job > entirely, but some cursory testing shows that if you supply a > required versionspec which excludes your constrained version of the > same package, you'll still get the constrained version installed > even though you indicated it wasn't in your "supported" range. Might > be a nice patch to work on upstream in pip, making it explicitly > error on such a mismatch (and _then_ we might be able to stop > bothering with this job). > > > We also need to change requirements-check to look at the exclusions > > to ensure they all appear in the global-requirements.txt list > > (the local list needs to be a subset of the global list, but > > does not have to match it exactly). We can't have one project > > excluding a version that others do not, because we could then > > end up with a conflict with the upper constraints list that could > > wedge the gate as we had happen in the past. > [...] > > At first it seems like this wouldn't end up being necessary; as long > as you're not setting an upper bound or excluding the constrained > version, there shouldn't be a coinstallability problem right? Though That second case is what this prevents. There's a race condition between updating the requirements range (and exclusions) in a project tree and updating the upper-constraints.txt list. The check forces those lists to be updated in an order that avoids a case where the version in constraints is not compatible with an app installed in an integration test job. 
> I suppose there are still a couple of potential pitfalls if we don't > check exclusions: setting an exclusion for a future version which > hasn't been released yet or is otherwise higher than the global > upper constraint; situations where we need to roll back a constraint > to an earlier version (e.g., we discover a bug in it) and some > project has that earlier version excluded. So I suppose there is > some merit to centrally coordinating these, making sure we can still > pick sane constraints which work for all projects (mental > exercise: do we also need to build a tool which can make sure that > proposed exclusions don't eliminate all possible version numbers?). Yes, those are all good failure cases that this prevents, too. > > As the minimum > > versions of dependencies diverge within projects, there will no > > longer *be* a real global set of minimum values. Tracking a list of > > "highest minimums", would either require rebuilding the list from the > > settings in all projects, or requiring two patches to change the > > minimum version of a dependency within a project. > [...] > > It's also been suggested in the past that package maintainers for > some distributions relied on the ranges in our global requirements > list to determine what the minimum acceptable version of a > dependency is so they know whether/when it needs updating (fairly > critical when you consider that within a given distro some > dependencies may be shared by entirely unrelated software outside > our ecosystem and may not be compatible with new versions as soon as > we are). On the other hand, we never actually _test_ our lower > bounds, so this was to some extent a convenient fiction anyway. The lack of testing is an issue, but the tight coupling of those lower bounds is a bigger problem to me. I know that distros don't necessarily package exactly what we have in the upper-constraints.txt list, but they're doing their own testing with those alternatives. > > > 1. Set up a new tox environment called "lower-constraints" with > > base-python set to "python3" and with the deps setting configured > > to include a copy of the existing global lower constraints file > > from the openstack/requirements repo. > [...] > > I didn't realize lower-constraints.txt already existed (looks like > it got added a little over a week ago). Reviewing the log it seems Yes, Dirk did that work. > to have been updated based on individual projects' declared minimums > so far which seems to make it a questionable starting point for a > baseline. I suppose the assumption is that projects have been > merging requirements proposals which bump their declared > lower-bounds, though experience suggests that this doesn't happen > consistently in projects receiving g-r updates today (they will > either ignore the syncs or amend them to undo the lower-bounds > changes before merging). At any rate, I suppose that's a separate > conversation to be had, and as you say it's just a place to start > from but projects will be able to change it to whatever values they > want at that point. Right. The fact that the values aren't necessarily accurate won't be affected by the change to stop syncing, and the additional unit tests should help us catch at least some of the issues. 
Doug From doug at doughellmann.com Thu Mar 15 23:29:37 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 15 Mar 2018 19:29:37 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <20180315152410.rijbohawx6tqq37v@gentoo.org> References: <1521110096-sup-3634@lrrr.local> <20180315152410.rijbohawx6tqq37v@gentoo.org> Message-ID: <1521156158-sup-1121@lrrr.local> Excerpts from Matthew Thode's message of 2018-03-15 10:24:10 -0500: > On 18-03-15 07:03:11, Doug Hellmann wrote: > > What I Want to Do > > ----------------- > > > > 1. Update the requirements-check test job to change the check for > > an exact match to be a check for compatibility with the > > upper-constraints.txt value. > > > > We would check the value for the dependency from upper-constraints.txt > > against the range of allowed values in the project. If the > > constraint version is compatible, the dependency range is OK. > > > > This rule means that in order to change the dependency settings > > for a project in a way that is incompatible with the constraint, > > the constraint (and probably the global requirements list) would > > have to be changed first in openstack/requirements. However, if > > the change to the dependency is still compatible with the > > constraint, no change would be needed in openstack/requirements. > > For example, if the global list constraints a library to X.Y.Z > > and a project lists X.Y.Z-2 as the minimum version but then needs > > to raise that because it needs a feature in X.Y.Z-1, it can do > > that with a single patch in-tree. > > > > I think what may be better is for global-requirements to become a > gathering place for projects that requirements watches to have their > smallest constrained installable set defined in. > > Upper-constraints has a req of foo===2.0.3 > Project A has a req of foo>=1.0.0,!=1.6.0 > Project B has a req of foo>=1.4.0 > Global reqs would be updated with foo>=1.4.0,!=1.6.0 > Project C comes along and sets foo>=2.0.0 > Global reqs would be updated with foo>=2.0.0 > > This would make global-reqs descriptive rather than prescriptive for > versioning and would represent the 'true' version constraints of > openstack. It sounds like you're suggesting syncing in the other direction, which could be useful. I think we can proceed with what I've described and consider the work to build what you describe as a separate project. > > > We also need to change requirements-check to look at the exclusions > > to ensure they all appear in the global-requirements.txt list > > (the local list needs to be a subset of the global list, but > > does not have to match it exactly). We can't have one project > > excluding a version that others do not, because we could then > > end up with a conflict with the upper constraints list that could > > wedge the gate as we had happen in the past. > > > > How would this happen when using constraints? A project is not allowed > to have a requirement that masks a constraint (and would be verified via > the requirements-check job). If project A excludes version X before the constraint list is updated to use it, and then project B starts trying to depend on version X, they become incompatible. We need to continue to manage our declarations of incompatible versions to ensure that the constraints list is a good list of versions to test everything under. > There's a failure mode not covered, a project could add a mask (!=) to > their requirements before we update constraints.
The project that was > passing the requirements-check job would then become incompatible. This > means that the requirements-check would need to be run for each > changeset to catch this as soon as it happens, instead of running only > on requirements changes. I'm not clear on what you're describing here, but it sounds like a variation of the failure modes that would be prevented if we require exclusions to exist in the global list before they could be added to the local list. > > > We also need to verify that projects do not cap dependencies for > > the same reason. Caps prevent us from advancing to versions of > > dependencies that are "too new" and possibly incompatible. We > > can manage caps in the global requirements list, which would > > cause that list to calculate the constraints correctly. > > > > This change would immediately allow all projects currently > > following the global requirements lists to specify different > > lower bounds from that global list, as long as those lower bounds > > still allow the dependencies to be co-installable. (The upper > > bounds, managed through the upper-constraints.txt list, would > > still be built by selecting the newest compatible version because > > that is how pip's dependency resolver works.) > > > > 2. We should stop syncing dependencies by turning off the > > propose-update-requirements job entirely. > > > > Turning off the job will stop the bot from proposing more > > dependency updates to projects. > > > > As part of deleting the job we can also remove the "requirements" > > case from playbooks/proposal/propose_update.sh, since it won't > > need that logic any more. We can also remove the update-requirements > > command from the openstack/requirements repository, since that > > is the tool that generates the updated list and it won't be > > needed if we aren't proposing updates any more. > > > > 3. Remove the minimum specifications from the global requirements > > list to make clear that the global list is no longer expressing > > minimums. > > > > This clean-up step has been a bit more controversial among the > > requirements team, but I think it is a key piece. As the minimum > > versions of dependencies diverge within projects, there will no > > longer *be* a real global set of minimum values. Tracking a list of > > "highest minimums" would either require rebuilding the list from the > > settings in all projects, or requiring two patches to change the > > minimum version of a dependency within a project. > > > > Maintaining a global list of minimums also implies that we > > consider it OK to run OpenStack as a whole with that list. This > > message conflicts with the message we've been sending about the > > upper constraints list since that was established, which is that > > we have a known good list of versions and deploying all of > > OpenStack with different versions of those dependencies is > > untested. > > > > As noted above I think that gathering the min versions/maskings from > openstack projects would be valuable (especially to packagers who already > use our likely invalid values). OK. I don't feel that strongly about the cleanup work, so if we want to keep the lower bounds in place I think that's OK. > > > After these 3 steps are done, the requirements team will continue > > to maintain the global-requirements.txt and upper-constraints.txt > > files, as before.
Adding a new dependency to a project will still > > involve a review step to add it to the global list so we can monitor > > licensing, duplication, python 3 support, etc. But adjusting the > > version numbers once that dependency is in the global list will be > > easier. > > > > Thanks for writing this up, I think it looks good in general, but like > you mentioned before, there is some discussion to be had about gathering > and creating a versionspec from all of openstack for requirements. > From prometheanfire at gentoo.org Thu Mar 15 23:36:50 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 15 Mar 2018 18:36:50 -0500 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1521156158-sup-1121@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <20180315152410.rijbohawx6tqq37v@gentoo.org> <1521156158-sup-1121@lrrr.local> Message-ID: <20180315233650.eexpwkfmkdyojl5n@gentoo.org> On 18-03-15 19:29:37, Doug Hellmann wrote: > Excerpts from Matthew Thode's message of 2018-03-15 10:24:10 -0500: > > On 18-03-15 07:03:11, Doug Hellmann wrote: > > > What I Want to Do > > > ----------------- > > > > > > 1. Update the requirements-check test job to change the check for > > > an exact match to be a check for compatibility with the > > > upper-constraints.txt value. > > > > > > We would check the value for the dependency from upper-constraints.txt > > > against the range of allowed values in the project. If the > > > constraint version is compatible, the dependency range is OK. > > > > > > This rule means that in order to change the dependency settings > > > for a project in a way that are incompatible with the constraint, > > > the constraint (and probably the global requirements list) would > > > have to be changed first in openstack/requirements. However, if > > > the change to the dependency is still compatible with the > > > constraint, no change would be needed in openstack/requirements. > > > For example, if the global list constraints a library to X.Y.Z > > > and a project lists X.Y.Z-2 as the minimum version but then needs > > > to raise that because it needs a feature in X.Y.Z-1, it can do > > > that with a single patch in-tree. > > > > > > > I think what may be better is for global-requirements to become a > > gathering place for projects that requirements watches to have their > > smallest constrainted installable set defined in. > > > > Upper-constraints has a req of foo===2.0.3 > > Project A has a req of foo>=1.0.0,!=1.6.0 > > Project B has a req of foo>=1.4.0 > > Global reqs would be updated with foo>=1.4.0,!=1.6.0 > > Project C comes along and sets foo>=2.0.0 > > Global reqs would be updated with foo>=2.0.0 > > > > This would make global-reqs descriptive rather than prescriptive for > > versioning and would represent the 'true' version constraints of > > openstack. > > It sounds like you're suggesting syncing in the other direction, which > could be useful. I think we can proceed with what I've described and > consider the work to build what you describe as a separate project. > Yes, this would be a follow-on thing. > > > > > We also need to change requirements-check to look at the exclusions > > > to ensure they all appear in the global-requirements.txt list > > > (the local list needs to be a subset of the global list, but > > > does not have to match it exactly). 
We can't have one project > > > excluding a version that others do not, because we could then > > > end up with a conflict with the upper constraints list that could > > > wedge the gate as we had happen in the past. > > > > > > > How would this happen when using constraints? A project is not allowed > > to have a requirement that masks a constriant (and would be verified via > > the requirements-check job). > > If project A excludes version X before the constraint list is updated to > use it, and then project B starts trying to depend on version X, they > become incompatible. > > We need to continue to manage our declarations of incompatible versions > to ensure that the constraints list is a good list of versions to test > everything under. > > > There's a failure mode not covered, a project could add a mask (!=) to > > their requirements before we update constraints. The project that was > > passing the requirements-check job would then become incompatable. This > > means that the requirements-check would need to be run for each > > changeset to catch this as soon as it happens, instead of running only > > on requirements changes. > > I'm not clear on what you're describing here, but it sounds like a > variation of the failure modes that would be prevented if we require > exclusions to exist in the global list before they could be added to the > local list. > Yes, that'd work (require exclusions to be global before local). > > > > > We also need to verify that projects do not cap dependencies for > > > the same reason. Caps prevent us from advancing to versions of > > > dependencies that are "too new" and possibly incompatible. We > > > can manage caps in the global requirements list, which would > > > cause that list to calculate the constraints correctly. > > > > > > This change would immediately allow all projects currently > > > following the global requirements lists to specify different > > > lower bounds from that global list, as long as those lower bounds > > > still allow the dependencies to be co-installable. (The upper > > > bounds, managed through the upper-constraints.txt list, would > > > still be built by selecting the newest compatible version because > > > that is how pip's dependency resolver works.) > > > > > > 2. We should stop syncing dependencies by turning off the > > > propose-update-requirements job entirely. > > > > > > Turning off the job will stop the bot from proposing more > > > dependency updates to projects. > > > > > > As part of deleting the job we can also remove the "requirements" > > > case from playbooks/proposal/propose_update.sh, since it won't > > > need that logic any more. We can also remove the update-requirements > > > command from the openstack/requirements repository, since that > > > is the tool that generates the updated list and it won't be > > > needed if we aren't proposing updates any more. > > > > > > 3. Remove the minimum specifications from the global requirements > > > list to make clear that the global list is no longer expressing > > > minimums. > > > > > > This clean-up step has been a bit more controversial among the > > > requirements team, but I think it is a key piece. As the minimum > > > versions of dependencies diverge within projects, there will no > > > longer *be* a real global set of minimum values. Tracking a list of > > > "highest minimums", would either require rebuilding the list from the > > > settings in all projects, or requiring two patches to change the > > > minimum version of a dependency within a project. 
> > > > > > Maintaining a global list of minimums also implies that we > > > consider it OK to run OpenStack as a whole with that list. This > > > message conflicts with the message we've been sending about the > > > upper constraints list since that was established, which is that > > > we have a known good list of versions and deploying all of > > > OpenStack with different versions of those dependencies is > > > untested. > > > > > > > As noted above I think that gathering the min versions/maskings from > > openstack projects to be valuable (especially to packagers who already > > use our likely invalid values already). > > OK. I don't feel that strongly about the cleanup work, so if we want to > keep the lower bounds in place I think that's OK. > > > > > > After these 3 steps are done, the requirements team will continue > > > to maintain the global-requirements.txt and upper-constraints.txt > > > files, as before. Adding a new dependency to a project will still > > > involve a review step to add it to the global list so we can monitor > > > licensing, duplication, python 3 support, etc. But adjusting the > > > version numbers once that dependency is in the global list will be > > > easier. > > > > > > > Thanks for writing this up, I think it looks good in general, but like > > you mentioned before, there is some discussion to be had about gathering > > and creating a versionspec from all of openstack for requirements. > > > -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Thu Mar 15 23:43:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 15 Mar 2018 19:43:49 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <20180315233650.eexpwkfmkdyojl5n@gentoo.org> References: <1521110096-sup-3634@lrrr.local> <20180315152410.rijbohawx6tqq37v@gentoo.org> <1521156158-sup-1121@lrrr.local> <20180315233650.eexpwkfmkdyojl5n@gentoo.org> Message-ID: <1521157402-sup-6763@lrrr.local> Excerpts from Matthew Thode's message of 2018-03-15 18:36:50 -0500: > On 18-03-15 19:29:37, Doug Hellmann wrote: > > Excerpts from Matthew Thode's message of 2018-03-15 10:24:10 -0500: > > > On 18-03-15 07:03:11, Doug Hellmann wrote: > > > > What I Want to Do > > > > ----------------- > > > > > > > > 1. Update the requirements-check test job to change the check for > > > > an exact match to be a check for compatibility with the > > > > upper-constraints.txt value. > > > > > > > > We would check the value for the dependency from upper-constraints.txt > > > > against the range of allowed values in the project. If the > > > > constraint version is compatible, the dependency range is OK. > > > > > > > > This rule means that in order to change the dependency settings > > > > for a project in a way that are incompatible with the constraint, > > > > the constraint (and probably the global requirements list) would > > > > have to be changed first in openstack/requirements. However, if > > > > the change to the dependency is still compatible with the > > > > constraint, no change would be needed in openstack/requirements. > > > > For example, if the global list constraints a library to X.Y.Z > > > > and a project lists X.Y.Z-2 as the minimum version but then needs > > > > to raise that because it needs a feature in X.Y.Z-1, it can do > > > > that with a single patch in-tree. 
> > > > > > > > > > I think what may be better is for global-requirements to become a > > > gathering place for projects that requirements watches to have their > > > smallest constrainted installable set defined in. > > > > > > Upper-constraints has a req of foo===2.0.3 > > > Project A has a req of foo>=1.0.0,!=1.6.0 > > > Project B has a req of foo>=1.4.0 > > > Global reqs would be updated with foo>=1.4.0,!=1.6.0 > > > Project C comes along and sets foo>=2.0.0 > > > Global reqs would be updated with foo>=2.0.0 > > > > > > This would make global-reqs descriptive rather than prescriptive for > > > versioning and would represent the 'true' version constraints of > > > openstack. > > > > It sounds like you're suggesting syncing in the other direction, which > > could be useful. I think we can proceed with what I've described and > > consider the work to build what you describe as a separate project. > > > > Yes, this would be a follow-on thing. > > > > > > > > We also need to change requirements-check to look at the exclusions > > > > to ensure they all appear in the global-requirements.txt list > > > > (the local list needs to be a subset of the global list, but > > > > does not have to match it exactly). We can't have one project > > > > excluding a version that others do not, because we could then > > > > end up with a conflict with the upper constraints list that could > > > > wedge the gate as we had happen in the past. > > > > > > > > > > How would this happen when using constraints? A project is not allowed > > > to have a requirement that masks a constriant (and would be verified via > > > the requirements-check job). > > > > If project A excludes version X before the constraint list is updated to > > use it, and then project B starts trying to depend on version X, they > > become incompatible. > > > > We need to continue to manage our declarations of incompatible versions > > to ensure that the constraints list is a good list of versions to test > > everything under. > > > > > There's a failure mode not covered, a project could add a mask (!=) to > > > their requirements before we update constraints. The project that was > > > passing the requirements-check job would then become incompatable. This > > > means that the requirements-check would need to be run for each > > > changeset to catch this as soon as it happens, instead of running only > > > on requirements changes. > > > > I'm not clear on what you're describing here, but it sounds like a > > variation of the failure modes that would be prevented if we require > > exclusions to exist in the global list before they could be added to the > > local list. > > > > Yes, that'd work (require exclusions to be global before local). OK. That's what I was trying to describe as the new rules. > > > > > > > > We also need to verify that projects do not cap dependencies for > > > > the same reason. Caps prevent us from advancing to versions of > > > > dependencies that are "too new" and possibly incompatible. We > > > > can manage caps in the global requirements list, which would > > > > cause that list to calculate the constraints correctly. > > > > > > > > This change would immediately allow all projects currently > > > > following the global requirements lists to specify different > > > > lower bounds from that global list, as long as those lower bounds > > > > still allow the dependencies to be co-installable. 
(The upper > > > > bounds, managed through the upper-constraints.txt list, would > > > > still be built by selecting the newest compatible version because > > > > that is how pip's dependency resolver works.) > > > > > > > > 2. We should stop syncing dependencies by turning off the > > > > propose-update-requirements job entirely. > > > > > > > > Turning off the job will stop the bot from proposing more > > > > dependency updates to projects. > > > > > > > > As part of deleting the job we can also remove the "requirements" > > > > case from playbooks/proposal/propose_update.sh, since it won't > > > > need that logic any more. We can also remove the update-requirements > > > > command from the openstack/requirements repository, since that > > > > is the tool that generates the updated list and it won't be > > > > needed if we aren't proposing updates any more. > > > > > > > > 3. Remove the minimum specifications from the global requirements > > > > list to make clear that the global list is no longer expressing > > > > minimums. > > > > > > > > This clean-up step has been a bit more controversial among the > > > > requirements team, but I think it is a key piece. As the minimum > > > > versions of dependencies diverge within projects, there will no > > > > longer *be* a real global set of minimum values. Tracking a list of > > > > "highest minimums", would either require rebuilding the list from the > > > > settings in all projects, or requiring two patches to change the > > > > minimum version of a dependency within a project. > > > > > > > > Maintaining a global list of minimums also implies that we > > > > consider it OK to run OpenStack as a whole with that list. This > > > > message conflicts with the message we've been sending about the > > > > upper constraints list since that was established, which is that > > > > we have a known good list of versions and deploying all of > > > > OpenStack with different versions of those dependencies is > > > > untested. > > > > > > > > > > As noted above I think that gathering the min versions/maskings from > > > openstack projects to be valuable (especially to packagers who already > > > use our likely invalid values already). > > > > OK. I don't feel that strongly about the cleanup work, so if we want to > > keep the lower bounds in place I think that's OK. > > > > > > > > > After these 3 steps are done, the requirements team will continue > > > > to maintain the global-requirements.txt and upper-constraints.txt > > > > files, as before. Adding a new dependency to a project will still > > > > involve a review step to add it to the global list so we can monitor > > > > licensing, duplication, python 3 support, etc. But adjusting the > > > > version numbers once that dependency is in the global list will be > > > > easier. > > > > > > > > > > Thanks for writing this up, I think it looks good in general, but like > > > you mentioned before, there is some discussion to be had about gathering > > > and creating a versionspec from all of openstack for requirements. 
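For readers who want to experiment with the gathering idea discussed in this thread, here is a minimal, non-authoritative sketch of combining per-project ranges with the packaging library; it reproduces the hypothetical foo example from the thread and is not an agreed design:

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    def combine(spec_sets):
        """Keep the highest declared lower bound plus any exclusions
        at or above it, yielding the 'descriptive' global range.
        Assumes every project declares at least one lower bound."""
        floors, exclusions = [], set()
        for specs in spec_sets:
            for spec in specs:
                if spec.operator in (">=", ">"):
                    floors.append(Version(spec.version))
                elif spec.operator == "!=":
                    exclusions.add(spec.version)
        floor = max(floors)
        keep = sorted((v for v in exclusions if Version(v) >= floor), key=Version)
        return SpecifierSet(",".join([">=%s" % floor] + ["!=%s" % v for v in keep]))

    # Project A wants foo>=1.0.0,!=1.6.0 and project B wants foo>=1.4.0.
    print(combine([SpecifierSet(">=1.0.0,!=1.6.0"), SpecifierSet(">=1.4.0")]))

For the two example projects this yields the set >=1.4.0,!=1.6.0 (specifier sets are unordered), matching the global entry proposed in the example.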
> > > > > > From gmann at ghanshyammann.com Fri Mar 16 00:31:52 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 16 Mar 2018 09:31:52 +0900 Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream In-Reply-To: <20180315124535.ekyibc5wowjjjogg@pacific.linksys.moosehall> References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com> <35078e57-2acb-f500-59c0-18eebdf9db04@redhat.com> <20180315124535.ekyibc5wowjjjogg@pacific.linksys.moosehall> Message-ID: On Thu, Mar 15, 2018 at 9:45 PM, Adam Spiers wrote: > Raoul Scarazzini wrote: >> On 15/03/2018 01:57, Ghanshyam Mann wrote: >>> Thanks all for starting the collaboration on this which is long pending >>> things and we all want to have some start on this. >>> Myself and SamP talked about it during OPS meetup in Tokyo and we talked >>> about below draft plan- >>> - Update the Spec - https://review.openstack.org/#/c/443504/. which is >>> almost ready as per SamP and his team is working on that. >>> - Start the technical debate on tooling we can use/reuse like Yardstick >>> etc, which is more this mailing thread. >>> - Accept the new repo for Eris under QA and start at least something in >>> Rocky cycle. >>> I am in for having meeting on this which is really good idea. non-IRC >>> meeting is totally fine here. Do we have meeting place and time setup ? >>> -gmann >> >> Hi Ghanshyam, >> as I wrote earlier in the thread it's no problem for me to offer my >> bluejeans channel, let's sort out which timeslice can be good. I've >> added to the main etherpad [1] my timezone (line 53), let's do all that >> so that we can create the meeting invite. >> >> [1] https://etherpad.openstack.org/p/extreme-testing-contacts > > Good idea! I've added mine. We're still missing replies from several > key stakeholders though (lines 62++) - probably worth getting buy-in > from a few more people before we organise anything. I'm pinging a few > on IRC with reminders about this. > Thanks rasca, aspiers. I have added myself there and yeah, good idea to ping the remaining people on IRC. -gmann > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Fri Mar 16 01:49:42 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 15 Mar 2018 20:49:42 -0500 Subject: [openstack-dev] [gate] Bug 1741275 is our top gate failure since March 13th Message-ID: <8635415b-f9b5-6179-e05a-3dbcdd493f13@gmail.com> If you've noticed any volume-related tests failing this week, it's not just you. There is an old bug that is back where the c-sch CapacityFilter is kicking out the host because there is too much going on in the single host at once. http://status.openstack.org/elastic-recheck/#1741275 Rechecking changes at this point isn't going to help much. It looks like the spike started around March 13, so can people help look for things like new tests which may have pushed up the number of concurrently created volumes/snapshots in a single test run?
-- Thanks, Matt From giuseppe.decandia at gmail.com Fri Mar 16 03:18:29 2018 From: giuseppe.decandia at gmail.com (Pino de Candia) Date: Thu, 15 Mar 2018 22:18:29 -0500 Subject: [openstack-dev] [Tatu][Nova] Handling instance destruction In-Reply-To: References: Message-ID: Hi Michael, Thanks for your message... and thanks for your vendordata work! About your question, Tatu listens to events on the oslo message bus. Specifically, it reacts to compute.instance.delete.end by cleaning up per-instance resources. It also listens to project creation and user role assignment changes. The code is at: https://github.com/openstack/tatu/blob/master/tatu/notifications.py best, Pino On Thu, Mar 15, 2018 at 3:42 PM, Michael Still wrote: > Heya, > > I've just stumbled across Tatu and the design presentation [1], and I am > wondering how you handle cleaning up instances when they are deleted given > that nova vendordata doesn't expose a "delete event". > > Specifically I'm wondering if we should add support for such an event to > vendordata somehow, given I can now think of a couple of use cases for it. > > Thanks, > Michael > > 1: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Fri Mar 16 03:31:11 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 16 Mar 2018 14:31:11 +1100 Subject: [openstack-dev] [api] APAC-friendly API-SIG meeting times In-Reply-To: References: <6D342053-79C2-4AAB-8F8B-6687F8CA6C29@leafe.com> Message-ID: Hi, Any chance we can progress on this one? I believe there are not enough participants to split the API SIG meeting in two; given the same shortage of people across both slots, a split could also make each meeting pretty inefficient. Therefore I think changing the main meeting time might be better, but I could be wrong. In any case, I can't make progress with a meeting in the middle of the night for me, so I would appreciate it if we could re-activate this discussion. Thanks, Gilles On 13/12/17 02:22, Ed Leafe wrote: > Re-sending this in the hope of getting more responses. If you’re in the APAC region and interested in contributing to our discussions, please indicate your preferences on the link below. >>> That brought up another issue: the current meeting time for the API-SIG is 1600UTC, which is not very convenient for APAC contributors. Gilles is in Australia, and so it was the middle of the night for him! As one of the goals for the API-SIG is to expand our audience and membership, edleafe committed to seeing if there is an available meeting slot at 2200UTC, which would be convenient for APAC, and still early enough for US people to attend. If an APAC-friendly meeting time would be good for you, please keep an eye out on the mailing list for an announcement if we are able to set that up, and then please attend and participate! >> Looking at the current meeting schedule, there are openings at 2200UTC on Tuesday, Wednesday, and Thursday mornings in APAC (Monday, Tuesday, and Wednesday afternoons in the US).
>> >> I’ve set up a doodle so that people can record their preferences: >> >> https://doodle.com/poll/bec9gfff38zvh3ud >> >> If you’re interested in attending API-SIG meetings, please fill out the form at that URL with your preferences. I’ll summarize the results at the next API-SIG meeting. >> >> >> -- Ed Leafe >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Gilles Dubreuil Senior Software Engineer, Openstack DFG Integration Mobile: +61 400 894 219 Email: gilles at redhat.com GitHub/IRC: gildub From zhipengh512 at gmail.com Fri Mar 16 04:00:25 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 16 Mar 2018 12:00:25 +0800 Subject: [openstack-dev] [cyborg]Summary of Mar 14 Meeting Message-ID: Hi Team, Here is the meeting summary for our post-ptg kickoff meeting. 0. Meeting recordings: https://www.youtube.com/watch?v=6AZn0SUC_hw , https://www.youtube.com/watch?v=-wE2GkSibDo , https://www.youtube.com/watch?v=E40JOm311WI 1. PoC from Shaohe and Dolpher (https://etherpad.openstack.org/p/cyborg-nova-poc) (1) Agree with the resource class and trait custom definition (2) Move the claim/release design to the os-acc lib. Also change the allocation to assignment in order to avoid confusion with Placement functionality. (3) Should avoid caching images as much as possible, but when necessary it is recommended that cyborg configure a default temp folder for the image cache. Vendor implementations could point to that location with subfolders for their images (4) Agreed that cyborg-agent will be responsible for pulling the image and cyborg-conductor for coordination with Placement. (5) Agreed that the programming operation should be a blocking one (if it fails then everything fails) since, although the delay of programming varies, it should generally not be a major concern. 2. Rocky Cycle Task Assignments: Please refer to the meeting minutes about the action items: http://eavesdrop.openstack.org/meetings/openstack_cyborg/2018/openstack_cyborg.2018-03-14-14.07.html -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed...
URL: From joe at topjian.net Fri Mar 16 04:01:10 2018 From: joe at topjian.net (Joe Topjian) Date: Thu, 15 Mar 2018 22:01:10 -0600 Subject: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s In-Reply-To: <27126DC0-9C72-4442-9F93-05B6E3745BED@openstack.org> References: <27126DC0-9C72-4442-9F93-05B6E3745BED@openstack.org> Message-ID: Hi Chris, I wear a number of hats related to this discussion, so I'll add a few points of view :) It turns out that with > Terraform, it's possible to tear down resources in a way that causes > Neutron to > leak administrator-privileged resources that can not be deleted by > non-privileged users. In discussions with the Neutron and Octavia teams, > it was > strongly recommended that I move away from the Neutron LBaaSv2 API and > instead > adopt Octavia. Vexxhost graciously installed Octavia at my request and I > was > able to move past this issue. > Terraform hat! I want to slightly nit-pick this one since the words "leak" and "admin-priv" can sound scary: Terraform technically wasn't doing anything wrong. The problem was that Octavia was creating resources but not setting ownership to the tenant. When it came time to delete the resources, Octavia was correctly refusing, though it incorrectly created said resources. From reviewing the discussion, other parties were discovering this issue and patching in parallel to your discovery. Both xgerman and Vexxhost jumped in to confirm the behavior seen by Terraform. Vexxhost quickly applied the patch. It was a really awesome collaboration between yourself, dims, xgerman, and Vexxhost. > This highlights the first call to action for our public and private cloud > community: encouraging the rapid migration from older, unsupported APIs to > Octavia. > Operator hat! The clouds my team and I run are more compute-based. Our users would be more excited if we increased our GPU pool than enhanced the networking services. With that in mind, when I hear it said that "Octavia is backwards-compatible with Neutron LBaaS v2", I think "well, cool, that means we can keep running Neutron LBaaS v2 for now" and focus our efforts elsewhere. I totally get why Octavia is advertised this way and it's very much appreciated. When I learned about Octavia, my knee-jerk reaction was "oh no, not another load balancer" but that was remedied when I learned it's more like LBaaSv2++. I'm sure we'll deploy Octavia some day, but it's not our primary focus and we can still squeak by with Neutron's LBaaS v2. If you *really* wanted us to deploy Octavia ASAP, then a migration guide would be wonderful. I read over the "Developer / Operator Quick Start Guide" and found it very well written! I groaned over having to build an image but I also really appreciate the image builder script. If there can't be pre-built images available for testing, the second-best option is that script. > This highlights a second call to action for the SDK and provider > developers: > recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding > support for more advanced Octavia features. > Gophercloud hat! We've supported Octavia for a few months now, but purely by having the load-balancer client piggyback off of the Neutron LBaaS v2 API. We made the decision this morning, coincidentally enough, to have Octavia be a first-class service peered with Neutron rather than think of Octavia as a Neutron/network child.
This will allow Octavia to fully flourish without worry of affecting the existing LBaaS v2 API (which we'll still keep around separately). Thanks, Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam47priya at gmail.com Fri Mar 16 04:57:22 2018 From: sam47priya at gmail.com (Sam P) Date: Fri, 16 Mar 2018 13:57:22 +0900 Subject: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream In-Reply-To: References: <3bbeffd7-5950-bd17-d608-c28f96fab779@redhat.com> <20180306122700.vh7s26mype66mfxw@pacific.linksys.moosehall> <9a45d40f-078d-06c0-c1f1-30bf345663c9@redhat.com> <20180307102058.dkmavc5hzvylvhvu@pacific.linksys.moosehall> <20180308160353.hugvam2pg5pt7ffe@pacific.linksys.moosehall> <4252aa3b-b46d-5680-fb1d-89a84d72d3be@redhat.com> <35078e57-2acb-f500-59c0-18eebdf9db04@redhat.com> <20180315124535.ekyibc5wowjjjogg@pacific.linksys.moosehall> Message-ID: Hi All, Sorry, Late to the party... I have added myself. --- Regards, Sampath On Fri, Mar 16, 2018 at 9:31 AM, Ghanshyam Mann wrote: > On Thu, Mar 15, 2018 at 9:45 PM, Adam Spiers wrote: > > Raoul Scarazzini wrote: > >> > >> On 15/03/2018 01:57, Ghanshyam Mann wrote: > >>> > >>> Thanks all for starting the collaboration on this which is long pending > >>> things and we all want to have some start on this. > >>> Myself and SamP talked about it during OPS meetup in Tokyo and we > talked > >>> about below draft plan- > >>> - Update the Spec - https://review.openstack.org/#/c/443504/. which is > >>> almost ready as per SamP and his team is working on that. > >>> - Start the technical debate on tooling we can use/reuse like Yardstick > >>> etc, which is more this mailing thread. > >>> - Accept the new repo for Eris under QA and start at least something in > >>> Rocky cycle. > >>> I am in for having meeting on this which is really good idea. non-IRC > >>> meeting is totally fine here. Do we have meeting place and time setup ? > >>> -gmann > >> > >> > >> Hi Ghanshyam, > >> as I wrote earlier in the thread it's no problem for me to offer my > >> bluejeans channel, let's sort out which timeslice can be good. I've > >> added to the main etherpad [1] my timezone (line 53), let's do all that > >> so that we can create the meeting invite. > >> > >> [1] https://etherpad.openstack.org/p/extreme-testing-contacts > > > > > > Good idea! I've added mine. We're still missing replies from several > > key stakeholders though (lines 62++) - probably worth getting buy-in > > from a few more people before we organise anything. I'm pinging a few > > on IRC with reminders about this. > > > > Thanks rasca, aspiers. I have added myself there and yea good ides to > ping remaining on IRC. > > -gmann > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gdubreui at redhat.com Fri Mar 16 05:19:42 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 16 Mar 2018 16:19:42 +1100 Subject: [openstack-dev] [all][api] POST /api-sig/news In-Reply-To: References: Message-ID: <72f7b0cd-dca7-d4b0-8236-8222ac755f0a@redhat.com> Hello, I'd like to continue making progress on the API Schema guideline [1], mentioned in [2], whose goal is to make APIs more machine-discoverable and which was also discussed during [3]. Unfortunately, until a new or a second meeting time slot has been allocated, this discussion, inconveniently for everyone, has to happen by email. > We felt that this was more of a one-off need rather than something we'd like to see rolled out across all OpenStack APIs. Of course new features have to be decided (voted) by the community, but how does that work when there are not enough people voting? It seems unfair to decide not to move forward and ignore the request because the other people interested are not participating at this level. It's very important to consider that "I" am representing more than just myself: an OpenStack integration team, whose members are supporting me, and our work impacts other teams involved in their open source products consuming OpenStack. I'm sorry if I haven't made this clearer from the beginning; I guess I'm still learning the participation process. So from now on, I'm going to use "us" instead. Also, from discussions with other developers from AT&T (OpenStack summit in Sydney) and SAP (Misty project) who are already using automation to consume APIs, this is really needed. I've also mentioned the now known fact that no SDK has full-time resources to maintain it (which was the initial trigger for us); more automation is the only sustainable way to continue the journey. Finally, how can we dare say no to more automation? Unless of course, only artisan work done by real hipsters is allowed ;) > Furthermore, API-Schema will be problematic for services that use microversions. If you have some insight or opinions on this, please add your comments to that review. I understand microversion standardization (OpenAPI) has not happened yet, and may never, but that shouldn't preclude making progress. As a matter of fact, we can represent different versions of a resource in many ways, just not in a standardized fashion; in the simplest (or worst-case) scenario an API Schema can represent only one microversion, most likely the latest at the specific point in time the schema was built. Also responding to [4]: the goal, from gilles, is being able to create client code that works against real deployments. In some initial brainstorming, the idea effectively came up of building the SDK client dynamically against any OpenStack service. For instance, depending on the specific (micro)versions supported by the servers, the API schema would be given in response. Although an interesting idea, we are not talking about such a feature, which could be an interesting academic topic (or not!). So, to summarize and clarify, we are talking about SDKs being able to build their interfaces to OpenStack APIs in an automated way, but statically, from an API Schema generated by every project. Such an API Schema is already built in memory during API reference documentation generation and could be saved in JSON format (for instance) (see [5]).
[1] https://review.openstack.org/#/c/524467/ [2] http://lists.openstack.org/pipermail/openstack-dev/2018-February/127140.html [3] http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-08-16.00.log.html#l-95 [4] http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-08-16.00.log.html#l-127 [5] https://review.openstack.org/#/c/528801 Cheers, Gilles On 16/03/18 04:30, Chris Dent wrote: > > Greetings OpenStack community, > > A rousing good time at the API-SIG meeting today. We opened with some > discussion on what might be missing from the Methods [7] section of > the HTTP guidelines. At the PTG we had discussed that perhaps we > needed more info on which methods were appropriate when. It turns out that > what we probably need is better discoverability, so we're going to > work on that but at the same time do a general review of that entire > page. > > We then talked about microversions a bit (because it wouldn't be an > API-SIG without them). There's an in-progress history of microversions > document (linked below) that we need to decide if we'll revive. If you > have a strong opinion, let us know. > > And finally we explored the options for how or if Neutron can cleanly > resolve the handling of invalid query parameters. This was raised a > while back in an email thread [8]. It's generally a good idea not to > break existing client code, but what if that client code is itself > broken? The next step will be to make the choice configurable. Neutron > doesn't support microversions so "throw another microversion at it" > won't work. > > As always if you're interested in helping out, in addition to coming > to the meetings, there's also: > > * The list of bugs [5] indicates several missing or incomplete > guidelines. > * The existing guidelines [2] always need refreshing to account for > changes over time. If you find something that's not quite right, > submit a patch [6] to fix it. > * Have you done something for which you think guidance would have made > things easier but couldn't find any? Submit a patch and help others [6]. > > # Newly Published Guidelines > > None this week. > > # API Guidelines Proposed for Freeze > > Guidelines that are ready for wider review by the whole community. > > None this week. > > # Guidelines Currently Under Review [3] > > * Add guidance on needing cache-control headers > https://review.openstack.org/550468 > > * Add guideline on exposing microversions in SDKs > https://review.openstack.org/#/c/532814/ > > * Add API-schema guide (still being defined) > https://review.openstack.org/#/c/524467/ > > * A (shrinking) suite of several documents about doing version and > service discovery > Start at https://review.openstack.org/#/c/459405/ > > * WIP: microversion architecture archival doc (very early; not yet > ready for review) > https://review.openstack.org/444892 > > # Highlighting your API impacting issues > > If you seek further review and insight from the API SIG about APIs > that you are developing or changing, please address your concerns in > an email to the OpenStack developer mailing list[1] with the tag > "[api]" in the subject. In your email, you should include any relevant > reviews, links, and comments to help guide the discussion of the > specific challenge you are facing. > > To learn more about the API SIG mission and the work we do, see our > wiki page [4] and guidelines [2]. > > Thanks for reading and see you next week!
> > # References > [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > [2] http://specs.openstack.org/openstack/api-wg/ > [3] > https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z > [4] https://wiki.openstack.org/wiki/API_SIG > [5] https://bugs.launchpad.net/openstack-api-wg > [6] https://git.openstack.org/cgit/openstack/api-wg > [7] > http://specs.openstack.org/openstack/api-wg/guidelines/http.html#http-methods > [8] > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128023.html > > Meeting Agenda > https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda > Past Meeting Records > http://eavesdrop.openstack.org/meetings/api_sig/ > Open Bugs > https://bugs.launchpad.net/openstack-api-wg > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Gilles Dubreuil Senior Software Engineer, Openstack DFG Integration Mobile: +61 400 894 219 Email: gilles at redhat.com GitHub/IRC: gildub -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikal at stillhq.com Fri Mar 16 05:34:28 2018 From: mikal at stillhq.com (Michael Still) Date: Fri, 16 Mar 2018 16:34:28 +1100 Subject: [openstack-dev] [Tatu][Nova] Handling instance destruction In-Reply-To: References: Message-ID: Thanks for this. I read the README for the project after this and I do now realise you're using notifications for some of these events. I guess I'm still pondering if it's reasonable to have everyone listen to notifications to build systems like these, or if we should add messages to vendordata to handle these actions. Vendordata is intended for deployers, so having a simple and complete interface seems important. There were also comments in the README about wanting to change the data that appears in the metadata server over time. I'm wondering how that maps into the configdrive universe. Could you explain those comments a bit more please? Thanks for your quick reply, Michael On Fri, Mar 16, 2018 at 2:18 PM, Pino de Candia wrote: > Hi Michael, > > Thanks for your message... and thanks for your vendordata work! > > About your question, Tatu listens to events on the oslo message bus. > Specifically, it reacts to compute.instance.delete.end by cleaning up > per-instance resources. It also listens to project creation and user role > assignment changes. The code is at: > https://github.com/openstack/tatu/blob/master/tatu/notifications.py > > best, > Pino > > On Thu, Mar 15, 2018 at 3:42 PM, Michael Still wrote: > >> Heya, >> >> I've just stumbled across Tatu and the design presentation [1], and I am >> wondering how you handle cleaning up instances when they are deleted given >> that nova vendordata doesn't expose a "delete event". >> >> Specifically I'm wondering if we should add support for such an event to >> vendordata somehow, given I can now think of a couple of use cases for it.
>> >> Thanks, >> Michael >> >> 1: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ileixe at gmail.com Fri Mar 16 06:22:52 2018 From: ileixe at gmail.com (=?UTF-8?B?7JaR7Jyg7ISd?=) Date: Fri, 16 Mar 2018 15:22:52 +0900 Subject: [openstack-dev] [nova] Does not hook for validating resource name (name/hostname for instance) required? Message-ID: Hi. I have a question about operating an OpenStack cluster. Since it's my first mail to the mailing list, if this is not the right place to ask, sorry for that and please let me know the right one. :) Our company operates OpenStack clusters and we have a legacy DNS system which needs to check hostnames more strictly, including RFC 952 compliance. Our operators also demand unique hostnames within a region (we do not have tenant networks yet; we use an L3-only network). So for those reasons, we maintained custom validation logic for instance names. But, as everyone knows, maintaining custom code is quite a burden, so I am trying to find the right place for this requirement to live. imho, since there is schema validation for every resource, if any validation hook API were provided we could happily use it. Does anyone have experience with a similar issue? Any advice will be appreciated. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaosorior at gmail.com Fri Mar 16 06:49:42 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Fri, 16 Mar 2018 08:49:42 +0200 Subject: [openstack-dev] [Tatu][Nova] Handling instance destruction In-Reply-To: References: Message-ID: Having an interface for vendordata that gets deletes would be quite nice. Right now for novajoin we listen to the nova notifications for updates and deletes; if this could be handled natively by vendordata, it would simplify our codebase. BR On Fri, Mar 16, 2018 at 7:34 AM, Michael Still wrote: > Thanks for this. I read the README for the project after this and I do now > realise you're using notifications for some of these events. > > I guess I'm still pondering if it's reasonable to have everyone listen to > notifications to build systems like these, or if we should add messages to > vendordata to handle these actions. Vendordata is intended for deployers, so > having a simple and complete interface seems important. > > There were also comments in the README about wanting to change the data > that appears in the metadata server over time. I'm wondering how that maps > into the configdrive universe. Could you explain those comments a bit more > please? > > Thanks for your quick reply, > Michael > > > > > On Fri, Mar 16, 2018 at 2:18 PM, Pino de Candia < > giuseppe.decandia at gmail.com> wrote: > >> Hi Michael, >> >> Thanks for your message... and thanks for your vendordata work! >> >> About your question, Tatu listens to events on the oslo message bus.
>> Specifically, it reacts to compute.instance.delete.end by cleaning up >> per-instance resources. It also listens to project creation and user role >> assignment changes. The code is at: >> https://github.com/openstack/tatu/blob/master/tatu/notifications.py >> >> best, >> Pino >> >> >> On Thu, Mar 15, 2018 at 3:42 PM, Michael Still wrote: >> >>> Heya, >>> >>> I've just stumbled across Tatu and the design presentation [1], and I am >>> wondering how you handle cleaning up instances when they are deleted given >>> that nova vendordata doesn't expose a "delete event". >>> >>> Specifically I'm wondering if we should add support for such an event to >>> vendordata somehow, given I can now think of a couple of use cases for it. >>> >>> Thanks, >>> Michael >>> >>> 1: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Z >>> i4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Juan Antonio Osorio R. e-mail: jaosorior at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From duonghq at vn.fujitsu.com Fri Mar 16 06:57:30 2018 From: duonghq at vn.fujitsu.com (duonghq at vn.fujitsu.com) Date: Fri, 16 Mar 2018 06:57:30 +0000 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan In-Reply-To: References: Message-ID: +1 From: Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] Sent: Monday, March 12, 2018 9:07 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan ​​Kolla core reviewer team, It is my pleasure to nominate caoyuan for kolla core team. caoyuan's output is fantastic over the last cycle. And he is the most active non-core contributor on Kolla project for last 180 days[1]. He focuses on configuration optimize and improve the pre-checks feature. Consider this nomination a +1 vote from me. A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 is a veto. Voting is open for 7 days until Mar 12th, or a unanimous response is reached or a veto vote occurs. [1] http://stackalytics.com/report/contribution/kolla-group/180 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Fri Mar 16 08:38:09 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 16 Mar 2018 08:38:09 +0000 Subject: [openstack-dev] OpenStack Ansible Disk requirements [docs] [osa] In-Reply-To: References: Message-ID: Hello, That's what it always was, but it was hidden in the pages. Now that I refactored the pages to be more visible, you spotted it :) Congratulations! 
More seriously, I'd like to remove that requirement, showing people
can do whatever they like. It all depends on how/where they store
images, ephemeral storage...

Will commit a patch today.

Best regards,
Jean-Philippe Evrard



On 15 March 2018 at 18:31, Gordon, Kent S wrote:
> Compute host disk requirements for OpenStack Ansible seem high in the
> documentation.
>
> I think I have used smaller compute hosts in the past.
> Did something change in Queens?
>
> https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/overview-requirements.html
>
>
> Compute hosts
>
> Disk space requirements depend on the total number of instances running on
> each host and the amount of disk space allocated to each instance.
>
> Compute hosts must have a minimum of 1 TB of disk space available.
>
>
>
>
> --
> Kent S. Gordon
> kent.gordon at verizonwireless.com  Work: 682-831-3601  Mobile: 817-905-6518
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From jean-philippe at evrard.me  Fri Mar 16 08:41:51 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Fri, 16 Mar 2018 08:41:51 +0000
Subject: [openstack-dev] [infra][all] Anyone using our ubuntu-mariadb mirror?
In-Reply-To: 
References: 
Message-ID: 

Hello,

We were using it until a couple of weeks ago, when 10.1.31 came out.
10.1.31 had issues with clustering, so we moved to use a mirror of a
fixed release (here 10.1.30) instead of 10.1. We haven't decided if
we'll move back to 10.1 when 10.1.32 is out.

You can remove it for now; I think we can discuss this again when
10.1.32 is out.

Best regards,
JP

On 14 March 2018 at 22:50, Ian Wienand wrote:
> Hello,
>
> We discovered an issue with our mariadb package mirroring that
> suggests it hasn't been updating for some time.
>
> This would be packages from
>
>   http://mirror.X.Y.openstack.org/ubuntu-mariadb/10.<1|2>
>
> This was originally added in [1]. AFAICT from codesearch, it is
> currently unused. We export the top-level directory in the mirror
> config scripts as NODEPOOL_MARIADB_MIRROR, which is not referenced in
> any jobs [2], and I couldn't find anything setting up apt repos
> pointing to it.
>
> Thus since it's not updating and nothing seems to reference it, I am
> going to assume it is unused and remove it next week. If not, please
> respond and we can organise a fix.
>
> -i
>
> [1] https://review.openstack.org/#/c/307831/
> [2] http://codesearch.openstack.org/?q=NODEPOOL_MARIADB_MIRROR&i=nope&files=&repos=
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cdent+os at anticdent.org  Fri Mar 16 08:55:45 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Fri, 16 Mar 2018 08:55:45 +0000 (GMT)
Subject: [openstack-dev] [all][api] POST /api-sig/news
In-Reply-To: <72f7b0cd-dca7-d4b0-8236-8222ac755f0a@redhat.com>
References: <72f7b0cd-dca7-d4b0-8236-8222ac755f0a@redhat.com>
Message-ID: 

Meta: When responding to lists, please do not cc individuals, just
respond to the list.

Thanks, response within.
On Fri, 16 Mar 2018, Gilles Dubreuil wrote:

> I would like to continue making progress on the API Schema guideline [1],
> as mentioned in [2], to make APIs more machine-discoverable, as also
> discussed during [3].
>
> Unfortunately, until a new or second meeting time slot has been
> allocated, this inconveniently has to be done by email for everyone.

I'm sorry that the meeting time is excluding you and others, but
our efforts to have either a second meeting or to change the time
have met with limited response (except from you). In any case, the
meetings are designed to be checkpoints where we resolve stuck
questions and take stock of where we are on things. It is better that
most of the work be done in emails and on reviews as that's the
most inclusive, and is less dependent on time-related variables.

So moving the discussion about schemas here is the right thing and
the fact that it hasn't happened (until now) is the reason for what
appears to be a rather lukewarm reception from the people writing
the API-SIG newsletter: if there's no traffic on either the gerrit
review or here in email then there's no evidence of demand.

You're asserting here that there is; that's great.

> Of course new features have to be decided (voted) by the community, but how
> does that work when there are not enough people voting?
> It seems unfair to decide not to move forward and ignore the request because
> the other people interested are not participating at this level.

In a world of limited resources we can't impose work on people. The
SIG is designed to be a place where people can come to make progress
on API-related issues. If people don't show up, progress can't be
made. Showing up doesn't have to mean show up at an IRC meeting. In
fact I very much hope that it never means that. Instead it means
writing things (like your email message) and seeking out
collaborators to push your idea(s) forward.

> It's very important to consider the fact that "I" am representing more
> than just myself: an OpenStack integration team, whose members are
> supporting me, and our work impacts other teams involved in their open
> source product consuming OpenStack. I'm sorry if I haven't made this more
> clear from the beginning; I guess I'm still learning the participation
> process. So from now on, I'm going to use "us" instead.

Can some of those "us" show up on the mailing list, the gerrit
reviews, and the prototype work that Graham has done?

> Also, from discussions with other developers from AT&T (OpenStack summit in
> Sydney) and SAP (Misty project) who are already using automation to consume
> APIs, this is really needed.

Them too.

> I've also mentioned the now known fact that no SDK has full time resources
> to maintain it (which was the initial trigger for us); more automation is
> the only sustainable way to continue the journey.
>
> Finally, how can we dare say no to more automation? Unless of course, only
> artisan work done by real hipsters is allowed ;)

Nobody is saying no to automation (as far as I'm aware). Some
people (e.g., me, but not just me) are saying "unless there's an
active community to do this work and actively publish about it and
the related use cases that drive it it's impossible to make it a
priority". Some other people (also me, but not just me) are also
saying "schematizing API client generation is not my favorite
thing" but that's just a personal opinion and essentially
meaningless because yet other people are saying "I love API
schema!".
What's missing, though, is continuous engagement on producing children of
that love.

>> Furthermore, API-Schema will be problematic for services that use
> microversions. If you have some insight or opinions on this, please add your
> comments to that review.
>
> I understand microversion standardization (OpenAPI) has not happened yet,
> and may never happen, but that shouldn't preclude making progress.

Of course, but who are you expecting to make that progress? The
API-SIG's statement of "not something we're likely to pursue as a
part of guidance" is about the apparent unavailability of interested
people. If that changes then the guidance situation probably
changes too.

But not writing guidance is different from providing a place to talk
about it. That's what a SIG is for. Think of it as a room with
coffee and snacks where it is safe to talk about anything related
to APIs. And that room exists in email just as much as it does in
IRC and at the PTG. Ideally it exists _most_ in email.

> To summarize and clarify, we are talking about SDKs being able to build
> their interfaces to OpenStack APIs in an automated way, but statically,
> from an API Schema generated by every project. Such an API Schema is
> already built in memory during API reference documentation generation and
> could be saved in JSON format (for instance) (see [5]).

What do you see as the current roadblocks preventing this work from
continuing to make progress?

-- 
Chris Dent                 ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                    tw: @anticdent

From jean-philippe at evrard.me  Fri Mar 16 09:02:26 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Fri, 16 Mar 2018 09:02:26 +0000
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime
In-Reply-To: 
References: 
Message-ID: 

Hello,

For OpenStack-Ansible, we don't need to do anything for that community
goal. I am not sure how we can remove our name from the storyboard,
so I just inform you here.

Jean-Philippe Evrard (evrardjp)

On 28 February 2018 at 05:27, ChangBo Guo wrote:
> Hi ALL,
>
> TC approved the goal [0] a week ago, so it's time to finish the work. We
> also had a short discussion in the oslo meeting at the PTG; find more
> details in [1].
> We use storyboard to track the goal in
> https://storyboard.openstack.org/#!/story/2001545. It would be appreciated
> if PTLs set the owner in time.
> Feel free to reach me (gcb) in IRC if you have any questions.
>
>
> [0] https://review.openstack.org/#/c/534605/
> [1] https://etherpad.openstack.org/p/oslo-ptg-rocky From line 175
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From cbelu at cloudbasesolutions.com  Fri Mar 16 09:26:44 2018
From: cbelu at cloudbasesolutions.com (Claudiu Belu)
Date: Fri, 16 Mar 2018 09:26:44 +0000
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime
In-Reply-To: 
References: ,
Message-ID: 

Interesting. I'll take a look as well (Winstackers). Just an FYI, SIGHUP
doesn't exist in Windows, so for services like nova-compute,
neutron-hyperv-agent, neutron-ovs-agent, ceilometer-polling we'd have to
use something else.
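As a very rough sketch of the sort of thing I mean -- purely illustrative,
the trigger path is made up, and it assumes oslo.config's
mutate_config_files() mutable-config support -- polling a trigger file
could stand in for SIGHUP:

    # Poll a trigger file instead of waiting for SIGHUP, which a
    # Windows service will never receive. Touching the file asks the
    # service to re-read its config files and apply mutable options
    # (e.g. the 'debug' opt) via oslo.config.
    import os
    import time

    from oslo_config import cfg

    TRIGGER = r'C:\ProgramData\openstack\mutate.trigger'  # made-up path

    def watch_for_mutation(conf=cfg.CONF, interval=5):
        last = None
        while True:
            try:
                mtime = os.path.getmtime(TRIGGER)
            except OSError:
                mtime = None
            if None not in (last, mtime) and mtime > last:
                conf.mutate_config_files()
            last = mtime
            time.sleep(interval)

Something event-based (a named event, or a service control code) would be
nicer, but the shape is the same: some Windows-side signal that ends in a
mutate_config_files() call.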
Best regards,
Claudiu Belu

________________________________________
From: Jean-Philippe Evrard [jean-philippe at evrard.me]
Sent: Friday, March 16, 2018 11:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

Hello,

For OpenStack-Ansible, we don't need to do anything for that community
goal. I am not sure how we can remove our name from the storyboard,
so I just inform you here.

Jean-Philippe Evrard (evrardjp)

On 28 February 2018 at 05:27, ChangBo Guo wrote:
> Hi ALL,
>
> TC approved the goal [0] a week ago, so it's time to finish the work. We
> also had a short discussion in the oslo meeting at the PTG; find more
> details in [1].
> We use storyboard to track the goal in
> https://storyboard.openstack.org/#!/story/2001545. It would be appreciated
> if PTLs set the owner in time.
> Feel free to reach me (gcb) in IRC if you have any questions.
>
>
> [0] https://review.openstack.org/#/c/534605/
> [1] https://etherpad.openstack.org/p/oslo-ptg-rocky From line 175
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From majopela at redhat.com  Fri Mar 16 09:36:49 2018
From: majopela at redhat.com (Miguel Angel Ajo Pelayo)
Date: Fri, 16 Mar 2018 09:36:49 +0000
Subject: [openstack-dev] OpenStack Ansible Disk requirements [docs] [osa]
In-Reply-To: 
References: 
Message-ID: 

Right, that's a little absurd, 1TB? :-) I completely agree.

They could live with anything, but I'd try to estimate minimums across
distributions. For example, an RDO test deployment with containers looks
like this:

(undercloud) [stack at undercloud ~]$ ssh heat-admin at 192.168.24.8 "sudo df -h ; sudo free -h;"

    Filesystem      Size  Used  Avail  Use%  Mounted on
    /dev/vda2        50G  7.4G    43G   15%  /
    devtmpfs        2.9G     0   2.9G    0%  /dev
    [....]
    tmpfs           581M     0   581M    0%  /run/user/1000

                  total  used  free  shared  buff/cache  available
    Mem:           5.7G  1.1G  188M    2.4M        4.4G       4.1G
    Swap:            0B    0B    0B

Which looks rather lightweight. We need to consider logging space, etc.

I'd say 20G could be enough without considering instance disks?

On Fri, Mar 16, 2018 at 9:39 AM Jean-Philippe Evrard <
jean-philippe at evrard.me> wrote:

> Hello,
>
> That's what it always was, but it was hidden in the pages. Now that I
> refactored the pages to be more visible, you spotted it :)
> Congratulations!
>
> More seriously, I'd like to remove that requirement, showing people
> can do whatever they like. It all depends on how/where they store
> images, ephemeral storage...
>
> Will commit a patch today.
>
> Best regards,
> Jean-Philippe Evrard
>
>
>
> On 15 March 2018 at 18:31, Gordon, Kent S
> wrote:
> > Compute host disk requirements for OpenStack Ansible seem high in the
> > documentation.
> >
> > I think I have used smaller compute hosts in the past.
> > Did something change in Queens?
> >
> > https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/overview-requirements.html
> >
> >
> > Compute hosts
> >
> > Disk space requirements depend on the total number of instances running on
> > each host and the amount of disk space allocated to each instance.
> >
> > Compute hosts must have a minimum of 1 TB of disk space available.
> >
> >
> >
> >
> > --
> > Kent S. Gordon
> > kent.gordon at verizonwireless.com  Work: 682-831-3601  Mobile: 817-905-6518
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aj at suse.com  Fri Mar 16 09:59:34 2018
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 16 Mar 2018 10:59:34 +0100
Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects
In-Reply-To: <1521110096-sup-3634@lrrr.local>
References: <1521110096-sup-3634@lrrr.local>
Message-ID: <1ac5b323-f521-712c-8b7b-ee889cae4b78@suse.com>

thanks for the proposal, Doug. I need an example to understand how
things will work out...

so, let me use a real-life example (version numbers are made up):

openstackdocstheme uses sphinx and needs sphinx 1.6.0 or higher but
knows version 1.6.7 is broken. So, openstackdocstheme would add to its
requirements file:

sphinx>=1.6.0,!=1.6.7

Any project might assume they work with an older version, and have in
their requirements file:

Sphinx>=1.4.0
openstackdocstheme

The global requirements file would just contain:

openstackdocstheme
sphinx!=1.6.7

The upper-constraints file would contain:

sphinx===1.7.1

If we need to block sphinx 1.7.x - as we do right now - we only update
the requirements repo to have in the global requirements file:

openstackdocstheme
sphinx!=1.6.7,<1.7.0

and in upper-constraints:

sphinx===1.6.6

But projects should *not* add the cap to their own requirements like:

sphinx>=1.6.0,!=1.6.7,<1.7.0

Is that all correct?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
       HRB 21284 (AG Nürnberg)
    GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From james.page at ubuntu.com  Fri Mar 16 10:10:23 2018
From: james.page at ubuntu.com (James Page)
Date: Fri, 16 Mar 2018 10:10:23 +0000
Subject: [openstack-dev] [ptg][sig][upgrades] Upgrade SIG
Message-ID: 

Hi All

I finally got round to writing up my summary of the Upgrades session at
the PTG in Dublin (see [0]).

One outcome of that session was to form a new SIG centered on Upgrading
OpenStack - I'm pleased to announce that the SIG has been formally
accepted!
The objective of the Upgrade SIG is to improve the overall upgrade process
for OpenStack Clouds, covering both offline ‘fast-forward’ upgrades and
online ‘rolling’ upgrades, by providing a forum for cross-project
collaboration between operators and developers to document and codify best
practice for upgrading OpenStack.

If you are interested in participating in the SIG please add your details
to the wiki page under 'Interested Parties':

https://wiki.openstack.org/wiki/Upgrade_SIG

I'll be working with the other SIG leads to set up regular IRC meetings in
the next week or so - we expect to alternate between slots to be
compatible with all time zones.

Regards

James

[0] https://javacruft.wordpress.com/2018/03/16/winning-with-openstack-upgrades/
[1] https://governance.openstack.org/sigs/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhang.lei.fly at gmail.com  Fri Mar 16 10:31:52 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Fri, 16 Mar 2018 18:31:52 +0800
Subject: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron
Message-ID: 

Recently, a new patch was merged[0]. It adds neutron and horizon
themselves into upper-constraints.txt. But this breaks installing horizon
and neutron with upper-constraints.txt.

It now breaks kolla's master branch patches, and we have to remove the
"horizon" and "neutron" entries from the file; check [1][2].

The easiest way to reproduce this is:

git clone https://github.com/openstack/horizon.git
cd horizon
pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt .

So the question is: is this expected? If so, what's the correct way to
install the horizon development branch with the upper-constraints.txt
file?

[0] https://review.openstack.org/#/c/550475/
[1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
[2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thomas.morin at orange.com  Fri Mar 16 10:42:47 2018
From: thomas.morin at orange.com (Thomas Morin)
Date: Fri, 16 Mar 2018 11:42:47 +0100
Subject: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron
In-Reply-To: 
References: 
Message-ID: 

This is related to the topic in "[horizon][neutron] tools/tox_install
changes - breakage with constraints".

A change proposes to remove these projects from upper-constraints (for a
different reason); https://review.openstack.org/#/c/552865, which adds
other projects to global-requirements, explicitly postpones their
addition to upper-constraints to a later step.

Perhaps neutron and horizon should be removed from upper-constraints for
now? (i.e. restore https://review.openstack.org/#/c/553030 ?)

Jeffrey Zhang, 2018-03-16 18:31:
> Recently, a new patch was merged[0]. It adds neutron and horizon
> themselves into upper-constraints.txt. But this breaks installing
> horizon and neutron with upper-constraints.txt.
>
> It now breaks kolla's master branch patches, and we have to remove the
> "horizon" and "neutron" entries from the file; check [1][2].
>
> The easiest way to reproduce this is:
>
> git clone https://github.com/openstack/horizon.git
> cd horizon
> pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt .
>
> So the question is: is this expected?
> If so, what's the correct way to install the horizon development
> branch with the upper-constraints.txt file?
>
> [0] https://review.openstack.org/#/c/550475/
> [1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
> [2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhang.lei.fly at gmail.com  Fri Mar 16 10:47:17 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Fri, 16 Mar 2018 18:47:17 +0800
Subject: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron
In-Reply-To: 
References: 
Message-ID: 

Thanks Thomas, I will move my question into that topic. Anyone who is
interested in this issue, please reply in "[horizon][neutron]
tools/tox_install changes - breakage with constraints".

On Fri, Mar 16, 2018 at 6:42 PM, Thomas Morin wrote:
> This is related to the topic in "[horizon][neutron] tools/tox_install
> changes - breakage with constraints".
>
> A change proposes to remove these projects from upper-constraints (for a
> different reason); https://review.openstack.org/#/c/552865, which adds
> other projects to global-requirements, explicitly postpones their
> addition to upper-constraints to a later step.
>
> Perhaps neutron and horizon should be removed from upper-constraints for
> now? (i.e. restore https://review.openstack.org/#/c/553030 ?)
>
> -Thomas
>
>
> Jeffrey Zhang, 2018-03-16 18:31:
>
> Recently, a new patch was merged[0]. It adds neutron and horizon
> themselves into upper-constraints.txt. But this breaks installing
> horizon and neutron with upper-constraints.txt.
>
> It now breaks kolla's master branch patches, and we have to remove the
> "horizon" and "neutron" entries from the file; check [1][2].
>
> The easiest way to reproduce this is:
>
> git clone https://github.com/openstack/horizon.git
> cd horizon
> pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt .
>
> So the question is: is this expected? If so, what's the correct way to
> install the horizon development branch with the upper-constraints.txt
> file?
> [0] https://review.openstack.org/#/c/550475/
> [1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
> [2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhang.lei.fly at gmail.com  Fri Mar 16 10:49:14 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Fri, 16 Mar 2018 18:49:14 +0800
Subject: [openstack-dev] [horizon][neutron][kolla] tools/tox_install changes - breakage with constraints
Message-ID: 

It now breaks kolla's master branch jobs, and we have to remove the
"horizon" and "neutron" entries from the upper-constraints.txt file;
check [1][2].

I would like to know the correct way to install the horizon development
branch with the upper-constraints.txt file.

[1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
[2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2

On Thu, Mar 15, 2018 at 9:28 PM, Doug Hellmann wrote:
> Excerpts from Thomas Morin's message of 2018-03-15 10:15:38 +0100:
> > Hi Doug,
> >
> > Doug Hellmann, 2018-03-14 23:42:
> > > We keep doing lots of infra-related work to make things "easy"
> > > when it comes to managing dependencies. There are three ways to
> > > address the issue with horizon and neutron, and none of them
> > > involve adding features to pbr.
> > >
> > > 1. Things that are being used like libraries need to release like
> > >    libraries. Real releases. With appropriate version numbers. So
> > >    that other things that depend on them can express valid
> > >    dependencies.
> > >
> > > 2. Extract the relevant code into libraries and release *those*.
> > >
> > > 3. Things that are not stable enough to be treated as a library
> > >    shouldn't be used that way. Move the things that use the
> > >    application code as library code back into the repo with the
> > >    thing that they are tied to but that we don't want to (or
> > >    can't) treat like a library.
> >
> > What about the case where there is co-development of features across
> > repos ? One specific case I have in mind is the Neutron stadium where
>
> We do that all the time with the Oslo libraries. It's not as easy as
> having everything in one repo, but we manage.
>
> > we sometimes have features in neutron repo that are worked on as a
> > pre-requisite for things that will be done in a neutron-* or
> > networking-* project. Another is a case for instance where we need to
> > add in project X a tempest test to validate the resolution of a bug
> > for which the fix actually happened in project B (and where B is not
> > a library).
> If the tempest test can't live in B because it uses part of X, then I
> think X and B are really one thing and you're doing more work than you
> need to be doing to keep them in separate libraries.
>
> > My intuition is that it is not illegitimate to expect this kind of
> > development workflow to be feasible; but at the same time I read your
> > suggestion above as meaning that it belongs to the realm of "things we
> > shouldn't be doing in the first place". The only way I can reconcile
>
> You read me correctly.
>
> We install a bunch of components from source for integration tests
> in devstack-gate because we want the final releases to work together.
> But those things only interact via REST APIs, and don't import each
> other. The cases with neutron and horizon are different. Even the
> *unit* tests of the add-ons require code from the "parent" app. That
> indicates a level of coupling that is not being properly addressed by
> the release model and code management practices for the parent apps.
>
> > the two would be to conclude we should collapse all the modules in
> > neutron-*/networking-* into neutron, but doing that would have quite a
> > lot of side effects (yes, this is an understatement).
>
> That's not the only way to do it. The other way would be to properly
> decompose the shared code into a library and then provide *stable
> APIs* so code can be consumed by the add-on modules. That will make
> evolving things a little more difficult because of the stability
> requirement. So it's a trade off. I think the teams involved should
> make that trade off (in one direction or another), instead of
> building tools to continue to avoid dealing with it.
>
> So let's start by examining the root of the problem: Why are the things
> that need to import neutron/horizon not part of the neutron/horizon
> repositories in the first place?
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aj at suse.com  Fri Mar 16 10:53:33 2018
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 16 Mar 2018 11:53:33 +0100
Subject: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron
In-Reply-To: 
References: 
Message-ID: <8fa634f3-6421-7413-ce9f-e777cd1ae100@suse.com>

On 2018-03-16 11:42, Thomas Morin wrote:
> This is related to the topic in "[horizon][neutron] tools/tox_install
> changes - breakage with constraints".
>
> A change proposes to remove these projects from upper-constraints (for a
> different reason); https://review.openstack.org/#/c/552865, which adds
> other projects to global-requirements, explicitly postpones their
> addition to upper-constraints to a later step.
>
> Perhaps neutron and horizon should be removed from upper-constraints for
> now? (i.e. restore https://review.openstack.org/#/c/553030 ?)

Yes, that would be one option, but I'd like to understand whether that
would be a temporary solution - or the end solution.

Jeffrey, how exactly are you installing neutron? From git? From tarballs?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr.
5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From aj at suse.com  Fri Mar 16 10:57:38 2018
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 16 Mar 2018 11:57:38 +0100
Subject: [openstack-dev] [horizon][neutron][kolla] tools/tox_install changes - breakage with constraints
In-Reply-To: 
References: 
Message-ID: <47799889-3ff5-064a-aa4f-bcfdb84cb9fd@suse.com>

On 2018-03-16 11:49, Jeffrey Zhang wrote:
> It now breaks kolla's master branch jobs, and we have to remove the
> "horizon" and "neutron" entries from the upper-constraints.txt file;
> check [1][2].
>
> I would like to know the correct way to install the horizon development
> branch with the upper-constraints.txt file.
>
> [1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
> [2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2

Sorry, that is too much magic for me to be able to help you.

What are those doing? How do you install today? Please give me some
instructions.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
       HRB 21284 (AG Nürnberg)
    GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From thierry at openstack.org  Fri Mar 16 11:04:40 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 16 Mar 2018 12:04:40 +0100
Subject: [openstack-dev] [tc] Technical Committee Status update, March 16th
Message-ID: <5564b315-bc1e-a558-7755-238e3b1f9ea8@openstack.org>

Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something)
governance-related that is not reflected on the tracker yet, please
feel free to add to it!

== Recently-approved changes ==

Nothing merged this week, but we are getting closer to final approval on
several long-standing proposals, see below!

== Voting in progress ==

One of the options for clarifying testing for interop programs seems to
be consensual enough to pass. It extends potential test locations to
match the current state of things and what teams agree to support.
Please review and chime in on:
https://review.openstack.org/#/c/550571/

The definition of "extended maintenance" is also reaching its
conclusion, with a proposal that has been consensual so far. Under this
proposal stable branches shall remain open to accept fixes as long as
reasonably possible, enabling people to step up and maintain branches
beyond the minimal maintenance window guaranteed by the stable branch
maintenance team. Please review and comment on:
https://review.openstack.org/#/c/548916/

Tony Breeds proposed to rename the recently-added PowerStackers team to
clarify its scope, reflecting its focus on PowerVM (rather than all
things POWER). We are still missing a number of votes to pass that
change. Please see:
https://review.openstack.org/#/c/551413/

Finally, we have a change to move Zuul out of the OpenStack Infra
project team repositories and OpenStack governance. This is the first
step in establishing our infrastructure tooling under its own brand, to
make it easier to promote it beyond OpenStack.
The rationale was explained by Jim Blair in more detail at:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128396.html

While the decision is more of an infra team internal decision on which
repositories they directly maintain, feel free to chime in on the thread
or the review with your thoughts on this. This already has majority
support from the TC, and will be approved on Tuesday unless new
objections are posted:
https://review.openstack.org/#/c/552637/

== Under discussion ==

There is a proposal about splitting the Kolla-kubernetes team out of the
Kolla/Kolla-ansible team, to reflect the fact that the teams are actually
separate. This obviously creates naming/namespace questions, which are,
as everyone knows, the hardest problem in computer science. Please chime
in on:
https://review.openstack.org/#/c/552531/

A new project team was just proposed to make the Adjutant project an
official OpenStack deliverable. Adjutant is a service built to help
manage certain elements of operations processes by providing micro APIs
around complex underlying workflows. Please review the proposal at:
https://review.openstack.org/#/c/553643/

== TC member actions/focus/discussions for the coming week(s) ==

For this week I expect final votes on the extended maintenance proposal
and the interop test locations, which may trigger additional last-minute
discussions.

I'll create stories in StoryBoard to track high-level TC initiatives. A
general overview of the stories to create is available for review at:
https://etherpad.openstack.org/p/rocky-tc-stories

We should also establish an etherpad to discuss potential Forum sessions
we'd like to file for the Vancouver Summit.

== Office hours ==

To be more inclusive of all timezones and more mindful of people for
whom English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

Feel free to add your own office hour conversation starter at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Cheers,

-- 
Thierry Carrez (ttx)

From dougal at redhat.com  Fri Mar 16 11:08:30 2018
From: dougal at redhat.com (Dougal Matthews)
Date: Fri, 16 Mar 2018 11:08:30 +0000
Subject: [openstack-dev] [mistral][tempest][congress] import or retain mistral tempest service client
In-Reply-To: 
References: 
Message-ID: 

On 13 March 2018 at 18:51, Eric K wrote:

> Hi Mistral folks and others,
>
> I'm working on Congress tempest tests [1] for integration with Mistral.
> In the tests, we use a Mistral service client to call Mistral APIs and
> compare results against those obtained by the Mistral driver for
> Congress.
>
> Regarding the service client, Congress can either import directly from
> the Mistral tempest plugin [2] or maintain its own copy within the
> Congress tempest plugin. I'm not sure whether the Mistral team expects
> the service client to be internal use only, so I hope to hear folks'
> thoughts on which approach is preferred. Thanks very much!
So there is a fair chance we will break the API at that point, however, I don't know when it will happen, as nobody is currently working on it. I have cc'ed Chandan - hopefully he can provide some input. He has advised me and the Mistral team regarding tempest before. > > Eric > > [1] https://review.openstack.org/#/c/538336/ > [2] > https://github.com/openstack/mistral-tempest-plugin/blob/ > master/mistral_tem > pest_tests/services/v2/mistral_client.py > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Mar 16 11:23:12 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 16 Mar 2018 11:23:12 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] placement update 18-11 Message-ID: Here's a placement update! # Most Important While work has started on some of the already approved specs, there are still a fair few under review, and a couple yet to be written. Given the number of specs we've got going it's entirely likely we've bitten off more than we can chew, but we'll see. Getting specs landed early makes it easier to get the functionality merged sooner, so: review some specs. In active code reviews, the update provider tree stuff remains very important as it's the keystone in getting the nova-side of placement interaction working best. # What's Changed All the resource provider objects (the file resource_provider.py) have moved under nova/api/openstack/placement and now inherit directly of OVO. This is to harden and signal the boundary between nova and placement, helping not just in the eventual extraction of placement, but also in making placement lighter. More on related code in the extraction section below. Standard resource class fields have been moved to a top level file, rc_fields.py. This is a stopgap until os-resource-classes is created. A series of conversations, nicely summarized by Eric on this list http://lists.openstack.org/pipermail/openstack-dev/2018-March/128383.html , showed that the way we are managing the addition and removal of traits and aggregates in the compute environment needs some tweaks to control how and by whom changes can be made. Code is in progress to deal with that, but the posting is worth a read to catch up on the reasoning. It's not simple, but neither is the situation. Aggregates can be managed with a generation now and probably today code will merge that allows a generation in the a response when POSTing to create a resource provider. The nova-scheduler process can run with multiple workers if the driver is the filter scheduler. It will be run that way in devstack, henceforth. # Questions [Add yours here?] 
# Bugs * Placement related bugs without owners: https://goo.gl/TgiPXb 15, +1 on last week * In progress placement bugs: https://goo.gl/vzGGDQ 11, no data for last week (because I only realized today I should do this) # Specs * https://review.openstack.org/#/c/550244/ Propose standardized provider descriptor file * https://review.openstack.org/#/c/549067/ VMware: place instances on resource pool (using update_provider_tree) * https://review.openstack.org/#/c/549184/ Spec: report client placement version discovery * https://review.openstack.org/#/c/548237/ Update placement aggregates spec to clarify generation handling * https://review.openstack.org/#/c/418393/ Provide error codes for placement API * https://review.openstack.org/#/c/545057/ mirror nova host aggregates to placement API * https://review.openstack.org/#/c/552924/ Proposes NUMA topology with RPs * https://review.openstack.org/#/c/544683/ Account for host agg allocation ratio in placement * https://review.openstack.org/#/c/552927/ Spec for isolating configuration of placement database * https://review.openstack.org/#/c/552105/ Support default allocation ratios * https://review.openstack.org/#/c/438640/4 Spec on preemptible servers # Main Themes ## Update Provider Tree The ability of virt drivers to represent what resource providers they know about--whether that be numa, or clustered resources--is supported by the update_provider_tree method. Part of it is done, but some details remain: https://review.openstack.org/#/q/topic:bp/update-provider-tree There's new stuff in here for the add/remove traits and aggregates stuff discussed above. ## Request Filters These are a way for the nova scheduler to doctor the request being sent to placement, using a sane interface. https://review.openstack.org/#/q/topic:bp/placement-req-filter That is waiting on the member_of functionality to merge: https://review.openstack.org/#/c/552098/ ## Mirror nova host aggregates to placement This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates. https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates It's part of what will make the req filters above useful. ## Forbidden Traits A way of expressing "I'd like resources that do _not_ have trait X". Spec for this has been approved, but the code hasn't been started yet. ## Consumer Generations In discussion yesterday it was agreed that edleafe will start the ball rolling on this and I (cdent) will be his virtual pair. # Extraction As mentioned above there's been some progress here: objects have moved under the placement hierarchy. The next patch in that stack is to move some exceptions https://review.openstack.org/#/c/549862/ followed by code to use a different configuration setting and setup for the placement database connection. This has an old -2 on it, requesting a spec to describe what's going on. That spec is here: https://review.openstack.org/#/c/552927/ There's work in progress to move some of the resource provider db-related functional tests to a more placement-ish location: https://review.openstack.org/#/q/topic:placement_functional_test_split At some point we'll need to start the process of creating os-resource-classes. Are there any volunteers for this? 
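If someone does volunteer, my guess (and it is only a guess, nothing is
decided) is that the library would mirror os-traits. A minimal sketch,
with the module layout assumed and the names lifted from the standard
classes now living in rc_fields.py:

    # hypothetical os_resource_classes/__init__.py
    VCPU = 'VCPU'
    MEMORY_MB = 'MEMORY_MB'
    DISK_GB = 'DISK_GB'

    STANDARDS = (VCPU, MEMORY_MB, DISK_GB)

    def is_custom(name):
        # custom resource classes carry the CUSTOM_ prefix;
        # standard ones do not
        return name.startswith('CUSTOM_')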
# Other

* https://review.openstack.org/#/c/546660/
   Purge comp_node and res_prvdr records during deletion of cells/hosts
* https://review.openstack.org/#/c/547812/
   Migrate legacy-osc-placement-dsvm-functional job in-tree
* https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky
   A huge pile of improvements to osc-placement
* https://review.openstack.org/#/c/548983/
   report client: placement API version discovery
* https://review.openstack.org/#/c/546713/
   Add compute capabilities traits (to os-traits)
* https://review.openstack.org/#/c/524425/
   General policy sample file for placement
* https://review.openstack.org/#/c/546177/
   Provide framework for setting placement error codes
* https://review.openstack.org/#/c/495356/
   These are changes to microversion-parse to move the placement
   microversion handling into a library usable by others.
* https://review.openstack.org/#/c/527791/
   Get resource provider by uuid or name (osc-placement)
* https://review.openstack.org/#/c/533195/
   Fix comments in get_all_with_shared()
* https://review.openstack.org/#/q/topic:bug/1732731
   Fixes related to shared providers
* https://review.openstack.org/#/c/513264/
   Add more functional tests for placement.usage
* https://review.openstack.org/#/c/477478/
   placement: Make API history doc more consistent

[Add yours here?]

# End

/me blinks

-- 
Chris Dent                 ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                    tw: @anticdent

From thomas.morin at orange.com  Fri Mar 16 11:31:15 2018
From: thomas.morin at orange.com (Thomas Morin)
Date: Fri, 16 Mar 2018 12:31:15 +0100
Subject: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron
In-Reply-To: <8fa634f3-6421-7413-ce9f-e777cd1ae100@suse.com>
References: <8fa634f3-6421-7413-ce9f-e777cd1ae100@suse.com>
Message-ID: <7b989b93c534fc1332440811a48bb6023e048954.camel@orange.com>

Hi Andreas,

In the documentation for networking-bgpvpn, we suggest installing these
packages with "pip install -c
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
networking-bgpvpn==8.0.0". In many cases this can work well enough for
people wanting to try this component on top of an existing installation,
assuming they follow a few extra steps explained in the rest of the doc.

Adding networking-bgpvpn to upper-constraints.txt will break this way of
doing things.

-Thomas

Andreas Jaeger, 2018-03-16 11:53:
> On 2018-03-16 11:42, Thomas Morin wrote:
> > This is related to the topic in "[horizon][neutron]
> > tools/tox_install changes - breakage with constraints".
> >
> > A change proposes to remove these projects from upper-constraints
> > (for a different reason); https://review.openstack.org/#/c/552865,
> > which adds other projects to global-requirements, explicitly
> > postpones their addition to upper-constraints to a later step.
> >
> > Perhaps neutron and horizon should be removed from upper-
> > constraints for now? (i.e. restore
> > https://review.openstack.org/#/c/553030 ?)
>
> Yes, that would be one option, but I'd like to understand whether that
> would be a temporary solution - or the end solution.
>
> Jeffrey, how exactly are you installing neutron? From git? From
> tarballs?
>
> Andreas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dtantsur at redhat.com  Fri Mar 16 12:09:51 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Fri, 16 Mar 2018 13:09:51 +0100
Subject: [openstack-dev] [charms] [tripleo] [puppet] [fuel] [kolla] [openstack-ansible] [cloudcafe] [magnum] [mogan] [sahara] [shovel] [watcher] [helm] [rally] Heads up: ironic classic drivers deprecation
Message-ID: <81cbb170-b970-6d4c-2dec-e173b95fd1cc@redhat.com>

Hi all,

If you see your project name in the subject, that is because a global
search revealed usage of the "pxe_ipmitool", "agent_ipmitool" or "pxe_ssh"
drivers in a non-unit-test context in one or more of your repositories.

The classic drivers, such as pxe_ipmitool, were deprecated in Queens, and
we're on track with removing them in Rocky. Please read [1] about the
differences between classic drivers and the newer hardware types. Please
refer to [2] on how to update your code.

Finally, the pxe_ssh driver was removed some time ago. Please use the
standard IPMI driver with the virtualbmc project [3] instead.

Please reach out to the ironic team (here or on #openstack-ironic) if you
have any questions or need help with the transition.

Dmitry

[1] https://docs.openstack.org/ironic/latest/install/enabling-drivers.html
[2] https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html
[3] https://github.com/openstack/virtualbmc

From anlin.kong at gmail.com  Fri Mar 16 12:21:07 2018
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Sat, 17 Mar 2018 01:21:07 +1300
Subject: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s
In-Reply-To: 
References: <27126DC0-9C72-4442-9F93-05B6E3745BED@openstack.org>
Message-ID: 

Just FYI, L7 policy/rule support for Neutron LBaaS v2 and Octavia is on
its way [1]. Because we will have both Octavia and Magnum deployed on our
OpenStack-based public cloud this year, an ingress controller for
OpenStack (Octavia) is also on our TODO list. Any kind of collaboration is
welcome :-)

[1]: https://github.com/gophercloud/gophercloud/pull/833

Cheers,
Lingxian Kong (Larry)

On Fri, Mar 16, 2018 at 5:01 PM, Joe Topjian wrote:

> Hi Chris,
>
> I wear a number of hats related to this discussion, so I'll add a few
> points of view :)
>
> It turns out that with
>> Terraform, it's possible to tear down resources in a way that causes
>> Neutron to leak administrator-privileged resources that can not be
>> deleted by a non-privileged user. In discussions with the Neutron and
>> Octavia teams, it was strongly recommended that I move away from the
>> Neutron LBaaSv2 API and instead adopt Octavia. Vexxhost graciously
>> installed Octavia at my request and I was able to move past this issue.
>>
>
> Terraform hat! I want to slightly nit-pick this one since the words "leak"
> and "admin-priv" can sound scary: Terraform technically wasn't doing
> anything wrong. The problem was that Octavia was creating resources but
> not setting ownership to the tenant. When it came time to delete the
> resources, Octavia was correctly refusing, though it incorrectly created
> said resources.
>
> From reviewing the discussion, other parties were discovering this issue
> and patching in parallel to your discovery. Both xgerman and Vexxhost
> jumped in to confirm the behavior seen by Terraform. Vexxhost quickly
> applied the patch. It was a really awesome collaboration between yourself,
> dims, xgerman, and Vexxhost.
>
>> This highlights the first call to action for our public and private cloud
>> community: encouraging the rapid migration from older, unsupported APIs
>> to Octavia.
>>
>
> Operator hat! The clouds my team and I run are more compute-based. Our
> users would be more excited if we increased our GPU pool than enhanced the
> networking services. With that in mind, when I hear it said that "Octavia
> is backwards-compatible with Neutron LBaaS v2", I think "well, cool, that
> means we can keep running Neutron LBaaS v2 for now" and focus our efforts
> elsewhere.
>
> I totally get why Octavia is advertised this way and it's very much
> appreciated. When I learned about Octavia, my knee-jerk reaction was "oh
> no, not another load balancer" but that was remedied when I learned it's
> more like LBaaSv2++. I'm sure we'll deploy Octavia some day, but it's not
> our primary focus and we can still squeak by with Neutron's LBaaS v2.
>
> If you *really* wanted us to deploy Octavia ASAP, then a migration guide
> would be wonderful. I read over the "Developer / Operator Quick Start
> Guide" and found it very well written! I groaned over having to build an
> image but I also really appreciate the image builder script. If there
> can't be pre-built images available for testing, the second-best option
> is that script.
>
>
>> This highlights a second call to action for the SDK and provider
>> developers: recognizing the end of life of the Neutron LBaaSv2 API[4][5]
>> and adding support for more advanced Octavia features.
>>
>
> Gophercloud hat! We've supported Octavia for a few months now, but purely
> by having the load-balancer client piggyback off of the Neutron LBaaS v2
> API. We made the decision this morning, coincidentally enough, to have
> Octavia be a first-class service peered with Neutron rather than think of
> Octavia as a Neutron/network child. This will allow Octavia to fully
> flourish without worry of affecting the existing LBaaS v2 API (which
> we'll still keep around separately).
>
> Thanks,
> Joe
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhang.lei.fly at gmail.com  Fri Mar 16 12:22:43 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Fri, 16 Mar 2018 20:22:43 +0800
Subject: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron
In-Reply-To: <8fa634f3-6421-7413-ce9f-e777cd1ae100@suse.com>
References: <8fa634f3-6421-7413-ce9f-e777cd1ae100@suse.com>
Message-ID: 

Kolla installs OpenStack packages from the master tarball file on the
kolla master branch [0].

On stable branches, kolla installs from the neutron tag tarball. But I
think there will also be some issues here. What if I want to install
neutron-12.0.1.tar.gz while neutron===12.0.0 exists in the
upper-constraints.txt file?

[0] http://tarballs.openstack.org/neutron/neutron-master.tar.gz

On Fri, Mar 16, 2018 at 6:53 PM, Andreas Jaeger wrote:
> On 2018-03-16 11:42, Thomas Morin wrote:
> > This is related to the topic in "[horizon][neutron] tools/tox_install
> > changes - breakage with constraints".
> >
> > A change proposes to remove these projects from upper-constraints
> > (for a different reason); https://review.openstack.org/#/c/552865,
> > which adds other projects to global-requirements, explicitly
> > postpones their addition to upper-constraints to a later step.
> >
> > Perhaps neutron and horizon should be removed from upper-constraints
> > for now? (i.e. restore https://review.openstack.org/#/c/553030 ?)
>
> Yes, that would be one option, but I'd like to understand whether that
> would be a temporary solution - or the end solution.
>
> Jeffrey, how exactly are you installing neutron? From git? From tarballs?
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>   HRB 21284 (AG Nürnberg)
>   GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhang.lei.fly at gmail.com  Fri Mar 16 12:26:42 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Fri, 16 Mar 2018 20:26:42 +0800
Subject: [openstack-dev] [horizon][neutron][kolla] tools/tox_install changes - breakage with constraints
In-Reply-To: <47799889-3ff5-064a-aa4f-bcfdb84cb9fd@suse.com>
References: <47799889-3ff5-064a-aa4f-bcfdb84cb9fd@suse.com>
Message-ID: 

Kolla installs OpenStack packages from the master tarball file on the
kolla master branch [0], like:

pip install -c upper-constraints.txt neutron-master.tar.gz

On stable branches, kolla installs from the neutron tag tarball, so it
should work. But I think there will also be some issues here. What if I
want to install neutron-12.0.1.tar.gz while neutron===12.0.0 exists in
the upper-constraints.txt file?

[0] http://tarballs.openstack.org/neutron/neutron-master.tar.gz

On Fri, Mar 16, 2018 at 6:57 PM, Andreas Jaeger wrote:
> On 2018-03-16 11:49, Jeffrey Zhang wrote:
> > It now breaks kolla's master branch jobs, and we have to remove the
> > "horizon" and "neutron" entries from the upper-constraints.txt file;
> > check [1][2].
> >
> > I would like to know the correct way to install the horizon
> > development branch with the upper-constraints.txt file.
> >
> > [1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
> > [2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2
>
> Sorry, that is too much magic for me to be able to help you.
>
> What are those doing? How do you install today? Please give me some
> instructions.
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>   HRB 21284 (AG Nürnberg)
>   GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
>

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
From aj at suse.com  Fri Mar 16 12:28:31 2018
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 16 Mar 2018 13:28:31 +0100
Subject: [openstack-dev] [requirements][kolla] new requirements change on
	upper-constraints.txt break horizon & neutron
In-Reply-To: References: <8fa634f3-6421-7413-ce9f-e777cd1ae100@suse.com>
Message-ID:

On 2018-03-16 13:22, Jeffrey Zhang wrote:
> Kolla installs OpenStack packages from the master tarball on the kolla
> master branch[0].
>
> On stable branches, kolla installs from the tagged neutron tarball. But
> I think there will also be an issue here: what if I want to install
> neutron-12.0.1.tar.gz while neutron===12.0.0 is pinned in the
> upper-constraints.txt file?
>
> [0] http://tarballs.openstack.org/neutron/neutron-master.tar.gz

I see, thanks. Let me restore https://review.openstack.org/#/c/553030, it
should get us moving forward here - and then we can figure out whether
there are other options,

Andreas
--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From cgoncalves at redhat.com  Fri Mar 16 12:58:32 2018
From: cgoncalves at redhat.com (Carlos Goncalves)
Date: Fri, 16 Mar 2018 13:58:32 +0100
Subject: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB
	APIs with K8s
In-Reply-To: References: <27126DC0-9C72-4442-9F93-05B6E3745BED@openstack.org>
Message-ID:

On Fri, Mar 16, 2018 at 5:01 AM, Joe Topjian wrote:
> Hi Chris,
>
> I wear a number of hats related to this discussion, so I'll add a few
> points of view :)
>
>> It turns out that with Terraform, it's possible to tear down resources
>> in a way that causes Neutron to leak administrator-privileged resources
>> that cannot be deleted by non-privileged users. In discussions with the
>> Neutron and Octavia teams, it was strongly recommended that I move away
>> from the Neutron LBaaSv2 API and instead adopt Octavia. Vexxhost
>> graciously installed Octavia at my request and I was able to move past
>> this issue.
>
> Terraform hat! I want to slightly nit-pick this one since the words
> "leak" and "admin-priv" can sound scary: Terraform technically wasn't
> doing anything wrong. The problem was that Octavia was creating
> resources but not setting ownership to the tenant. When it came time to
> delete the resources, Octavia was correctly refusing, though it
> incorrectly created said resources.
>
> From reviewing the discussion, other parties were discovering this issue
> and patching in parallel to your discovery. Both xgerman and Vexxhost
> jumped in to confirm the behavior seen by Terraform. Vexxhost quickly
> applied the patch. It was a really awesome collaboration between
> yourself, dims, xgerman, and Vexxhost.
>
>> This highlights the first call to action for our public and private
>> cloud community: encouraging the rapid migration from older,
>> unsupported APIs to Octavia.
>
> Operator hat! The clouds my team and I run are more compute-based. Our
> users would be more excited if we increased our GPU pool than enhanced
> the networking services. With that in mind, when I hear it said that
> "Octavia is backwards-compatible with Neutron LBaaS v2", I think "well,
> cool, that means we can keep running Neutron LBaaS v2 for now" and focus
> our efforts elsewhere.
>
> I totally get why Octavia is advertised this way and it's very much
> appreciated. When I learned about Octavia, my knee-jerk reaction was "oh
> no, not another load balancer" but that was remedied when I learned it's
> more like LBaaSv2++. I'm sure we'll deploy Octavia some day, but it's
> not our primary focus and we can still squeak by with Neutron's LBaaS
> v2.
>
> If you *really* wanted us to deploy Octavia ASAP, then a migration guide
> would be wonderful. I read over the "Developer / Operator Quick Start
> Guide" and found it very well written! I groaned over having to build an
> image but I also really appreciate the image builder script. If there
> can't be pre-built images available for testing, the second-best option
> is that script.

Periodic builds of Ubuntu and CentOS pre-built test images are coming soon:
https://review.openstack.org/#/c/549259/

Periodic builds by the RDO project:
https://images.rdoproject.org/octavia/master/
(https://review.rdoproject.org/r/#/c/11805/)

>> This highlights a second call to action for the SDK and provider
>> developers: recognizing the end of life of the Neutron LBaaSv2
>> API[4][5] and adding support for more advanced Octavia features.
>
> Gophercloud hat! We've supported Octavia for a few months now, but
> purely by having the load-balancer client piggyback off of the Neutron
> LBaaS v2 API. We made the decision this morning, coincidentally enough,
> to have Octavia be a first-class service peered with Neutron rather than
> think of Octavia as a Neutron/network child. This will allow Octavia to
> fully flourish without worry of affecting the existing LBaaS v2 API
> (which we'll still keep around separately).
>
> Thanks,
> Joe
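For anyone who wants to try the image-building step being discussed, the
rough shape of it is sketched below (the paths and flags are from memory
of the Octavia tree, so double-check them against the quick start guide):

  git clone https://git.openstack.org/openstack/octavia
  cd octavia/diskimage-create
  ./diskimage-create.sh -i ubuntu    # writes amphora-x64-haproxy.qcow2

  # upload it where the Octavia controller can find it; the 'amphora'
  # tag is what octavia.conf matches on by default
  openstack image create --file amphora-x64-haproxy.qcow2 \
      --disk-format qcow2 --container-format bare \
      --tag amphora amphora-x64-haproxy

Once pre-built images are published, the first three steps collapse into a
single download, which is presumably the point being made above.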
From sean.mcginnis at gmx.com  Fri Mar 16 13:34:28 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 16 Mar 2018 08:34:28 -0500
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the
	debug option at runtime
In-Reply-To: References: Message-ID: <51D9C75B-8062-4B15-9E7E-D789751834EB@gmx.com>

> On Mar 16, 2018, at 04:02, Jean-Philippe Evrard wrote:
>
> Hello,
>
> For OpenStack-Ansible, we don't need to do anything for that community
> goal. I am not sure how we can remove our name from the storyboard, so I
> just inform you here.
>
> Jean-Philippe Evrard (evrardjp)

I believe you can just mark the task as done if there is no additional
work required.

From fungi at yuggoth.org  Fri Mar 16 13:43:00 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 16 Mar 2018 13:43:00 +0000
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the
	debug option at runtime
In-Reply-To: <51D9C75B-8062-4B15-9E7E-D789751834EB@gmx.com>
References: <51D9C75B-8062-4B15-9E7E-D789751834EB@gmx.com>
Message-ID: <20180316134300.gjjcmu475zuosv5c@yuggoth.org>

On 2018-03-16 08:34:28 -0500 (-0500), Sean McGinnis wrote:
> On Mar 16, 2018, at 04:02, Jean-Philippe Evrard wrote:
>
> > For OpenStack-Ansible, we don't need to do anything for that
> > community goal. I am not sure how we can remove our name from
> > the storyboard, so I just inform you here.
>
> I believe you can just mark the task as done if there is no
> additional work required.

Yeah, either "merged" or "invalid" states should work. I'd lean toward
suggesting "invalid" in this case since the task did not require any
changes merged to your source code.
--
Jeremy Stanley

From Louie.Kwan at windriver.com  Fri Mar 16 14:29:51 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Fri, 16 Mar 2018 14:29:51 +0000
Subject: [openstack-dev] [devstack] stable/queens: How to configure devstack
	to use openstacksdk===0.11.3 and os-service-types===1.1.0
Message-ID: <47EFB32CD8770A4D9590812EE28C977E96280FC1@ALA-MBD.corp.ad.wrs.com>

In the stable/queens branch, openstacksdk 0.11.3 and os-service-types
1.1.0 are the versions described in openstack's upper-constraints.txt:

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do

> git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens

and then run stack.sh, we will see it is using openstacksdk-0.12.0 and
os_service_types-1.2.0.

Having said that, we need the older versions. How do we configure devstack
to use openstacksdk===0.11.3 and os-service-types===1.1.0?

Thanks.
Louie
From simon.leinen at switch.ch  Fri Mar 16 14:40:44 2018
From: simon.leinen at switch.ch (Simon Leinen)
Date: Fri, 16 Mar 2018 15:40:44 +0100
Subject: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB
	APIs with K8s
In-Reply-To: (Joe Topjian's message of "Thu, 15 Mar 2018 22:01:10 -0600")
References: <27126DC0-9C72-4442-9F93-05B6E3745BED@openstack.org>
Message-ID:

Joe Topjian writes:
> Terraform hat! I want to slightly nit-pick this one since the words
> "leak" and "admin-priv" can sound scary: Terraform technically wasn't
> doing anything wrong. The problem was that Octavia was creating
> resources but not setting ownership to the tenant. When it came time
> to delete the resources, Octavia was correctly refusing, though it
> incorrectly created said resources.

I dunno... if Octavia created those lower-layer resources on behalf of
the user, then Octavia shouldn't refuse to remove those resources when
the same user later asks it to - independent of what ownership Octavia
chose to apply to those resources. (It would be different if Neutron or
Nova were asked by the user directly to remove the resources created by
Octavia.)

> From reviewing the discussion, other parties were discovering this
> issue and patching in parallel to your discovery. Both xgerman and
> Vexxhost jumped in to confirm the behavior seen by Terraform. Vexxhost
> quickly applied the patch. It was a really awesome collaboration
> between yourself, dims, xgerman, and Vexxhost.

Speaking as another operator: Does anyone seriously expect us to deploy
a service (Octavia) in production at a stage where it exhibits this kind
of behavior? Having to clean up leftover resources because the users who
created them cannot remove them is not my idea of fun. (And note that,
like most operators, we're a few releases behind, so we might not even
get access to backports IF this gets fixed.)

In our case we're not a compute-oriented cloud provider, and some of our
customers would really like to have a good LBaaS as part of our IaaS
offering. But our experience with this was so-so in the past - for
example, we had to help customers migrate from LBaaSv1 to LBaaSv2. Our
resources (people, tolerance to user-affecting bugs and forced upgrades,
etc.) are limited, so we've become careful.

For users who want to use Kubernetes on our OpenStack service, we'd
rather point them to Kubernetes's Ingress controller, which performs the
LB function without requiring much from the underlying cloud. Seems like
a fine solution.
--
Simon.

From jim at jimrollenhagen.com  Fri Mar 16 14:41:37 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 16 Mar 2018 14:41:37 +0000
Subject: [openstack-dev] [nova][placement] update_provider_tree design
	updates
In-Reply-To: References: <90a9be02-cbba-cc7a-9275-9c7060797c2a@fried.cc>
Message-ID:

> ...then there's no way I can know ahead of time what all those might be.
> (In particular, if I want to support new devices without updating my
> code.) I.e. I *can't* write the corresponding
> provider_tree.remove_trait(...) condition. Maybe that never becomes a
> real problem because we'll never need to remove a dynamic trait. Or
> maybe we can tolerate "leakage". Or maybe we do something
> clever-but-ugly with namespacing (if
> trait.startswith('CUSTOM_DEV_VENDORID_')...). We're consciously kicking
> this can down the road.
>
> And note that this "dynamic" problem is likely to be a much larger
> portion (possibly all) of the domain when we're talking about aggregates.
>
> Then there's ironic, which is currently set up to get its traits blindly
> from Inspector. So Inspector not only needs to maintain the "owned
> traits" list (with all the same difficulties as above), but it must also
> either a) communicate that list to ironic virt so the latter can manage
> the add/remove logic; or b) own the add/remove logic and communicate the
> individual traits with a +/- on them so virt knows whether to add or
> remove them.

Just a nit: Ironic doesn't necessarily get its traits from inspector.
Ironic gets them from *some* API client, which may be an operator, or
inspector, or something else. Inspector is totally optional.

Anyway, I'm inclined to kick this can down the road a bit, as you
mention. I imagine that the ideal situation is for Ironic to remove
traits from placement on the fly when they are removed in Ironic. Any
other traits that nova-compute knows about (but Ironic doesn't),
nova-compute can manage the removal the same way as another virt driver.
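To illustrate the operator-as-API-client case Jim describes, traits can
be managed on a node directly from the CLI (the trait name here is made
up, and node traits need a fairly recent bare metal API microversion,
1.37 if memory serves):

  export OS_BAREMETAL_API_VERSION=1.37
  openstack baremetal node add trait node-0 CUSTOM_FAST_NIC
  openstack baremetal node remove trait node-0 CUSTOM_FAST_NIC

Either an operator, inspector, or any other client can drive these calls,
which is exactly why ironic itself cannot assume who "owns" a trait.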
From mriedemos at gmail.com  Fri Mar 16 14:53:25 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 16 Mar 2018 09:53:25 -0500
Subject: [openstack-dev] [devstack] stable/queens: How to configure devstack
	to use openstacksdk===0.11.3 and os-service-types===1.1.0
In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E96280FC1@ALA-MBD.corp.ad.wrs.com>
References: <47EFB32CD8770A4D9590812EE28C977E96280FC1@ALA-MBD.corp.ad.wrs.com>
Message-ID: <27224a7b-7821-bbf3-7aea-e37269f15761@gmail.com>

On 3/16/2018 9:29 AM, Kwan, Louie wrote:
> In the stable/queens branch, openstacksdk 0.11.3 and os-service-types
> 1.1.0 are the versions described in openstack's upper-constraints.txt:
>
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297
>
> If I do
>
>> git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
>
> and then run stack.sh, we will see it is using openstacksdk-0.12.0 and
> os_service_types-1.2.0.
>
> Having said that, we need the older versions. How do we configure
> devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0?

You could try setting this in your local.conf:

https://github.com/openstack-dev/devstack/blob/master/stackrc#L547

GITBRANCH["python-openstacksdk"]=0.11.3

But I don't see a similar entry for os-service-types. I don't know if ^
will work, but it's what I'd try.

--
Thanks,
Matt
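Spelling that suggestion out, the local.conf fragment would look something
like the following (untested, as Matt says; if stackrc overrides the
direct assignment, look for the corresponding *_BRANCH variable in
stackrc and set that instead):

  [[local|localrc]]
  # pin the ref devstack checks out for the SDK; the value is handed to
  # git checkout, so a tag should work as well as a branch name
  GITBRANCH["python-openstacksdk"]=0.11.3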
From mriedemos at gmail.com  Fri Mar 16 15:04:10 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 16 Mar 2018 10:04:10 -0500
Subject: [openstack-dev] [nova] Isn't a hook for validating resource names
	(name/hostname for instances) required?
In-Reply-To: References: Message-ID: <86403cd1-776c-bb03-d69f-9efefb8c3e0f@gmail.com>

On 3/16/2018 1:22 AM, 양유석 wrote:
> Our company operates OpenStack clusters and we have a legacy DNS system
> that needs to validate hostnames more strictly, including RFC952
> compliance. Also, our operators demand unique hostnames within a region
> (we do not have tenant networks yet, only an L3 network). So for those
> reasons, we maintained custom validation logic for instance names.
>
> But as everyone knows, maintaining custom code is a burden, so I am
> trying to find the applicable location for this requirement.
>
> IMHO, since there is schema validation for every resource, if a
> validation hook API were provided we could happily use it. Does anyone
> have a similar issue? Any advice will be appreciated.

There is a config option, "osapi_compute_unique_server_name_scope", which
you can set to 'global' which should enforce in the DB layer that
instance hostnames are unique. However, thinking about this now, it's
validated down in the cell DB layer, which is not global, so this likely
doesn't work if you're using multiple cells, but I doubt you are right
now.

Another related option is "multi_instance_display_name_template", but I
see that's deprecated now and I'm not aware of a proposed alternative for
that option.

The names used should conform to RFC952, see:
https://github.com/openstack/nova/blob/7cbb5764d499dfdc90ef4a963daf217d58c840d4/nova//utils.py#L543

--
Thanks,
Matt

From openstack-dev at storpool.com  Fri Mar 16 15:33:30 2018
From: openstack-dev at storpool.com (Peter Penchev)
Date: Fri, 16 Mar 2018 17:33:30 +0200
Subject: [openstack-dev] [nova] New image backend: StorPool
Message-ID:

Hi,

A couple of years ago I created a Nova spec for the StorPool image
backend: https://review.openstack.org/#/c/137830/ There was some
discussion, but then our company could not immediately allocate the
resources to write the driver itself, so the spec languished and was
eventually abandoned.

Now that StorPool has a fully maintained Cinder driver and also a fully
maintained Nova volume attachment driver, both included in the Queens
release, and a Cinder third-party CI that runs all the tests tagged with
"volume", including some simple Nova tests, we'd like to resurrect this
spec and implement a Nova image backend, too. Actually, it looks like due
to customer demand we will write the driver anyway and possibly maintain
it outside the tree, but it would be preferable (and, obviously, easier
to catch up with wide-ranging changes) to have it in.

Would there be any major opposition to adding a StorPool shared storage
image backend, so that our customers are not limited to volume-backed
instances? Right now, creating a StorPool volume and snapshot from a
Glance image and then booting instances from that snapshot works great,
but in some cases, including some provisioning and accounting systems on
top of OpenStack, it would be preferable to go the Nova way and let the
hypervisor think that it has a local(ish) image to work with, even though
it's on shared storage anyway. This will go hand-in-hand with our planned
Glance image driver, so that creating a new instance from a Glance image
would happen instantaneously (create a StorPool volume from the StorPool
snapshot corresponding to the Glance image).

If this will help the decision, we do have plans for adding a full-blown
Nova third-party CI in the near future, so that both our volume
attachment driver, this driver, and our upcoming Glance image driver will
see some more testing.

Thanks in advance, and keep up the great work!

Best regards,
Peter
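As a point of comparison, the volume-backed flow Peter says works great is
just a couple of CLI calls today (the image, flavor and network names are
placeholders):

  # create a bootable volume from a Glance image; with the StorPool
  # Cinder driver this is a snapshot clone, so it is nearly instant
  openstack volume create --image centos7 --size 10 boot-vol
  openstack server create --volume boot-vol --flavor m1.small \
      --network private vm1

The Nova image backend being proposed would hide the first step from tools
that only know how to pass an image to "server create".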
From ed at leafe.com  Fri Mar 16 15:53:38 2018
From: ed at leafe.com (Ed Leafe)
Date: Fri, 16 Mar 2018 10:53:38 -0500
Subject: [openstack-dev] [api] APAC-friendly API-SIG meeting times
In-Reply-To: References: <6D342053-79C2-4AAB-8F8B-6687F8CA6C29@leafe.com>
Message-ID: <28230616-4F62-429A-8EBD-D88237B56DA0@leafe.com>

On Mar 15, 2018, at 10:31 PM, Gilles Dubreuil wrote:
>
> Any chance we can progress on this one?
>
> I believe there are not enough participants to split the API SIG
> meeting in two, and with the same few people spread across both meetings
> it would likely be pretty inefficient. Therefore I think changing the
> main meeting to another time might be better, but I could be wrong.
>
> Anyway, in any case I can't make progress with a meeting in the middle
> of the night for me, so I would appreciate it if we could re-activate
> this discussion.

What range of times would work for you?

-- Ed Leafe

From rosmaita.fossdev at gmail.com  Fri Mar 16 15:58:50 2018
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Fri, 16 Mar 2018 11:58:50 -0400
Subject: [openstack-dev] [glance] bug squad meetings start Monday
Message-ID:

The Glance Bug Squad will meet biweekly on Monday of even-numbered ISO
weeks at 10:00 UTC in #openstack-glance. The meeting will last 45 minutes.

The first meeting will be 19 March 2018.

Agenda and notes:
https://etherpad.openstack.org/p/glance-bug-squad-meeting

From s at cassiba.com  Fri Mar 16 16:00:04 2018
From: s at cassiba.com (Samuel Cassiba)
Date: Fri, 16 Mar 2018 09:00:04 -0700
Subject: [openstack-dev] [chef] State of the Kitchen - 2nd Edition
Message-ID:

This is the second edition of what is going on in Chef OpenStack. The
goal is to give a quick overview to see our progress and what is on the
menu. Feedback is always welcome, as this is an iterative thing.

Appetizers
========
=> Pike has been branched! Supermarket has also received a round of
updates. https://supermarket.chef.io/users/openstack
=> chef-client 13.8 has been released, allowing the scenarios to continue
tracking the latest 13 series.
https://discourse.chef.io/t/chef-client-13-8-released/12652

Entrees
======
=> Queens development has commenced. Preliminary lab testing has yielded
positive results in Test Kitchen. Most changes seem to revolve around
deprecation chasing. https://review.openstack.org/550963 &
https://review.openstack.org/#/q/status:open+topic:queens_updates
=> Nova is continuing the trend of operating as an Apache web service.
https://review.openstack.org/552299

Desserts
=======
=> The client (fog wrapper) and dns (Designate) cookbooks will be coming
home after stabilizing in Pike.
=> Chef 14 and ChefDK 3 are a thing next month. A heads-up will be sent
to this ML before this enters the gate.
https://blog.chef.io/2018/02/16/preparing-for-chef-14-and-chef-12-end-of-life/
=> More to come with upgrades. Stay tuned for specs and patches.

On The Menu
===========
=> Buffalo Chicken Dip
-- 3-4 raw chicken breasts (flash-frozen gives a slightly different mouth
   feel. it still makes food, so, you do you, boo)
-- 8 ounces (226g) cream cheese / Neufchatel
-- 1 cup (128g) hot sauce (Frank's RedHot recommended. substitute for
   your own preferred pepper sauce)
-- 1 ounce (28g) dry ranch seasoning (substitute for store-bought powder,
   or salad dressing from a bottle, if you must - ranch or bleu cheese
   works here)
-- 4 ounces (113g) butter (grass-fed recommended because delicious)

Optional:
-- 4 slices cooked and crumbled (streaky) bacon
-- Cheese (shredded or cubed for melting consistency)

Add the chicken to a slowcooker in a single layer, if you have room. Add
hot sauce, butter, and ranch right on top of the chicken. Cook on high
for 4 hours. Remove from heat, drain juices, reserving them. Shred the
chicken. Add cream cheese and incorporate thoroughly. Reincorporate the
juices, gradually and thoroughly, taking care not to obliterate the
chicken, unless you like tangy, cheesy chicken mash.

Serve as an appetizer, or dig in with a fork.
Your humble cook,
Samuel Cassiba

From mriedemos at gmail.com  Fri Mar 16 16:00:21 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 16 Mar 2018 11:00:21 -0500
Subject: [openstack-dev] [nova] New image backend: StorPool
In-Reply-To: References: Message-ID: <5ec5db72-df4c-a100-1e54-6b1fbea5bf01@gmail.com>

On 3/16/2018 10:33 AM, Peter Penchev wrote:
> Would there be any major opposition to adding a StorPool shared
> storage image backend, so that our customers are not limited to
> volume-backed instances?
> [...]

Ask the EMC ScaleIO team how well this has gone for them:

https://review.openstack.org/#/c/407440/

There has been a lot of discussion about a generic Cinder image backend
driver in nova so that we don't need to have the same storage backend
driver explosion that Cinder has, and we could also then replace the nova
lvm/rbd image backends and just use Cinder volumes for volume-backed
instances.

I could find lots of discussion references about this, but basically no
one is planning to step up to work on that, and the existing libvirt
imagebackend code is a mess, so piling more backends into the mix isn't
very attractive.

Anyway, just FYI on all of that history.

> If this will help the decision, we do have plans for adding a
> full-blown Nova third-party CI in the near future, so that both our
> volume attachment driver, this driver, and our upcoming Glance image
> driver will see some more testing.

3rd party CI would be a requirement to get it added anyway, it's not
really an option.

--
Thanks,
Matt

From sean.mcginnis at gmx.com  Fri Mar 16 16:04:06 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 16 Mar 2018 11:04:06 -0500
Subject: [openstack-dev] [glance] New image backend: StorPool
In-Reply-To: References: Message-ID: <20180316160405.GA22212@sm-xps>

Just updating the subject line tag to glance. ;)

On Fri, Mar 16, 2018 at 05:33:30PM +0200, Peter Penchev wrote:
> Hi,
>
> A couple of years ago I created a Nova spec for the StorPool image
> backend: https://review.openstack.org/#/c/137830/
> [...]
From Kevin.Fox at pnnl.gov  Fri Mar 16 16:03:50 2018
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 16 Mar 2018 16:03:50 +0000
Subject: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB
	APIs with K8s
Message-ID: <1A3C52DFCD06494D8528644858247BF01C08629E@EX10MBOX03.pnnl.gov>

What about the other way around? An Octavia plugin that simply manages
k8s Ingress objects on a k8s cluster? Depending on how operators are
deploying OpenStack, this might be a much easier way to deploy Octavia.

Thanks,
Kevin

________________________________
From: Lingxian Kong [anlin.kong at gmail.com]
Sent: Friday, March 16, 2018 5:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using
the LB APIs with K8s

Just FYI, L7 policy/rule support for Neutron LBaaS v2 and Octavia is on
its way[1]. Because we will have both Octavia and Magnum deployed on our
OpenStack-based public cloud this year, an ingress controller for
OpenStack (Octavia) is also on our TODO list; any kind of collaboration
is welcome :-)

[1]: https://github.com/gophercloud/gophercloud/pull/833

Cheers,
Lingxian Kong (Larry)

On Fri, Mar 16, 2018 at 5:01 PM, Joe Topjian wrote:
> Hi Chris,
>
> I wear a number of hats related to this discussion, so I'll add a few
> points of view :)
> [...]
From chris at openstack.org  Fri Mar 16 16:08:00 2018
From: chris at openstack.org (Chris Hoge)
Date: Fri, 16 Mar 2018 09:08:00 -0700
Subject: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB
	APIs with K8s
In-Reply-To: References: <27126DC0-9C72-4442-9F93-05B6E3745BED@openstack.org>
Message-ID:

> On Mar 16, 2018, at 7:40 AM, Simon Leinen wrote:
>
> I dunno... if Octavia created those lower-layer resources on behalf of
> the user, then Octavia shouldn't refuse to remove those resources when
> the same user later asks it to - independent of what ownership Octavia
> chose to apply to those resources.
> [...]
> Speaking as another operator: Does anyone seriously expect us to deploy
> a service (Octavia) in production at a stage where it exhibits this
> kind of behavior? Having to clean up leftover resources because the
> users who created them cannot remove them is not my idea of fun.
> [...]
Simon and Joe, one thing that I was not clear on (again, this goes back
to the statement that mistakes I make are my own) is that this behavior,
admin-scoped resources being created and then not released, was seen in
the Neutron LBaaSv2 service. The fix _was_ to deploy Octavia and not use
the Neutron API.

As such, I'm reluctant to use Terraform (or really, any other SDK) to
deploy load balancers against the Neutron API. I don't want to be leaking
a bunch of resources I can't delete. It's not good for the apps I'm
trying to run and it's definitely not good for the cloud provider. I have
much more confidence developing against the Octavia service. We figured
this out as a group effort between Vexxhost, Joe, and the Octavia team,
and I'm exceptionally grateful to all of them for helping me to sort
those issues out.

Now, I ultimately dropped it in my own code because I can't rely on the
existence of Octavia across all clouds. It had nothing to do with either
the reliability of the GopherCloud/Terraform SDKs or Octavia itself.

So, to repeat: leaking admin-scoped resources is a Neutron LBaaSv2 bug,
not an Octavia bug.

> In our case we're not a compute-oriented cloud provider, and some of
> our customers would really like to have a good LBaaS as part of our
> IaaS offering.
> [...]
> For users who want to use Kubernetes on our OpenStack service, we'd
> rather point them to Kubernetes's Ingress controller, which performs
> the LB function without requiring much from the underlying cloud. Seems
> like a fine solution.
> --
> Simon.
From melwittt at gmail.com  Fri Mar 16 16:23:11 2018
From: melwittt at gmail.com (melanie witt)
Date: Fri, 16 Mar 2018 09:23:11 -0700
Subject: [openstack-dev] [nova] New image backend: StorPool
In-Reply-To: References: Message-ID: <7fc727c0-e375-de6f-ba8b-585512324830@gmail.com>

On Fri, 16 Mar 2018 17:33:30 +0200, Peter Penchev wrote:
> Would there be any major opposition to adding a StorPool shared
> storage image backend, so that our customers are not limited to
> volume-backed instances?
> [...]

Can you be more specific about what is limiting you when you use
volume-backed instances?

We've been kicking around the idea of beefing up support for
boot-from-volume in nova so that "automatic boot-from-volume for instance
create" works well enough that we could consider boot-from-volume the
first-class way to support the vast variety of Cinder storage backends,
letting Cinder handle the details instead of trying to re-implement
support for various storage backends in nova on a selective basis. I'd
like to better understand what is lacking for you when you use
boot-from-volume to leverage StorPool, and determine whether it's
something we could address in nova.

Cheers,
-melanie
From openstack-dev at storpool.com  Fri Mar 16 16:27:07 2018
From: openstack-dev at storpool.com (Peter Penchev)
Date: Fri, 16 Mar 2018 18:27:07 +0200
Subject: [openstack-dev] [nova] New image backend: StorPool
In-Reply-To: <20180316160405.GA22212@sm-xps>
References: <20180316160405.GA22212@sm-xps>
Message-ID: <20180316162707.GB4118@office.storpool.com>

On Fri, Mar 16, 2018 at 11:04:06AM -0500, Sean McGinnis wrote:
> Just updating the subject line tag to glance. ;)

Errr, sorry, but no, this is for a Nova image backend (yes, namespace
overload, I know) - the driver that lets a Nova host create "local"
images for non-volume-backed instances.

> On Fri, Mar 16, 2018 at 05:33:30PM +0200, Peter Penchev wrote:
> > Hi,
> >
> > A couple of years ago I created a Nova spec for the StorPool image
> > backend: https://review.openstack.org/#/c/137830/
> > [...]

Best regards,
Peter
From Eric.Young at dell.com  Fri Mar 16 16:27:16 2018
From: Eric.Young at dell.com (young, eric)
Date: Fri, 16 Mar 2018 16:27:16 +0000
Subject: [openstack-dev] [nova] New image backend: StorPool
In-Reply-To: <5ec5db72-df4c-a100-1e54-6b1fbea5bf01@gmail.com>
References: <5ec5db72-df4c-a100-1e54-6b1fbea5bf01@gmail.com>
Message-ID: <9D73A379-31E2-43C6-B350-90B36CAE73F5@emc.com>

I can provide some insights from the Dell EMC ScaleIO side.

As you can see from the patch that Matt pointed to, it is possible to add
ephemeral/image backend support to Nova. That said, it is not easy and
[IMHO] prone to error. There is no 'driver model' like there is in
Cinder, where you just implement a spec and run tests. You have to go
into the Nova code itself and add a whole bunch of logic specific to your
backend. Once complete, and once you have your CI set up to run the Nova
test suite, getting reviews completed is tough due to all of the
different priorities. You'll also need to keep an eye on other things
going into Nova and be sure that your CI continues to report success.

If I had it to do all over again, I would strongly suggest that
developers looking to add 'yet another backend' combine their resources
and tackle the cleanup of the existing libvirt code as well as the
generic Cinder backend support that Matt mentioned.

The patch below, which adds ephemeral/image support for ScaleIO, has not
yet merged upstream; I am currently working with ScaleIO customers to
determine how important it really is. I may find myself volunteering for
the generic approach as I think it is a much better route.

Eric

On 3/16/18, 12:00 PM, "Matt Riedemann" wrote:
> Ask the EMC ScaleIO team how well this has gone for them:
>
> https://review.openstack.org/#/c/407440/
> [...]

From dms at danplanet.com  Fri Mar 16 16:39:14 2018
From: dms at danplanet.com (Dan Smith)
Date: Fri, 16 Mar 2018 09:39:14 -0700
Subject: [openstack-dev] [nova] New image backend: StorPool
In-Reply-To: <7fc727c0-e375-de6f-ba8b-585512324830@gmail.com> (melanie
	witt's message of "Fri, 16 Mar 2018 09:23:11 -0700")
References: <7fc727c0-e375-de6f-ba8b-585512324830@gmail.com>
Message-ID:

> Can you be more specific about what is limiting you when you use
> volume-backed instances?

Presumably it's because you're taking a trip over iscsi instead of using
the native attachment mechanism for the technology that you're using? If
so, that's a valid argument, but it's hard to see the tradeoff working
in favor of adding all these drivers to nova as well.

If cinder doesn't support backend-specific connectors, maybe that's
something we could work on? People keep saying that "cinder is where I
put my storage, that's how I want to back my instances" when it comes to
justifying BFV, and that argument is starting to resonate with me more
and more.

--Dan
From doug at doughellmann.com  Fri Mar 16 16:56:54 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 16 Mar 2018 12:56:54 -0400
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the
	debug option at runtime
In-Reply-To: <20180316134300.gjjcmu475zuosv5c@yuggoth.org>
References: <51D9C75B-8062-4B15-9E7E-D789751834EB@gmx.com>
	<20180316134300.gjjcmu475zuosv5c@yuggoth.org>
Message-ID: <1521219390-sup-9639@lrrr.local>

Excerpts from Jeremy Stanley's message of 2018-03-16 13:43:00 +0000:
> On 2018-03-16 08:34:28 -0500 (-0500), Sean McGinnis wrote:
> > On Mar 16, 2018, at 04:02, Jean-Philippe Evrard wrote:
> >
> > > For OpenStack-Ansible, we don't need to do anything for that
> > > community goal. I am not sure how we can remove our name from
> > > the storyboard, so I just inform you here.
> >
> > I believe you can just mark the task as done if there is no
> > additional work required.
>
> Yeah, either "merged" or "invalid" states should work. I'd lean
> toward suggesting "invalid" in this case since the task did not
> require any changes merged to your source code.

Yes, we've been using "invalid" to indicate that no work was needed.

Doug
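Since this thread is about the runtime toggle itself, here is a rough
sketch of what the operator-facing flow looks like once a service has
adopted oslo.config's mutable options (the config path and service name
are only examples, and this applies to services running standalone;
WSGI servers treat SIGHUP differently, as discussed later in this
digest):

  # flip the option in the config file...
  sed -i 's/^#\?debug *=.*/debug = True/' /etc/nova/nova.conf

  # ...then ask the running service to re-read its config; with mutable
  # config enabled, options marked mutable (like debug) are reloaded
  # without a restart
  kill -HUP $(pgrep -f nova-compute)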
From colleen at gazlene.net  Fri Mar 16 17:00:23 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Fri, 16 Mar 2018 18:00:23 +0100
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 12 March
	2018
Message-ID: <1521219623.4124162.1305711056.1B2733B2@webmail.messagingengine.com>

# Keystone Team Update - Week of 12 March 2018

## News

### Keystone Admin-ness: the Future

At the Denver PTG, while grappling with the concept of admin-ness, we had
a moment of clarity when we realized that there were some classes of
admin actions that could be described as "global" across keystone
projects, like listing all servers in all projects, and other admin
actions that were better classified as "system" actions that operated on
no project at all, like creating endpoints. From this came the new system
scope[1] for operating on system-level APIs. But we have yet to properly
deal with the global-across-projects case. There are conflicting views
within the keystone team on how best to support this going forward[2],
and whether we should enable system-scoped tokens to work on
project-level operations or if we can lean on Hierarchical Multitenancy
to enable this. Somewhat intermixed in this issue is how, or whether, to
deal with cleaning up resources in other services that are tied to
keystone projects when the service has no insight into keystone
internals. If you have thoughts on these issues, please discuss on Adam's
thread[3].

[1] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
[2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-13.log.html#t2018-03-13T22:42:44
[3] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128302.html

### Edge Computing

We've previously gotten requests to support syncing data across different
keystone deployments at the application level rather than at the data
storage level[4]. As Edge Computing gains stronger footing in our
community[5], we need to start thinking about use cases like this and how
to support them. We discussed this a bit[6] but we are a ways off from
having a concrete plan. If you have thoughts on this, please reach out to
us!

[4] https://review.openstack.org/#/c/323499/
[5] http://markvoelker.github.io/blog/dublin-ptg-edge-sessions/
[6] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-13.log.html#t2018-03-13T13:50:03

### JWT

We have a spec proposed[7] to implement JSON Web Tokens as a new token
format similar to fernet. We discussed some of the particulars[8] with
regard to whether the token needs to be encrypted and token size
considerations. Implementing this might make a good Outreachy project
since it is interesting and reasonably self-contained, but we will want
to nail down these details before dumping it on an intern.

[7] https://review.openstack.org/#/c/541903/
[8] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-13.log.html#t2018-03-13T20:03:56

### Milestone Planning Meeting

We had a conference call meeting to organize our Rocky roadmap[9] and do
some sprint-like planning for the first milestone. If you're working on
something in the roadmap, please feel free to make updates to the Trello
board as needed.

[9] https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap

### Outreachy projects

OpenStack didn't get into GSOC this year, but we still have a chance to
submit applications for Outreachy[10]. We have some internship ideas[11]
that we should add to and/or finalize ASAP. We need to have mentors
assigned up-front who should submit the project idea themselves, but even
if there is only one name attached to a project, we found last round that
co-mentoring can be pretty successful for both the intern and the
mentors.

[10] https://www.outreachy.org/communities/cfp/openstack/
[11] https://etherpad.openstack.org/p/keystone-internship-ideas

## Open Specs

Search query: https://goo.gl/eyTktx

Since last week, a new spec has been proposed to provide proper usable
multi-factor auth[12]. In total we have five specs proposed for Rocky
that are awaiting feedback. We've also had a revival of a spec currently
proposed to the backlog to improve OpenIDC support[13].

[12] https://review.openstack.org/#/c/553670
[13] https://review.openstack.org/#/c/373983

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 13 changes this week. One of these was a significant bugfix to
the template catalog backend[14]. We had postponed merging this with the
idea that we might create a whole new, better, file-based catalog
backend[15] but work on that had stalled (and is being picked up again).

[14] https://review.openstack.org/#/c/482364/
[15] https://review.openstack.org/#/c/483514/

## Changes that need Attention

Search query: https://goo.gl/tW5PiH

There are 36 changes that are passing CI, not in merge conflict, have no
negative reviews and aren't proposed by bots.

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

We added our milestone goals to the release schedule[16]. The next
deadline is the spec proposal freeze the week of April 16.

[16] https://review.openstack.org/#/c/553502/

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad:
https://etherpad.openstack.org/p/keystone-team-newsletter
From openstack-dev at storpool.com  Fri Mar 16 17:24:01 2018
From: openstack-dev at storpool.com (Peter Penchev)
Date: Fri, 16 Mar 2018 19:24:01 +0200
Subject: [openstack-dev] [nova] New image backend: StorPool
In-Reply-To: <7fc727c0-e375-de6f-ba8b-585512324830@gmail.com>
References: <7fc727c0-e375-de6f-ba8b-585512324830@gmail.com>
Message-ID: <20180316172401.GD4118@office.storpool.com>

On Fri, Mar 16, 2018 at 09:23:11AM -0700, melanie witt wrote:
> Can you be more specific about what is limiting you when you use
> volume-backed instances?

It's not a problem for our current customers, but we had an OpenStack PoC
last year for a customer who was using some proprietary
provisioning+accounting system on top of OpenStack (sorry, I really can't
remember the name). That particular system simply couldn't be bothered to
create a volume-backed instance, so we "helped" by doing an insane hack:
writing an almost-pass-through Compute API that would intercept the boot
request and DTRT behind the scenes (send a modified request to the real
Compute API), and then also writing an almost-pass-through Identity API
that would intercept the requests to get the Compute API's endpoint and
slip our API's address in there. The customer ended up not using
OpenStack for completely unrelated reasons, but there was certainly at
least one instance of this.

> We've been kicking around the idea of beefing up support for
> boot-from-volume in nova such that "automatic boot-from-volume for
> instance create" works well enough that we could consider
> boot-from-volume the first-class way to support the vast variety of
> cinder storage backends and let cinder handle the details instead of
> trying to re-implement support of various storage backends in nova on a
> selective basis. I'd like to better understand what is lacking for you
> when you use boot-from-volume to leverage StorPool and determine whether
> it's something we could address in nova.

I'll see if I can remember anything more (ISTR also another case of
something that couldn't boot a volume-backed instance, but I really
cannot remember even what it was). The problem was certainly not with
OpenStack proper, but with other systems built on top of it.

Best regards,
Peter

From stdake at cisco.com  Fri Mar 16 18:44:50 2018
From: stdake at cisco.com (Steven Dake (stdake))
Date: Fri, 16 Mar 2018 18:44:50 +0000
Subject: [openstack-dev] [kolla] Dropping off kolla-kubernetes core reviewer
	team
Message-ID:

Hey folks,

As many core reviewers in Kolla core teams may already know, I am focused
on OpenStack board of director work and adjacent community work. This
involves bridging the OpenStack ecosystem and its various strategic focus
areas with adjacent community projects that make sense. My work in this
area has led to my technical involvement in a Layer 7 networking project
(https://istio.io), specifically around the multicloud use case and
connecting OpenStack public/private clouds with other cloud providers.

As a result, I don't have time to commit to properly furthering the
development of kolla-kubernetes nor providing reviews for this specific
Kolla project deliverable.

I do plan to stay involved in Kolla as a reviewer in the other Kolla core
teams, and I am deeply committed to furthering OpenStack's strategic
focus areas in my board of director's service. If you are curious about
these SFAs, you might consider reading:
https://blogs.cisco.com/cloud/openstack-solving-for-integration-in-open-source-adjacent-communities

Regards,
-steve
From a.vamsikrishna at ericsson.com  Fri Mar 16 19:25:58 2018
From: a.vamsikrishna at ericsson.com (A Vamsikrishna)
Date: Fri, 16 Mar 2018 19:25:58 +0000
Subject: [openstack-dev] [Qos] Unable to apply qos policy with dscp marking
	rule to a port
Message-ID:

Hi Manjeet / Isaku,

I am unable to apply a QoS policy with a DSCP marking rule to a port:

1. Create a QoS policy
2. Create a DSCP marking rule on the created QoS policy
3. Apply the above created policy to a port

  openstack network qos rule set --dscp-mark 22 dscp-marking 115e4f70-8034-41768fe9-2c47f8878a7d
  HttpException: Conflict (HTTP 409) (Request-ID: req-da7d8998-9d8c-4aea-a10b-326cc21b608e),
  Rule dscp_marking is not supported by port 115e4f70-8034-41768fe9-2c47f8878a7d
  stack at pike-ctrl:~/devstack$

I am seeing the above error when applying the QoS policy to a port. Any
suggestions on this?

I see the review below has been abandoned, which is "Allow networking-odl
to support DSCP Marking rule for qos driver":

https://review.openstack.org/#/c/460470/

Is DSCP marking supported in Pike? Can you please confirm?

I have raised the bug below to track this issue:
https://bugs.launchpad.net/networking-odl/+bug/1756132

Thanks,
Vamsi
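For reference, the three steps Vamsi lists map onto these client calls
(the policy name and $PORT_ID are placeholders; whether the last step
succeeds depends on the loaded QoS backend driver advertising DSCP
marking support, which is exactly what the 409 above is complaining
about):

  openstack network qos policy create dscp-policy
  openstack network qos rule create --type dscp-marking --dscp-mark 22 dscp-policy
  openstack port set --qos-policy dscp-policy $PORT_ID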
From jean-philippe at evrard.me Fri Mar 16 19:26:31 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Fri, 16 Mar 2018 19:26:31 +0000
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime
In-Reply-To: <1521219390-sup-9639@lrrr.local>
References: <51D9C75B-8062-4B15-9E7E-D789751834EB@gmx.com> <20180316134300.gjjcmu475zuosv5c@yuggoth.org> <1521219390-sup-9639@lrrr.local>
Message-ID: 

Thanks!

On 16 March 2018 at 16:56, Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2018-03-16 13:43:00 +0000:
>> On 2018-03-16 08:34:28 -0500 (-0500), Sean McGinnis wrote:
>> > On Mar 16, 2018, at 04:02, Jean-Philippe Evrard wrote:
>> >
>> > > For OpenStack-Ansible, we don't need to do anything for that
>> > > community goal. I am not sure how we can remove our name from
>> > > the storyboard, so I just inform you here.
>> >
>> > I believe you can just mark the task as done if there is no
>> > additional work required.
>>
>> Yeah, either "merged" or "invalid" states should work. I'd lean
>> toward suggesting "invalid" in this case since the task did not
>> require any changes merged to your source code.
>
> Yes, we've been using "invalid" to indicate that no work was needed.
>
> Doug

From jean-philippe at evrard.me Fri Mar 16 19:28:34 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Fri, 16 Mar 2018 19:28:34 +0000
Subject: [openstack-dev] [charms] [tripleo] [puppet] [fuel] [kolla] [openstack-ansible] [cloudcafe] [magnum] [mogan] [sahara] [shovel] [watcher] [helm] [rally] Heads up: ironic classic drivers deprecation
In-Reply-To: <81cbb170-b970-6d4c-2dec-e173b95fd1cc@redhat.com>
References: <81cbb170-b970-6d4c-2dec-e173b95fd1cc@redhat.com>
Message-ID: 

Hello,

Thanks for the notice!

JP

On 16 March 2018 at 12:09, Dmitry Tantsur wrote:
> Hi all,
>
> If you see your project name in the subject, that is because a global search
> revealed usage of the "pxe_ipmitool", "agent_ipmitool" or "pxe_ssh" drivers in
> a non-unit-test context in one or more of your repositories.
>
> The classic drivers, such as pxe_ipmitool, were deprecated in Queens, and
> we're on track to remove them in Rocky. Please read [1] about the
> differences between classic drivers and the newer hardware types. Please refer
> to [2] on how to update your code.
>
> Finally, the pxe_ssh driver was removed some time ago. Please use the
> standard IPMI driver with the virtualbmc project [3] instead.
>
> Please reach out to the ironic team (here or on #openstack-ironic) if you
> have any questions or need help with the transition.
>
> Dmitry
>
> [1] https://docs.openstack.org/ironic/latest/install/enabling-drivers.html
> [2] https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html
> [3] https://github.com/openstack/virtualbmc

From duncan.thomas at gmail.com Fri Mar 16 20:32:05 2018
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Fri, 16 Mar 2018 20:32:05 +0000
Subject: [openstack-dev] [nova] New image backend: StorPool
In-Reply-To: 
References: <7fc727c0-e375-de6f-ba8b-585512324830@gmail.com>
Message-ID: 

On 16 March 2018 at 16:39, Dan Smith wrote:
>> Can you be more specific about what is limiting you when you use
>> volume-backed instances?
>
> Presumably it's because you're taking a trip over iscsi instead of using
> the native attachment mechanism for the technology that you're using? If
> so, that's a valid argument, but it's hard to see the tradeoff working
> in favor of adding all these drivers to nova as well.
>
> If cinder doesn't support backend-specific connectors, maybe that's
> something we could work on?

Cinder supports a range of connectors, and there has never been any
opposition in principle to supporting more. I suggest looking at the RBD
(Ceph) support in cinder as an example of a strongly supported native
attachment method.

--
Duncan Thomas

From jim at jimrollenhagen.com Fri Mar 16 21:22:51 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 16 Mar 2018 21:22:51 +0000
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime
In-Reply-To: 
References: 
Message-ID: 

knikolla brought up an interesting wedge in this goal in #openstack-keystone today.

It seems mod_wsgi doesn't want python applications catching SIGHUP, as Apache expects to be able to catch that. By default, it even ensures signal handlers do not get registered.[0]

I can't quickly find uwsgi's recommendations on this, but I'd assume it would be similar, as uwsgi uses SIGHUP as a signal to gracefully reload all workers and the master process.[1]

Given we just had a goal to make all API services runnable as a WSGI application, it seems wrong to enable mutable config for API services. It's a super useful thing though, so I'd love to figure out a way we can do it.

Thoughts?

[0] http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIRestrictSignal.html
[1] http://uwsgi-docs.readthedocs.io/en/latest/Management.html#signals-for-controlling-uwsgi

// jim

On Wed, Feb 28, 2018 at 5:27 AM, ChangBo Guo wrote:
> Hi ALL,
>
> TC approved the goal [0] a week ago, so it's time to finish the work.
> We also had a short discussion in the oslo meeting at the PTG; find more
> details in [1]. We use storyboard to track the goal in
> https://storyboard.openstack.org/#!/story/2001545. It would be appreciated
> if PTLs set the owner in time.
> Feel free to reach me (gcb) in IRC if you have any questions.
> > > [0] https://review.openstack.org/#/c/534605/ > [1] https://etherpad.openstack.org/p/oslo-ptg-rocky From line 175 > > -- > ChangBo Guo(gcb) > Community Director @EasyStack > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlowja at fastmail.com Fri Mar 16 21:23:45 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Fri, 16 Mar 2018 14:23:45 -0700 Subject: [openstack-dev] Zuul project evolution In-Reply-To: <87sh91gfym.fsf@meyer.lemoncheese.net> References: <87sh91gfym.fsf@meyer.lemoncheese.net> Message-ID: <5AAC35E1.80002@fastmail.com> Awesome! Might IMHO be useful to also start doing this with other projects. James E. Blair wrote: > Hi, > > To date, Zuul has (perhaps rightly) often been seen as an > OpenStack-specific tool. That's only natural since we created it > explicitly to solve problems we were having in scaling the testing of > OpenStack. Nevertheless, it is useful far beyond OpenStack, and even > before v3, it has found adopters elsewhere. Though as we talk to more > people about adopting it, it is becoming clear that the less experience > they have with OpenStack, the more likely they are to perceive that Zuul > isn't made for them. > > At the same time, the OpenStack Foundation has identified a number of > strategic focus areas related to open infrastructure in which to invest. > CI/CD is one of these. The OpenStack project infrastructure team, the > Zuul team, and the Foundation staff recently discussed these issues and > we feel that establishing Zuul as its own top-level project with the > support of the Foundation would benefit everyone. > > It's too early in the process for me to say what all the implications > are, but here are some things I feel confident about: > > * The folks supporting the Zuul running for OpenStack will continue to > do so. We love OpenStack and it's just way too fun running the > world's most amazing public CI system to do anything else. > > * Zuul will be independently promoted as a CI/CD tool. We are > establishing our own website and mailing lists to facilitate > interacting with folks who aren't otherwise interested in OpenStack. > You can expect to hear more about this over the coming months. > > * We will remain just as open as we have been -- the "four opens" are > intrinsic to what we do. > > As a first step in this process, I have proposed a change[1] to remove > Zuul from the list of official OpenStack projects. If you have any > questions, please don't hesitate to discuss them here, or privately > contact me or the Foundation staff. 
> > -Jim > > [1] https://review.openstack.org/552637 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Fri Mar 16 21:33:57 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 16 Mar 2018 17:33:57 -0400 Subject: [openstack-dev] [Release-job-failures][neutron][arista] Release of openstack/networking-arista failed In-Reply-To: References: Message-ID: <1521235976-sup-1792@lrrr.local> This Arista release is failing because the packaging job can't run "tox -e venv" because neutron is listed in the requirements.txt for the Arista code and in the constraints file. Excerpts from zuul's message of 2018-03-16 19:50:48 +0000: > Build failed. > > - release-openstack-python http://logs.openstack.org/25/25ac528d6771d3440fac428294194e08939fb5aa/release/release-openstack-python/e550904/ : FAILURE in 3m 30s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > From fungi at yuggoth.org Fri Mar 16 21:34:42 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 16 Mar 2018 21:34:42 +0000 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: References: Message-ID: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> On 2018-03-16 21:22:51 +0000 (+0000), Jim Rollenhagen wrote: [...] > It seems mod_wsgi doesn't want python applications catching SIGHUP, > as Apache expects to be able to catch that. By default, it even ensures > signal handlers do not get registered.[0] [...] > Given we just had a goal to make all API services runnable as a WSGI > application, it seems wrong to enable mutable config for API services. > It's a super useful thing though, so I'd love to figure out a way we can do > it. [...] Given these are API services, can the APIs grow a (hopefully standardized) method to trigger this in lieu of signal handling? Or if the authentication requirements are too much, Zuul and friends have grown RPC sockets which can be used to inject these sorts of low-level commands over localhost to their service daemons (or could probably also do similar things over UNIX sockets if you don't want listeners on the loopback interface). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack-dev at storpool.com Fri Mar 16 17:18:18 2018 From: openstack-dev at storpool.com (Peter Penchev) Date: Fri, 16 Mar 2018 19:18:18 +0200 Subject: [openstack-dev] [nova] New image backend: StorPool In-Reply-To: References: <7fc727c0-e375-de6f-ba8b-585512324830@gmail.com> Message-ID: <20180316171818.GC4118@office.storpool.com> On Fri, Mar 16, 2018 at 09:39:14AM -0700, Dan Smith wrote: > > Can you be more specific about what is limiting you when you use > > volume-backed instances? > > Presumably it's because you're taking a trip over iscsi instead of using > the native attachment mechanism for the technology that you're using? If > so, that's a valid argument, but it's hard to see the tradeoff working > in favor of adding all these drivers to nova as well. > > If cinder doesn't support backend-specific connectors, maybe that's > something we could work on? 
People keep saying that "cinder is where I > put my storage, that's how I want to back my instances" when it comes to > justifying BFV, and that argument is starting to resonate with me more > and more. Um, that's what we have os-brick for, isn't it? And yes, we also have an os-brick connector for the "STORPOOL" connection type that is also part of the Queens release. Best regards, Peter From mnaser at vexxhost.com Fri Mar 16 22:08:40 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 16 Mar 2018 18:08:40 -0400 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> Message-ID: On Fri, Mar 16, 2018 at 5:34 PM, Jeremy Stanley wrote: > On 2018-03-16 21:22:51 +0000 (+0000), Jim Rollenhagen wrote: > [...] >> It seems mod_wsgi doesn't want python applications catching SIGHUP, >> as Apache expects to be able to catch that. By default, it even ensures >> signal handlers do not get registered.[0] > [...] >> Given we just had a goal to make all API services runnable as a WSGI >> application, it seems wrong to enable mutable config for API services. >> It's a super useful thing though, so I'd love to figure out a way we can do >> it. > [...] > > Given these are API services, can the APIs grow a (hopefully > standardized) method to trigger this in lieu of signal handling? Or > if the authentication requirements are too much, Zuul and friends > have grown RPC sockets which can be used to inject these sorts of > low-level commands over localhost to their service daemons (or could > probably also do similar things over UNIX sockets if you don't want > listeners on the loopback interface). Throwing an idea out there, but maybe listening to file modification events using something like inotify could be a possibility? > -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jrist at redhat.com Fri Mar 16 22:15:47 2018 From: jrist at redhat.com (Jason E. Rist) Date: Fri, 16 Mar 2018 16:15:47 -0600 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> <1516189284-sup-1775@fewbar.com> Message-ID: On 03/02/2018 02:24 AM, Emilien Macchi wrote: > A quick update: > > - Discussed with Jiri Tomasek from TripleO UI squad and he agreed that his > squad would start to use Storyboard, and experiment it. > - I told him I would take care of making sure all UI bugs created in > Launchpad would be moved to Storyboard. > - Talked with Kendall and we agreed that we would move forward and migrate > TripleO UI bugs to Storyboard. > - TripleO UI Squad would report feedback about storyboard to the storyboard > team with the help of other TripleO folks (me at least, I'm willing to > help). > > Hopefully this is progress and we can move forward. More updates to come > about migration during the next days... > > Thanks everyone involved in these productive discussions. > > On Wed, Jan 17, 2018 at 12:33 PM, Thierry Carrez > wrote: > >> Clint Byrum wrote: >>> [...] 
>>> That particular example board was built from tasks semi-automatically, >>> using a tag, by this script running on a cron job somewhere: >>> >>> https://git.openstack.org/cgit/openstack-infra/zuul/ >> tree/tools/update-storyboard.py?h=feature/zuulv3 >>> >>> We did this so that we could have a rule "any task that is open with >>> the zuulv3 tag must be on this board". Jim very astutely noticed that >>> I was not very good at being a robot that did this and thus created the >>> script to ease me into retirement from zuul project management. >>> >>> The script adds new things in New, and moves tasks automatically to >>> In Progress, and then removes them when they are completed. We would >>> periodically groom the "New" items into an appropriate lane with the >> hopes >>> of building what you might call a rolling-sprint in Todo, and calling >>> out blocked tasks in a regular meeting. Stories were added manually as >>> a way to say "look in here and add tasks", and manually removed when >>> the larger effort of the story was considered done. >>> >>> I rather like the semi-automatic nature of it, and would definitely >>> suggest that something like this be included in Storyboard if other >>> groups find the board building script useful. This made a cross-project >>> effort between Nodepool and Zuul go more smoothly as we had some more >>> casual contributors to both, and some more full-time. >> >> That's a great example that illustrates StoryBoard design: rather than >> do too much upfront feature design, focus on primitives and expose them >> fully through a strong API, then let real-world usage dictate patterns >> that might result in future features. >> >> The downside of this approach is of course getting enough usage on a >> product that appears a bit "raw" in terms of features. But I think we >> are closing on getting that critical mass :) >> >> -- >> Thierry Carrez (ttx) >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > I just tried this but I think I might be doing something wrong... http://storyboard.macchi.pro:9000/ This URL mentioned in the previous storyboard evaluation email does not seem to work. http://lists.openstack.org/pipermail/openstack-dev/2018-January/126258.html Are you still evaluating this? Is the UI squad still expected to contribute? Do we have a better place to go for storyboard usage? I just ran into a bug and thought to myself "hey, I'll go drop this at the storyboard spot, since that's what had been the plan" but avast, I could not continue. Can you enlighten me to the status? -J -- Jason E. Rist Senior Software Engineer OpenStack User Interfaces Red Hat, Inc. 
Freenode: jrist github/twitter: knowncitizen From rochelle.grober at huawei.com Fri Mar 16 22:55:34 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 16 Mar 2018 22:55:34 +0000 Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests In-Reply-To: <20180315144034.3dgtk3f2kvx4vlrp@yuggoth.org> References: <5823E872-A563-4684-A124-6E509AFF0F8A@windriver.com> <746DEED6-D8E8-4125-87D6-936F2F06508A@windriver.com> <20180315144034.3dgtk3f2kvx4vlrp@yuggoth.org> Message-ID: Submission is no longer anonymous, but the results are not public, still. The submitter decides whether the guideline results are public, but if they do, only the guideline tests are made public. If the submitter does not actively select public availability for the test results, all results default to private. --Rocky > -----Original Message----- > From: Jeremy Stanley [mailto:fungi at yuggoth.org] > Sent: Thursday, March 15, 2018 7:41 AM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [refstack] Full list of API Tests versus > 'OpenStack Powered' Tests > > On 2018-03-15 14:16:30 +0000 (+0000), Arkady.Kanevsky at dell.com wrote: > [...] > > This can be submitted anonymously if you like. > > Anonymous submissions got disabled (and the existing set of data from them > deleted). See the announcement from a month ago for > details: > > http://lists.openstack.org/pipermail/openstack-dev/2018- > February/127103.html > > -- > Jeremy Stanley From gong.yongsheng at 99cloud.net Sat Mar 17 01:14:43 2018 From: gong.yongsheng at 99cloud.net (=?GBK?B?uajTwMn6?=) Date: Sat, 17 Mar 2018 09:14:43 +0800 (CST) Subject: [openstack-dev] [tacker] tacker rocky vPTG summary Message-ID: <943319a.34a.1623185ed45.Coremail.gong.yongsheng@99cloud.net> hi, tacker team has held a vPTG via zoom, which is recorded at https://etherpad.openstack.org/p/Tacker-PTG-Rocky in summary: P1 tasks: 1. web studio for vnf resources 2. sfc across k8s with openstack vims 3. make tacker server with monitoring features scaleable other tasks: 1. policy for placement 2. vnfs from opensource and vender providers 3. cluster features thanks for all participants. regards, gongysh tacker ptl 99cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Sat Mar 17 08:16:50 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sat, 17 Mar 2018 09:16:50 +0100 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> <1516189284-sup-1775@fewbar.com> Message-ID: On Fri, Mar 16, 2018 at 11:15 PM, Jason E. Rist wrote: > > I just tried this but I think I might be doing something wrong... > > http://storyboard.macchi.pro:9000/ Sorry I removed the VM a few weeks ago (I needed to clear up some resources for my dev env). > This URL mentioned in the previous storyboard evaluation email does not > seem to work. > > http://lists.openstack.org/pipermail/openstack-dev/2018- > January/126258.html > > Are you still evaluating this? Is the UI squad still expected to > contribute? Do we have a better place to go for storyboard usage? I > just ran into a bug and thought to myself "hey, I'll go drop this at the > storyboard spot, since that's what had been the plan" but avast, I could > not continue. > > Can you enlighten me to the status? 
>
The latest update is from March 2nd on this thread, where we're working with Kendall to migrate all bugs with the "ui" tag in launchpad/tripleo into storyboard. It hasn't happened yet, but I think it will over the next few days or so. Once we get there, we'll provide more guidance on the plan.

--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emilien at redhat.com Sat Mar 17 08:34:24 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Sat, 17 Mar 2018 09:34:24 +0100
Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG
Message-ID: 

During the PTG we had some nice conversations about how TripleO can make progress on testing OpenStack deployments with Python 3. In CC, Haikel, Alfredo and Javier, please complete if I missed something.

## Goal
As an OpenStack distribution, RDO would like to ensure that the OpenStack services (those which don't depend on Python 2) are packaged and can be containerized to be tested in TripleO CI.

## Challenges
- Some services aren't fully Python 3 yet, but we agreed this was not our problem but the projects' problem. However, as a distribution, we'll make sure to ship what we can on Python 3.
- CentOS 7 is not a Python 3 distro; there are high expectations for the next release, but we aren't there yet.
- Fedora is Python 3 friendly, but we don't deploy TripleO on Fedora, and we don't want to do it (for now at least).

## Proposal
- Continue to follow upstream projects that support Python 3 and ship RPMs in RDO.
- Investigate the build of Kolla containers on Fedora / Python 3 and push them to a registry (maybe in the same namespace with a different name, or maybe a new namespace).
- Kick off an experimental TripleO CI job that will use these containers to deploy TripleO (maybe on one basic scenario for now).

## Roadmap for Rocky
For Rocky we agreed to follow the three steps that are part of the proposal (maybe more; please add anything I've missed). That way, we'll be able to have some early testing on Python3-only environments (thanks, containers!) without changing the host OS.

Thanks for your feedback and comments, it's an open discussion.
--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com Sat Mar 17 21:49:56 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Sat, 17 Mar 2018 17:49:56 -0400
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime
In-Reply-To: 
References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org>
Message-ID: <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com>

> On Mar 16, 2018, at 6:08 PM, Mohammed Naser wrote:
>
>> On Fri, Mar 16, 2018 at 5:34 PM, Jeremy Stanley wrote:
>>> On 2018-03-16 21:22:51 +0000 (+0000), Jim Rollenhagen wrote:
>>> [...]
>>> It seems mod_wsgi doesn't want python applications catching SIGHUP,
>>> as Apache expects to be able to catch that. By default, it even ensures
>>> signal handlers do not get registered.[0]
>> [...]
>>> Given we just had a goal to make all API services runnable as a WSGI
>>> application, it seems wrong to enable mutable config for API services.
>>> It's a super useful thing though, so I'd love to figure out a way we can do
>>> it.
>> [...]
>>
>> Given these are API services, can the APIs grow a (hopefully
>> standardized) method to trigger this in lieu of signal handling? Or
>> if the authentication requirements are too much, Zuul and friends
>> have grown RPC sockets which can be used to inject these sorts of
>> low-level commands over localhost to their service daemons (or could
>> probably also do similar things over UNIX sockets if you don't want
>> listeners on the loopback interface).
>
> Throwing an idea out there, but maybe listening to file modification
> events using something like inotify could be a possibility?

Both of those are good ideas. I believe adding those things to oslo.service would make them available to all applications.
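[On the mutable-config question in this thread: one way to combine the file-watching idea with the existing oslo.config machinery is a small polling thread that calls mutate_config_files() when the file's mtime changes, so no SIGHUP handler is needed under mod_wsgi. mutate_config_files() is the real oslo.config entry point for mutable options; the watcher thread itself is only a sketch, not something oslo.service ships today.]

    import os
    import threading
    import time

    from oslo_config import cfg

    CONF = cfg.CONF


    def watch_config(conf, path, interval=5):
        """Poll a config file's mtime and re-mutate options on change."""
        last = os.stat(path).st_mtime
        while True:
            time.sleep(interval)
            current = os.stat(path).st_mtime
            if current != last:
                last = current
                # Reloads only options registered as mutable,
                # e.g. the 'debug' flag targeted by the Rocky goal.
                conf.mutate_config_files()


    # Started from the WSGI application's init code instead of a
    # signal handler; the config path here is a placeholder.
    watcher = threading.Thread(
        target=watch_config, args=(CONF, '/etc/myservice/myservice.conf'))
    watcher.daemon = True
    watcher.start()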
>
>> --
>> Jeremy Stanley
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Sun Mar 18 04:00:01 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sun, 18 Mar 2018 13:00:01 +0900
Subject: [openstack-dev] [mistral][tempest][congress] import or retain mistral tempest service client
In-Reply-To: 
References: 
Message-ID: 

Hi All,

Sorry for the late response; I kept this mail unread and then forgot to reply. Reply inline.

On Fri, Mar 16, 2018 at 8:08 PM, Dougal Matthews wrote:
>
> On 13 March 2018 at 18:51, Eric K wrote:
>>
>> Hi Mistral folks and others,
>>
>> I'm working on Congress tempest tests [1] for integration with Mistral. In
>> the tests, we use a Mistral service client to call Mistral APIs and
>> compare results against those obtained by the Mistral driver for Congress.
>>
>> Regarding the service client, Congress can either import directly from
>> the Mistral tempest plugin [2] or maintain its own copy within the Congress
>> tempest plugin.

Maintaining its own copy will lead to a lot of issues and a lot of duplicate code among the plugins.

>> I'm not sure whether the Mistral team expects the service
>> client to be internal use only, so I hope to hear folks' thoughts on which
>> approach is preferred. Thanks very much!
>
> I don't have a strong opinion here. I am happy for you to use the Mistral
> service client, but it will be hard to guarantee stability. It has been
> stable (since it hasn't changed), but we have a tempest refactor planned
> (once we move the final tempest tests from mistralclient to
> mistral-tempest-plugin). So there is a fair chance we will break the API at
> that point; however, I don't know when it will happen, as nobody is
> currently working on it.

From the QA team's side, service clients are the main interface that can be used across tempest plugins. For example, Congress needs many other service clients from other Tempest plugins, like Mistral's. Tempest also declares all of its in-tree service clients as a library interface, and we maintain them with backward compatibility [3]. This way we make these service clients usable outside of Tempest as well and avoid duplicate code/interfaces.
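[To make the service-client pattern concrete before the rest of the reply: a Tempest-style service client is a thin subclass of tempest.lib's RestClient, which is what makes it shareable across plugins and declarable as a stable interface. A minimal sketch with an illustrative 'workflows' resource, not the real Mistral client:]

    import json

    from tempest.lib.common import rest_client


    class WorkflowClient(rest_client.RestClient):
        """Minimal service client following the tempest.lib pattern."""

        def list_workflows(self):
            resp, body = self.get('workflows')
            self.expected_success(200, resp.status)
            return rest_client.ResponseBody(resp, json.loads(body))

        def delete_workflow(self, identifier):
            resp, body = self.delete('workflows/%s' % identifier)
            self.expected_success(204, resp.status)
            return rest_client.ResponseBody(resp, body)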
For service clients defined in Tempest plugins (like the Mistral service clients), we strongly suggest the same process, which is to declare the plugin's service clients as a stable interface. That gives two advantages:

1. You make sure that you are not allowing changes to the API calling interface (the service clients), which indirectly means you are not allowing the APIs themselves to change. That makes your tempest plugin testing more reliable.
2. Your service clients can be used in other Tempest plugins, avoiding duplicate code/interfaces. If other plugins use your service clients, they also test your project, so it is good to help them by providing the required interface as stable.

The initial idea of owning the service clients in their respective plugins was to share them among plugins for integrated testing of more than one OpenStack service. As for using the service clients, Tempest now provides a better way to do so than importing them directly [4]. You can see the example for Manila's tempest plugin [5]. This gives the advantage that your registered service clients are discovered in other Tempest plugins automatically; they do not need to import other plugins' service clients. QA is hoping that each tempest plugin will move to the new service client registration process.

Overall, we recommend having service clients as a stable interface so that other plugins can use them and test your project in a more integrated way.

>
> I have cc'ed Chandan - hopefully he can provide some input. He has advised
> me and the Mistral team regarding tempest before.
>
>>
>> Eric
>>
>> [1] https://review.openstack.org/#/c/538336/
>> [2]
>> https://github.com/openstack/mistral-tempest-plugin/blob/master/mistral_tempest_tests/services/v2/mistral_client.py

[3] http://git.openstack.org/cgit/openstack/tempest/tree/tempest/lib/services
[4] https://docs.openstack.org/tempest/latest/plugin.html#get_service_clients()
[5] https://review.openstack.org/#/c/334596/34

-gmann

From sundar.nadathur at intel.com Sun Mar 18 04:40:18 2018
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Sat, 17 Mar 2018 21:40:18 -0700
Subject: [openstack-dev] [cyborg]Summary of Mar 14 Meeting
In-Reply-To: 
References: 
Message-ID: <088eae42-274d-c849-b54c-f0e4794319c3@intel.com>

Hi Howard and all,

Re. my AR to write a spec, please confirm the following:

* Since the weigher is part of the overall scheduling flow, I presume the spec has to cover the scheduling flow that we hashed out at the PTG. The compute node aspects could be a separate spec.
* Since there were many questions about the use cases as well, they would also need to be covered in the spec.
* This spec would be complementary to the current Cyborg-Nova spec. (It is in addition to it, and does not replace it.)
* The spec is not confined to FPGAs but should cover all devices, just as the current Cyborg-Nova spec does.
Thanks,
Sundar

On 3/15/2018 9:00 PM, Zhipeng Huang wrote:
> Hi Team,
>
> Here is the meeting summary for our post-PTG kickoff meeting.
>
> [....]
> 2. Rocky Cycle Task Assignments:
>
> Please refer to the meeting minutes about the action items:
> http://eavesdrop.openstack.org/meetings/openstack_cyborg/2018/openstack_cyborg.2018-03-14-14.07.html
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhipeng at huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipengh at uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhipengh512 at gmail.com Sun Mar 18 05:02:47 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Sun, 18 Mar 2018 13:02:47 +0800
Subject: [openstack-dev] [cyborg]Summary of Mar 14 Meeting
In-Reply-To: <088eae42-274d-c849-b54c-f0e4794319c3@intel.com>
References: <088eae42-274d-c849-b54c-f0e4794319c3@intel.com>
Message-ID: 

Hi Sundar,

Thanks, and yes, I think these are the gist of our discussion during the last meeting :)

On Sun, Mar 18, 2018 at 12:40 PM, Nadathur, Sundar <sundar.nadathur at intel.com> wrote:
> Hi Howard and all,
>
> Re. my AR to write a spec, please confirm the following:
>
> * Since the weigher is part of the overall scheduling flow, I presume the
> spec has to cover the scheduling flow that we hashed out at the PTG. The
> compute node aspects could be a separate spec.
> * Since there were many questions about the use cases as well, they would
> also need to be covered in the spec.
> * This spec would be complementary to the current Cyborg-Nova spec.
> (It is in addition to it, and does not replace it.)
> * The spec is not confined to FPGAs but should cover all devices, just as
> the current Cyborg-Nova spec does.
>
> Thanks,
>
> Sundar

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co.,
Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amotoki at gmail.com Sun Mar 18 08:54:47 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Sun, 18 Mar 2018 17:54:47 +0900
Subject: [openstack-dev] [horizon][plugins] mox -> mock migration
Message-ID: 

Hi horizon plugin developers,

As you know, mox removal is one of the community goals in Rocky, and the horizon team is working on removing the usage of mox [1]. This mail announces the plan for dropping the mox dependencies in the horizon test helpers (horizon.test.helpers.TestCase and/or openstack_dashboard.test.helpers.TestCase).

1) The first step is to introduce a "use_mox" flag in horizon.test.helpers.TestCase. The flag is available now. If you set the flag to False, you can run your plugin tests without mox. The default value of use_mox is False for horizon.test.helpers.TestCase [2] and True for openstack_dashboard.test.helpers.TestCase [3].

2) After Rocky-1, use_mox in openstack_dashboard.test.helpers.TestCase will be changed from True to False. This means your plugin needs to set use_mox to True explicitly if your unit tests still depend on mox. Our suggestion is to set use_mox=True before the Rocky-1 milestone if your tests depend on mox, so as not to break your gate.

3) After Rocky RC1 is released, the "use_mox" flag in the horizon repo will be dropped. This means the use_mox flag will no longer be in effect. If your plugin tests still depend on mox at this stage, they need to set up mox explicitly.

Thanks,
Akihiro Motoki (amotoki)

[1] https://blueprints.launchpad.net/horizon/+spec/mock-framework-in-unit-tests
[2] https://github.com/openstack/horizon/blob/6e29fdde1edc67a6797eba2c3f9c557f840d4ea7/horizon/test/helpers.py#L138
[3] https://github.com/openstack/horizon/blob/6e29fdde1edc67a6797eba2c3f9c557f840d4ea7/openstack_dashboard/test/helpers.py#L257
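[For plugin authors, the migration mostly amounts to flipping the flag and replacing mox stubs with mock patches. A rough sketch of a migrated test; the server_list call and its (servers, has_more) return shape are illustrative assumptions, not taken from any particular plugin:]

    import mock

    from openstack_dashboard.test import helpers


    class MyPanelTests(helpers.TestCase):
        # Opt out of mox now rather than waiting for the default to flip.
        use_mox = False

        # Hypothetical API wrapper; substitute whatever your panel calls.
        @mock.patch('openstack_dashboard.api.nova.server_list')
        def test_index(self, mock_server_list):
            mock_server_list.return_value = ([], False)

            res = self.client.get('/project/mypanel/')

            self.assertEqual(200, res.status_code)
            mock_server_list.assert_called_once_with(mock.ANY)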
Thanks, Akihiro Motoki (amotoki) [1] https://blueprints.launchpad.net/horizon/+spec/mock-framework-in-unit-tests [2] https://github.com/openstack/horizon/blob/6e29fdde1edc67a6797eba2c3f9c557f840d4ea7/horizon/test/helpers.py#L138 [3] https://github.com/openstack/horizon/blob/6e29fdde1edc67a6797eba2c3f9c557f840d4ea7/openstack_dashboard/test/helpers.py#L257 From sean.mcginnis at gmx.com Sun Mar 18 15:42:04 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sun, 18 Mar 2018 10:42:04 -0500 Subject: [openstack-dev] [cinder][oslo] Stable check of openstack/cinder failed References: Message-ID: <8D169AC9-9416-4CF8-997D-114E689A278F@gmx.com> We are getting test failures on both stable/pike and stable/ocata branches with the error: oslo_utils/versionutils.py", line 45, in is_compatible if same_major and (requested_parts[0] != current_parts[0]): TypeError: 'Version' object does not support indexing This is due to a setuptools change that has been fixed in oslo.utils in master but is not in the upper-constraint versions for pike or ocata. I have proposed a backport for to stable/pike with https://review.openstack.org/#/c/554053/. Once that merges I will request a stable release, then do the same for stable/ocata. Sean > Begin forwarded message: > > From: "A mailing list for the OpenStack Stable Branch test reports." > Subject: [Openstack-stable-maint] Stable check of openstack/cinder failed > Date: March 18, 2018 at 01:26:09 CDT > To: openstack-stable-maint at lists.openstack.org > Reply-To: openstack-dev at lists.openstack.org > > Build failed. > > - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/cinder/stable/pike/build-openstack-sphinx-docs/a0743cf/html/ : SUCCESS in 18m 15s > - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/cinder/stable/pike/openstack-tox-py27/0780d3f/ : FAILURE in 7m 03s > > _______________________________________________ > Openstack-stable-maint mailing list > Openstack-stable-maint at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint From sundar.nadathur at intel.com Sun Mar 18 16:34:10 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Sun, 18 Mar 2018 09:34:10 -0700 Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com> Message-ID: <9cab5a35-372b-1a20-6def-51b4e8e15fbe@intel.com> Sorry for the delayed response. I broadly agree with previous replies. For the concerns about the impact of Cyborg weigher on scheduling performance , there are some options (apart from filtering candidates as much as possible in Placement): * Handle hosts in bulk by extending BaseWeigher and overriding weigh_objects (), instead of handling one host at a time. * If we have to handle one host at a time for whatever reason, since the weigher is maintained by Cyborg, it could directly query Cyborg DB rather than go through Cyborg REST API. This will be not unlike other weighers. Given these and other possible optimizations, it may be too soon to worry about the performance impact. I am working on a spec that will capture the flow discussed in the PTG. I will try to address these aspects as well. 
On 3/8/2018 4:53 AM, Zhipeng Huang wrote:
> @jay I'm also against a weigher in nova/placement. This should be an
> optional step that depends on the vendor implementation, not a default one.
>
> @Alex I think we should explore the idea of a preferred trait.
>
> @Matthew: Like Sean said, Cyborg wants to support both reprogrammable
> FPGAs and pre-programmed ones.
> Therefore it is correct that, in your description, the programming
> operation should be a call from Nova to Cyborg, and Cyborg will
> complete the operation while Nova waits. The only problem is that the
> weigher step should be an optional one.
>
> On Wed, Mar 7, 2018 at 9:21 PM, Jay Pipes wrote:
>> On 03/06/2018 09:36 PM, Alex Xu wrote:
>>> 2018-03-07 10:21 GMT+08:00 Alex Xu:
>>>> 2018-03-06 22:45 GMT+08:00 Mooney, Sean K:
>>>>> From: Matthew Booth [mailto:mbooth at redhat.com]
>>>>> Sent: Saturday, March 3, 2018 4:15 PM
>>>>> To: OpenStack Development Mailing List (not for usage questions)
>>>>> Subject: Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
>>>>>
>>>>> On 2 March 2018 at 14:31, Jay Pipes wrote:
>>>>>> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
>>>>>>> Hello Nova team,
>>>>>>>
>>>>>>> During the Cyborg discussion at the Rocky PTG, we proposed a flow
>>>>>>> for FPGAs wherein the request spec asks for a device type as a
>>>>>>> resource class, and optionally a function (such as encryption) in
>>>>>>> the extra specs. This does not seem to work well for the usage
>>>>>>> model that I'll describe below.
>>>>>>>
>>>>>>> An FPGA device may implement more than one function. For example,
>>>>>>> it may implement both compression and encryption. Say a cluster
>>>>>>> has 10 devices of device type X, and each of them is programmed to
>>>>>>> offer 2 instances of function A and 4 instances of function B.
>>>>>>> More specifically, the device may implement 6 PCI functions, with
>>>>>>> 2 of them tied to function A, and the other 4 tied to function B.
>>>>>>> So, we could have 6 separate instances accessing functions on the
>>>>>>> same device.
>>>>>>>
>>>>>>> In the current flow, the device type X is modeled as a resource
>>>>>>> class, so Placement will count how many of them are in use. A
>>>>>>> flavor for 'RC device-type-X + function A' will consume one
>>>>>>> instance of the RC device-type-X. But this is not right, because
>>>>>>> this precludes other functions on the same device instance from
>>>>>>> getting used.
>>>>>>>
>>>>>>> One way to solve this is to declare functions A and B as resource
>>>>>>> classes themselves and have the flavor request the function RC.
>>>>>>> Placement will then correctly count the function instances.
>>>>>>> However, there is still a problem: if the requested function A is
>>>>>>> not available, Placement will return an empty list of RPs, but we
>>>>>>> need some way to reprogram some device to create an instance of
>>>>>>> function A.
>>>>>>
>>>>>> Clearly, nova is not going to be reprogramming devices with an
>>>>>> instance of a particular function.
>>>>>>
>>>>>> Cyborg might need to have a separate agent that listens to the nova
>>>>>> notifications queue and, upon seeing an event that indicates a
>>>>>> failed build due to lack of resources, Cyborg can try and reprogram
>>>>>> a device and then try rebuilding the original request.
>>>>>
>>>>> It was my understanding from that discussion that we intend to
>>>>> insert Cyborg into the spawn workflow for device configuration in
>>>>> the same way that we currently insert resources provided by Cinder
>>>>> and Neutron. So while Nova won't be reprogramming a device, it will
>>>>> be calling out to Cyborg to reprogram a device, and waiting while
>>>>> that happens.
>>>>>
>>>>> My understanding is (and I concede some areas are a little hazy):
>>>>>
>>>>> * The flavor says device type X with function Y
>>>>> * Placement tells us everywhere with device type X
>>>>> * A weigher orders these by devices which already have an available
>>>>>   function Y (where is this metadata stored?)
>>>>> * Nova schedules to host Z
>>>>> * Nova host Z asks cyborg for a local function Y and blocks
>>>>>   * Cyborg hopefully returns function Y which is already available
>>>>>   * If not, Cyborg reprograms a function Y, then returns it
>>>>>
>>>>> Can anybody correct me/fill in the gaps?
>>>>>
>>>>> [Mooney, Sean K] That correlates closely to my recollection also. As
>>>>> for the metadata, I think the weigher may need to call to Cyborg to
>>>>> retrieve this, as it will not be available in the host state object.
>>>>
>>>> Is it the nova scheduler weigher, or do we want to support weighing
>>>> in placement? Functions are traits, I think, so can we have
>>>> preferred_traits? I remember we talked about that parameter in the
>>>> past, but we didn't have a good use case at that time. This is a good
>>>> use case.
>>>
>>> If we call Cyborg from the nova scheduler weigher, that will slow down
>>> the scheduling a lot as well.
>>
>> Right, which is why I don't want to do any weighing in Placement at
>> all. If folks want to sort by things that require long-running
>> code/callbacks or silly temporal things like metrics, they can do that
>> in a custom weigher in the nova-scheduler and take the performance hit
>> there.
>>
>> Best,
>> -jay
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhipeng at huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipengh at uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tenobreg at redhat.com Sun Mar 18 20:36:12 2018
From: tenobreg at redhat.com (Telles Nobrega)
Date: Sun, 18 Mar 2018 20:36:12 +0000
Subject: [openstack-dev] [sahara][storyboard] Migrating Sahara to Storyboard
Message-ID: 

Hello Saharans and interested parties,

For a while now we all have heard stories about Storyboard and that projects should start getting ready for migrating from Launchpad to Storyboard.

Sahara is officially announcing that from Monday, March 19th, we are fully tracking Sahara on Storyboard. From now on, all bugs, features, tasks and so on must be registered on Storyboard.

To all people involved in the migration, thanks for the hard work. To all people involved in Sahara, let me know if you have any problems with Storyboard and we will reach out to figure out the best way to make the transition as smooth as possible.

Special thanks to Tosky, who worked this weekend to get all patches from the Sahara side ready.

Thanks all,
--
TELLES NOBREGA
SOFTWARE ENGINEER
Red Hat Brasil
Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo
tenobreg at redhat.com

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by Great Place to Work.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gdubreui at redhat.com Sun Mar 18 23:29:54 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Mon, 19 Mar 2018 10:29:54 +1100
Subject: [openstack-dev] [api] APAC-friendly API-SIG meeting times
In-Reply-To: <28230616-4F62-429A-8EBD-D88237B56DA0@leafe.com>
References: <6D342053-79C2-4AAB-8F8B-6687F8CA6C29@leafe.com> <28230616-4F62-429A-8EBD-D88237B56DA0@leafe.com>
Message-ID: 

On 17/03/18 02:53, Ed Leafe wrote:
> On Mar 15, 2018, at 10:31 PM, Gilles Dubreuil wrote:
>> Any chance we can progress on this one?
>>
>> I believe there are not enough participants to split the API-SIG meeting in
>> two, and with the same small pool of people spread across both, it could end
>> up pretty inefficient. Therefore I think changing the main meeting time to
>> another slot might be better, but I could be wrong.
>>
>> Anyway, in any case I can't make progress with a meeting that's in the middle
>> of the night for me, so I would appreciate it if we could re-activate this
>> discussion.
> What range of times would work for you?
>
> -- Ed Leafe

I can do very early (like 6am), or alternatively late (10pm) if needed, to avoid others being in the red zone:
https://www.timeanddate.com/worldclock/meetingtime.html?day=31&month=3&year=2018&p1=240&p2=195&p3=137&iv=0

Of course that's assuming we're talking about spreading a single meeting across the globe; otherwise it would be easier, as I'm quite flexible.

From mikal at stillhq.com Mon Mar 19 01:00:18 2018
From: mikal at stillhq.com (Michael Still)
Date: Mon, 19 Mar 2018 12:00:18 +1100
Subject: [openstack-dev] [Tatu][Nova] Handling instance destruction
In-Reply-To: 
References: 
Message-ID: 

I think it would simplify deployment a fair bit too -- a single API to provide instead of also having to set up a notification listener. I shall ponder and perhaps implement once I've finished the privsep stuff.

Michael

On Fri, Mar 16, 2018 at 5:49 PM, Juan Antonio Osorio wrote:
> Having an interface for vendordata that gets deletes would be quite nice.
> Right now for novajoin we listen to the nova notifications for updates and
> deletes; if this could be handled natively by vendordata, it would simplify
> our codebase.
>
> BR
>
> On Fri, Mar 16, 2018 at 7:34 AM, Michael Still wrote:
>
>> Thanks for this. I read the README for the project after this, and I do
>> now realise you're using notifications for some of these events.
>>
>> I guess I'm still pondering whether it's reasonable to have everyone listen
>> to notifications to build systems like these, or if we should send these
>> messages to vendordata to handle these actions. Vendordata is aimed at
>> deployers, so having a simple and complete interface seems important.
>>
>> There were also comments in the README about wanting to change the data
>> that appears in the metadata server over time. I'm wondering how that maps
>> into the configdrive universe. Could you explain those comments a bit more,
>> please?
>>
>> Thanks for your quick reply,
>> Michael
>>
>> On Fri, Mar 16, 2018 at 2:18 PM, Pino de Candia <
>> giuseppe.decandia at gmail.com> wrote:
>>
>>> Hi Michael,
>>>
>>> Thanks for your message... and thanks for your vendordata work!
>>>
>>> About your question, Tatu listens to events on the oslo message bus.
>>> Specifically, it reacts to compute.instance.delete.end by cleaning up
>>> per-instance resources. It also listens to project creation and user role
>>> assignment changes. The code is at:
>>> https://github.com/openstack/tatu/blob/master/tatu/notifications.py
>>>
>>> best,
>>> Pino
>>>
>>> On Thu, Mar 15, 2018 at 3:42 PM, Michael Still
>>> wrote:
>>>
>>>> Heya,
>>>>
>>>> I've just stumbled across Tatu and the design presentation [1], and I
>>>> am wondering how you handle cleaning up instances when they are deleted,
>>>> given that nova vendordata doesn't expose a "delete event".
>>>>
>>>> Specifically I'm wondering if we should add support for such an event
>>>> to vendordata somehow, given I can now think of a couple of use cases for
>>>> it.
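[For reference, the notification-listener approach Pino describes in this thread maps onto oslo.messaging's standard notification API, roughly as follows; the transport URL is a placeholder and error handling is omitted:]

    import oslo_messaging
    from oslo_config import cfg


    class InstanceEventEndpoint(object):
        """React to instance lifecycle notifications from nova."""

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'compute.instance.delete.end':
                # Clean up per-instance resources here, Tatu-style.
                print('cleaning up after %s' % payload.get('instance_id'))


    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@controller:5672/')  # placeholder
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [InstanceEventEndpoint()])
    listener.start()
    listener.wait()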
>>>> >>>> Thanks, >>>> Michael >>>> >>>> 1: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Z >>>> i4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Juan Antonio Osorio R. > e-mail: jaosorior at gmail.com > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Mon Mar 19 02:04:28 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Mon, 19 Mar 2018 03:04:28 +0100 Subject: [openstack-dev] [tripleo] Ceph integration topics discussed at PTG Message-ID: Hi, I wanted to share a short summary of the discussions happened around the Ceph integration (in TripleO) at the PTG. In no particular order: - ceph-{container,ansible} branching together with John Fulton and Guillaume Abrioux (and after PTG, Sebastien Han) we put some thought into how to make the Ceph container images and ceph ansible releases fit better the OpenStack model; the container images and ceph-ansible are in fact loosely coupled (not all versions of the container images work with all versions of ceph-ansible) and we wanted to move from a "rolling release" into a "point release" approach, mainly to permit regular maintenance of the previous versions known to work with the previous OpenStack versions. 
The plan goes more or less as follows: 1) ceph-{container,ansible} should be released together with the regular ceph updates 2) ceph-container will start using tags and stable branches like ceph-ansible does The changes for the ceph/daemon docker images are visible already: https://hub.docker.com/r/ceph/daemon/tags/ - multiple Ceph clusters in the attempt to support better the "edge computing" use case, we discussed addin support for the deployment of multiple Ceph clusters in the overcloud together with John Fulton and Steven Hardy (and after PTG, Gregory Charot) we realized this could be done using multiple stacks and by doing so, hopefully simplify managament of the "cells" and avoid potential issues due to orchestration of large clusters much of this will build on Shardy's blueprint to split the control plane, see spec at: https://review.openstack.org/#/c/523459/ the multiple Ceph clusters specifics will be tracked via another blueprint: https://blueprints.launchpad.net/tripleo/+spec/deploy-multiple-ceph-clusters - ceph-ansible testing with TripleO we had a very good chat with John Fulton, Guillaume Abrioux, Wesley Hayutin and Javier Pena on how to get tested new pull requests for ceph-ansible with TripleO; basically trigger an existing TripleO scenario on changes proposed to ceph-ansible Given ceph-ansible is hosted on github, Wesley's and Javier suggested this should be possible with Zuul v3 and volunteered to help; some of the complications are about building an RPM from uncommitted changes for testing - move ceph-ansible triggering from workflow_tasks to external_deploy_tasks this is a requirement for the Rocky release; we want to migrate away from using workflow_tasks and use external_deploy_tasks instead, to integrate into the "config-download" mechanism this work is tracked via a blueprint and we have a WIP submission on review: https://blueprints.launchpad.net/tripleo/+spec/ceph-ansible-external-deploy-tasks We're also working with Sofer Athlan-Guyot on the enablement of Ceph in the upgrade CI jobs and with Tom Barron on scenario004 to deploy Manila with Ganesha (and CephFS) instead of the CephFS native backend. Hopefullt I didn't forget much; to stay updated on the progress check our integration squad status at: https://etherpad.openstack.org/p/tripleo-integration-squad-status Thanks -- Giulio Fidente GPG KEY: 08D733BA From 270162781 at qq.com Mon Mar 19 07:27:33 2018 From: 270162781 at qq.com (=?ISO-8859-1?B?MjcwMTYyNzgx?=) Date: Mon, 19 Mar 2018 15:27:33 +0800 Subject: [openstack-dev] [neutron] Bug deputy report Message-ID: Hi all, I'm zhaobo, I was the bug deputy for the last week and I'm afraid that cannot attending the comming upstream meeting so I'm sending out this report: Last week there are some high priority bugs for neutron . Also some bugs need to attention, I list them here: High priority --------------- [CI failure] https://bugs.launchpad.net/neutron/+bug/1756301 Tempset DVR HA miltimode tests fails as no FIP connectivity. I suspect that it may be a devstack bug. https://bugs.launchpad.net/neutron/+bug/1755243 In DVR HA scenarios, it will hit the AttributeError if depoly a LB on the network node, and Neutron depolys a DvrEdgeRouter on the network node for LB, when server call "router_update" to agent, agent side want to check whether the router is an ha router. Then hit the error as DvrEdgeRouter doesn't have a attribute named "ha_state". I found Brain and Swaminathan already work around with the bug reporter. 
Medium priority
-------------------
https://bugs.launchpad.net/neutron/+bug/1756406 The DVR MAC address format may be invalid for the non-native OpenFlow interface; Swaminathan is already working on it.

Need attention
------------------
https://bugs.launchpad.net/neutron/+bug/1754695 As per the bug description, the vxlan port on br-tun may go missing in a large-scale L3 DVR+HA environment, and the probability of failure seems high. I have no idea about this, so I need help from the L3 team experts before setting the priority.

https://bugs.launchpad.net/neutron/+bug/1755810 plugins.ml2.plugin.Ml2Plugin#_bind_port_if_needed There is a concurrency issue here: it can lead to a missing RPC notification, which in turn results in a port stuck in DOWN status following a live migration. The bug description has enough detail to explain what the problem is.

https://bugs.launchpad.net/neutron/+bug/1737917 In l2pop scenarios, the Linux bridge agent wants to RPC to the server with "update_port_down"; the server then calls list_router_ids_on_host on the l3plugin and raises an error, so the agent gets a remote RPC error. The process looks like the Neutron server trying to find an L3 agent on the compute node and failing. This bug was reported a long time ago, and the reporter says they still hit the same issue on Queens/master, so I think it is worth raising here.

https://bugs.launchpad.net/neutron/+bug/1755414 This sounds like an enhancement to the routing function, so it should be raised and discussed with our L3 experts to decide whether we need the "metric".

https://bugs.launchpad.net/neutron/+bug/1756064 Trunk bridge residue is left behind if nova-compute is restarted before the VM is removed in OVS-DPDK. As Armando said, it may be related to the DPDK process. Armando has marked it Incomplete for now, pending more description of the conditions that trigger the issue.

Thanks, Best Regards, ZhaoBo -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ponomarev at selectel.ru Mon Mar 19 09:38:56 2018
From: ponomarev at selectel.ru (Vadim Ponomarev)
Date: Mon, 19 Mar 2018 12:38:56 +0300
Subject: [openstack-dev] [neutron] Prevent ARP spoofing
In-Reply-To: References: Message-ID:

Hi, I agree that this is a problem. It's unclear how, after the removal of the prevent_arp_spoofing option, I can manage the ARP spoofing prevention mechanism. Example: I use security groups but I don't want ARP spoofing protection. How can I disable the protection?
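Today the only knob I can find is per resource. For a single port, a rough openstacksdk sketch (untested; the cloud name and port ID are placeholders) looks like this:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

    # Per-port workaround: drop the security groups (a port cannot keep
    # them with port security off), then disable port security, which
    # also removes the anti-ARP-spoofing rules for that port.
    conn.network.update_port('PORT_UUID',  # placeholder port ID
                             security_groups=[],
                             port_security_enabled=False)

But that has to be repeated for every port; there is no global default, which is the point of this thread.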
2018-03-14 10:26 GMT+03:00 Tatiana Kholkina :

> Sure, there is an ability to enable ARP spoofing for the port/network, but > it is impossible to make it enabled by default for all ports. > It looks a bit complicated to me and I think it would be better to have an > ability to set default port security via config file. > > Best regards, > Tatiana > > 2018-03-13 15:10 GMT+03:00 Claudiu Belu : > >> Hi, >> >> Indeed ARP spoofing is prevented by default, but AFAIK, if you want it >> enabled for a port / network, you can simply disable the security groups on >> that neutron network / port. >> >> Best regards, >> >> Claudiu Belu >> >> ------------------------------ >> *From:* Татьяна Холкина [holkina at selectel.ru] >> *Sent:* Tuesday, March 13, 2018 12:54 PM >> *To:* openstack-dev at lists.openstack.org >> *Subject:* [openstack-dev] [neutron] Prevent ARP spoofing >> >> Hi, >> I'm using an ocata release of OpenStack where the option >> prevent_arp_spoofing can be managed via conf. But later in pike it was >> removed and it was decided to prevent spoofing by default. >> There are cases where security features should be disabled. As I can see >> now we can use a port_security option for these cases. But this option >> should be set for a particular port or network on create. The default value >> is set to True [1] and it is impossible to change it. I'd like to >> suggest to get the default value for port_security [2] from a config option. >> It would be nice to know your opinion. >> >> [1] https://github.com/openstack/neutron-lib/blob/stable/queens/neutron_lib/api/definitions/port_security.py#L21 >> [2] https://github.com/openstack/neutron/blob/stable/queens/neutron/objects/extensions/port_security.py#L24 >> >> Best regards, >> Tatiana >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>

-- Best regards, Vadim Ponomarev Developer of network automation department at Selectel Ltd.

---- This message may contain confidential information that can't be distributed without the consent of the sender or the authorized person Selectel Ltd. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From kevin at benton.pub Mon Mar 19 09:53:29 2018
From: kevin at benton.pub (Kevin Benton)
Date: Mon, 19 Mar 2018 09:53:29 +0000
Subject: [openstack-dev] [neutron] Prevent ARP spoofing
In-Reply-To: References: Message-ID:

Do you need to spoof arbitrary addresses? If not (i.e. a set you know ahead of time), you can put entries in the allowed_address_pairs field of the port that will allow you to send traffic using other MAC/IPs.
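For example, for a keepalived VIP shared by two members, something along these lines should work with a recent openstacksdk (untested sketch; the VIP address and port IDs are placeholders):

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

    # Allow the shared VIP on both members' ports so the anti-spoofing
    # rules accept traffic sourced from it after a failover.
    for port_id in ('PORT_A_UUID', 'PORT_B_UUID'):  # placeholder IDs
        conn.network.update_port(
            port_id,
            allowed_address_pairs=[{'ip_address': '10.0.0.100'}])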
On Mar 19, 2018 8:42 PM, "Vadim Ponomarev" wrote:

> [snip]

__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ponomarev at selectel.ru Mon Mar 19 11:10:45 2018
From: ponomarev at selectel.ru (Vadim Ponomarev)
Date: Mon, 19 Mar 2018 14:10:45 +0300
Subject: [openstack-dev] [neutron] Prevent ARP spoofing
In-Reply-To: References: Message-ID:

Yes, there's really a need for mechanisms of high availability like corosync, vrrp etc. Another simple example: we have two servers in an active/standby HA configuration (for example keepalived + haproxy), and a third-party monitoring system for these servers. The monitoring system gets some load metrics, and when one of the servers becomes unavailable it scales the architecture (adds a new server to the cluster), in this way preserving the HA setup. In your case, this monitoring system must do the following steps: create the new instance, add the new instance's MAC address to allowed_address_pairs, and only after that reconfigure all the other nodes. Otherwise the cluster will not work. The simple solution used to be to disable the ARP spoofing prevention mechanism.

OK, we could use the port_security option for the network with the HA cluster. In that case we must reconfigure our monitoring systems, create allowed_address_pairs for all current servers, and (hardest of all) train our users in how that is done.

Currently, we don't use ARP spoofing prevention (prevent_arp_spoofing = False), and honestly I don't understand why this option is enabled by default in private networks. Each such network belongs to one user, who controls all instances. Who would decide to perform a MITM attack in his own network?
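For illustration, the scale-out hook our monitoring system would need might look roughly like this with openstacksdk (untested sketch; all names and IDs are made up):

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name

    def scale_out(image_id, flavor_id, network_id, peer_port_ids):
        # 1. Boot the replacement backend.
        server = conn.compute.create_server(
            name='ha-backend-new', image_id=image_id,
            flavor_id=flavor_id, networks=[{'uuid': network_id}])
        server = conn.compute.wait_for_server(server)

        # 2. Allow the new member's address on every existing peer,
        #    otherwise its VRRP/ARP traffic is dropped.
        new_port = next(conn.network.ports(device_id=server.id))
        pair = {'ip_address': new_port.fixed_ips[0]['ip_address'],
                'mac_address': new_port.mac_address}
        for port_id in peer_port_ids:
            port = conn.network.get_port(port_id)
            pairs = (port.allowed_address_pairs or []) + [pair]
            conn.network.update_port(port, allowed_address_pairs=pairs)

        # 3. Only now can keepalived/haproxy on the peers be reconfigured.

That is a lot of extra orchestration compared to simply turning the protection off.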
2018-03-19 12:53 GMT+03:00 Kevin Benton :

> Do you need to spoof arbitrary addresses? If not (i.e. a set you know > ahead of time), you can put entries in the allowed_address_pairs field of > the port that will allow you to send traffic using other MAC/IPs. > > [snip] > > -- > Best regards, > Vadim Ponomarev > Developer of network automation department at Selectel Ltd. > > ---- > This message may contain confidential information that can't be > distributed without the consent of the sender or the authorized person Selectel > Ltd.
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best regards, Vadim Ponomarev Developer of network automation department at Selectel Ltd. ---- This message may contain confidential information that can't be distributed without the consent of the sender or the authorized person Selectel Ltd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Mon Mar 19 11:25:02 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 19 Mar 2018 11:25:02 +0000 Subject: [openstack-dev] [requirements][neutron] Depending on pypi versions Message-ID: Hi, The change https://github.com/openstack/requirements/commit/35653e8c5044bff1059ddddf50a82b6065176eea has created some issues with decomposed neutron plugins. Let me try and give an example to explain. Say for example a patch landed in neutron master that exposed feature X. Now if a decomposed plugin wants to consume this it will fail as the decomposed plugin will pull in the latest tag. Does this mean for each new feature that we wish to consume we will need to update neutron tags? Please advise. Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Mon Mar 19 11:45:39 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 19 Mar 2018 12:45:39 +0100 Subject: [openstack-dev] [requirements][neutron] Depending on pypi versions In-Reply-To: References: Message-ID: <2f024e5c-7826-4faa-303b-248c7dddd83d@suse.com> On 2018-03-19 12:25, Gary Kotton wrote: > Hi, > > The change > https://github.com/openstack/requirements/commit/35653e8c5044bff1059ddddf50a82b6065176eea > has created some issues with decomposed neutron plugins. Let me try and > give an example to explain. > > Say for example a patch landed in neutron master that exposed feature X. > Now if a decomposed plugin wants to consume this it will fail as the > decomposed plugin will pull in the latest tag. It will pull in git master if your job lists openstack/neutron in required-projects. It will pull in the latest tag if it's not listed. Most repos are already setup with required-projects set to this - but please double check. > Does this mean for each new feature that we wish to consume we will need > to update neutron tags? No, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From thomas.morin at orange.com Mon Mar 19 11:46:30 2018
From: thomas.morin at orange.com (Thomas Morin)
Date: Mon, 19 Mar 2018 12:46:30 +0100
Subject: Re: [openstack-dev] [requirements][neutron] Depending on pypi versions
In-Reply-To: References: Message-ID:

Hi Gary,

See http://lists.openstack.org/pipermail/openstack-dev/2018-March/128311.html

> Note that thanks to the tox-siblings feature, we really continue to > install neutron and horizon from git - and not use the versions in > the global-requirements constraints file,

Gary Kotton, 2018-03-19 11:25: > Hi, > The change https://github.com/openstack/requirements/commit/35653e8c5044bff1059ddddf50a82b6065176eea > has created some issues with decomposed neutron plugins. Let me try > and give an example to explain. > Say for example a patch landed in neutron master that exposed feature > X. Now if a decomposed plugin wants to consume this it will fail as > the decomposed plugin will pull in the latest tag. > Does this mean for each new feature that we wish to consume we will > need to update neutron tags? > Please advise. > Thanks > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think it is worth adding a comment in (test-)requirements.txt that the OpenStack CI is overriding the version and uses git.

-Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL:

From aakashkt0 at gmail.com Mon Mar 19 12:07:19 2018
From: aakashkt0 at gmail.com (Aakash Kt)
Date: Mon, 19 Mar 2018 17:37:19 +0530
Subject: Re: [openstack-dev] [openstack][charms] Openstack + OVN
In-Reply-To: References: Message-ID:

Hi James, Thank you for the previous code review. I have pushed another patch. Also, I do not know how to reply to your review comments on gerrit, so I will reply to them here. About the signed-off message: I did not know that it wasn't a requirement for OpenStack; I assumed it was. I have removed it from the updated patch. Thank you, Aakash

On Thu, Mar 15, 2018 at 11:34 AM, Aakash Kt wrote: > Hi James, > > Just a small reminder that I have pushed a patch for review, according to > the changes you suggested :-) > > Thanks, > Aakash > > On Mon, Mar 12, 2018 at 2:38 PM, James Page wrote: >> Hi Aakash >> >> On Sun, 11 Mar 2018 at 19:01 Aakash Kt wrote: >>> Hi, >>> >>> I had previously put in a mail about the development of the openstack-ovn >>> charm. Sorry it took me this long to get back, I was involved in other >>> projects. >>> >>> I have submitted a charm spec for the above charm. >>> Here is the review link: https://review.openstack.org/#/c/551800/ >>> >>> Please look into it and we can further discuss how to proceed. >> >> I'll feedback directly on the review. >> >> Thanks! >> >> James >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From zhipengh512 at gmail.com Mon Mar 19 12:44:28 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Mon, 19 Mar 2018 20:44:28 +0800
Subject: Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
In-Reply-To: <9cab5a35-372b-1a20-6def-51b4e8e15fbe at intel.com> References: Message-ID:

Hi Sundar, I think the two points you raised are valid; please also reflect them in the spec you are helping draft :)

On Mon, Mar 19, 2018 at 12:34 AM, Nadathur, Sundar < sundar.nadathur at intel.com> wrote: > Sorry for the delayed response. I broadly agree with previous replies. > For the concerns about the impact of the Cyborg weigher on scheduling > performance, there are some options (apart from filtering candidates as > much as possible in Placement): > * Handle hosts in bulk by extending BaseWeigher and overriding > weigh_objects(), instead of handling one host at a time. > * If we have to handle one host at a time for whatever reason, since the > weigher is maintained by Cyborg, it could directly query the Cyborg DB rather > than go through the Cyborg REST API. This will be not unlike other weighers. > > Given these and other possible optimizations, it may be too soon to worry > about the performance impact. > > I am working on a spec that will capture the flow discussed at the PTG. I > will try to address these aspects as well. > > Thanks & Regards, > Sundar > > On 3/8/2018 4:53 AM, Zhipeng Huang wrote: > > @jay I'm also against a weigher in nova/placement. This should be an > optional step that depends on the vendor implementation, not a default one. > > @Alex I think we should explore the idea of preferred traits. > > @Mathew: Like Sean said, Cyborg wants to support both reprogrammable FPGAs > and pre-programmed ones. > Therefore it is correct that, in your description, the programming > operation should be a call from Nova to Cyborg, and Cyborg will complete > the operation while Nova waits. The only problem is that the weigher step > should be an optional one. > > On Wed, Mar 7, 2018 at 9:21 PM, Jay Pipes wrote: > >> On 03/06/2018 09:36 PM, Alex Xu wrote: >> >>> 2018-03-07 10:21 GMT+08:00 Alex Xu : >>> >>> 2018-03-06 22:45 GMT+08:00 Mooney, Sean K : >>> >>> From: Matthew Booth [mailto:mbooth at redhat.com] >>> Sent: Saturday, March 3, 2018 4:15 PM >>> To: OpenStack Development Mailing List (not for usage questions) >>> Subject: Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions >>> >>> On 2 March 2018 at 14:31, Jay Pipes wrote: >>> >>> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote: >>> >>> Hello Nova team, >>> >>> During the Cyborg discussion at the Rocky PTG, we >>> proposed a flow for FPGAs wherein the request spec asks >>> for a device type as a resource class, and optionally a >>> function (such as encryption) in the extra specs. This >>> does not seem to work well for the usage model that I'll >>> describe below. >>> >>> An FPGA device may implement more than one function. For >>> example, it may implement both compression and >>> encryption. Say a cluster has 10 devices of device type >>> X, and each of them is programmed to offer 2 instances >>> of function A and 4 instances of function B. More
More >>> specifically, the device may implement 6 PCI functions, >>> with 2 of them tied to function A, and the other 4 tied >>> to function B. So, we could have 6 separate instances >>> accessing functions on the same device.____ >>> >>> __ __ >>> >>> Does this imply that Cyborg can't reprogram the FPGA at all?____ >>> >>> */[Mooney, Sean K] cyborg is intended to support fixed function >>> acclerators also so it will not always be able to program the >>> accelerator. In this case where an fpga is preprogramed with a >>> multi function bitstream that is statically provisioned cyborge >>> will not be able to reprogram the slot if any of the fuctions >>> from that slot are already allocated to an instance. In this >>> case it will have to treat it like a fixed function device and >>> simply allocate a unused vf of the corret type if available. >>> ____/* >>> >>> >>> ____ >>> >>> >>> In the current flow, the device type X is modeled as a >>> resource class, so Placement will count how many of them >>> are in use. A flavor for ‘RC device-type-X + function A’ >>> will consume one instance of the RC device-type-X. But >>> this is not right because this precludes other functions >>> on the same device instance from getting used. >>> >>> One way to solve this is to declare functions A and B as >>> resource classes themselves and have the flavor request >>> the function RC. Placement will then correctly count the >>> function instances. However, there is still a problem: >>> if the requested function A is not available, Placement >>> will return an empty list of RPs, but we need some way >>> to reprogram some device to create an instance of >>> function A.____ >>> >>> >>> Clearly, nova is not going to be reprogramming devices with >>> an instance of a particular function. >>> >>> Cyborg might need to have a separate agent that listens to >>> the nova notifications queue and upon seeing an event that >>> indicates a failed build due to lack of resources, then >>> Cyborg can try and reprogram a device and then try >>> rebuilding the original request.____ >>> >>> __ __ >>> >>> It was my understanding from that discussion that we intend to >>> insert Cyborg into the spawn workflow for device configuration >>> in the same way that we currently insert resources provided by >>> Cinder and Neutron. So while Nova won't be reprogramming a >>> device, it will be calling out to Cyborg to reprogram a device, >>> and waiting while that happens.____ >>> >>> My understanding is (and I concede some areas are a little >>> hazy):____ >>> >>> * The flavors says device type X with function Y____ >>> >>> * Placement tells us everywhere with device type X____ >>> >>> * A weigher orders these by devices which already have an >>> available function Y (where is this metadata stored?)____ >>> >>> * Nova schedules to host Z____ >>> >>> * Nova host Z asks cyborg for a local function Y and blocks____ >>> >>> * Cyborg hopefully returns function Y which is already >>> available____ >>> >>> * If not, Cyborg reprograms a function Y, then returns it____ >>> >>> Can anybody correct me/fill in the gaps?____ >>> >>> */[Mooney, Sean K] that correlates closely to my recollection >>> also. As for the metadata I think the weigher may need to call >>> to cyborg to retrieve this as it will not be available in the >>> host state object./* >>> >>> Is it the nova scheduler weigher or we want to support weigh on >>> placement? Function is traits as I think, so can we have >>> preferred_traits? 
I remember we talk about that parameter in the >>> past, but we don't have good use-case at that time. This is good >>> use-case. >>> >>> >>> If we call the Cyborg from the nova scheduler weigher, that will slow >>> down the scheduling a lot also. >>> >> >> Right, which is why I don't want to do any weighing in Placement at all. >> If folks want to sort by things that require long-running code/callbacks or >> silly temporal things like metrics, they can do that in a custom weigher in >> the nova-scheduler and take the performance hit there. >> >> Best, >> -jay >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Mon Mar 19 13:01:32 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 19 Mar 2018 13:01:32 +0000 Subject: [openstack-dev] [requirements][neutron] Depending on pypi versions In-Reply-To: References: Message-ID: The issue is that since the change below - https://review.openstack.org/553045 - we are not picking up the latest neutron master code. From: Thomas Morin Organization: Orange S.A. Reply-To: OpenStack List Date: Monday, March 19, 2018 at 1:46 PM To: OpenStack List Subject: Re: [openstack-dev] [requirements][neutron] Depending on pypi versions Hi Gary, See http://lists.openstack.org/pipermail/openstack-dev/2018-March/128311.html > Note that thanks to the tox-siblings feature, we really continue to > install neutron and horizon from git - and not use the versions in > the global-requirements constraints file, I think it is worth adding a comment in (test-)requirements.txt that the OpenStack CI is overriding the version and uses git. 
-- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL:

From gkotton at vmware.com Mon Mar 19 13:01:32 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Mon, 19 Mar 2018 13:01:32 +0000
Subject: Re: [openstack-dev] [requirements][neutron] Depending on pypi versions
In-Reply-To: References: Message-ID:

The issue is that since the change below - https://review.openstack.org/553045 - we are not picking up the latest neutron master code.

From: Thomas Morin
Organization: Orange S.A.
Reply-To: OpenStack List
Date: Monday, March 19, 2018 at 1:46 PM
To: OpenStack List
Subject: Re: [openstack-dev] [requirements][neutron] Depending on pypi versions

Hi Gary,

See http://lists.openstack.org/pipermail/openstack-dev/2018-March/128311.html

> Note that thanks to the tox-siblings feature, we really continue to > install neutron and horizon from git - and not use the versions in > the global-requirements constraints file,

I think it is worth adding a comment in (test-)requirements.txt that the OpenStack CI is overriding the version and uses git.

-Thomas

Gary Kotton, 2018-03-19 11:25: Hi, The change https://github.com/openstack/requirements/commit/35653e8c5044bff1059ddddf50a82b6065176eea has created some issues with decomposed neutron plugins. Let me try and give an example to explain. Say for example a patch landed in neutron master that exposed feature X. Now if a decomposed plugin wants to consume this it will fail as the decomposed plugin will pull in the latest tag. Does this mean for each new feature that we wish to consume we will need to update neutron tags? Please advise. Thanks Gary __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL:

From aj at suse.com Mon Mar 19 13:15:14 2018
From: aj at suse.com (Andreas Jaeger)
Date: Mon, 19 Mar 2018 14:15:14 +0100
Subject: Re: [openstack-dev] [requirements][neutron] Depending on pypi versions
In-Reply-To: References: Message-ID:

On 2018-03-19 14:01, Gary Kotton wrote: > The issue is that since the change below - > https://review.openstack.org/553045 - we are not picking up the latest > neutron master code.

I see neutron installed: http://logs.openstack.org/45/553045/1/gate/openstack-tox-pep8/5347981/job-output.txt.gz#_2018-03-15_13_58_12_552258

But later it's downgraded - like it's done for the other requirements you have. So, this uncovered a problem in your install.

pip install -U breaks it, please double check that this does the right thing: https://review.openstack.org/554222

Andreas

> > *From: *Thomas Morin > *Organization: *Orange S.A. > *Reply-To: *OpenStack List > *Date: *Monday, March 19, 2018 at 1:46 PM > *To: *OpenStack List > *Subject: *Re: [openstack-dev] [requirements][neutron] Depending on pypi > versions > > [snip] > > Gary Kotton, 2018-03-19 11:25: > > Hi, > > The change > https://github.com/openstack/requirements/commit/35653e8c5044bff1059ddddf50a82b6065176eea > has created some issues with decomposed neutron plugins. Let me try > and give an example to explain. > > Say for example a patch landed in neutron master that exposed > feature X. Now if a decomposed plugin wants to consume this it will > fail as the decomposed plugin will pull in the latest tag. > > Does this mean for each new feature that we wish to consume we will > need to update neutron tags? > > Please advise.
> > Thanks > > Gary > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org > ?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From gkotton at vmware.com Mon Mar 19 13:18:05 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 19 Mar 2018 13:18:05 +0000 Subject: [openstack-dev] [requirements][neutron] Depending on pypi versions In-Reply-To: References: Message-ID: Thanks!! Will check if it now On 3/19/18, 3:15 PM, "Andreas Jaeger" wrote: On 2018-03-19 14:01, Gary Kotton wrote: > The issue is that since the change below - > https://review.openstack.org/553045 - we are not picking up the latest > neutron master code. I see neutron installed: http://logs.openstack.org/45/553045/1/gate/openstack-tox-pep8/5347981/job-output.txt.gz#_2018-03-15_13_58_12_552258 But later it's downgraded - like it's done for the other requirements you have. So, this uncovered a problem in your install. pip install -U breaks it, please double check that this does the right thing: https://review.openstack.org/554222 Andreas > > > *From: *Thomas Morin > *Organization: *Orange S.A. > *Reply-To: *OpenStack List > *Date: *Monday, March 19, 2018 at 1:46 PM > *To: *OpenStack List > *Subject: *Re: [openstack-dev] [requirements][neutron] Depending on pypi > versions > > > > Hi Gary, > > > > See > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128311.html > > > >> Note that thanks to the tox-siblings feature, we really continue to > >> install neutron and horizon from git - and not use the versions in > >> the global-requirements constraints file, > > > > I think it is worth adding a comment in (test-)requirements.txt that the > OpenStack CI is overriding the version and uses git. > > > > -Thomas > > > > > > Gary Kotton, 2018-03-19 11:25: > > Hi, > > The change > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openstack_requirements_commit_35653e8c5044bff1059ddddf50a82b6065176eea&d=DwIGaQ&c=uilaK90D4TOVoH58JNXRgQ&r=PMrZQUSXojEgJQPh7cZrz1Lvja0OwAstg0U82FalZrw&m=438_Ni4AJ-nteg3SAstWzssFEcYN8k5BoSslcDtkSEU&s=nkowIZ6GVGMJAfMtR4XliFZhODdtly2gsBTr8k9wFFY&e= > has created some issues with decomposed neutron plugins. Let me try > and give an example to explain. > > Say for example a patch landed in neutron master that exposed > feature X. Now if a decomposed plugin wants to consume this it will > fail as the decomposed plugin will pull in the latest tag. > > Does this mean for each new feature that we wish to consume we will > need to update neutron tags? > > Please advise. 
> > Thanks > > Gary > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org > ?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kevin at benton.pub Mon Mar 19 13:24:20 2018 From: kevin at benton.pub (Kevin Benton) Date: Mon, 19 Mar 2018 13:24:20 +0000 Subject: [openstack-dev] [neutron] Prevent ARP spoofing In-Reply-To: References: Message-ID: Disabling ARP spoofing protection alone will not let the standby instance source traffic using the active instance's IP. IP filtering rules independent of ARP enforcement rules ensure the source IP is in the fixed_ips or allowed_address_pairs. Are you already using allowed address pairs to add the shared IP to both? On Mon, Mar 19, 2018, 22:13 Vadim Ponomarev wrote: > Yes, there's really a need for mechanisms of high availability like > corosync, vrrp etc. Another simple example: we have two servers with the > active/standby HA configuration (for example keepalived + haproxy) and we > have third-party monitoring system for these servers. The monitoring system > gets some load metrics and when one of the servers is unavailable, the > monitoring system scales architecture (adds new server to cluster) in this > way saving the HA architecture. In your case, this monitoring system must > do the following steps: create new instance, add new instance's MAC address > to allowed_address_pairs and only after that reconfigure all other nodes. > Otherwise cluster will not work. The solution to the problem is simple - > disable the prevent ARP spoofing mechnism. > > Ok, we may used port_security options for this network with the HA > cluster. For this case we must reconfigure our monitoring systems, create > allowed_address_pairs for all current servers and (it's hardest) train our > users how that done. > > Currently, we don't use the prevent ARP spoofing option > (prevent_arp_spoofing = False) and honestly I don't understand why this > option is enabled as default in private networks. Each such network belongs > to one user, who controls all instances. Who would decide to perform a MITM > attack in his own network? > > 2018-03-19 12:53 GMT+03:00 Kevin Benton : > >> Do you need to spoof arbitrary addresses? If not (i.e. a set you know >> ahead of time), you can put entries in the allowed_address_pairs field of >> the port that will allow you to send traffic using other MAC/IPs. >> >> On Mar 19, 2018 8:42 PM, "Vadim Ponomarev" wrote: >> >> Hi, >> >> I support, that is a problem. 
>> [snip] >> >> -- >> Best regards, >> Vadim Ponomarev >> Developer of network automation department at Selectel Ltd. >> >> ---- >> This message may contain confidential information that can't be >> distributed without the consent of the sender or the authorized person Selectel >> Ltd.
>> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Best regards, > Vadim Ponomarev > Developer of network automation department at Selectel Ltd. > > ---- > This message may contain confidential information that can't be > distributed without the consent of the sender or the authorized person Selectel > Ltd. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Mon Mar 19 13:24:52 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 19 Mar 2018 13:24:52 +0000 Subject: [openstack-dev] [requirements][neutron] Depending on pypi versions In-Reply-To: References: Message-ID: <77EA68C3-7570-46E8-BC56-99B692BF55D8@vmware.com> The change will need to be in all of the decomposed projects - http://codesearch.openstack.org/?q=install_cmd%20-U&i=nope&files=&repos= Good catch! On 3/19/18, 3:18 PM, "Gary Kotton" wrote: Thanks!! Will check if it now On 3/19/18, 3:15 PM, "Andreas Jaeger" wrote: On 2018-03-19 14:01, Gary Kotton wrote: > The issue is that since the change below - > https://review.openstack.org/553045 - we are not picking up the latest > neutron master code. I see neutron installed: http://logs.openstack.org/45/553045/1/gate/openstack-tox-pep8/5347981/job-output.txt.gz#_2018-03-15_13_58_12_552258 But later it's downgraded - like it's done for the other requirements you have. So, this uncovered a problem in your install. pip install -U breaks it, please double check that this does the right thing: https://review.openstack.org/554222 Andreas > > > *From: *Thomas Morin > *Organization: *Orange S.A. > *Reply-To: *OpenStack List > *Date: *Monday, March 19, 2018 at 1:46 PM > *To: *OpenStack List > *Subject: *Re: [openstack-dev] [requirements][neutron] Depending on pypi > versions > > > > Hi Gary, > > > > See > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128311.html > > > >> Note that thanks to the tox-siblings feature, we really continue to > >> install neutron and horizon from git - and not use the versions in > >> the global-requirements constraints file, > > > > I think it is worth adding a comment in (test-)requirements.txt that the > OpenStack CI is overriding the version and uses git. > > > > -Thomas > > > > > > Gary Kotton, 2018-03-19 11:25: > > Hi, > > The change > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openstack_requirements_commit_35653e8c5044bff1059ddddf50a82b6065176eea&d=DwIGaQ&c=uilaK90D4TOVoH58JNXRgQ&r=PMrZQUSXojEgJQPh7cZrz1Lvja0OwAstg0U82FalZrw&m=438_Ni4AJ-nteg3SAstWzssFEcYN8k5BoSslcDtkSEU&s=nkowIZ6GVGMJAfMtR4XliFZhODdtly2gsBTr8k9wFFY&e= > has created some issues with decomposed neutron plugins. Let me try > and give an example to explain. 
> > Say for example a patch landed in neutron master that exposed > feature X. Now if a decomposed plugin wants to consume this it will > fail as the decomposed plugin will pull in the latest tag. > > Does this mean for each new feature that we wish to consume we will > need to update neutron tags? > > Please advise. > > Thanks > > Gary > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org > ?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aj at suse.com Mon Mar 19 13:33:41 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 19 Mar 2018 14:33:41 +0100 Subject: [openstack-dev] [requirements][neutron] Depending on pypi versions In-Reply-To: <77EA68C3-7570-46E8-BC56-99B692BF55D8@vmware.com> References: <77EA68C3-7570-46E8-BC56-99B692BF55D8@vmware.com> Message-ID: <3a976ef7-229c-1677-b06b-2ae430076cd0@suse.com> On 2018-03-19 14:24, Gary Kotton wrote: > The change will need to be in all of the decomposed projects - http://codesearch.openstack.org/?q=install_cmd%20-U&i=nope&files=&repos= > Good catch! Will you do those, please? (once confirmed that this is the right change), Andreas > > On 3/19/18, 3:18 PM, "Gary Kotton" wrote: > > Thanks!! Will check if it now > > On 3/19/18, 3:15 PM, "Andreas Jaeger" wrote: > > On 2018-03-19 14:01, Gary Kotton wrote: > > The issue is that since the change below - > > https://review.openstack.org/553045 - we are not picking up the latest > > neutron master code. > > I see neutron installed: > > http://logs.openstack.org/45/553045/1/gate/openstack-tox-pep8/5347981/job-output.txt.gz#_2018-03-15_13_58_12_552258 > > But later it's downgraded - like it's done for the other requirements > you have. So, this uncovered a problem in your install. > > pip install -U breaks it, please double check that this does the right > thing: > > https://review.openstack.org/554222 > > Andreas > > > > > > > *From: *Thomas Morin > > *Organization: *Orange S.A. 
> > *Reply-To: *OpenStack List > > *Date: *Monday, March 19, 2018 at 1:46 PM > > *To: *OpenStack List > > *Subject: *Re: [openstack-dev] [requirements][neutron] Depending on pypi > > versions > > > > > > > > Hi Gary, > > > > > > > > See > > > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128311.html > > > > > > > >> Note that thanks to the tox-siblings feature, we really continue to > > > >> install neutron and horizon from git - and not use the versions in > > > >> the global-requirements constraints file, > > > > > > > > I think it is worth adding a comment in (test-)requirements.txt that the > > OpenStack CI is overriding the version and uses git. > > > > > > > > -Thomas > > > > > > > > > > > > Gary Kotton, 2018-03-19 11:25: > > > > Hi, > > > > The change > > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_openstack_requirements_commit_35653e8c5044bff1059ddddf50a82b6065176eea&d=DwIGaQ&c=uilaK90D4TOVoH58JNXRgQ&r=PMrZQUSXojEgJQPh7cZrz1Lvja0OwAstg0U82FalZrw&m=438_Ni4AJ-nteg3SAstWzssFEcYN8k5BoSslcDtkSEU&s=nkowIZ6GVGMJAfMtR4XliFZhODdtly2gsBTr8k9wFFY&e= > > has created some issues with decomposed neutron plugins. Let me try > > and give an example to explain. > > > > Say for example a patch landed in neutron master that exposed > > feature X. Now if a decomposed plugin wants to consume this it will > > fail as the decomposed plugin will pull in the latest tag. > > > > Does this mean for each new feature that we wish to consume we will > > need to update neutron tags? > > > > Please advise. > > > > Thanks > > > > Gary > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org > > ?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 
5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From jim at jimrollenhagen.com Mon Mar 19 14:22:26 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 19 Mar 2018 14:22:26 +0000 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> Message-ID: On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann wrote: > Both of those are good ideas. > Agree. I like the socket idea a bit more as I can imagine some operators don't want config file changes automatically applied. Do we want to choose one to standardize on or allow each project (or operators, via config) the choice? I believe adding those things to oslo.service would make them available to > all applications. > Not necessarily - this discussion started when the Keystone team was discussing how to implement this, given that keystone doesn't use oslo.service. That said, it should be easy to implement in services that don't want this dependency, so +1. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From ponomarev at selectel.ru Mon Mar 19 14:34:09 2018 From: ponomarev at selectel.ru (Vadim Ponomarev) Date: Mon, 19 Mar 2018 17:34:09 +0300 Subject: [openstack-dev] [neutron] Prevent ARP spoofing In-Reply-To: References: Message-ID: If I understood correctly, you talk about rules which are generated by security_group extension as default from the fixed_ips + allowed_address_pairs list. In our openstack installation we disabled the security_group and the allowed_address_pairs extensions to simplify the configuration the HA clusters. Currently we configure the neutron as follows: 1. prevent_arp_spoofing = False 2. disable security_group extension 3. disable allowed_address_pairs extension Actually, if port_security will be like a "central regulator" which controll all mechanisms, it's perfectly in our case. But, we will lose flexibility, because we can't changed default value for this option. And, even if we disable the port_security extension in the neutron, the prevent ARP-spoofing mechanism will work as default [1]. It's very important question, how do we may disable globally the prevent ARP spoofing in the Pike release? To create all networks without specifying an option port_security_enabled=False. Changes that were proposed by Tatiana, just let save current flexability. [1] https://github.com/openstack/neutron/blob/stable/pike/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L907 2018-03-19 16:24 GMT+03:00 Kevin Benton : > Disabling ARP spoofing protection alone will not let the standby instance > source traffic using the active instance's IP. IP filtering rules > independent of ARP enforcement rules ensure the source IP is in the > fixed_ips or allowed_address_pairs. > > Are you already using allowed address pairs to add the shared IP to both? > > On Mon, Mar 19, 2018, 22:13 Vadim Ponomarev wrote: > >> Yes, there's really a need for mechanisms of high availability like >> corosync, vrrp etc. Another simple example: we have two servers with the >> active/standby HA configuration (for example keepalived + haproxy) and we >> have third-party monitoring system for these servers. 
The monitoring system >> gets some load metrics and when one of the servers is unavailable, the >> monitoring system scales the architecture (adds a new server to the >> cluster), in this way preserving the HA architecture. In your case, this monitoring system must >> do the following steps: create a new instance, add the new instance's MAC address >> to allowed_address_pairs and only after that reconfigure all other nodes. >> Otherwise the cluster will not work. The solution to the problem is simple - >> disable the ARP spoofing prevention mechanism. >> >> Ok, we could use the port_security option for this network with the HA >> cluster. In this case we must reconfigure our monitoring systems, create >> allowed_address_pairs for all current servers and (hardest of all) train our >> users in how that is done. >> >> Currently, we don't use ARP spoofing prevention >> (prevent_arp_spoofing = False) and honestly I don't understand why this >> protection is enabled by default in private networks. Each such network belongs >> to one user, who controls all instances. Who would decide to perform a MITM >> attack in his own network? >> >> 2018-03-19 12:53 GMT+03:00 Kevin Benton : >> >>> Do you need to spoof arbitrary addresses? If not (i.e. a set you know >>> ahead of time), you can put entries in the allowed_address_pairs field of >>> the port that will allow you to send traffic using other MAC/IPs. >>> >>> On Mar 19, 2018 8:42 PM, "Vadim Ponomarev" >>> wrote: >>> >>> Hi, >>> >>> I agree, that is a problem. It's unclear how, after the removal of the >>> prevent_arp_spoofing option, I can manage the ARP spoofing prevention >>> mechanism. Example: I use security groups but I don't want to use ARP >>> spoofing protection. How can I disable the protection? >>> >>> 2018-03-14 10:26 GMT+03:00 Tatiana Kholkina : >>> >>>> Sure, there is an ability to allow ARP spoofing for the port/network, >>>> but it is impossible to make it enabled by default for all ports. >>>> It looks a bit complicated to me and I think it would be better to have >>>> an ability to set the default port security via config file. >>>> >>>> Best regards, >>>> Tatiana >>>> >>>> 2018-03-13 15:10 GMT+03:00 Claudiu Belu : >>>> >>>>> Hi, >>>>> >>>>> Indeed ARP spoofing is prevented by default, but AFAIK, if you want it >>>>> enabled for a port / network, you can simply disable the security groups on >>>>> that neutron network / port. >>>>> >>>>> Best regards, >>>>> >>>>> Claudiu Belu >>>>> >>>>> ------------------------------ >>>>> *From:* Татьяна Холкина [holkina at selectel.ru] >>>>> *Sent:* Tuesday, March 13, 2018 12:54 PM >>>>> *To:* openstack-dev at lists.openstack.org >>>>> *Subject:* [openstack-dev] [neutron] Prevent ARP spoofing >>>>> >>>>> Hi, >>>>> I'm using an ocata release of OpenStack where the option >>>>> prevent_arp_spoofing can be managed via conf. But later in pike it was >>>>> removed and it was decided to prevent spoofing by default. >>>>> There are cases where security features should be disabled. As I can >>>>> see now we can use a port_security option for these cases. But this option >>>>> should be set for a particular port or network on create. The default value >>>>> is set to True [1] and it is impossible to change it. I'd like to >>>>> suggest getting the default value for port_security [2] from a config option. >>>>> It would be nice to know your opinion.
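A minimal sketch of what such a config-driven default could look like with oslo.config (the option name and its placement are assumptions for illustration, not merged neutron code):

    from oslo_config import cfg

    # Hypothetical deployment-wide default for port_security_enabled,
    # applied when a create request does not set the field explicitly.
    port_security_opts = [
        cfg.BoolOpt('port_security_enabled_default',
                    default=True,
                    help='Default value of port_security_enabled for newly '
                         'created networks and ports.'),
    ]

    cfg.CONF.register_opts(port_security_opts)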
>>>>> >>>>> [1] https://github.com/openstack/neutron-lib/blob/ >>>>> stable/queens/neutron_lib/api/definitions/port_security.py#L21 >>>>> [2] https://github.com/openstack/neutron/blob/stable/ >>>>> queens/neutron/objects/extensions/port_security.py#L24 >>>>> >>>>> Best regards, >>>>> Tatiana >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>> unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> >>> -- >>> Best regards, >>> Vadim Ponomarev >>> Developer of network automation department at Selectel Ltd. >>> >>> ---- >>> This message may contain confidential information that can't be >>> distributed without the consent of the sender or the authorized person Selectel >>> Ltd. >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Best regards, >> Vadim Ponomarev >> Developer of network automation department at Selectel Ltd. >> >> ---- >> This message may contain confidential information that can't be >> distributed without the consent of the sender or the authorized person Selectel >> Ltd. >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best regards, Vadim Ponomarev Developer of network automation department at Selectel Ltd. ---- This message may contain confidential information that can't be distributed without the consent of the sender or the authorized person Selectel Ltd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From no-reply at openstack.org Mon Mar 19 14:49:42 2018 From: no-reply at openstack.org (no-reply at openstack.org) Date: Mon, 19 Mar 2018 14:49:42 -0000 Subject: [openstack-dev] [kolla] kolla-ansible 6.0.0.0rc2 (queens) Message-ID: Hello everyone, A new release candidate for kolla-ansible for the end of the Queens cycle is available! 
You can find the source code tarball at: https://tarballs.openstack.org/kolla-ansible/ Unless release-critical issues are found that warrant a release candidate respin, this candidate will be formally released as the final Queens release. You are therefore strongly encouraged to test and validate this tarball! Alternatively, you can directly test the stable/queens release branch at: http://git.openstack.org/cgit/openstack/kolla-ansible/log/?h=stable/queens Release notes for kolla-ansible can be found at: http://docs.openstack.org/releasenotes/kolla-ansible/ From bodenvmw at gmail.com Mon Mar 19 14:53:45 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Mon, 19 Mar 2018 08:53:45 -0600 Subject: [openstack-dev] [requirements][neutron] Depending on pypi versions In-Reply-To: References: Message-ID: <49642eed-50a3-72af-a497-f8b3ef8350a1@gmail.com> > On 3/19/18 7:15 AM, Andreas Jaeger wrote: > > pip install -U breaks it, please double check that this does the right > thing: > > https://review.openstack.org/554222 I'm not yet convinced the pip -U is the only factor here. When I run with 554222 in my local env I still get a back-leveled neutron, but maybe I'm doing something wrong. I'm testing with: https://review.openstack.org/#/c/554245/ From jim at jimrollenhagen.com Mon Mar 19 14:57:58 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 19 Mar 2018 14:57:58 +0000 Subject: [openstack-dev] Adding "not docs" banner to specs website? Message-ID: Ironic (and surely other projects) have had to point out many times that specs are a point in time design discussion, and not completed documentation. It's obviously too much work to go back and update specs constantly. What do folks think about a banner at the top of the specs website (or each individual spec) that points this out? I'm happy to do the work if we agree it's a good thing to do. My suggested wording: "NOTE: specifications are a point-in-time design reference, not up-to-date feature documentation." // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From isaku.yamahata at gmail.com Mon Mar 19 15:04:19 2018 From: isaku.yamahata at gmail.com (Isaku Yamahata) Date: Mon, 19 Mar 2018 08:04:19 -0700 Subject: [openstack-dev] [Qos]Unable to apply qos policy with dscp marking rule to a port In-Reply-To: References: Message-ID: <20180319150419.GA25177@private.email.ne.jp> Please step up to drive https://review.openstack.org/#/c/519513/ At the moment no one is working on the patch. Thanks, On Fri, Mar 16, 2018 at 07:25:58PM +0000, A Vamsikrishna wrote: > Hi Manjeet / Isaku, > > I am unable to apply a qos policy with a dscp marking rule to a port. > > > 1. Create a QoS policy > 2. Create a dscp marking rule on the created QoS policy > 3. Apply the above created policy to a port > > openstack network qos rule set --dscp-mark 22 dscp-marking 115e4f70-8034-41768fe9-2c47f8878a7d > > HttpException: Conflict (HTTP 409) (Request-ID: req-da7d8998-9d8c-4aea-a10b-326cc21b608e), Rule dscp_marking is not supported by port 115e4f70-8034-41768fe9-2c47f8878a7d > > stack at pike-ctrl:~/devstack$ > > I am seeing the above error when applying the qos policy to a port. > > Any suggestions on this ? > > I see the below review, "Allow networking-odl to support DSCP Marking rule for qos driver", has been abandoned: > > https://review.openstack.org/#/c/460470/ > > Is dscp marking supported in Pike? Can you please confirm?
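For reference, the usual client-side sequence for the three steps above looks roughly like this (a sketch with placeholder names; the HTTP 409 above suggests the port's backend driver lacks support for the dscp_marking rule type, which matches the abandoned networking-odl review, rather than a problem with the command syntax):

    openstack network qos policy create dscp-policy
    openstack network qos rule create --type dscp-marking --dscp-mark 22 dscp-policy
    openstack port set --qos-policy dscp-policy <port-uuid>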
> > I have raised below bug to track this issue: > > https://bugs.launchpad.net/networking-odl/+bug/1756132 > > > > Thanks, > Vamsi -- Isaku Yamahata From amy at demarco.com Mon Mar 19 15:04:45 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 19 Mar 2018 10:04:45 -0500 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: References: Message-ID: I think it's a good idea, especially as sometimes a spec is never completed, or not in the same release. Thanks, Amy(spotz) On Mon, Mar 19, 2018 at 9:57 AM, Jim Rollenhagen wrote: > Ironic (and surely other projects) have had to point out many times that > specs are a point in time design discussion, and not completed > documentation. It's obviously too much work to go back and update specs > constantly. > > What do folks think about a banner at the top of the specs website (or > each individual spec) that points this out? I'm happy to do the work if we > agree it's a good thing to do. My suggested wording: > > "NOTE: specifications are a point-in-time design reference, not up-to-date > feature documentation." > > // jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Mar 19 15:07:54 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 19 Mar 2018 10:07:54 -0500 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: References: Message-ID: <20180319150753.GA896@sm-xps> On Mon, Mar 19, 2018 at 02:57:58PM +0000, Jim Rollenhagen wrote: > Ironic (and surely other projects) have had to point out many times that > specs are a point in time design discussion, and not completed > documentation. It's obviously too much work to go back and update specs > constantly. > > What do folks think about a banner at the top of the specs website (or each > individual spec) that points this out? I'm happy to do the work if we agree > it's a good thing to do. My suggested wording: > > "NOTE: specifications are a point-in-time design reference, not up-to-date > feature documentation." > > // jim I like that idea. There have been several times where this has caused confusion. If there was some sort of notification on the page, that may make it more likely that random web search readers will understand that they are not reading "official" documentation. From juliaashleykreger at gmail.com Mon Mar 19 15:12:12 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 19 Mar 2018 08:12:12 -0700 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: References: Message-ID: I'm all for the idea, although that may be because I've been one of the people in the past attempting to assist people who have found specs, and who are confused or frustrated. I believe it is because some people latch on to the highly technical design documents when they can't find $complex thing in $complex documentation easily. -Julia On Mon, Mar 19, 2018 at 7:57 AM, Jim Rollenhagen wrote: > Ironic (and surely other projects) have had to point out many times that > specs are a point in time design discussion, and not completed > documentation. It's obviously too much work to go back and update specs
> > What do folks think about a banner at the top of the specs website (or each > individual spec) that points this out? I'm happy to do the work if we agree > it's a good thing to do. My suggested wording: > > "NOTE: specifications are a point-in-time design reference, not up-to-date > feature documentation." > > // jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pkovar at redhat.com Mon Mar 19 15:19:59 2018 From: pkovar at redhat.com (Petr Kovar) Date: Mon, 19 Mar 2018 16:19:59 +0100 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: References: Message-ID: <20180319161959.f942ecea1bcacae9fdbadedb@redhat.com> On Mon, 19 Mar 2018 14:57:58 +0000 Jim Rollenhagen wrote: > Ironic (and surely other projects) have had to point out many times that > specs are a point in time design discussion, and not completed > documentation. It's obviously too much work to go back and update specs > constantly. > > What do folks think about a banner at the top of the specs website (or each > individual spec) that points this out? I'm happy to do the work if we agree > it's a good thing to do. My suggested wording: > > "NOTE: specifications are a point-in-time design reference, not up-to-date > feature documentation." Might be a good idea to also include a pointer to https://docs.openstack.org/. Thanks, pk From fungi at yuggoth.org Mon Mar 19 15:46:33 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Mar 2018 15:46:33 +0000 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: References: Message-ID: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> On 2018-03-19 14:57:58 +0000 (+0000), Jim Rollenhagen wrote: [...] > What do folks think about a banner at the top of the specs website > (or each individual spec) that points this out? I'm happy to do > the work if we agree it's a good thing to do. [...] Sounds good in principle, but the execution may take a bit of work. Specs sites are independently generated Sphinx documents stored in different repositories managed by different teams, and don't necessarily share a common theme or configuration. It might be possible to hack around this with some sort of content injection in Apache but that also seems like a bit of a kluge. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From arxcruz at redhat.com Mon Mar 19 16:43:29 2018 From: arxcruz at redhat.com (Arx Cruz) Date: Mon, 19 Mar 2018 17:43:29 +0100 Subject: [openstack-dev] [tripleo] Tripleo CI Community meeting tomorrow Message-ID: Hello We are going to have a TripleO CI Community meeting tomorrow 03/20/2018 at 3 pm UTC time. The meeting is going to happen on BlueJeans [1] and also on IRC on #tripleo channel. After that, we will hold Office Hours starting at 4PM UTC in case someone from community have any questions related to CI. Hope to see you there. 1 - https://bluejeans.com/7071866728 Kind regards, Arx Cruz -------------- next part -------------- An HTML attachment was scrubbed... URL: From gong.yongsheng at 99cloud.net Mon Mar 19 16:54:08 2018 From: gong.yongsheng at 99cloud.net (=?GBK?B?uajTwMn6?=) Date: Tue, 20 Mar 2018 00:54:08 +0800 (CST) Subject: [openstack-dev] [tacker] tacker meeting on Mar. 20th 2018 is cancelled Message-ID: <33e9ea64.59de.1623f2eb758.Coremail.gong.yongsheng@99cloud.net> hi Because I am on a trip, the Tacker project meeting on Mar. 20th is cancelled. If you guys need to talk, please go to #tacker channel or send me an email. please go ahead according to our PTG meeting. Regards, yong sheng gong 99cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramamani.yeleswarapu at intel.com Mon Mar 19 18:25:33 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 19 Mar 2018 18:25:33 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Critical Sushy bug fix https://review.openstack.org/#/c/552817/ - Deploy Steps - https://review.openstack.org/#/c/549493/ - Remaining Rescue patches - https://review.openstack.org/#/c/546919/ - Fix a bug for unrescuiing with whole disk image - better fix: https://review.openstack.org/#/c/499050/ - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` Updated 19-Mar-2018. - https://review.openstack.org/#/c/538119/ - Rescue mode standalone tests - https://review.openstack.org/#/c/528699/ - Tempest tests with nova (This can land after nova work is done. But, it should be ready to get the nova patch reviewed.) - Nova virt lying to Nova regarding resources fix - high bug for ironic - Placement has issues after upgrade if ironic is unreachable for too long Current WIP: https://review.openstack.org/#/c/545479/ - https://bugs.launchpad.net/nova/+bug/1750450 - https://review.openstack.org/#/c/545479/ - Management interface boot_mode change - https://review.openstack.org/#/c/526773/ Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None oneview: None at this time - No subteam at present. 
Subproject priorities --------------------- bifrost: ironic-inspector (or its client): networking-baremetal: networking-generic-switch: sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - Stats (diff between 12 Mar 2018 and 19 Mar 2018) - Ironic: 225 bugs (+14) + 250 wishlist items (+2). 15 new (+10), 152 in progress, 1 critical, 36 high (+3) and 26 incomplete (+2) - Inspector: 15 bugs (+1) + 26 wishlist items. 1 new (+1), 14 in progress, 0 critical, 3 high and 4 incomplete - Nova bugs with Ironic tag: 14 (-1). 1 new, 0 critical, 0 high - critical: - sushy: https://bugs.launchpad.net/sushy/+bug/1754514 (basic auth broken when SessionService is not present) - note: the increase in bug count is probably because now the dashboard tracks virtualbmc and networking-baremetal - the dashboard was abruptly deleted and needs a new home :( - use it locally with `tox -erun` if you need to - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. Priorities ========== Deploy Steps (rloo, mgoddard) ----------------------------- - status as of 19 March 2018: - spec for deployment steps framework: https://review.openstack.org/#/c/549493/ - more reviews welcome; needs update BIOS config framework(zshi, yolanda, moddard, hshiina) ------------------------------------------------------ - status as of 19 March 2018: - Spec has merged: https://review.openstack.org/#/c/496481/ - https://review.openstack.org/#/q/topic:bug/1712032+(status:open+OR+status:merged) Conductor Location Awareness (jroll, dtantsur) ---------------------------------------------- - no update, will write spec soonish Reference architecture guide (dtantsur, jroll) ---------------------------------------------- - status as of 19 Feb 2018: - Dublin PTG consensus was to start with small architectural building blocks. - basic architecture explanation: https://review.openstack.org/554284 - (mostly moves stuff from the user guide) - list of cases from the Denver PTG - Admin-only provisioner - small and/or rare: TODO - non-HA acceptable, noop/flat network acceptable - large and/or frequent: TODO - HA required, neutron network or noop (static) network - Bare metal cloud for end users - smaller single-site: TODO - non-HA, ironic conductors on controllers and noop/flat network acceptable - larger single-site: TODO - HA, split out ironic conductors, neutron networking, virtual media > iPXE > PXE/TFTP - split out TFTP servers if you need them? - larger multi-site: TODO - cells v2 - ditto as single-site otherwise? 
Graphical console interface (mkrai, anup-d-navare, TheJulia) ------------------------------------------------------------ - status as of 19 Mar 2018: - VNC Graphical console spec: https://review.openstack.org/#/c/306074/ - needs update, address comments - nova blueprint: https://blueprints.launchpad.net/nova/+spec/ironic-vnc-console Neutron event processing (vdrok) -------------------------------- - status as of 19 Mar 2018: - spec at https://review.openstack.org/343684 - Needs update - WIP code at https://review.openstack.org/440778 - code is being rewritten to look a bit nicer (major rewrite), spec update coming afterwards Goals ===== Updating nova virt to use REST API (TheJulia) --------------------------------------------- Status as of 19 Mar 2018: Two phases to implement: 1) Rewrite unit tests away from object usage. (TheJulia) Posted an initial sample of what this will look like to nova. https://review.openstack.org/#/c/553699/ 2) Rewriting the actual calls in the driver to use rest, and then removing python-ironicclient. Storyboard migration (TheJulia, dtantsur) ----------------------------------------- Status as of March 19th - Test instance with some ironic data is available at https://nuc.mirroruniverse.org - Please explore, see TheJulia with questions. - Storyboard team has confirmed that data migrates out of Launchpad without issues, and that our data would be migrated into individual projects in a top level "ironic" project group. - Ironic seems to take 4-6 hours, all other subprojects seem to take 5-20 minutes. Management interface refactoring (etingof, dtantsur) ---------------------------------------------------- - Status as of March 19th: - boot mode in ManagementInterface: https://review.openstack.org/#/c/526773/ Getting clean steps (rloo, TheJulia) ------------------------------------ - Status as of March 19th: - TheJulia to pickup state machine related spec this week and resume. Project vision (jroll, TheJulia) -------------------------------- - Status as of March 19th: - jroll to send email detailing the session this week SIGHUP support (rloo) --------------------- - Proposed for ironic by rloo -- this is done: https://review.openstack.org/474331 MERGED\o/ - TODO: - ironic-inspector - networking-baremetal Stretch Goals ============= NOTE: These items will be migrated into storyboard and will be removed from the weekly whiteboard once storyboard is in-place Classic driver removal formerly Classic drivers deprecation (dtantsur) ---------------------------------------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 19 Mar 2018: - switch documentation to hardware types: - api-ref examples: TODO - update https://wiki.openstack.org/wiki/Ironic/Drivers: TODO - or should we kill it with fire in favour of the docs? - ironic-inspector: - documentation: https://review.openstack.org/#/c/545285/ - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ - change the default discovery driver: https://review.openstack.org/#/c/550464/ - migration of CI to hardware types - IPA: https://review.openstack.org/553431 - ironic-lib: https://review.openstack.org/#/c/552537/ - python-ironicclient: https://review.openstack.org/552543 - python-ironic-inspector-client: https://review.openstack.org/552546 - virtualbmc: TODO? 
- started an ML thread tagging potentially affected projects: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html Redfish OOB inspection (etingof, deray, stendulker) --------------------------------------------------- Zuul v3 playbook refactoring (sambetts, pas-ha) ----------------------------------------------- Before Rocky ============ CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarbals.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Queens High Priorities ====================== Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation - RFEs (Rocky) - https://bugs.launchpad.net/networking-baremetal/+bug/1749166 - TheJulia, March 19th 2018: This RFE seems not to contain detail on what is desired to be improved upon, and ultimately just seems like refactoring/improvement work and may not then need an rfe. - https://bugs.launchpad.net/networking-baremetal/+bug/1749162 - TheJulia, March 19th 2018: This RFE makes sense, although I would classify it as a general improvement. If we wish to adhere to strict RFE approval for networking-baremetal work, then I think we should consider this approved since it is minor enhancement to improve operation. 
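For context on the segment-to-host-aggregate flow described above, the manual equivalent of what the neutron segments plugin automates looks roughly like this (illustrative names; a sketch, not the plugin's actual calls):

    # one nova host aggregate per routed network segment
    openstack aggregate create routed-segment-1
    # adding a nova-compute service host to the aggregate works ...
    openstack aggregate add host routed-segment-1 compute-1
    # ... but there is no equivalent for an ironic node, because nodes are
    # inventory managed by a nova-compute service host, not hosts themselves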
Rescue mode (rloo, stendulker) ------------------------------ - Status as on 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Add documentation for rescue mode: https://review.openstack.org/#/c/431622/ MERGED - Devstack changes to enable testing add support for rescue mode: https://review.openstack.org/#/c/524118/ MERGED - We need to be careful with this, in that we can't use python-ironicclient changes that have not been released. - Update "standalone" job for supporting rescue mode: https://review.openstack.org/#/c/537821/ - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Bugs: - unrescue fails with partition user image: https://review.openstack.org/#/c/544278/ MERGED - rescue ramdisk doesn't boot on UEFI: https://review.openstack.org/#/c/545186/ MERGED - Can't Merge until we do a client release with rescue support (in Rocky): - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. - To get the nova patch merged, we need: - release new python-ironicclient - Done - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirement (this patch needs to be posted manually) - code patch: https://review.openstack.org/#/c/416487/ - CI is needed for nova part to land - tiendc is working for CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2017: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ OpenStack Priorities ==================== Mox --- - TheJulia needs to just declare this done. Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this include all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not on their priority - Right now patch pass all gate jobs except agent_- drivers. 
- (TheJulia) It seems we might not have py3 compatibility with swift until the T- cycle. - updating setup.cfg (part of requirements for the goal): - ironic: https://review.openstack.org/#/c/539500/ - MERGED - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507067 - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - delayed as the HA work seems to take a different direction - (TheJulia, March 19th, 2018) Perhaps because of the different direction, we should consider ourselves done? Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :Dhttps://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. `openstack` command does not work. - TheJulia will try to look at this this week. Drivers: -------- OneView (???) ~~~~~~~~~~~~~ - Oneview presently does not have a subteam. Cisco UCS (sambetts) Last updated 2018/02/05 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Cisco CIMC driver CI back up and working on every patch - Cisco UCSM driver CI in development - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Mon Mar 19 19:06:38 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 19 Mar 2018 19:06:38 +0000 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> References: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> Message-ID: On Mon, Mar 19, 2018 at 3:46 PM, Jeremy Stanley wrote: > On 2018-03-19 14:57:58 +0000 (+0000), Jim Rollenhagen wrote: > [...] > > What do folks think about a banner at the top of the specs website > > (or each individual spec) that points this out? I'm happy to do > > the work if we agree it's a good thing to do. > [...] > > Sounds good in principle, but the execution may take a bit of work. > Specs sites are independently generated Sphinx documents stored in > different repositories managed by different teams, and don't > necessarily share a common theme or configuration. Huh, I had totally thought there was a theme for the specs site that most/all projects use. I may try to accomplish this anyway, but will likely be more work that I thought. I'll poke around at options (small sphinx plugin, etc). > It might be > possible to hack around this with some sort of content injection in > Apache but that also seems like a bit of a kluge. Totally agree. :) Thanks Jeremy! // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Mar 19 19:20:23 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Mar 2018 19:20:23 +0000 Subject: [openstack-dev] Adding "not docs" banner to specs website? 
In-Reply-To: References: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> Message-ID: <20180319192022.qsrnlcclloi7ezhq@yuggoth.org> On 2018-03-19 19:06:38 +0000 (+0000), Jim Rollenhagen wrote: [...] > Huh, I had totally thought there was a theme for the specs site > that most/all projects use. I may try to accomplish this anyway, > but will likely be more work that I thought. I'll poke around at > options (small sphinx plugin, etc). [...] A lot of them may share the same theming, so you can probably get a significant coverage by going that route and work up to more complete coverage as you can convince others to switch to it. Also they all (or at least almost all?) share the same publication job so that's a possible alternative means of injection as well. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jsbryant at electronicjungle.net Mon Mar 19 19:28:47 2018 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Mon, 19 Mar 2018 19:28:47 +0000 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: References: Message-ID: Agree this is a good idea. Let me know what we can do to help. Jay On Mon, Mar 19, 2018, 9:58 AM Jim Rollenhagen wrote: > Ironic (and surely other projects) have had to point out many times that > specs are a point in time design discussion, and not completed > documentation. It's obviously too much work to go back and update specs > constantly. > > What do folks think about a banner at the top of the specs website (or > each individual spec) that points this out? I'm happy to do the work if we > agree it's a good thing to do. My suggested wording: > > "NOTE: specifications are a point-in-time design reference, not up-to-date > feature documentation." > > // jim > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Mar 19 20:09:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Mar 2018 16:09:14 -0400 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: References: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> Message-ID: <1521490029-sup-9732@lrrr.local> Excerpts from Jim Rollenhagen's message of 2018-03-19 19:06:38 +0000: > On Mon, Mar 19, 2018 at 3:46 PM, Jeremy Stanley wrote: > > > On 2018-03-19 14:57:58 +0000 (+0000), Jim Rollenhagen wrote: > > [...] > > > What do folks think about a banner at the top of the specs website > > > (or each individual spec) that points this out? I'm happy to do > > > the work if we agree it's a good thing to do. > > [...] > > > > Sounds good in principle, but the execution may take a bit of work. > > Specs sites are independently generated Sphinx documents stored in > > different repositories managed by different teams, and don't > > necessarily share a common theme or configuration. > > > Huh, I had totally thought there was a theme for the specs site that > most/all projects use. I may try to accomplish this anyway, but will likely > be more work that I thought. I'll poke around at options (small sphinx > plugin, etc). 
We want them all to use the openstackdocstheme so you could look into creating a "subclass" of that one with the extra content in the header, then ensure all of the specs repos use it. We would have to land a small patch to trigger a rebuild, but the patch switching them from oslosphinx to openstackdocstheme would serve for that and a small change to the readme or another file would do it for any that are already using the theme. Doug From fungi at yuggoth.org Mon Mar 19 20:17:37 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 19 Mar 2018 20:17:37 +0000 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: <1521490029-sup-9732@lrrr.local> References: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> <1521490029-sup-9732@lrrr.local> Message-ID: <20180319201737.mkwjpxxgbutyncic@yuggoth.org> On 2018-03-19 16:09:14 -0400 (-0400), Doug Hellmann wrote: [...] > We want them all to use the openstackdocstheme so you could look > into creating a "subclass" of that one with the extra content in > the header, then ensure all of the specs repos use it. We would > have to land a small patch to trigger a rebuild, but the patch > switching them from oslosphinx to openstackdocstheme would serve > for that and a small change to the readme or another file would do it > for any that are already using the theme. Seems like a reasonable incentive for some needed cleanup. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Mon Mar 19 20:31:32 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 19 Mar 2018 13:31:32 -0700 Subject: [openstack-dev] [nova][ironic] Rocky PTG summary - nova/ironic Message-ID: Hello everyone, Here's the summary etherpad [0] for the nova/ironic session from the PTG in the Croke Park Hotel breakfast area, also included as a plain text export on this email. Please feel freed to edit or reply to this thread to add/correct anything I've missed. Cheers, -melanie [0] https://etherpad.openstack.org/p/nova-ptg-rocky-ironic-summary *Nova/Ironic: Rocky PTG Summary https://etherpad.openstack.org/p/nova-ptg-rocky L245 *Key topics * Disk partitioning, want to be able to pass a hardware config for the hypervisor * Virt driver interaction issues * nova-compute crashing on startup * maintenance-state of Ironic nodes * Ironic API version negotiation * Currently, ironicclient does not have the ability to send a specific microversion per request and the Ironic driver needs to be able to behave differently depending on Ironic version * Ironic + Traits update * Where we are and what's coming next * Flavor decomposition status (ability to specify desired traits of the allowed traits for a given flavor) *Agreements and decisions * * On disk partitioning and passing a hardware config, we could use an artifact repository such as Glare to store configs and then on the Nova side, we could enhance the existing disk_config parameter for server create to also support a profile ((AUTO/MANUAL/PROFILE). 
Profile templates would be operator-defined like flavors * jroll to write a spec on this * For the issue of nova-compute crashing on startup, we could add a try-except around the call site at startup and ignore a "NotReadyYet" or similar exception from the Ironic driver * For the issue of maintenance-state Ironic nodes, the Ironic virt driver shall set reserved=1 for the custom resource class inventory record instead of returning no inventory records * On Ironic API version negotiation, the ironicclient already has some version negotiation built-in, so there are some options. 1) update Ironic driver to handle return/error codes from ironicclient version-negotiated calls, 2) add per-call microversion support to ironicclient and use it in the Ironic driver, 3) convert all Ironic driver calls to use raw REST * Option 1) would be the most expedient, but it's up to the Ironic team how they will want to proceed. Option 3 is the desired ideal solution but will take a rewrite of the related Ironic driver unit tests as they currently all mock ironicclient * TheJulia will follow up on this * On Ironic + Traits, the ability to specify traits for a given flavor is available starting in Queens * https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/ironic-driver-traits.html * https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/request-traits-in-nova.html * Ironic CI work is in progress: https://review.openstack.org/#/c/545370 * Resource classes allows us to drop IronicHostManager and exact filters and the deprecation landed on 20171205, so we can remove them now From melwittt at gmail.com Mon Mar 19 20:45:54 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 19 Mar 2018 13:45:54 -0700 Subject: [openstack-dev] [nova] New image backend: StorPool In-Reply-To: <20180316172401.GD4118@office.storpool.com> References: <7fc727c0-e375-de6f-ba8b-585512324830@gmail.com> <20180316172401.GD4118@office.storpool.com> Message-ID: <8828e105-dffb-aa64-6f62-a16b19ef4db5@gmail.com> On Fri, 16 Mar 2018 19:24:01 +0200, Peter Penchev wrote: > On Fri, Mar 16, 2018 at 09:23:11AM -0700, melanie witt wrote: >> On Fri, 16 Mar 2018 17:33:30 +0200, Peter Penchev wrote: >>> Would there be any major opposition to adding a StorPool shared >>> storage image backend, so that our customers are not limited to >>> volume-backed instances? Right now, creating a StorPool volume and >>> snapshot from a Glance image and then booting instances from that >>> snapshot works great, but in some cases, including some provisioning >>> and accounting systems on top of OpenStack, it would be preferable to >>> go the Nova way and let the hypervisor think that it has a local(ish) >>> image to work with, even though it's on shared storage anyway. >> >> Can you be more specific about what is limiting you when you use >> volume-backed instances? > > It's not a problem for our current customers, but we had an OpenStack > PoC last year for a customer who was using some proprietary > provisioning+accounting system on top of OpenStack (sorry, I really > can't remember the name). 
That particular system simply couldn't be > bothered to create a volume-backed instance, so we "helped" by doing > an insane hack: writing an almost-pass-through Compute API that would > intercept the boot request and DTRT behind the scenes (send a modified > request to the real Compute API), and then also writing > an almost-pass-through Identity API that would intercept the requests to > get the Compute API's endpoint and slip our API's address there. > The customer ended up not using OpenStack for completely unrelated > reasons, but there was certainly at least one instance of this. > >> We've been kicking around the idea of beefing up >> support of boot-from-volume in nova such that "automatic boot-from-volume >> for instance create" works well enough that we could consider >> boot-from-volume the first-class way to support the vast variety of cinder >> storage backends and let cinder handle the details instead of trying to >> re-implement support of various storage backends in nova on a selective >> basis. I'd like to better understand what is lacking for you when you use >> boot-from-volume to leverage StorPool and determine whether it's something >> we could address in nova. > > I'll see if I can remember anything more (ISTR also another case of > something that couldn't boot a volume-backed instance, but I really > cannot remember even what it was). The problem was certainly not with > OpenStack proper, but with other systems built on top of it. Thanks for the insight, Peter. On both points, it looks like we could speculate that providing a good UX in nova for automatic boot-from-volume might have addressed the issues in other systems being unable to leverage it (boot-from-volume). As it stands, we haven't had anyone interested enough in the idea of a generic cinder imagebackend in nova to come forward and work on it. As an alternative, we could enhance our support of boot-from-volume enough to make it easy/automatic to use if an environment needs to be able to take advantage of various cinder backends for instances. If we can gather support and people willing to work on either of those options, we could start getting serious about a plan and make some progress toward it. Best, -melanie From sean.mcginnis at gmx.com Mon Mar 19 21:02:50 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 19 Mar 2018 16:02:50 -0500 Subject: [openstack-dev] [all] Job failures on stable/pike and stable/ocata Message-ID: <20180319210249.GA23433@sm-xps> I've seen this pop up in various channels now, so I figure I better get more visibility on what is going on to avoid wasted troubleshooting. We have a couple of issues causing failures with stable/pike and stable/ocata. Actually, it also affects stable/queens as well due to grenade jobs needing to run stable/pike first. The first is an issue with setuptools since the 39.0.0 release. There were some deprecations for the type of Version objects returned from setuptools that have now been removed to new objects that no longer allow iterating. This impacted oslo.utilsin versionutils.is_compatible, causing that method to raise the exception: TypeError: 'Version' object does not support indexing This has been already addressed in master, so there are two backports for that fix to the stable branches. The second issue is with a new release of Pip. Basically, this change deprecated and removed support for importing pip and calling internal methods in 9.0.2. 
This manifests itself by neutron agent failing to load with the following in the q-agt log file: KeyError: 'pip._vendor.urllib3.contrib' This actually bubbles up from the ryu package. Luckily, they had refactored some things such that they are still importing pip, but we are not calling the parts of the ryu code where this is still an issue. To make things even more fun, these changes are in two different repos, and neither can merge without the other fix. I think we have a full working plan in place. The oslo.util patches would fail just the legacy-tempest-dsvm-neutron-src job, so that has been marked as non-voting for now. Next, the oslo.util fixes need to merge and a new stable release done for them. Then, requirements updates to both stable branches can pass that raise the upper-constraints for ryu to 4.18 which includes the changes we need. Once all that is done, we can merge the last patch that reverts the change making legacy-tempest-dsvm-neutron-src voting again. The set up patches (other than the upcoming release requests) can be found under the pip/5081 topic: https://review.openstack.org/#/q/topic:pip/5081+(status:open+OR+status:merged) As far as I can tell, once all that is done, the stable branches should be unblocked and we should be back in business. If anything else crops up, I'll post updates here. Thanks, Sean From juliaashleykreger at gmail.com Mon Mar 19 21:22:35 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 19 Mar 2018 14:22:35 -0700 Subject: [openstack-dev] [ironic] Moving Ironic meeting time Message-ID: Greetings everyone! In an effort to have our meeting be a little friendlier to contributors in Japan and India, we agreed today [0] to move our meeting time up two hours to 1500 UTC. The appropriate change [1] has been submitted to the irc-meetings repository. Please let me know if you have any questions or concerns. Thanks! -Julia [0]: http://eavesdrop.openstack.org/meetings/ironic/2018/ironic.2018-03-19-17.00.log.html#l-274 [1]: https://review.openstack.org/#/c/554361/ From msm at redhat.com Mon Mar 19 21:26:31 2018 From: msm at redhat.com (Michael McCune) Date: Mon, 19 Mar 2018 17:26:31 -0400 Subject: [openstack-dev] [Openstack-sigs] [all][api] POST /api-sig/news In-Reply-To: References: <72f7b0cd-dca7-d4b0-8236-8222ac755f0a@redhat.com> Message-ID: On Fri, Mar 16, 2018 at 4:55 AM, Chris Dent wrote: > > >> So summarize and clarify, we are talking about SDK being able to build >> their interface to Openstack APIs in an automated way but statically from >> API Schema generated by every project. Such API Schema is already built in >> memory during API reference documentation generation and could be saved in >> JSON format (for instance) (see [5]). >> > > What do you see as the current roadblocks preventing this work from > continuing to make progress? > > > Gilles, i'm very curious about how we can help as well. i am keenly interested in the api-schema work that is happening and i am coming up to speed with the work that Graham has done, and which previously existed, on os-api-ref. although i don't have a *ton* of spare free time, i would like to help as much as i can. thanks for bringing this up again, peace o/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Mar 19 22:17:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 19 Mar 2018 18:17:00 -0400 Subject: [openstack-dev] Adding "not docs" banner to specs website? 
In-Reply-To: <20180319201737.mkwjpxxgbutyncic@yuggoth.org> References: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> <1521490029-sup-9732@lrrr.local> <20180319201737.mkwjpxxgbutyncic@yuggoth.org> Message-ID: <1521497789-sup-7479@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-03-19 20:17:37 +0000: > On 2018-03-19 16:09:14 -0400 (-0400), Doug Hellmann wrote: > [...] > > We want them all to use the openstackdocstheme so you could look > > into creating a "subclass" of that one with the extra content in > > the header, then ensure all of the specs repos use it. We would > > have to land a small patch to trigger a rebuild, but the patch > > switching them from oslosphinx to openstackdocstheme would serve > > for that and a small change to the readme or another file would do it > > for any that are already using the theme. > > Seems like a reasonable incentive for some needed cleanup. And if I wasn't clear, we would want to put that subclass in the openstackdocstheme repo so we can easily keep the styles up to date over time. Doug From ksnhr.tech at gmail.com Tue Mar 20 01:02:41 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Tue, 20 Mar 2018 10:02:41 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi Ivan, Horizon folks, Now totally 8 xstatic-** repos for heat-dashboard have been landed. In project-config for them, I've set same acl-config as the existing xstatic repos. It means only "xstatic-core" can manage the newly created repos on gerrit. Could you kindly add "heat-dashboard-core" into "xstatic-core" like as what horizon-core is doing ? xstatic-core https://review.openstack.org/#/admin/groups/385,members heat-dashboard-core https://review.openstack.org/#/admin/groups/1844,members Of course, we will surely touch only what we made, just would like to manage them smoothly by ourselves. In case we need to touch the other ones, will ask Horizon team for help. Thanks in advance. Regards, Kaz 2018-03-14 15:12 GMT+09:00 Xinni Ge : > Hi Horizon Team, > > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin option, > and submitted a patch for it. > Could you please help to review the patch. > > https://bugs.launchpad.net/horizon/+bug/1755339 > https://review.openstack.org/#/c/552259/ > > Thank you very much. > > Best Regards, > Xinni > > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny wrote: >> >> Hi Kaz, >> >> Thanks for cleaning this up. I put +1 on both of these patches >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >> wrote: >>> >>> Hi Ivan & Horizon folks, >>> >>> >>> Now we are submitting a couple of patches to have the new xstatic >>> modules. >>> Let me request you to have review the following patches. >>> We need Horizon PTL's +1 to move these forward. >>> >>> project-config >>> https://review.openstack.org/#/c/551978/ >>> >>> governance >>> https://review.openstack.org/#/c/551980/ >>> >>> Thanks in advance:) >>> >>> Regards, >>> Kaz >>> >>> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski : >>> > Yes, please do that. We can then discuss in the review about technical >>> > details. >>> > >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >>> > wrote: >>> >> >>> >> Hi, Akihiro >>> >> >>> >> Thanks for the quick reply. >>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not be >>> >> modified. 
>>> >> It is much better to enhance horizon plugin settings, >>> >> and I think maybe there could be one option like ADD_XSTATIC_MODULES. >>> >> This option adds the plugin's xstatic files in STATICFILES_DIRS. >>> >> I am considering to add a bug report to describe it at first, and give >>> >> a >>> >> patch later maybe. >>> >> Is that ok with the Horizon team? >>> >> >>> >> Best Regards. >>> >> Xinni >>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >>> >> wrote: >>> >>> >>> >>> Hi Xinni, >>> >>> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge : >>> >>> > Hello Horizon Team, >>> >>> > >>> >>> > I would like to hear about your opinions about how to add new >>> >>> > xstatic >>> >>> > modules to horizon settings. >>> >>> > >>> >>> > As for Heat-dashboard project embedded 3rd-party files issue, >>> >>> > thanks >>> >>> > for >>> >>> > your advices in Dublin PTG, we are now removing them and >>> >>> > referencing as >>> >>> > new >>> >>> > xstatic-* libs. >>> >>> >>> >>> Thanks for moving this forward. >>> >>> >>> >>> > So we installed the new xstatic files (not uploaded as openstack >>> >>> > official >>> >>> > repos yet) in our development environment now, but hesitate to >>> >>> > decide >>> >>> > how to >>> >>> > add the new installed xstatic lib path to STATICFILES_DIRS in >>> >>> > openstack_dashboard.settings so that the static files could be >>> >>> > automatically >>> >>> > collected by *collectstatic* process. >>> >>> > >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >>> >>> > openstack_dashboard/utils/settings.py and the relevant static fils >>> >>> > are >>> >>> > added >>> >>> > to STATICFILES_DIRS before it updates any Horizon plugin dashboard. >>> >>> > We may want new plugin setting keywords ( something similar to >>> >>> > ADD_JS_FILES) >>> >>> > to update horizon XSTATIC_MODULES (or directly update >>> >>> > STATICFILES_DIRS). >>> >>> >>> >>> IMHO it is better to allow horizon plugins to add xstatic modules >>> >>> through horizon plugin settings. I don't think it is a good idea to >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon plugin >>> >>> usages. It makes difficult to track why and where a xstatic module in >>> >>> BASE_XSTATIC_MODULES is used. >>> >>> Multiple horizon plugins can add a same entry, so horizon code to >>> >>> handle plugin settings should merge multiple entries to a single one >>> >>> hopefully. >>> >>> My vote is to enhance the horizon plugin settings. 
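[Purely as an illustrative sketch of the plugin option being discussed here: the setting below shows roughly how a Horizon plugin "enabled" file might declare extra xstatic modules. The option name comes from this thread, but the file name and the exact value format are assumptions; the real format is what the review above needs to settle.]

# _90_project_heat_dashboard.py -- hypothetical plugin "enabled" file.
# Assumed shape: a list of xstatic module paths that Horizon would merge
# with the modules from BASE_XSTATIC_MODULES (de-duplicating entries
# requested by multiple plugins) before adding their static roots to
# STATICFILES_DIRS for collectstatic.
ADD_XSTATIC_MODULES = [
    'xstatic.pkg.angular_material',  # placeholder module names
    'xstatic.pkg.js_yaml',
]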
>>> >>> >>> >>> Akihiro >>> >>> >>> >>> > >>> >>> > Looking forward to hearing any suggestions from you guys, and >>> >>> > Best Regards, >>> >>> > >>> >>> > Xinni Ge >>> >>> > >>> >>> > >>> >>> > >>> >>> > __________________________________________________________________________ >>> >>> > OpenStack Development Mailing List (not for usage questions) >>> >>> > Unsubscribe: >>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> > >>> >>> >>> >>> >>> >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> >>> Unsubscribe: >>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >>> >> >>> >> >>> >> >>> >> -- >>> >> 葛馨霓 Xinni Ge >>> >> >>> >> >>> >> __________________________________________________________________________ >>> >> OpenStack Development Mailing List (not for usage questions) >>> >> Unsubscribe: >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >>> > >>> > >>> > >>> > __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > 葛馨霓 Xinni Ge > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From 935540343 at qq.com Tue Mar 20 02:15:21 2018 From: 935540343 at qq.com (=?gb18030?B?X18gbWFuZ28u?=) Date: Tue, 20 Mar 2018 10:15:21 +0800 Subject: [openstack-dev] [ceilometer] [gnocchi] keystone verification failed. Message-ID: hi, I have a question about the validation of gnocchi keystone. 
I run the following command, but it is not successful. (api.auth_mode: basic, basic mode can be

# gnocchi status --debug
REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept: application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12"
Starting new HTTP connection (1): localhost
http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
RESP: [401] Content-Type: application/json Content-Length: 114 WWW-Authenticate: Keystone uri='http://controller:5000/v3' Connection: Keep-Alive
RESP BODY: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
The request you have made requires authentication. (HTTP 401)

Please help me, thank you very much.

PS: I have configured the following
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 971D4487 at D78F3174.B96EB05A.jpg
Type: image/jpeg
Size: 8224 bytes
Desc: not available
URL: 

From usnexp at gmail.com Tue Mar 20 02:15:56 2018
From: usnexp at gmail.com (sungil im)
Date: Tue, 20 Mar 2018 11:15:56 +0900
Subject: [openstack-dev] [openstack-helm] need an ideation on multiline logging support
Message-ID: 

Docker generates container logs in JSON format, and Fluent-bit delivers the logs. Fluent-bit provides some mechanisms for handling various kinds of logs, but it cannot apply them here, because the logs are already converted to JSON format.

There are some debates on this issue:
https://github.com/moby/moby/issues/22920

It seems that Docker will not support multi-line output per log entry, judging from the above debates.

There is a discussion about this issue in another monitoring solution:
https://github.com/monasca/monasca-docker/issues/139

Since it would be better to provide multiline support from the openstack-helm LMA deployment, I would like to gather opinions on what the best approach would be for us.

From soulxu at gmail.com Tue Mar 20 02:57:50 2018
From: soulxu at gmail.com (Alex Xu)
Date: Tue, 20 Mar 2018 10:57:50 +0800
Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions
In-Reply-To: <9cab5a35-372b-1a20-6def-51b4e8e15fbe@intel.com>
References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com> <9cab5a35-372b-1a20-6def-51b4e8e15fbe@intel.com>
Message-ID: 

2018-03-19 0:34 GMT+08:00 Nadathur, Sundar :
> Sorry for the delayed response. I broadly agree with previous replies.
> For the concerns about the impact of the Cyborg weigher on scheduling
> performance, there are some options (apart from filtering candidates as
> much as possible in Placement):
>
> * Handle hosts in bulk by extending BaseWeigher and overriding
> weigh_objects(), instead of handling one host at a time.

Still an external REST call; I guess people still don't like that.

> * If we have to handle one host at a time for whatever reason, since the
> weigher is maintained by Cyborg, it could directly query the Cyborg DB
> rather than go through the Cyborg REST API. This will be not unlike other
> weighers.

That means that when the Cyborg DB schema changes, we have to restart the nova-scheduler to update the weigher as well.
We couple the two service upgrade together. > Given these and other possible optimizations, it may be too soon to worry > about the performance impact. > yea, maybe. What about the preferred traits? > > I am working on a spec that will capture the flow discussed in the PTG. I > will try to address these aspects as well. > > Thanks & Regards, > Sundar > > > On 3/8/2018 4:53 AM, Zhipeng Huang wrote: > > @jay I'm also against a weigher in nova/placement. This should be an > optional step depends on vendor implementation, not a default one. > > @Alex I think we should explore the idea of preferred trait. > > @Mathew: Like Sean said, Cyborg wants to support both reprogrammable FPGA > and pre-programed ones. > Therefore it is correct that in your description, the programming > operation should be a call from Nova to Cyborg, and cyborg will complete > the operation while nova waits. The only problem is that the weigher step > should be an optional one. > > > On Wed, Mar 7, 2018 at 9:21 PM, Jay Pipes wrote: > >> On 03/06/2018 09:36 PM, Alex Xu wrote: >> >>> 2018-03-07 10:21 GMT+08:00 Alex Xu >> soulxu at gmail.com>>: >>> >>> >>> >>> 2018-03-06 22:45 GMT+08:00 Mooney, Sean K >> >: >>> >>> __ __ >>> >>> __ __ >>> >>> *From:*Matthew Booth [mailto:mbooth at redhat.com >>> ] >>> *Sent:* Saturday, March 3, 2018 4:15 PM >>> *To:* OpenStack Development Mailing List (not for usage >>> questions) >> > >>> *Subject:* Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple >>> functions____ >>> >>> __ __ >>> >>> On 2 March 2018 at 14:31, Jay Pipes >> > wrote:____ >>> >>> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:____ >>> >>> Hello Nova team, >>> >>> During the Cyborg discussion at Rocky PTG, we >>> proposed a flow for FPGAs wherein the request spec asks >>> for a device type as a resource class, and optionally a >>> function (such as encryption) in the extra specs. This >>> does not seem to work well for the usage model that I’ll >>> describe below. >>> >>> An FPGA device may implement more than one function. For >>> example, it may implement both compression and >>> encryption. Say a cluster has 10 devices of device type >>> X, and each of them is programmed to offer 2 instances >>> of function A and 4 instances of function B. More >>> specifically, the device may implement 6 PCI functions, >>> with 2 of them tied to function A, and the other 4 tied >>> to function B. So, we could have 6 separate instances >>> accessing functions on the same device.____ >>> >>> __ __ >>> >>> Does this imply that Cyborg can't reprogram the FPGA at all?____ >>> >>> */[Mooney, Sean K] cyborg is intended to support fixed function >>> acclerators also so it will not always be able to program the >>> accelerator. In this case where an fpga is preprogramed with a >>> multi function bitstream that is statically provisioned cyborge >>> will not be able to reprogram the slot if any of the fuctions >>> from that slot are already allocated to an instance. In this >>> case it will have to treat it like a fixed function device and >>> simply allocate a unused vf of the corret type if available. >>> ____/* >>> >>> >>> ____ >>> >>> >>> In the current flow, the device type X is modeled as a >>> resource class, so Placement will count how many of them >>> are in use. A flavor for ‘RC device-type-X + function A’ >>> will consume one instance of the RC device-type-X. But >>> this is not right because this precludes other functions >>> on the same device instance from getting used. 
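[To make the arithmetic in this example concrete, here is a throwaway sketch; the CUSTOM_* resource class names are invented for illustration and are not from the spec under discussion.]

# 10 devices of type X, each programmed with 2 instances of function A
# and 4 instances of function B, as in the example above.
DEVICES = 10
PER_DEVICE = {"CUSTOM_FPGA_FUNCTION_A": 2, "CUSTOM_FPGA_FUNCTION_B": 4}

# Modeling each function as its own resource class lets Placement count
# consumption per function rather than per device, so allocating one
# instance of A does not block the remaining functions on that device.
totals = {rc: count * DEVICES for rc, count in PER_DEVICE.items()}
print(totals)  # {'CUSTOM_FPGA_FUNCTION_A': 20, 'CUSTOM_FPGA_FUNCTION_B': 40}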
>>> >>> One way to solve this is to declare functions A and B as >>> resource classes themselves and have the flavor request >>> the function RC. Placement will then correctly count the >>> function instances. However, there is still a problem: >>> if the requested function A is not available, Placement >>> will return an empty list of RPs, but we need some way >>> to reprogram some device to create an instance of >>> function A.____ >>> >>> >>> Clearly, nova is not going to be reprogramming devices with >>> an instance of a particular function. >>> >>> Cyborg might need to have a separate agent that listens to >>> the nova notifications queue and upon seeing an event that >>> indicates a failed build due to lack of resources, then >>> Cyborg can try and reprogram a device and then try >>> rebuilding the original request.____ >>> >>> __ __ >>> >>> It was my understanding from that discussion that we intend to >>> insert Cyborg into the spawn workflow for device configuration >>> in the same way that we currently insert resources provided by >>> Cinder and Neutron. So while Nova won't be reprogramming a >>> device, it will be calling out to Cyborg to reprogram a device, >>> and waiting while that happens.____ >>> >>> My understanding is (and I concede some areas are a little >>> hazy):____ >>> >>> * The flavors says device type X with function Y____ >>> >>> * Placement tells us everywhere with device type X____ >>> >>> * A weigher orders these by devices which already have an >>> available function Y (where is this metadata stored?)____ >>> >>> * Nova schedules to host Z____ >>> >>> * Nova host Z asks cyborg for a local function Y and blocks____ >>> >>> * Cyborg hopefully returns function Y which is already >>> available____ >>> >>> * If not, Cyborg reprograms a function Y, then returns it____ >>> >>> Can anybody correct me/fill in the gaps?____ >>> >>> */[Mooney, Sean K] that correlates closely to my recollection >>> also. As for the metadata I think the weigher may need to call >>> to cyborg to retrieve this as it will not be available in the >>> host state object./* >>> >>> Is it the nova scheduler weigher or we want to support weigh on >>> placement? Function is traits as I think, so can we have >>> preferred_traits? I remember we talk about that parameter in the >>> past, but we don't have good use-case at that time. This is good >>> use-case. >>> >>> >>> If we call the Cyborg from the nova scheduler weigher, that will slow >>> down the scheduling a lot also. >>> >> >> Right, which is why I don't want to do any weighing in Placement at all. >> If folks want to sort by things that require long-running code/callbacks or >> silly temporal things like metrics, they can do that in a custom weigher in >> the nova-scheduler and take the performance hit there. >> >> Best, >> -jay >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. 
Ltd
> Email: huangzhipeng at huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipengh at uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhang.lei.fly at gmail.com Tue Mar 20 03:23:51 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Tue, 20 Mar 2018 11:23:51 +0800
Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan
In-Reply-To: References: 
Message-ID: 

Time is up. Welcome, caoyuan, to the core team :D

On Fri, Mar 16, 2018 at 2:57 PM, duonghq at vn.fujitsu.com <duonghq at vn.fujitsu.com> wrote:
> +1
>
> *From:* Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com]
> *Sent:* Monday, March 12, 2018 9:07 AM
> *To:* OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> *Subject:* [openstack-dev] [kolla][vote] core nomination for caoyuan
>
> Kolla core reviewer team,
>
> It is my pleasure to nominate caoyuan for kolla core team.
>
> caoyuan's output is fantastic over the last cycle. And he is the most
> active non-core contributor on Kolla project for last 180 days[1]. He
> focuses on configuration optimize and improve the pre-checks feature.
>
> Consider this nomination a +1 vote from me.
>
> A +1 vote indicates you are in favor of caoyuan as a candidate, a -1
> is a veto. Voting is open for 7 days until Mar 12th, or a unanimous
> response is reached or a veto vote occurs.
>
> [1] http://stackalytics.com/report/contribution/kolla-group/180
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tommylikehu at gmail.com Tue Mar 20 03:55:08 2018
From: tommylikehu at gmail.com (TommyLike Hu)
Date: Tue, 20 Mar 2018 03:55:08 +0000
Subject: [openstack-dev] [cinder] Support share backup to different projects?
Message-ID: 

Now Cinder can transfer a volume (with or without snapshots) to a different project, and this makes it possible to transfer data across tenants via a volume or an image. Recently we had a conversation with our customer from Germany; they mentioned they would be more pleased if we could support transferring data across tenants via backups rather than images or volumes. These below are some of their concerns:

1. There is a use case where they would like to deploy their develop/test/product systems in the same region but within different tenants, so they have the requirement to share/transfer data across tenants.

2. Users are more willing to use backups to secure/store their volume data, since the backup feature is more advanced in product OpenStack versions (incremental backups/periodic backups/etc.).

3. Volume transfer is not a valid option, as it is limited to the availability zone, and it is a complicated process if we would like to share the data with multiple projects (keeping a copy in all the tenants).

4. Most users would like to use images for bootable volumes only. Sharing volume data via images means users have to maintain lots of image copies whenever the volume backup changes, and the whole system needs to differentiate bootable images from non-bootable images; most important, we cannot restore volume data via an image now.

5. The easiest way to do this seems to be to support sharing backups to different projects: the owner project has full authority, while shared projects can only view/read the backups.

6. AWS has a similar concept, the shared snapshot. We can share a snapshot by modifying the snapshot's create-volume permissions [1].

Looking forward to any like or dislike or suggestion on this idea, according to my feature proposal experience :)

Thanks
TommyLike

[1]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From amotoki at gmail.com Tue Mar 20 03:58:40 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 20 Mar 2018 12:58:40 +0900
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To: References: 
Message-ID: 

Hi Kaz,

These repositories are under the horizon project. It looks better to keep the current core team. It potentially brings some confusion if we treat some horizon plugin team specially. Reviewing xstatic repos would be a small burden, so I think it would work without problem even if only horizon-core can approve xstatic reviews.

2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
> Hi Ivan, Horizon folks,
>
>
> Now totally 8 xstatic-** repos for heat-dashboard have been landed.
>
> In project-config for them, I've set same acl-config as the existing
> xstatic repos.
> It means only "xstatic-core" can manage the newly created repos on gerrit.
> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as
> what horizon-core is doing ?
>
> xstatic-core
> https://review.openstack.org/#/admin/groups/385,members
>
> heat-dashboard-core
> https://review.openstack.org/#/admin/groups/1844,members
>
> Of course, we will surely touch only what we made, just would like to
> manage them smoothly by ourselves.
> In case we need to touch the other ones, will ask Horizon team for help.
>
> Thanks in advance.
>
> Regards,
> Kaz
>
>
> 2018-03-14 15:12 GMT+09:00 Xinni Ge :
> > Hi Horizon Team,
> >
> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin option,
> > and submitted a patch for it.
> > Could you please help to review the patch.
> >
> > https://bugs.launchpad.net/horizon/+bug/1755339
> > https://review.openstack.org/#/c/552259/
> >
> > Thank you very much.
> >
> > Best Regards,
> > Xinni
> >
> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny wrote:
> >>
> >> Hi Kaz,
> >>
> >> Thanks for cleaning this up.
I put +1 on both of these patches > >> > >> Regards, > >> Ivan Kolodyazhny, > >> http://blog.e0ne.info/ > >> > >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara > >> wrote: > >>> > >>> Hi Ivan & Horizon folks, > >>> > >>> > >>> Now we are submitting a couple of patches to have the new xstatic > >>> modules. > >>> Let me request you to have review the following patches. > >>> We need Horizon PTL's +1 to move these forward. > >>> > >>> project-config > >>> https://review.openstack.org/#/c/551978/ > >>> > >>> governance > >>> https://review.openstack.org/#/c/551980/ > >>> > >>> Thanks in advance:) > >>> > >>> Regards, > >>> Kaz > >>> > >>> > >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >: > >>> > Yes, please do that. We can then discuss in the review about > technical > >>> > details. > >>> > > >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge > >>> > wrote: > >>> >> > >>> >> Hi, Akihiro > >>> >> > >>> >> Thanks for the quick reply. > >>> >> > >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not be > >>> >> modified. > >>> >> It is much better to enhance horizon plugin settings, > >>> >> and I think maybe there could be one option like > ADD_XSTATIC_MODULES. > >>> >> This option adds the plugin's xstatic files in STATICFILES_DIRS. > >>> >> I am considering to add a bug report to describe it at first, and > give > >>> >> a > >>> >> patch later maybe. > >>> >> Is that ok with the Horizon team? > >>> >> > >>> >> Best Regards. > >>> >> Xinni > >>> >> > >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki > >>> >> wrote: > >>> >>> > >>> >>> Hi Xinni, > >>> >>> > >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge : > >>> >>> > Hello Horizon Team, > >>> >>> > > >>> >>> > I would like to hear about your opinions about how to add new > >>> >>> > xstatic > >>> >>> > modules to horizon settings. > >>> >>> > > >>> >>> > As for Heat-dashboard project embedded 3rd-party files issue, > >>> >>> > thanks > >>> >>> > for > >>> >>> > your advices in Dublin PTG, we are now removing them and > >>> >>> > referencing as > >>> >>> > new > >>> >>> > xstatic-* libs. > >>> >>> > >>> >>> Thanks for moving this forward. > >>> >>> > >>> >>> > So we installed the new xstatic files (not uploaded as openstack > >>> >>> > official > >>> >>> > repos yet) in our development environment now, but hesitate to > >>> >>> > decide > >>> >>> > how to > >>> >>> > add the new installed xstatic lib path to STATICFILES_DIRS in > >>> >>> > openstack_dashboard.settings so that the static files could be > >>> >>> > automatically > >>> >>> > collected by *collectstatic* process. > >>> >>> > > >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in > >>> >>> > openstack_dashboard/utils/settings.py and the relevant static > fils > >>> >>> > are > >>> >>> > added > >>> >>> > to STATICFILES_DIRS before it updates any Horizon plugin > dashboard. > >>> >>> > We may want new plugin setting keywords ( something similar to > >>> >>> > ADD_JS_FILES) > >>> >>> > to update horizon XSTATIC_MODULES (or directly update > >>> >>> > STATICFILES_DIRS). > >>> >>> > >>> >>> IMHO it is better to allow horizon plugins to add xstatic modules > >>> >>> through horizon plugin settings. I don't think it is a good idea to > >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon plugin > >>> >>> usages. It makes difficult to track why and where a xstatic module > in > >>> >>> BASE_XSTATIC_MODULES is used. 
> >>> >>> Multiple horizon plugins can add a same entry, so horizon code to > >>> >>> handle plugin settings should merge multiple entries to a single > one > >>> >>> hopefully. > >>> >>> My vote is to enhance the horizon plugin settings. > >>> >>> > >>> >>> Akihiro > >>> >>> > >>> >>> > > >>> >>> > Looking forward to hearing any suggestions from you guys, and > >>> >>> > Best Regards, > >>> >>> > > >>> >>> > Xinni Ge > >>> >>> > > >>> >>> > > >>> >>> > > >>> >>> > ____________________________________________________________ > ______________ > >>> >>> > OpenStack Development Mailing List (not for usage questions) > >>> >>> > Unsubscribe: > >>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >>> >>> > > >>> >>> > >>> >>> > >>> >>> > >>> >>> ____________________________________________________________ > ______________ > >>> >>> OpenStack Development Mailing List (not for usage questions) > >>> >>> Unsubscribe: > >>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> >> > >>> >> > >>> >> > >>> >> > >>> >> -- > >>> >> 葛馨霓 Xinni Ge > >>> >> > >>> >> > >>> >> ____________________________________________________________ > ______________ > >>> >> OpenStack Development Mailing List (not for usage questions) > >>> >> Unsubscribe: > >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> >> > >>> > > >>> > > >>> > > >>> > ____________________________________________________________ > ______________ > >>> > OpenStack Development Mailing List (not for usage questions) > >>> > Unsubscribe: > >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > > >>> > >>> > >>> ____________________________________________________________ > ______________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > > > -- > > 葛馨霓 Xinni Ge > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From n.vivekanandan at ericsson.com Tue Mar 20 06:45:22 2018
From: n.vivekanandan at ericsson.com (N Vivekanandan)
Date: Tue, 20 Mar 2018 06:45:22 +0000
Subject: [openstack-dev] [Openstack-Dev] [Neutron] [DragonFlow] Automatic Neighbour Discovery responder for IPv6
Message-ID: 

Hi DragonFlow Team,

We noticed that you are adding support for an automatic responder for neighbor solicitation via OpenFlow rules here:
https://review.openstack.org/#/c/412208/

Can you please let us know which OVS release you are using to test this feature?

We are pursuing an automatic NS responder in the OpenDaylight controller implementation, and we noticed that there are no NXM extensions to manage the 'R' bit and 'S' bit correctly.

From the RFC: https://tools.ietf.org/html/rfc4861

R      Router flag. When set, the R-bit indicates that the sender is a router. The R-bit is used by Neighbor Unreachability Detection to detect a router that changes to a host.

S      Solicited flag. When set, the S-bit indicates that the advertisement was sent in response to a Neighbor Solicitation from the Destination address. The S-bit is used as a reachability confirmation for Neighbor Unreachability Detection. It MUST NOT be set in multicast advertisements or in unsolicited unicast advertisements.

We noticed that this Dragonflow rule is being programmed for automatic response generation for NS:

icmp6,ipv6_dst=1::1,icmp_type=135 actions=load:0x88->NXM_NX_ICMPV6_TYPE[],move:NXM_NX_IPV6_SRC[]->NXM_NX_IPV6_DST[],mod_dl_src:00:11:22:33:44:55,load:0->NXM_NX_ND_SLL[],IN_PORT

The above line is from the spec: https://docs.openstack.org/dragonflow/latest/specs/ipv6.html

However, from the Dragonflow flow rule for automatic response above, we could not see that the R and S bits of the NS response are being managed.

Can you please clarify whether you don't intend to use the 'R' and 'S' bits at all in the Dragonflow implementation? Or do you intend to use them, but were not able to get NXM extensions for them in OVS, and so started ahead without managing those bits (as per the RFC)?

Thanks in advance for your help.

--
Thanks,
Vivek
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From muroi.masahito at lab.ntt.co.jp Tue Mar 20 07:44:26 2018
From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI)
Date: Tue, 20 Mar 2018 16:44:26 +0900
Subject: [openstack-dev] [Blazar] Nominating Bertrand Souville to Blazar core
Message-ID: <68dbaf1f-6546-9efe-e98b-8ffd23c1b117@lab.ntt.co.jp>

Hi Blazar folks,

I'd like to nominate Bertrand Souville to the Blazar core team. He has been involved in the project since the Ocata release. He has worked on NFV use cases, gap analysis and feedback in OPNFV and ETSI NFV as well as in Blazar itself. Additionally, he has reviewed not only the Blazar repository but also Blazar-related repositories with a nice long-term perspective.

I believe he would make the project much nicer.

best regards,
Masahito

From ksnhr.tech at gmail.com Tue Mar 20 08:17:59 2018
From: ksnhr.tech at gmail.com (Kaz Shinohara)
Date: Tue, 20 Mar 2018 17:17:59 +0900
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To: References: 
Message-ID: 

Hi Akihiro,

Thanks for your comment.
The background of my request to add us to xstatic-core comes from Ivan's comment in the last PTG's etherpad for the heat-dashboard discussion.
https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion Line135, "we can share ownership if needed - e0ne" Just in case, could you guys confirm unified opinion on this matter as Horizon team ? Frankly speaking I'm feeling the benefit to make us xstatic-core because it's easier & smoother to manage what we are taking for heat-dashboard. On the other hand, I can understand what Akihiro you are saying, the newly added repos belong to Horizon project & being managed by not Horizon core is not consistent. Also having exception might make unexpected confusion in near future. Eventually we will follow your opinion, let me hear Horizon team's conclusion. Regards, Kaz 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : > Hi Kaz, > > These repositories are under horizon project. It looks better to keep the > current core team. > It potentially brings some confusion if we treat some horizon plugin team > specially. > Reviewing xstatic repos would be a small burden, wo I think it would work > without problem even if only horizon-core can approve xstatic reviews. > > > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara : >> >> Hi Ivan, Horizon folks, >> >> >> Now totally 8 xstatic-** repos for heat-dashboard have been landed. >> >> In project-config for them, I've set same acl-config as the existing >> xstatic repos. >> It means only "xstatic-core" can manage the newly created repos on gerrit. >> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as >> what horizon-core is doing ? >> >> xstatic-core >> https://review.openstack.org/#/admin/groups/385,members >> >> heat-dashboard-core >> https://review.openstack.org/#/admin/groups/1844,members >> >> Of course, we will surely touch only what we made, just would like to >> manage them smoothly by ourselves. >> In case we need to touch the other ones, will ask Horizon team for help. >> >> Thanks in advance. >> >> Regards, >> Kaz >> >> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : >> > Hi Horizon Team, >> > >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin option, >> > and submitted a patch for it. >> > Could you please help to review the patch. >> > >> > https://bugs.launchpad.net/horizon/+bug/1755339 >> > https://review.openstack.org/#/c/552259/ >> > >> > Thank you very much. >> > >> > Best Regards, >> > Xinni >> > >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >> > wrote: >> >> >> >> Hi Kaz, >> >> >> >> Thanks for cleaning this up. I put +1 on both of these patches >> >> >> >> Regards, >> >> Ivan Kolodyazhny, >> >> http://blog.e0ne.info/ >> >> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >> >> wrote: >> >>> >> >>> Hi Ivan & Horizon folks, >> >>> >> >>> >> >>> Now we are submitting a couple of patches to have the new xstatic >> >>> modules. >> >>> Let me request you to have review the following patches. >> >>> We need Horizon PTL's +1 to move these forward. >> >>> >> >>> project-config >> >>> https://review.openstack.org/#/c/551978/ >> >>> >> >>> governance >> >>> https://review.openstack.org/#/c/551980/ >> >>> >> >>> Thanks in advance:) >> >>> >> >>> Regards, >> >>> Kaz >> >>> >> >>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >> >>> : >> >>> > Yes, please do that. We can then discuss in the review about >> >>> > technical >> >>> > details. >> >>> > >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >> >>> > wrote: >> >>> >> >> >>> >> Hi, Akihiro >> >>> >> >> >>> >> Thanks for the quick reply. >> >>> >> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not be >> >>> >> modified. 
>> >>> >> It is much better to enhance horizon plugin settings, >> >>> >> and I think maybe there could be one option like >> >>> >> ADD_XSTATIC_MODULES. >> >>> >> This option adds the plugin's xstatic files in STATICFILES_DIRS. >> >>> >> I am considering to add a bug report to describe it at first, and >> >>> >> give >> >>> >> a >> >>> >> patch later maybe. >> >>> >> Is that ok with the Horizon team? >> >>> >> >> >>> >> Best Regards. >> >>> >> Xinni >> >>> >> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >> >>> >> wrote: >> >>> >>> >> >>> >>> Hi Xinni, >> >>> >>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge : >> >>> >>> > Hello Horizon Team, >> >>> >>> > >> >>> >>> > I would like to hear about your opinions about how to add new >> >>> >>> > xstatic >> >>> >>> > modules to horizon settings. >> >>> >>> > >> >>> >>> > As for Heat-dashboard project embedded 3rd-party files issue, >> >>> >>> > thanks >> >>> >>> > for >> >>> >>> > your advices in Dublin PTG, we are now removing them and >> >>> >>> > referencing as >> >>> >>> > new >> >>> >>> > xstatic-* libs. >> >>> >>> >> >>> >>> Thanks for moving this forward. >> >>> >>> >> >>> >>> > So we installed the new xstatic files (not uploaded as openstack >> >>> >>> > official >> >>> >>> > repos yet) in our development environment now, but hesitate to >> >>> >>> > decide >> >>> >>> > how to >> >>> >>> > add the new installed xstatic lib path to STATICFILES_DIRS in >> >>> >>> > openstack_dashboard.settings so that the static files could be >> >>> >>> > automatically >> >>> >>> > collected by *collectstatic* process. >> >>> >>> > >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >> >>> >>> > openstack_dashboard/utils/settings.py and the relevant static >> >>> >>> > fils >> >>> >>> > are >> >>> >>> > added >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon plugin >> >>> >>> > dashboard. >> >>> >>> > We may want new plugin setting keywords ( something similar to >> >>> >>> > ADD_JS_FILES) >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update >> >>> >>> > STATICFILES_DIRS). >> >>> >>> >> >>> >>> IMHO it is better to allow horizon plugins to add xstatic modules >> >>> >>> through horizon plugin settings. I don't think it is a good idea >> >>> >>> to >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon plugin >> >>> >>> usages. It makes difficult to track why and where a xstatic module >> >>> >>> in >> >>> >>> BASE_XSTATIC_MODULES is used. >> >>> >>> Multiple horizon plugins can add a same entry, so horizon code to >> >>> >>> handle plugin settings should merge multiple entries to a single >> >>> >>> one >> >>> >>> hopefully. >> >>> >>> My vote is to enhance the horizon plugin settings. 
>> >>> >>> >> >>> >>> Akihiro >> >>> >>> >> >>> >>> > >> >>> >>> > Looking forward to hearing any suggestions from you guys, and >> >>> >>> > Best Regards, >> >>> >>> > >> >>> >>> > Xinni Ge >> >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> > __________________________________________________________________________ >> >>> >>> > OpenStack Development Mailing List (not for usage questions) >> >>> >>> > Unsubscribe: >> >>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>> >>> > >> >>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >>> >>> > >> >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> __________________________________________________________________________ >> >>> >>> OpenStack Development Mailing List (not for usage questions) >> >>> >>> Unsubscribe: >> >>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >>> >> >> >>> >> >> >>> >> >> >>> >> >> >>> >> -- >> >>> >> 葛馨霓 Xinni Ge >> >>> >> >> >>> >> >> >>> >> >> >>> >> __________________________________________________________________________ >> >>> >> OpenStack Development Mailing List (not for usage questions) >> >>> >> Unsubscribe: >> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >>> >> >> >>> > >> >>> > >> >>> > >> >>> > >> >>> > __________________________________________________________________________ >> >>> > OpenStack Development Mailing List (not for usage questions) >> >>> > Unsubscribe: >> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >>> > >> >>> >> >>> >> >>> >> >>> __________________________________________________________________________ >> >>> OpenStack Development Mailing List (not for usage questions) >> >>> Unsubscribe: >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > >> > >> > >> > -- >> > 葛馨霓 Xinni Ge >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From singh.surya64mnnit at gmail.com Tue Mar 20 08:24:18 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) 
Date: Tue, 20 Mar 2018 13:54:18 +0530 Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan In-Reply-To: References: Message-ID: +1 > >> >> *From:* Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] >> *Sent:* Monday, March 12, 2018 9:07 AM >> *To:* OpenStack Development Mailing List > .org> >> *Subject:* [openstack-dev] [kolla][vote] core nomination for caoyuan >> >> >> >> ​​Kolla core reviewer team, >> >> >> >> It is my pleasure to nominate caoyuan for kolla core team. >> >> >> >> caoyuan's output is fantastic over the last cycle. And he is the most >> >> active non-core contributor on Kolla project for last 180 days[1]. He >> >> focuses on configuration optimize and improve the pre-checks feature. >> >> >> >> Consider this nomination a +1 vote from me. >> >> >> >> A +1 vote indicates you are in favor of caoyuan as a candidate, a -1 >> >> is a veto. Voting is open for 7 days until Mar 12th, or a unanimous >> >> response is reached or a veto vote occurs. >> >> >> >> [1] http://stackalytics.com/report/contribution/kolla-group/180 >> >> -- >> >> Regards, >> >> Jeffrey Zhang >> >> Blog: http://xcodest.me >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Tue Mar 20 08:39:43 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Tue, 20 Mar 2018 19:39:43 +1100 Subject: [openstack-dev] [all][api] POST /api-sig/news In-Reply-To: References: <72f7b0cd-dca7-d4b0-8236-8222ac755f0a@redhat.com> Message-ID: <057cec41-1c96-924e-21b8-7a9ebace0dca@redhat.com> On 16/03/18 19:55, Chris Dent wrote: > > Meta: When responding to lists, please do not cc individuals, just > repond to the list. Thanks, response within. > +1 > On Fri, 16 Mar 2018, Gilles Dubreuil wrote: > >> In order to continue and progress on the API Schema guideline [1] as >> mentioned in [2] to make APIs more machine-discoverable and also >> discussed during [3]. >> >> Unfortunately until a new or either a second meeting time slot has >> been allocated,  inconveniently for everyone, have to be done by emails. > > I'm sorry that the meeting time is excluding you and others, but our > efforts to have either a second meeting or to change the time have > met with limited response (except from you). > > In any case, the meeting are designed to be checkpoints where we > resolve stuck questions and checkpoint where we are on things. It is > better that most of the work be done in emails and on reviews as > that's the most inclusive, and is less dependent on time-related > variables. I agree in general most of our work can be done "off-line" meanwhile there are times were interaction is preferable especially in early phases of conception in order to provide appropriate momentum. > > So moving the discussion about schemas here is the right thing and > the fact that it hasn't happened (until now) is the reason for what > appears to be a rather lukewarm reception from the people writing > the API-SIG newsletter: if there's no traffic on either the gerrit > review or here in email then there's no evidence of demand. You're > asserting here that there is; that's great. 
Yes, and some of those believers will either jump on this thread or add comments to the related reviews in order to confirm this. Of course one cannot expect them to be active participants, as I'm delegated to be the interface for this feature.

>
>> Of course new features have to be decided (voted) by the community
>> but how does that work when there are not enough people voting in?
>> It seems unfair to decide not to move forward and ignore the request
>> because the others people interested are not participating at this
>> level.
>
> In a world of limited resources we can't impose work on people. The
> SIG is designed to be a place where people can come to make progress
> on API-related issues. If people don't show up, progress can't be
> made. Showing up doesn't have to mean show up at an IRC meeting. In
> fact I very much hope that it never means that. Instead it means
> writing things (like your email message) and seeking out
> collaborators to push your idea(s) forward.

This comforts me about more automation to help ;)

>
>> It's very important to consider the fact "I" am representing more
>> than just myself but an Openstack integration team, whose members are
>> supporting me, and our work impacts others teams involved in their
>> open source product consuming OpenStack. I'm sorry if I haven't made
>> this more clear from the beginning, I guess I'm still learning on the
>> particiaption process. So from now on, I'm going to use "us" instead.
>
> Can some of those "us" show up on the mailing list, the gerrit
> reviews, and prototype work that Graham has done?

Yes, absolutely, as I just mentioned above.

>
>> Also from discussions with other developers from AT&T (OpenStack
>> summit in Sydney) and SAP (Misty project) who are already using
>> automation to consume APIs, this is really needed.
>
> Them too.

For the first ones, I've tried without success (Twitter); unfortunately I don't have their email addresses, so let me ask the OpenStack organizers if they can pass it along... I'll poke the second ones.

>
>> I've also mentioned the now known fact that no SDK has full time
>> resources to maintain it (which was the initial trigger for us) more
>> automation is the only sustainable way to continue the journey.
>>
>> Finally how can we dare say no to more automation? Unless of course,
>> only artisan work done by real hipster is allowed ;)
>
> Nobody is saying no to automation (as far as I'm aware). Some people
> (e.g., me, but not just me) are saying "unless there's an active
> community to do this work and actively publish about it and the
> related use cases that drive it it's impossible to make it a
> priority". Some other people (also me, but not just me) are also
> saying "schematizing API client generation is not my favorite thing"
> but that's just a personal opinion and essentially meaningless
> because yet other people are saying "I love API schema!".
>
> What's missing, though, is continuous enagement on producing
> children of that love.

Well, I believe, maybe because I kind of belong to the second group, that the whole API definition is upside-down. If we had had API schemas from day one, we would have more children of love and many, many more grandchildren of OpenStack users.
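[Purely as an illustration of what one serialized per-endpoint record in such a schema dump could look like; the field names below are invented, not any project's actual format.]

# A hypothetical entry as it might be dumped to JSON from the in-memory
# schema that the api-ref build already constructs.
schema_entry = {
    "service": "compute",
    "method": "GET",
    "path": "/servers/{server_id}",
    "min_version": "2.1",      # first microversion where this appears
    "max_version": None,       # still present at the latest microversion
    "parameters": {
        "server_id": {"in": "path", "type": "string", "required": True},
    },
}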
>> >> I understand microversion standardization (OpenAPI) has not happened >> yet or if it ever does but that shouldn't preclude making progress. > > Of course, but who are you expecting to make that progress? The > API-SIGs statement of "not something we're likely to pursue as a > part of guidance" is about apparent unavailability of interested > people. If that changes then the guidance situation probably changes > too. This a question I've been struggling about a lot. What's the API SIG purpose and how effective it can be in driving changes. I understand the history of OpenStack has been very pragmatically driven from all its projects and even more strongly from some 'core' projects such as Nova. Meanwhile it doesn't preclude OpenStack overall project to benefit from having needs driven from a user level requirements. As far as know, there are no other structure, whether project or SIG/WG that can currently tackle this better than the API SIG. Yes, going across the projects is daunting but I believe that's the challenge to lead and share among all projects that OpenStack needs it. Maybe that's what I kind of expect here, to get support to do so. > > But not writing guiadance is different from provide a place to talk > about it. That's what a SIG is for. Think of it as a room with > coffee and snacks where it is safe to talk about anything related to > APIs. And that room exists in email just as much as it does in IRC > and at the PTG. Ideally it exists _most_ in email. > >> So summarize and clarify, we are talking about SDK being able to >> build their interface to Openstack APIs in an automated way but >> statically from API Schema generated by every project. Such API >> Schema is already built in memory during API reference documentation >> generation and could be saved in JSON format (for instance) (see [5]). > > What do you see as the current roadblocks preventing this work from > continuing to make progress? Once we've obtained clear evidence from others of such need and assuming we have support of the committee then I suppose Graham's PR will move forward before we add guidance for API schema use. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Tue Mar 20 08:45:17 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 20 Mar 2018 17:45:17 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi Kaz and Ivan, Yeah, it is worth discussed officially in the horizon team meeting or the mailing list thread to get a consensus. Hopefully you can add this topic to the horizon meeting agenda. After sending the previous mail, I noticed anther option. I see there are several options now. (1) Keep xstatic-core and horizon-core same. (2) Add specific members to xstatic-core (3) Add specific horizon-plugin core to xstatic-core (4) Split core membership into per-repo basis (perhaps too complicated!!) My current vote is (2) as xstatic-core needs to understand what is xstatic and how it is maintained. Thanks, Akihiro 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : > Hi Akihiro, > > > Thanks for your comment. 
> The background of my request to add us to xstatic-core comes from > Ivan's comment in last PTG's etherpad for heat-dashboard discussion. > > https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion > Line135, "we can share ownership if needed - e0ne" > > Just in case, could you guys confirm unified opinion on this matter as > Horizon team ? > > Frankly speaking I'm feeling the benefit to make us xstatic-core > because it's easier & smoother to manage what we are taking for > heat-dashboard. > On the other hand, I can understand what Akihiro you are saying, the > newly added repos belong to Horizon project & being managed by not > Horizon core is not consistent. > Also having exception might make unexpected confusion in near future. > > Eventually we will follow your opinion, let me hear Horizon team's > conclusion. > > Regards, > Kaz > > > 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : > > Hi Kaz, > > > > These repositories are under horizon project. It looks better to keep the > > current core team. > > It potentially brings some confusion if we treat some horizon plugin team > > specially. > > Reviewing xstatic repos would be a small burden, wo I think it would work > > without problem even if only horizon-core can approve xstatic reviews. > > > > > > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara : > >> > >> Hi Ivan, Horizon folks, > >> > >> > >> Now totally 8 xstatic-** repos for heat-dashboard have been landed. > >> > >> In project-config for them, I've set same acl-config as the existing > >> xstatic repos. > >> It means only "xstatic-core" can manage the newly created repos on > gerrit. > >> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as > >> what horizon-core is doing ? > >> > >> xstatic-core > >> https://review.openstack.org/#/admin/groups/385,members > >> > >> heat-dashboard-core > >> https://review.openstack.org/#/admin/groups/1844,members > >> > >> Of course, we will surely touch only what we made, just would like to > >> manage them smoothly by ourselves. > >> In case we need to touch the other ones, will ask Horizon team for help. > >> > >> Thanks in advance. > >> > >> Regards, > >> Kaz > >> > >> > >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : > >> > Hi Horizon Team, > >> > > >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin option, > >> > and submitted a patch for it. > >> > Could you please help to review the patch. > >> > > >> > https://bugs.launchpad.net/horizon/+bug/1755339 > >> > https://review.openstack.org/#/c/552259/ > >> > > >> > Thank you very much. > >> > > >> > Best Regards, > >> > Xinni > >> > > >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny > >> > wrote: > >> >> > >> >> Hi Kaz, > >> >> > >> >> Thanks for cleaning this up. I put +1 on both of these patches > >> >> > >> >> Regards, > >> >> Ivan Kolodyazhny, > >> >> http://blog.e0ne.info/ > >> >> > >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara > > >> >> wrote: > >> >>> > >> >>> Hi Ivan & Horizon folks, > >> >>> > >> >>> > >> >>> Now we are submitting a couple of patches to have the new xstatic > >> >>> modules. > >> >>> Let me request you to have review the following patches. > >> >>> We need Horizon PTL's +1 to move these forward. 
> >> >>> > >> >>> project-config > >> >>> https://review.openstack.org/#/c/551978/ > >> >>> > >> >>> governance > >> >>> https://review.openstack.org/#/c/551980/ > >> >>> > >> >>> Thanks in advance:) > >> >>> > >> >>> Regards, > >> >>> Kaz > >> >>> > >> >>> > >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski > >> >>> : > >> >>> > Yes, please do that. We can then discuss in the review about > >> >>> > technical > >> >>> > details. > >> >>> > > >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge > > >> >>> > wrote: > >> >>> >> > >> >>> >> Hi, Akihiro > >> >>> >> > >> >>> >> Thanks for the quick reply. > >> >>> >> > >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not be > >> >>> >> modified. > >> >>> >> It is much better to enhance horizon plugin settings, > >> >>> >> and I think maybe there could be one option like > >> >>> >> ADD_XSTATIC_MODULES. > >> >>> >> This option adds the plugin's xstatic files in STATICFILES_DIRS. > >> >>> >> I am considering to add a bug report to describe it at first, and > >> >>> >> give > >> >>> >> a > >> >>> >> patch later maybe. > >> >>> >> Is that ok with the Horizon team? > >> >>> >> > >> >>> >> Best Regards. > >> >>> >> Xinni > >> >>> >> > >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki < > amotoki at gmail.com> > >> >>> >> wrote: > >> >>> >>> > >> >>> >>> Hi Xinni, > >> >>> >>> > >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge : > >> >>> >>> > Hello Horizon Team, > >> >>> >>> > > >> >>> >>> > I would like to hear about your opinions about how to add new > >> >>> >>> > xstatic > >> >>> >>> > modules to horizon settings. > >> >>> >>> > > >> >>> >>> > As for Heat-dashboard project embedded 3rd-party files issue, > >> >>> >>> > thanks > >> >>> >>> > for > >> >>> >>> > your advices in Dublin PTG, we are now removing them and > >> >>> >>> > referencing as > >> >>> >>> > new > >> >>> >>> > xstatic-* libs. > >> >>> >>> > >> >>> >>> Thanks for moving this forward. > >> >>> >>> > >> >>> >>> > So we installed the new xstatic files (not uploaded as > openstack > >> >>> >>> > official > >> >>> >>> > repos yet) in our development environment now, but hesitate to > >> >>> >>> > decide > >> >>> >>> > how to > >> >>> >>> > add the new installed xstatic lib path to STATICFILES_DIRS in > >> >>> >>> > openstack_dashboard.settings so that the static files could be > >> >>> >>> > automatically > >> >>> >>> > collected by *collectstatic* process. > >> >>> >>> > > >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in > >> >>> >>> > openstack_dashboard/utils/settings.py and the relevant static > >> >>> >>> > fils > >> >>> >>> > are > >> >>> >>> > added > >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon plugin > >> >>> >>> > dashboard. > >> >>> >>> > We may want new plugin setting keywords ( something similar to > >> >>> >>> > ADD_JS_FILES) > >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update > >> >>> >>> > STATICFILES_DIRS). > >> >>> >>> > >> >>> >>> IMHO it is better to allow horizon plugins to add xstatic > modules > >> >>> >>> through horizon plugin settings. I don't think it is a good idea > >> >>> >>> to > >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon plugin > >> >>> >>> usages. It makes difficult to track why and where a xstatic > module > >> >>> >>> in > >> >>> >>> BASE_XSTATIC_MODULES is used. 
> >> >>> >>> Multiple horizon plugins can add the same entry, so horizon code
> >> >>> >>> to handle plugin settings should merge multiple entries into a
> >> >>> >>> single one, hopefully.
> >> >>> >>> My vote is to enhance the horizon plugin settings.
> >> >>> >>>
> >> >>> >>> Akihiro
> >> >>> >>>
> >> >>> >>> > Looking forward to hearing any suggestions from you guys, and
> >> >>> >>> > Best Regards,
> >> >>> >>> >
> >> >>> >>> > Xinni Ge
> >> >>> >>
> >> >>> >> --
> >> >>> >> 葛馨霓 Xinni Ge
> >> >
> >> > --
> >> > 葛馨霓 Xinni Ge

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
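For concreteness, here is a sketch of what the ADD_XSTATIC_MODULES option
discussed above could look like in a plugin's enabled file. The file name
and the exact tuple format are illustrative assumptions modelled on
horizon's BASE_XSTATIC_MODULES entries, not the merged implementation:

    # openstack_dashboard/local/enabled/_1650_project_heat_dashboard.py
    # (hypothetical file name; each entry pairs an xstatic module with
    # the files it should expose, mirroring BASE_XSTATIC_MODULES)
    ADD_XSTATIC_MODULES = [
        ('xstatic.pkg.angular_material', ['angular-material.min.js']),
        ('xstatic.pkg.js_yaml', ['js-yaml.min.js']),
    ]

Each entry would end up in STATICFILES_DIRS exactly as the base entries
do, so collectstatic picks the files up without the plugin touching
horizon's own settings.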
From julien at danjou.info Tue Mar 20 08:54:43 2018
From: julien at danjou.info (Julien Danjou)
Date: Tue, 20 Mar 2018 09:54:43 +0100
Subject: [openstack-dev] [ceilometer] [gnocchi] keystone verification failed.
In-Reply-To: (mango.'s message of "Tue, 20 Mar 2018 10:15:21 +0800")
References:
Message-ID:

On Tue, Mar 20 2018, __ mango. wrote:

> hi,
> I have a question about the validation of gnocchi keystone.
> I run the following command, but it is not successful.(api.auth_mode :basic, basic mode can be
> # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H
> "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept:
> application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0
> python-requests/2.18.1 CPython/2.7.12"
> Starting new HTTP connection (1): localhost
> http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
> RESP: [401] Content-Type: application/json Content-Length: 114 WWW-Authenticate: Keystone uri='http://controller:5000/v3' Connection: Keep-Alive
> RESP BODY: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
> The request you have made requires authentication. (HTTP 401)

You need to be authed as "admin" to get the status.

--
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 832 bytes
Desc: not available
URL:
From vidyadharreddy68 at gmail.com Tue Mar 20 09:30:55 2018
From: vidyadharreddy68 at gmail.com (vidyadhar reddy)
Date: Tue, 20 Mar 2018 10:30:55 +0100
Subject: [openstack-dev] [Neutron][vpnaas]
Message-ID:

Hello,

I have a general question regarding the working of VPNaaS: can we set up
multiple VPN connections on a single router?

My scenario: let's say we have two networks, net1 and net2, in two
different sites respectively. Each network has two subnets, and the two
sites have one router each, with three interfaces: one for the public
network and the remaining two for the two subnets. Can we set up two
VPNaaS connections on the routers in each site to enable communication
between the two subnets in each site? I have tried this setup and it
didn't work for me. I just wanted to know whether this is a design
constraint or not. I am not sure if this issue is under development; is
there any development going on, or has it already been solved?

BR,
Vidyadhar reddy peddireddy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gdubreui at redhat.com Tue Mar 20 09:34:08 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Tue, 20 Mar 2018 20:34:08 +1100
Subject: [openstack-dev] [Openstack-sigs] [all][api] POST /api-sig/news
In-Reply-To:
References: <72f7b0cd-dca7-d4b0-8236-8222ac755f0a at redhat.com>
Message-ID:

On 20/03/18 08:26, Michael McCune wrote:
>
> On Fri, Mar 16, 2018 at 4:55 AM, Chris Dent  wrote:
>
>     To summarize and clarify: we are talking about SDKs being able
>     to build their interfaces to OpenStack APIs in an automated way,
>     but statically, from an API Schema generated by every project.
>     Such an API Schema is already built in memory during API
>     reference documentation generation and could be saved in JSON
>     format (for instance) (see [5]).
>
>     What do you see as the current roadblocks preventing this work from
>     continuing to make progress?
>
> Gilles, I'm very curious about how we can help as well.
>
> I am keenly interested in the api-schema work that is happening and I
> am coming up to speed with the work that Graham has done, and which
> previously existed, on os-api-ref. Although I don't have a *ton* of
> spare free time, I would like to help as much as I can.

Hi Michael,

Thank you very much for jumping in. Your interest shows the demand for
such a feature, which is what we need the most at the moment. The more
people, the better the momentum and the likelihood of getting more help.
Let's blow the horn!

As you probably already know, the real work is Graham's PR [1], where
the magic is going to happen and where you can help.

Graham, who has been involved with and working on the Sphinx library,
offered to 'dump' the API schema which is already in memory (what I call
the de-facto API Schema), the one needed to generate the API Reference
guides.

So instead of asking developers of each project to change their habits
and write an API schema up front, it seemed easier to just use the
current workflow in place with the documentation (API Ref) and generate
the API schema, which can be stored in every project's Git.
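To make this concrete, here is a rough sketch of the idea from the SDK
side. The schema shape and names below are invented for illustration;
they are not taken from Graham's PR:

    import json

    # Hypothetical example of a dumped API schema; the real shape will
    # be whatever the documentation builder ends up emitting.
    SCHEMA = json.loads("""
    {
      "service": "compute",
      "endpoints": [
        {"name": "list_servers", "method": "GET", "path": "/servers"},
        {"name": "get_server", "method": "GET", "path": "/servers/{server_id}"}
      ]
    }
    """)

    def make_caller(method, path):
        # A real SDK would issue an HTTP request here; we only format it.
        def caller(**params):
            return "{} {}".format(method, path.format(**params))
        return caller

    # Build one callable per documented endpoint, statically.
    api = {ep["name"]: make_caller(ep["method"], ep["path"])
           for ep in SCHEMA["endpoints"]}

    print(api["get_server"](server_id="abc123"))  # GET /servers/abc123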
[1] https://review.openstack.org/#/c/528801

Cheers,
Gilles

> thanks for bringing this up again,
>
> peace o/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From 935540343 at qq.com Tue Mar 20 09:34:11 2018
From: 935540343 at qq.com (__ mango.)
Date: Tue, 20 Mar 2018 17:34:11 +0800
Subject: [openstack-dev] Re: [ceilometer] [gnocchi] keystone verification failed.
In-Reply-To:
References:
Message-ID:

hi,
I have configured the following:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

/etc/gnocchi/gnocchi.conf
[DEFAULT]
[api]
auth_mode = keystone
[archive_policy]
[cors]
[healthcheck]
[incoming]
[indexer]
url = mysql+pymysql://gnocchi:gnocchi at controller/gnocchi
[keystone_authtoken]
auth_type = password
auth_url = http://controller:5000/v3
project_domain_name = default
user_domain_name = default
project_name = service
username = gnocchi
password = gnocchi
interface = internalURL
region_name = RegionOne
[metricd]
[oslo_middleware]
[oslo_policy]
[statsd]
[storage]
coordination_url = redis://controller:6379
file_basepath = /var/lib/gnocchi
driver = file

PS: Following the standard install document
(https://docs.openstack.org/ceilometer/pike/install/install-base-ubuntu.html#install-gnocchi)
does not get gnocchi working; what should I do? I have used the "admin"
credentials, and the other components are all normal except for gnocchi.

------------------ Original Message ------------------
From: "Julien Danjou";
Sent: Tuesday, March 20, 2018, 4:54 PM
To: "__ mango."<935540343 at qq.com>;
Cc: "openstack-dev";
Subject: Re: [openstack-dev] [ceilometer] [gnocchi] keystone verification failed.

On Tue, Mar 20 2018, __ mango. wrote:

> hi,
> I have a question about the validation of gnocchi keystone.
> I run the following command, but it is not successful.(api.auth_mode :basic, basic mode can be
> # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H
> "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept:
> application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0
> python-requests/2.18.1 CPython/2.7.12"
> Starting new HTTP connection (1): localhost
> http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
> RESP: [401] Content-Type: application/json Content-Length: 114 WWW-Authenticate: Keystone uri='http://controller:5000/v3' Connection: Keep-Alive
> RESP BODY: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
> The request you have made requires authentication. (HTTP 401)

You need to be authed as "admin" to get the status.

--
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
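Expressed as code rather than environment variables, the admin
credentials above correspond to a keystoneauth1 session roughly like the
sketch below. It is only an illustration (and, as the reply that follows
points out, the CLI-side fix is simply exporting OS_AUTH_TYPE=password):

    # Illustrative keystoneauth1 session built from the values above; a
    # gnocchi client constructed with such a session would use keystone
    # auth mode instead of the basic one.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='http://controller:35357/v3',
        username='admin',
        password='admin',
        project_name='admin',
        user_domain_name='Default',
        project_domain_name='Default',
    )
    sess = session.Session(auth=auth)
    print(sess.get_token())  # needs a reachable keystone, of course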
From julien at danjou.info Tue Mar 20 09:50:51 2018
From: julien at danjou.info (Julien Danjou)
Date: Tue, 20 Mar 2018 10:50:51 +0100
Subject: [openstack-dev] Re: [ceilometer] [gnocchi] keystone verification failed.
In-Reply-To: (mango.'s message of "Tue, 20 Mar 2018 17:34:11 +0800")
References:
Message-ID:

On Tue, Mar 20 2018, __ mango. wrote:

> I have configured the following
> export OS_PROJECT_DOMAIN_NAME=Default
> export OS_USER_DOMAIN_NAME=Default
> export OS_PROJECT_NAME=admin
> export OS_USERNAME=admin
> export OS_PASSWORD=admin
> export OS_AUTH_URL=http://controller:35357/v3
> export OS_IDENTITY_API_VERSION=3
> export OS_IMAGE_API_VERSION=2
>
> /etc/gnocchi/gnocchi.conf
> [DEFAULT]
> [api]
> auth_mode = keystone

You said in your mail that you were using basic as auth_mode, and your
request here:

>> # gnocchi status --debug
>> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H
>> "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept:
>> application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0
>> python-requests/2.18.1 CPython/2.7.12"
>> Starting new HTTP connection (1): localhost
>> http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
>> RESP: [401] Content-Type: application/json Content-Length: 114 WWW-Authenticate: Keystone uri='http://controller:5000/v3' Connection: Keep-Alive
>> RESP BODY: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Unauthorized"}}
>> The request you have made requires authentication. (HTTP 401)

indicates that the client is using basic auth mode. As Gordon already
replied
(https://gnocchi.xyz/gnocchiclient/shell.html#openstack-keystone-authentication),
you're missing OS_AUTH_TYPE.

Sigh.

--
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 832 bytes
Desc: not available
URL:

From thierry at openstack.org Tue Mar 20 09:54:38 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Tue, 20 Mar 2018 10:54:38 +0100
Subject: [openstack-dev] [tc] Vancouver Forum session brainstorming
Message-ID:

Hi, governance / cross-community topics lovers,

Like other groups, the TC has started to brainstorm potential topics for
discussion at the Forum in Vancouver. The idea is to coordinate, merge
duplicate sessions, and find missing sessions before the submission site
formally opens. Please add your suggestions at:

https://etherpad.openstack.org/p/YVR-forum-TC-sessions

As a reminder, the Forum is the venue where it is the easiest to get wide
feedback from the OpenStack community as a whole. Ideal session topics
are those that would benefit a lot from that wide feedback.
Cheers,

--
Thierry Carrez (ttx)

From balazs.gibizer at ericsson.com Tue Mar 20 10:29:04 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Tue, 20 Mar 2018 11:29:04 +0100
Subject: [openstack-dev] [nova][neutron] Rocky PTG summary - nova/neutron
In-Reply-To: <82358229-b86f-073c-5019-6e762be73722 at gmail.com>
References: <82358229-b86f-073c-5019-6e762be73722 at gmail.com>
Message-ID: <1521541744.9826.2 at smtp.office365.com>

On Fri, Mar 16, 2018 at 12:04 AM, Matt Riedemann  wrote:
> On 3/15/2018 3:30 PM, melanie witt wrote:
>> * We don't need to block bandwidth-based scheduling support for
>> doing port creation in conductor (it's not trivial), however, if
>> nova creates a port on a network with a QoS policy, nova is going
>> to have to munge the allocations and update placement (from
>> nova-compute) ... so maybe we should block this on moving port
>> creation to conductor after all
>
> This is not the current direction in the spec. The spec is *large*
> and detailed, and this is one of the things being discussed in there.
> For the latest on all of it, gonna need to get caught up on the spec.
> But it won't be updated for awhile because Brother Gib is on vacation.

In the current state of the spec I try to keep this case out of scope
[1]. Having a QoS policy requires a special port or network, and nova
server create with only a network_id is expected to work for simple
network and port setups. If the user wants some special port (like
SRIOV), she has to pre-create that port in neutron anyhow.

Cheers,
gibi

[1] https://review.openstack.org/#/c/502306/18/specs/rocky/approved/bandwidth-resource-provider.rst at 126

From jim at jimrollenhagen.com Tue Mar 20 12:37:15 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Tue, 20 Mar 2018 12:37:15 +0000
Subject: [openstack-dev] [nova][ironic] Rocky PTG summary - nova/ironic
In-Reply-To:
References:
Message-ID:

Thanks for the writeup, Melanie :)

On Mon, Mar 19, 2018 at 8:31 PM, melanie witt  wrote:
>
> * For the issue of nova-compute crashing on startup, we could add a
> try-except around the call site at startup and ignore a "NotReadyYet" or
> similar exception from the Ironic driver

This is here: https://review.openstack.org/#/c/545479/

Just doing a bit more testing and should have a new version up shortly.

> * On Ironic API version negotiation, the ironicclient already has some
> version negotiation built-in, so there are some options. 1) update Ironic
> driver to handle return/error codes from ironicclient version-negotiated
> calls, 2) add per-call microversion support to ironicclient and use it in
> the Ironic driver, 3) convert all Ironic driver calls to use raw REST
> * Option 1) would be the most expedient, but it's up to the Ironic
> team how they will want to proceed. Option 3 is the desired ideal solution
> but will take a rewrite of the related Ironic driver unit tests as they
> currently all mock ironicclient

We discussed this further in IRC yesterday, and Julia is going to explore
option 2 for now.

// jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
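(To illustrate "option 2" above: a rough sketch of per-call microversion
handling from the driver side. Every name below is hypothetical, standing
in for real ironicclient calls; ironicclient does not necessarily expose
such an interface today.)

    class UnsupportedVersion(Exception):
        """Stand-in for whatever the client would raise."""

    def node_get(uuid, os_ironic_api_version=None):
        # Stub for a real ironicclient call.
        return {'uuid': uuid, 'api_version': os_ironic_api_version}

    def call_with_fallback(method, wanted, negotiated, *args, **kwargs):
        # Ask for the microversion a feature needs; retry with the
        # already negotiated version if the server rejects it.
        try:
            return method(*args, os_ironic_api_version=wanted, **kwargs)
        except UnsupportedVersion:
            return method(*args, os_ironic_api_version=negotiated, **kwargs)

    print(call_with_fallback(node_get, '1.34', '1.31', 'node-uuid'))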
From priteau at uchicago.edu Tue Mar 20 13:01:36 2018
From: priteau at uchicago.edu (Pierre Riteau)
Date: Tue, 20 Mar 2018 13:01:36 +0000
Subject: [openstack-dev] [Blazar] Nominating Bertrand Souville to Blazar core
In-Reply-To: <68dbaf1f-6546-9efe-e98b-8ffd23c1b117 at lab.ntt.co.jp>
References: <68dbaf1f-6546-9efe-e98b-8ffd23c1b117 at lab.ntt.co.jp>
Message-ID: <84717E64-73B9-481C-AADD-8FE1D5412128 at uchicago.edu>

> On 20 Mar 2018, at 07:44, Masahito MUROI  wrote:
>
> Hi Blazar folks,
>
> I'd like to nominate Bertrand Souville to the blazar core team. He has
> been involved in the project since the Ocata release. He has worked on
> NFV use cases, gap analysis and feedback in OPNFV and ETSI NFV, as well
> as in Blazar itself. Additionally, he has reviewed not only the Blazar
> repository but also Blazar-related repositories, with a nice long-term
> perspective.
>
> I believe he would make the project much nicer.
>
> best regards,
> Masahito

+1

From mkletzan at redhat.com Tue Mar 20 14:20:31 2018
From: mkletzan at redhat.com (Martin Kletzander)
Date: Tue, 20 Mar 2018 15:20:31 +0100
Subject: [openstack-dev] Project for profiles and defaults for libvirt domains
Message-ID: <20180320142031.GB23007 at wheatley>

Hi everyone!

First of all, sorry for such a wide distribution, but apparently that's
the best way to make sure we cooperate nicely. So please be considerate,
as this is a cross-post between a huge number of mailing lists.

After some discussions with developers from different projects that work
with libvirt, one cannot but notice some common patterns and workarounds.
So I set off to see how we can make all our lives better and our coding
more effective (and maybe more fun as well). If all goes well, we will
create a project that will accommodate most of the defaulting, policies,
workarounds and other common algorithms around libvirt domain
definitions. And since early design gets you half way, I would like to
know your feedback on several key points, as well as on the general idea.
Also, correct me brutally in case I'm wrong.

In order not to get confused in the following descriptions, I will refer
to this project idea using the name `virtuned`, but there is really no
name for it yet (although an abbreviation for "Virtualization Abstraction
Definition and Hypervisor Delegation" would suit well, IMHO).

Here are some common problems and use cases that virtuned could solve (or
help with). Don't take it as something that's impossible to solve on your
own, but rather something that could be de-duplicated from multiple
projects or "done right" instead of various hack-ish solutions.

1) Default devices/values

Libvirt itself must default to whatever values there were before any
particular element was introduced, due to the fact that it strives to
keep the guest ABI stable. That means, for example, that it can't just
add the -vmcoreinfo option (for KASLR support) or magically add the
pvpanic device to all QEMU machines, even though it would be useful, as
that would change the guest ABI.

For default values this is even more obvious. Let's say someone figures
out some "pretty good" default values for various HyperV enlightenment
feature tunables. Libvirt can't magically change them, but none of the
projects building on top of it wants to keep that list updated and take
care of setting them in every new XML. Some projects don't even expose
those to the end user as a knob, while others might.

One more thing could be automatically figuring out the best values based
on libosinfo-provided data.
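As a tiny, self-contained illustration of what such defaulting could look
like (the function below is made up for this mail; it is not an existing
API):

    import xml.etree.ElementTree as ET

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo</name>
      <devices>
        <disk type='file' device='disk'/>
      </devices>
    </domain>
    """

    def apply_default_panic_device(domain_xml):
        # Add a pvpanic device only when the definition does not have
        # one yet, so an explicit user choice is never overridden.
        root = ET.fromstring(domain_xml)
        devices = root.find('devices')
        if devices is not None and devices.find('panic') is None:
            ET.SubElement(devices, 'panic', {'model': 'isa'})
        return ET.tostring(root, encoding='unicode')

    print(apply_default_panic_device(DOMAIN_XML))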
2) Policies

A lot of the time there are parts of the domain definition that need to
be added, but nobody really cares about them. Sometimes it's enough to
have a few templates; other times you might want to have a policy per
scenario and want to combine them in various ways, for example with the
data provided by point 1).

For example, if you want PCI-Express, you need the q35 machine type, but
you don't really want to care about the machine type. Or you want to use
SPICE, but you don't want to care about adding QXL.

What if some of these policies could be specified once (using some DSL
for example), and used by virtuned to merge them in a unified and
predictable way?

3) Abstracting the XML

This is probably just usable for stateless apps, but it might happen that
some apps don't really want to care about the XML at all. They just want
an abstract view of the domain, possibly add/remove a device and that's
it. We could do that as well. I can't really tell how much of a demand
there is for it, though.

4) Identifying devices properly

In contrast to the previous point, stateful apps might have a problem
identifying devices after hotplug. For example, let's say you don't care
about the addresses and leave that up to libvirt. You hotplug a device
into the domain and dump the new XML of it. Depending on what type of
device it was, you might need to identify it based on different values:
it could be the target device name for disks, the MAC address for
interfaces, etc. For some devices it might not even be possible, and you
need to remember the addresses of all the previous devices and then parse
them just to identify that one device, and then throw them away.

With new enough libvirt you could use the user aliases for that, but it
turns out it's not that easy to use them properly anyway. Also, the
aliases won't help users identify that device inside the guest. We really
should've gone with a new attribute for the user alias instead of using
an existing one, given how many problems that is causing.

5) Generating the right XML snippet for device hot-(un)plug

This is kind of related to some previous points. When hot-plugging a
device and creating an XML snippet for it, you want to keep the defaults
from point 1) and the policies from 2) in mind. Or something related to
the already existing domain which you can describe systematically. And
you want to add something for identification (see the previous point).

Doing the hot-unplug is easy depending on how much information about that
device is saved by your application. The less you save about the device
(or show to the user in a GUI, if applicable), the harder it might be to
generate an XML that libvirt will accept. Again, some problems with this
should be fixed in libvirt, and some of them are easy to work around. But
having a common ground that takes care of this should help some projects.
Hot-unplug could be implemented just based on the alias. This is
something that would fit into libvirt as well.

========================================================================

To mention some pre-existing solutions:

- I understand OpenStack has some really sensible and wisely chosen
  and/or tested default values.

- I know KubeVirt has VirtualMachinePresets. That is something closely
  related to points 1) and 2). Also, their abstraction of the XML might
  be usable for point 3).

- There was an effort to create policy-based configuration of libvirt
  objects, called libvirt-designer. This is closely related to points 2)
  and 3).
  Unfortunately there was not much going on there lately, and part of the
  virt-manager repository currently has more features implemented with
  the same ideas in mind, just not exported for public use.

We could utilize some of the above to various extents.

Let me know what you think, and have a nice day.

Martin
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Digital signature
URL:

From aschultz at redhat.com Tue Mar 20 14:42:18 2018
From: aschultz at redhat.com (Alex Schultz)
Date: Tue, 20 Mar 2018 08:42:18 -0600
Subject: [openstack-dev] [tripleo] Bug status
Message-ID:

Hey everyone,

In today's IRC meeting, I brought up [0] that we've been seeing an
increase in the number of open bugs over the last few weeks. We're
currently at about 635 open bugs. It would be beneficial for everyone to
take a look at the bugs that they are currently assigned to and ensure
they are up to date.

Additionally, there was chat about possibly introducing some process
around the triaging of bugs, such as assigning squad tags to all the bugs
so that there's some potential ownership. I'm not sure what that would
look like, so if others think this might be a good idea, feel free to
comment.

Thanks,
-Alex

[0] http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-03-20-14.01.log.html#l-69

From mcdkr at yandex.ru Tue Mar 20 15:47:03 2018
From: mcdkr at yandex.ru (Vitalii Solodilov)
Date: Tue, 20 Mar 2018 18:47:03 +0300
Subject: [openstack-dev] [mistral] Re: Mistral docker job
In-Reply-To:
References:
Message-ID: <614531521560823 at web37o.yandex.ru>

An HTML attachment was scrubbed...
URL:

From emilien at redhat.com Tue Mar 20 16:01:51 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 20 Mar 2018 09:01:51 -0700
Subject: [openstack-dev] [tripleo] The Weekly Owl - 13th Edition
Message-ID:

Note: this is the thirteenth edition of a weekly update of what happens
in TripleO. The goal is to provide a short reading (less than 5 minutes)
to learn where we are and what we're doing. Any contributions and
feedback are welcome.

Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128234.html

+---------------------------------+
| General announcements |
+---------------------------------+

+--> Bug backlog for Rocky is *huge*, please read Alex's email:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128557.html
+--> TripleO UI squad is about to experiment with Storyboard and bugs
might be migrated from Launchpad during the following days (more infos
soon).

+------------------------------+
| Continuous Integration |
+------------------------------+

+--> Matt is John and ruck is John. Please let them know any new CI issue.
+--> Master promotion is 14 days, Queens is 14 days, Pike is 2 days and
Ocata is 0 days.
+--> Focus is on devmode replacement and promotion blockers.
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and
https://goo.gl/D4WuBP

+-------------+
| Upgrades |
+-------------+

+--> Good progress on FFU and P2Q workflows, reviews are needed.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---------------+
| Containers |
+---------------+

+--> No updates this week, efforts are on the containerized undercloud.
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> ceph-ansible support still in progress +--> Working on validation to check for SoftwareConfig outputs +--> Still looking at process to create a new git repo per role for standalone ansible roles +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status +--------------+ | Integration | +--------------+ +--> Team is working on config-download integration for ceph and multi-cluster support. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> Last week's meeting was about Threat analysis, Limit TripleO users, Public TLS, and Secret identification for TripleO +--> Tomorrow's meeting is about Mistral secret storage. +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ The eyes of an owl are not true “eyeballs.” Their tube-shaped eyes are completely immobile, providing binocular vision which fully focuses on their prey and boosts depth perception. Source: http://www.audubon.org/news/11-fun-facts-about-owls Stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Mar 20 17:00:19 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 20 Mar 2018 12:00:19 -0500 Subject: [openstack-dev] [neutron][oslo] Move MultiConfigParser to networking-cisco Message-ID: <17d0ea2f-d1a1-cbed-68a8-fe9c7a37d084@nemebean.com> Hi, I'm hoping anyone involved with networking-cisco is subscribed to the neutron tag. If there's a better one to use please feel free to add it. The purpose of this email is to discuss plans for removing MultiConfigParser from oslo.config. It has been deprecated for a while and some upcoming work in the project has prompted us to want to remove it. Currently the only project still using it is networking-cisco, and in the interest of simplicity we are proposing that MultiConfigParser just be moved to networking-cisco. I've pushed a change to do that in https://review.openstack.org/554617 One concern is that I'm not sure if this functionality is tested in ci. I'm hoping someone from networking-cisco can comment on what needs to happen with that. Anyway, I just wanted to send something out that explains what is going on with these changes. Please respond with any comments or questions. Thanks. -Ben From borne.mace at oracle.com Tue Mar 20 17:02:39 2018 From: borne.mace at oracle.com (Borne Mace) Date: Tue, 20 Mar 2018 10:02:39 -0700 Subject: [openstack-dev] [kolla] kolla-ansible cli proposal Message-ID: Greetings all, One of the discussions we had at the recent PTG was in regards to the blueprint to add support for a kolla-ansible cli [0]. 
I would like to propose that, to satisfy this blueprint, the kollacli
developed thus far by Oracle be completely upstreamed and made a
community-guided project.

The source for the project is available already [1] and it is known to
work against the Queens codebase. My suggestion would be that either a
new repository be created for it or it be included in the kolla-ansible
repository. Either way, my hope is that it be under kolla project
control, as far as PTL guidance and core contributors go.

The kollacli is documented here [2] for your review, and along with any
discussion that folks want to have on the mailing list, I will make sure
to be around for the next couple of Wednesday kolla meetings so that it
can be discussed there as well.

Thanks much for taking the time to read this,

-- Borne Mace

[0]: https://blueprints.launchpad.net/kolla/+spec/kolla-multicloud-cli
[1]: https://oss.oracle.com/git/gitweb.cgi?p=openstack-kollacli.git;a=summary
[2]: https://docs.oracle.com/cd/E90981_01/E90982/html/kollacli.html

From balazs.gibizer at ericsson.com Tue Mar 20 17:10:43 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Tue, 20 Mar 2018 18:10:43 +0100
Subject: [openstack-dev] [nova] Notification update week 12
Message-ID: <1521565843.9826.6 at smtp.office365.com>

Hi,

Here is the status update / focus settings mail for w12.

Bugs
----

One new bug from last week:
[Undecided] https://bugs.launchpad.net/nova/+bug/1756360 Serializer
strips Exception kwargs
The bug refers to an oslo.serialization change as the reason for the
changed behavior, but I failed to reproduce the expected behavior even
with an older oslo.serialization version. Also, there is a fix proposed
that I have to look at: https://review.openstack.org/#/c/554607/

Versioned notification transformation
-------------------------------------

There are 3 patches that have positive feedback (but no +2, as I'm the
author of those) and need core attention:
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open

Introduce instance.lock and instance.unlock notifications
---------------------------------------------------------

https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
The implementation still needs work:
https://review.openstack.org/#/c/526251/

Add the user id and project id of the user who initiated the instance
action to the notification
-----------------------------------------------------------------

The bp has been approved:
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
but an implementation hasn't been proposed yet.

Add request_id to the InstanceAction versioned notifications
------------------------------------------------------------

https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications
Kevin has a WIP patch up: https://review.openstack.org/#/c/553288
I promised to go through it soon.
Sending full traceback in versioned notifications
-------------------------------------------------

The specless bp has been approved:
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications

Add versioned notifications for removing a member from a server group
---------------------------------------------------------------------

The specless bp
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
was discussed, and due to possible complications with looking up the
server group when a server is deleted, we would like to see some WIP
implementation patch proposed before the bp is approved.

Factor out duplicated notification sample
-----------------------------------------

https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No progress.

Weekly meeting
--------------

The next meeting will be held on the 27th of March on
#openstack-meeting-4:
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180327T170000

Cheers,
gibi

From berendt at betacloud-solutions.de Tue Mar 20 17:19:54 2018
From: berendt at betacloud-solutions.de (Christian Berendt)
Date: Tue, 20 Mar 2018 18:19:54 +0100
Subject: [openstack-dev] [kolla] kolla-ansible cli proposal
In-Reply-To:
References:
Message-ID: <89E3113D-1E08-4B64-B846-901E926F686A at betacloud-solutions.de>

In my opinion, a separate repository would be the most suitable for this.
That way we do not mix the Ansible roles with a frontend for Ansible
itself, and we stay independent. Importing the project into the OpenStack
namespace is probably easier this way.

Christian.

> On 20. Mar 2018, at 18:02, Borne Mace  wrote:
>
> Greetings all,
>
> One of the discussions we had at the recent PTG was in regards to the
> blueprint to add support for a kolla-ansible cli [0]. I would like to
> propose that, to satisfy this blueprint, the kollacli developed thus far
> by Oracle be completely upstreamed and made a community-guided project.
>
> The source for the project is available already [1] and it is known to
> work against the Queens codebase. My suggestion would be that either a
> new repository be created for it or it be included in the
> kolla-ansible repository. Either way my hope is that it be under
> kolla project control, as far as PTL guidance and core contributors.
>
> The kollacli is documented here [2] for your review, and along with
> any discussion that folks want to have on the mailing list I will make
> sure to be around for the next couple of Wednesday kolla meetings so
> that it can be discussed there as well.
> > Thanks much for taking the time to read this, > > -- Borne Mace > > [0]: https://blueprints.launchpad.net/kolla/+spec/kolla-multicloud-cli > [1]: https://oss.oracle.com/git/gitweb.cgi?p=openstack-kollacli.git;a=summary > [2]: https://docs.oracle.com/cd/E90981_01/E90982/html/kollacli.html > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From jpena at redhat.com Tue Mar 20 17:28:45 2018 From: jpena at redhat.com (Javier Pena) Date: Tue, 20 Mar 2018 13:28:45 -0400 (EDT) Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG In-Reply-To: References: Message-ID: <1028378685.11422536.1521566925853.JavaMail.zimbra@redhat.com> ----- Original Message ----- > During the PTG we had some nice conversations about how TripleO can make > progress on testing OpenStack deployments with Python 3. > In CC, Haikel, Alfredo and Javier, please complete if I missed something. > ## Goal > As an OpenStack distribution, RDO would like to ensure that the OpenStack > services (which aren't depending on Python 2) are packaged and can be > containerized to be tested in TripleO CI. > ## Challenges > - Some services aren't fully Python 3, but we agreed this was not our problem > but the project's problems. However, as a distribution, we'll make sure to > ship what we can on Python 3. > - CentOS 7 is not the Python 3 distro and there are high expectations from > the next release but we aren't there yet. > - Fedora is Python 3 friendly but we don't deploy TripleO on Fedora, and we > don't want to do it (for now at least). > ## Proposal > - Continue to follow upstream projects who support Python3 only and ship rpms > in RDO. > - Investigate the build of Kolla containers on Fedora / Python 3 and push > them to a registry (maybe in the same namespace with different name or maybe > a new namespace). > - Kick-off some TripleO CI experimental job that will use these containers to > deploy TripleO (maybe on one basic scenario for now). One point we should add here: to test Python 3 we need some base operating system to work on. For now, our plan is to create a set of stabilized Fedora 28 repositories and use them only for CI jobs. See [1] for details on this plan. Regards, Javier [1] - https://etherpad.openstack.org/p/stabilized-fedora-repositories-for-openstack > ## Roadmap for Rocky > For Rocky we agreed to follow the 3 steps part of the proposal (maybe more, > please add what I've missed). > That way, we'll be able to have some early testing on python3-only > environments (thanks containers!) without changing the host OS. > Thanks for your feedback and comments, it's an open discussion. > -- > Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From amoralej at redhat.com Tue Mar 20 17:29:25 2018
From: amoralej at redhat.com (Alfredo Moralejo Alonso)
Date: Tue, 20 Mar 2018 18:29:25 +0100
Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG
In-Reply-To:
References:
Message-ID:

On Sat, Mar 17, 2018 at 9:34 AM, Emilien Macchi  wrote:

> During the PTG we had some nice conversations about how TripleO can make
> progress on testing OpenStack deployments with Python 3.
> In CC, Haikel, Alfredo and Javier, please complete if I missed something.
>
>
> ## Goal
>
> As an OpenStack distribution, RDO would like to ensure that the OpenStack
> services (which aren't depending on Python 2) are packaged and can be
> containerized to be tested in TripleO CI.
>
>
> ## Challenges
>
> - Some services aren't fully Python 3, but we agreed this was not our
> problem but the projects' problem. However, as a distribution, we'll make
> sure to ship what we can on Python 3.
> - CentOS 7 is not the Python 3 distro and there are high expectations for
> the next release, but we aren't there yet.
> - Fedora is Python 3 friendly but we don't deploy TripleO on Fedora, and
> we don't want to do it (for now at least).
>

To be clear, python3 packages will only be provided for Fedora in RDO
Trunk repos and, unless it's explicitly changed in the future, RDO's
policy is not to support deployments on Fedora using either python2 or
python3. The main goal of this effort is to make the transition to
python3 smoother in future CentOS releases, using Fedora as a testbed
for it.


>
> ## Proposal
>
- A Fedora stabilized repository will be created by RDO to provide a
stable and working set of Fedora packages to run RDO OpenStack services
using python3.

> - Continue to follow upstream projects who support Python3 only and ship
> rpms in RDO.
> - Investigate the build of Kolla containers on Fedora / Python 3 and push
> them to a registry (maybe in the same namespace with different name or
> maybe a new namespace).
> - Kick-off some TripleO CI experimental job that will use these containers
> to deploy TripleO (maybe on one basic scenario for now).
>
>
> ## Roadmap for Rocky
>
> For Rocky we agreed to follow the 3 steps part of the proposal (maybe
> more, please add what I've missed).
>

The services enabled for python3 during Rocky will depend on the progress
of the different tasks, and I guess we will adapt the order of the
services depending on the technical issues we find.


> That way, we'll be able to have some early testing on python3-only
> environments (thanks containers!) without changing the host OS.
>

Just for awareness, we may hit issues running services closely coupled to
kernel modules, such as openvswitch.


>
> Thanks for your feedback and comments, it's an open discussion.
> --
> Emilien Macchi
>

[1] https://mail.rdoproject.org/thread.html/f122ccd93daf5e4ca26b7db0e90e977fb0fbb253ad7293f81b13a132@%3Cdev.lists.rdoproject.org%3E
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mordred at inaugust.com Tue Mar 20 18:23:07 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 20 Mar 2018 13:23:07 -0500
Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG
In-Reply-To:
References:
Message-ID: <72f7d27d-4c0d-f880-76f9-33af8d5b15fc at inaugust.com>

On 03/17/2018 03:34 AM, Emilien Macchi wrote:
> That way, we'll be able to have some early testing on python3-only
> environments (thanks containers!) without changing the host OS.

All hail our new python3-only overlords!!!
From sean.mcginnis at gmx.com Tue Mar 20 18:45:09 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 20 Mar 2018 13:45:09 -0500 Subject: [openstack-dev] [all] Job failures on stable/pike and stable/ocata In-Reply-To: <20180319210249.GA23433@sm-xps> References: <20180319210249.GA23433@sm-xps> Message-ID: <20180320184508.GA9935@sm-xps> On Mon, Mar 19, 2018 at 04:02:50PM -0500, Sean McGinnis wrote: > [snip] > > We have a couple of issues causing failures with stable/pike and stable/ocata. > Actually, it also affects stable/queens as well due to grenade jobs needing to > run stable/pike first. > > [snip] > > I think we have a full working plan in place. The oslo.util patches would fail > just the legacy-tempest-dsvm-neutron-src job, so that has been marked as > non-voting for now. Next, the oslo.util fixes need to merge and a new stable > release done for them. Then, requirements updates to both stable branches can > pass that raise the upper-constraints for ryu to 4.18 which includes the > changes we need. > > Once all that is done, we can merge the last patch that reverts the change > making legacy-tempest-dsvm-neutron-src voting again. > > The set up patches (other than the upcoming release requests) can be found > under the pip/5081 topic: > > https://review.openstack.org/#/q/topic:pip/5081+(status:open+OR+status:merged) > > As far as I can tell, once all that is done, the stable branches should be > unblocked and we should be back in business. If anything else crops up, I'll > post updates here. > All known patches are merged now and the last step of reverting the non-voting state of the one job is just about to finish in the gate queue. Stable branches should now be OK to recheck any failed jobs from the last couple of days. If you see anything else crop up related to this, just let me know. Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Tue Mar 20 19:16:31 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 20 Mar 2018 12:16:31 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 13th Edition In-Reply-To: References: Message-ID: On Tue, Mar 20, 2018 at 9:01 AM, Emilien Macchi wrote: > > +--> Matt is John and ruck is John. Please let them know any new CI issue. > so I double checked and Matt isn't John but in fact he's the rover ;-) -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Mar 20 19:21:12 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 20 Mar 2018 14:21:12 -0500 Subject: [openstack-dev] [all] Job failures on stable/pike and stable/ocata In-Reply-To: <20180320184508.GA9935@sm-xps> References: <20180319210249.GA23433@sm-xps> <20180320184508.GA9935@sm-xps> Message-ID: <8d8bd2f7-1300-0020-9099-fe4dbd1f18ca@gmail.com> On 3/20/2018 1:45 PM, Sean McGinnis wrote: > All known patches are merged now and the last step of reverting the non-voting > state of the one job is just about to finish in the gate queue. > > Stable branches should now be OK to recheck any failed jobs from the last > couple of days. If you see anything else crop up related to this, just let me > know. Thanks for wrangling this one. 
--

Thanks,

Matt

From jungleboyj at gmail.com Tue Mar 20 20:12:56 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Tue, 20 Mar 2018 15:12:56 -0500
Subject: [openstack-dev] [cinder] Support share backup to different projects?
In-Reply-To:
References:
Message-ID:

On 3/19/2018 10:55 PM, TommyLike Hu wrote:
> Now Cinder can transfer a volume (with or without snapshots) to
> different projects, and this makes it possible to transfer data across
> tenants via volume or image. Recently we had a conversation with our
> customer from Germany; they mentioned they would be more pleased if we
> could support transferring data across tenants via backup, not image or
> volume. Below are some of their concerns:
>
> 1. There is a use case where they would like to deploy their
> development/test/production systems in the same region but within
> different tenants, so they have the requirement to share/transfer data
> across tenants.
>
> 2. Users are more willing to use backups to secure/store their volume
> data, since the backup feature is more advanced in production OpenStack
> versions (incremental backups/periodic backups/etc.).
>
> 3. Volume transfer is not a valid option as it's tied to an AZ, and
> it's a complicated process if we would like to share the data with
> multiple projects (keeping a copy in all the tenants).
>
> 4. Most users would like to use images for bootable volumes only, and
> sharing volume data via image means the users have to maintain lots of
> image copies whenever the volume data changes; the whole system also
> needs to differentiate bootable and non-bootable images. Most
> important, we cannot restore volume data via image now.
>
> 5. The easiest way to do this seems to be supporting sharing backups
> with different projects: the owner project has full authority, while
> shared projects can only view/read the backups.
>
> 6. AWS has a similar concept, shared snapshots. We can share one by
> modifying the snapshot's create-volume permissions [1].
>
> Looking forward to any like or dislike or suggestion on this idea,
> according to my feature proposal experience :)
>
>
> Thanks
> TommyLike
>
>
> [1]:
> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html

Tommy,

As discussed at the PTG, this still sounds like improper usage of
Backup. Happy to hear input from others, but I am having trouble getting
my head around it.

The idea of sharing a snapshot, which as you mention AWS supports,
sounds like it could be a more sensible approach. Why are you not
proposing that?

Jay

From kennelson11 at gmail.com Tue Mar 20 21:50:02 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 20 Mar 2018 21:50:02 +0000
Subject: [openstack-dev] [First Contact] [SIG] Meeting Today!
Message-ID:

Hello!

Another meeting tonight late/tomorrow depending on where in the world you
live :) 0800 UTC Wednesday.

Here is the agenda if you have anything to add [1]. Or if you want to add
your name to the ping list, it is there as well!

See you all soon!

-Kendall (diablo_rojo)

[1] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From melwittt at gmail.com Tue Mar 20 22:57:25 2018
From: melwittt at gmail.com (melanie witt)
Date: Tue, 20 Mar 2018 15:57:25 -0700
Subject: [openstack-dev] [nova] Rocky PTG summary - miscellaneous topics from Friday
Message-ID:

Howdy all,

I've put together an etherpad [0] with summaries of the items from the
Friday miscellaneous session from the PTG at the Croke Park Hotel "game
room" across from the bar area. I didn't summarize all of the items, but
attempted to do so for most of them, namely the ones that had
discussion/decisions about them.

Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-ptg-rocky-misc-summary

*Friday Miscellaneous: Rocky PTG Summary
https://etherpad.openstack.org/p/nova-ptg-rocky L281

*Key topics

* Team / review policy
* Technical debt and cleanup
* Removing nova-network and legacy cells v1
* Community goal to remove usage of mox3 in unit tests
* Dropping support for running nova-api and the metadata API service
under eventlet
* Cruft surrounding rebuild and evacuate
* Bumping the minimum required version of libvirt
* Nova's 'enabled_perf_events' feature will be broken with Linux Kernel
4.14+ (the feature has been removed from the kernel)
* Miscellaneous topics from the PTG etherpad

*Agreements and decisions

* On team / review policy, for the Rocky cycle we're going to experiment
with a process for "runways" wherein we'll focus review bandwidth on
selected blueprints in 2-week time-boxes
  * Details here: https://etherpad.openstack.org/p/nova-runways-rocky
* On technical debt and cleanup:
  * We're going to remove nova-network this cycle and see how it goes.
Then we'll look toward removing legacy cells v1.
  * NOTE: If you're planning to work on the community-wide goal of
removing mox3 usage, don't bother refactoring nova-network and legacy
cells v1 unit tests. Those tests will be entirely removed soon-ish.
  * We're going to dump a warning on service startup and add a release
note for deprecation, and plan for removal of support for running
nova-api and the metadata API service under eventlet in S
    * Patch: https://review.openstack.org/#/c/549510/
  * For rebuild, we're going to defer the instance.save() until conductor
has passed scheduling and before it casts to compute, in order to address
the issue of rolling back instance values if something fails during
rebuild scheduling
  * For future work on rebuild tech debt, there was an idea to deprecate
"evacuate" and add an option to rebuild like "--elsewhere" to collapse
the two into using nearly the same code path. Evacuate is a rebuild and
it would be nice to represent it as such. Someone would need to write up
a spec for this.
  * We're going to bump the minimum required libvirt version:
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
    * kashyap is going to do this
  * We're going to log a warning if enabled_perf_events is set in
nova.conf and mark it as deprecated for removal
    * kashyap is going to do this
* Abort Cold Migration
  * This would add a new API and a significant amount of complexity, as
it is prone to race conditions (for example, an abort request lands just
after the disk migration has finished, having to restore the original
instance), etc.
  * We would like to have greater interest from operators for the feature
before going down that path
  * takashin will email openstack-operators at lists.openstack.org to ask if
there is broader interest in the feature
* Abort live migrations in queued status
  * We agreed this is reasonable functionality to add, just need to work
out the details on the spec
  * Kevin_Zheng will update the spec: https://review.openstack.org/#/c/536722/
* Adding request_id field to migrations object
  * The goal here is to be able to look up the instance action for a
failed migration to determine why it failed, and the request_id is needed
to look up the instance action.
  * We agreed to add the request_id to the instance action notification
instead, and gibi will do this:
https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications
* Returning Flavor Extra Specs in GET /flavors/detail and GET /flavors/{flavor_id}
  * https://blueprints.launchpad.net/nova/+spec/add-extra-specs-to-flavor-list
  * Doing this would create parity between the servers API (when showing
the instance.flavor) and the flavors API
  * We agreed to add a new microversion and implement it the same way as
we have for instance.flavor, using policy as the control on whether to
show the extra specs
* Adding host and Error Code field to instance action event
  * We agreed that it would be reasonable to add a new microversion to
add the host (how it's shown will be based on a policy check) to the
instance action event, but the error code is a much more complex,
cross-project, community-wide effort, so we're not going to pursue that
for now
  * Spec for adding host: https://review.openstack.org/#/c/543277/
* Allow specifying tolerance for (soft)(anti-)affinity groups
  * This requirement is about adding an attribute to the group to limit
how hard the affinity is (in the filter). Today it's a max of 1 per host
for hard anti-affinity
  * We agreed this feature is reasonable, just need to sort out the api
model vs data model in the spec: https://review.openstack.org/#/c/546925/
* XenAPI: support non file system based SR types - e.g. LVM, ISCSI
  * Currently xenapi is only file system-based and cannot yet support LVM
or iSCSI, which are supported by XenServer
  * We agreed that a specless blueprint is fine for this:
https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement
* Supporting live-migration for xapi pool based hosts
  * Have to implement this first before removing the aggregate upcall,
else it would break live migration for shared storage
  * We agreed to do a specless blueprint for this:
https://blueprints.launchpad.net/nova/+spec/live-migration-in-xapi-pool
  * The removal of the aggregate upcall will be a patch stacked on top of
the live migration implementation ^
* Preemptible instances
  * The scientific SIG has been experimenting with doing this completely
outside of Nova and it was a bad user experience, so they're interested
in what a minimal integration in Nova could look like
  * There was an idea suggested to put instances that failed to boot in a
new non-ERROR vm_state. The external reaper service could go make some
room, then issue a rebuild on that instance. Notifications could be used
to determine the ordering of the instances that landed in the new
non-ERROR state. Then the reaper service could use that info to decide
what to do. Maybe use reset-state to put an instance into ERROR if the
reaper is giving up on it. A new notification is not needed, we already
have one.
* We would like to have greater interest from operators for the feature before going down that path * takashin will email openstack-operators at lists.openstack.org to ask if there is broader interest in the feature * Abort live migrations in queued status * We agreed this is reasonable functionality to add, just need to work out the details on the spec * Kevin_Zheng will update the spec: https://review.openstack.org/#/c/536722/ * Adding request_id field to migrations object * The goal here is to be able to lookup the instance action for a failed migration to determine why it failed, and the request_id is needed to lookup the instance action. * We agreed to add the request_id instance action notification instead and gibi will do this: https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications * Returning Flavor Extra Specs in GET /flavors/detail and GET /flavors/{flavor_id} * https://blueprints.launchpad.net/nova/+spec/add-extra-specs-to-flavor-list * Doing this would create parity between the servers API (when showing the instance.flavor) and the flavors API * We agreed to add a new microversion and implement it the same way as we have for instance.flavor using policy as the control on whether to show the extra specs * Adding host and Error Code field to instance action event * We agreed that it would be reasonable to add a new microversion to add the host (how it's shown to be based on a policy check) to the instance action event but the error code is a much more complex, cross-project, community-wide effort so we're not going to pursue that for now * Spec for adding host: https://review.openstack.org/#/c/543277/ * Allow specifying tolerance for (soft)(anti-)affinity groups * This requirement is about adding an attribute to the group to limit the amount of how hard the affinity is (in the filter). Today it's a max of 1 per host for hard anti-affinity * We agreed this feature is reasonable, just need to sort out the api model vs data model in the spec: https://review.openstack.org/#/c/546925/ * XenAPI: support non file system based SR types - e.g. LVM, ISCSI * Currently xenapi is only file system-based, cannot yet support LVM, ISCSI that are supported by XenServer * We agreed that a specless blueprint is fine for this: https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement * Supporting live-migration for xapi pool based hosts * Have to implement this first before removing the aggregate upcall, else it would break live migration for shared storage * We agreed to do a specless blueprint for this: https://blueprints.launchpad.net/nova/+spec/live-migration-in-xapi-pool * The removal of the aggregate upcall will be a patch stacked on top of the live migration implementation ^ * Preemptible instances * The scientific SIG has been experimenting with doing this completely outside of Nova and it was a bad user experience, so they're interested in what a minimal integration in Nova could look like * There was an idea suggested to put instances that failed to boot in a new non-ERROR vm_state. The external reaper service could go make some room, then issue a rebuild on that instance. Notifications could be used to determine the ordering of the instances that landed in the new non-ERROR state. Then the reaper service could use that info to decide what to do. Maybe use reset-state to put an instance into ERROR if the reaper is giving up on it. A new notification is not needed, we already have one. 
The configurable bit will be whether to use the new non-ERROR "pending" state. We'll need a way to remove the instance from cell0, etc. "Evacuate for cell0" when you do the rebuild. * There was agreement that the above idea seemed reasonable (though need to check if there is any special handling needed for boot-from-volume) * Exposing virt driver capabilities out of a REST API * Simple way to tie driver capabilities to scheduling requests using Placement and provider traits, for things like tagged devices and volume multiattach * We agreed this is OK, so do it. Jay has a patch to register the traits in os-traits; the nova patch will depend on a release of that * We also agreed to merge traits, somehow. But we'll finish update_provider_tree as written first * libvirt: add support for virtio-net rx/tx queue sizes * Spec: https://review.openstack.org/#/c/539605 * It looks like we thought we had agreement to go with a global config option for this but it's still being actively discussed on the spec. Please see the spec discussion for details * Adding support for rebuild of volume-backed instances * Review of the spec is underway: https://review.openstack.org/#/c/532407 * Strict isolation of group of hosts for image and flavor * Spec: https://review.openstack.org/#/c/381912 * The only problem remaining is if the image doesn't contain any properties, then it will land on any host aggregate group * We agreed that this should be done with traits. From the spec review comments, it sounds like the desired behavior will be possible once the "placement request filtering" work lands: https://blueprints.launchpad.net/nova/+spec/placement-req-filter * The solution will involve applying custom traits to compute nodes and using the placement request filtering to use data from the RequestSpec to filter the request only for host aggregates that meet the requirement. Discussion on the spec is in progress. * Reliable port ordering in Nova * Ports do not get a designated order; backup/restore of the db can result in port ordering changing * We agreed the existing device tagging feature can be used to get reliable ordering for devices * Granular API policy * This is about making policy more granular for GET, POST, PUT, DELETE separately for APIs * Interested people should review the spec: https://review.openstack.org/#/c/547850/ * Block device mapping creation races during attach volume * We agreed to create a nova-manage command to do BDM clean up and then add a unique constraint in S * mriedem will restore the device name spec and someone else can pick it up * Ironic instance switchover * With booting from volume, a failed bare metal node can be switched to another node by booting the alternative node from the same volume. There are some options regarding which Compute API can be used to trigger this switchover * This is about wanting to be able to migrate baremetal instances * We agreed this would be adding parity with other virt drivers, so it's okay in that regard. Details need to be worked out on the spec review: https://review.openstack.org/#/c/449155 * Add UEFI Secure Boot support for QEMU/KVM guests, using OVMF * Spec: https://review.openstack.org/#/c/506720/ * We agreed that providing this feature would be reasonable. Details need to be worked out on the spec review * Validate policy when creating a server group * We can create a server group that has no policies (empty policies) currently.
We can create a server with it, but all related scheduler filters return True, so it is useless * Spec: https://review.openstack.org/#/c/546484 * We agreed this should be a simple thing to do, spec review is underway. We also said we should consider lumping in some other trivial API cleanup into the same microversion - we have a lot of TODOs for similar stuff like this in the API * Add force flag in cold migration * We have agreed not to add a way to force bypass of the scheduler filters * Skip instance backup image creation when rotation 0 * Spec: https://review.openstack.org/511825 * The spec ^ is old and needs to be re-proposed for Rocky * We agreed that the approach should be to add a microversion to disallow 0 and then also add a new API for "purge all backups" to be used instead of passing 0 * Live resize for hyper-v * Spec: https://review.openstack.org/#/c/141219 * This uses same workflow as cold migrate, we'll increase allocations in placement when a host is found, but there's no confirm/revert step because it's live * Proposes a new API and resize up only will be allowed. Virt driver will have a new method "can_live_resize" (or similar) to check if it supports it before proceeding past the API layer * Stage 1 only, no automatic live migration fallback * PowerVM wants to do this too. Should be possible in libvirt driver too * We agreed this idea sounds fine, just a matter of getting interest and review on the spec. Interested parties should review the spec From cdent+os at anticdent.org Tue Mar 20 23:24:19 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 20 Mar 2018 23:24:19 +0000 (GMT) Subject: [openstack-dev] [tc] [all] TC Report 18-12 Message-ID: HTML: https://anticdent.org/tc-report-18-12.html This week's TC Report goes off in the weeds a bit with the editorial commentary from yours truly. I had trouble getting started, so had to push myself through some thinking by writing stuff that at least for the last few weeks I wouldn't normally be including in the summaries. After getting through it, I realized that the reason I was struggling is because I haven't been including these sorts of things. Including them results in a longer and more meandering report but it is more authentically my experience, which was my original intention. # Zuul Extraction and the Difficult Nature of Communication Last [Tuesday Morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T17:22:38) we had some initial discussion about Zuul being extracted from OpenStack governance as a precursor to becoming part of the CI/CD strategic area being born elsewhere in the OpenStack Foundation. Then on [Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:08:06) we revisited the topic, especially as it related to how we communicate change in the community and how we invite participation in making decisions about change. In this case by "community" we're talking about anything under the giant umbrella of "stuff associated with the OpenStack Foundation". Plenty of people expressed that though they were not surprised by the change, it was because they are insiders and could understand how some, who are not, might be surprised by what seemed like a big change. This led to addressing the immediate shortcomings and clarifying the history of the event. 
There was also [concern](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:27:22) that some of the reluctance to talk openly about the change appeared to stem from needing to preserve the potency of a Foundation marketing release. I [expressed some frustration](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:36:50): "...as usual, we're getting caught up in details of a particular event (one that in the end we're all happy to see happen), rather than the general problem we saw with it (early transparency etc). Solving the immediate problem is easy, but since we _keep doing it_, we've got a general issue to resolve." We went round and round about the various ways in which we have tried and failed to do good communication in the past, and while we make some progress, we fail to establish a pattern. As Doug [pointed out](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:41:33), no method can be 100% successful, but if we pick a method and stick to it, people can learn that method. We have a cycle where we not only sometimes communicate poorly but we also communicate poorly about that poor communication. So when I come round to another week of writing this report, and am reminded that these issues persist and I am once again communicating about them, it's frustrating. Communicating, a lot, is generally a good thing, but if things don't change as a result, that can be a strain. If I'm still writing these things in a year's time, and we haven't managed to achieve at least a bit more grace, consistency, and transparency in the ways that we share information within and between groups (including, and maybe especially, the Foundation executive wing) in the wider community, it will be a shame and I will have a sad. In a somewhat related and good sign, there is a [great thread](http://lists.openstack.org/pipermail/openstack-operators/2018-March/014994.html) on the operators list that raises the potential of merging the Ops Meeting and the PTG into some kind of "OpenStack Community Working Gathering". # Encouraging Upstream Contribution On [Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-16.log.html#t2018-03-16T14:29:21), tbarron raised some interesting questions about how the summit talk selection process might relate to the [four opens](https://governance.openstack.org/tc/reference/opens.html). The talk eventually led to a positive plan to try to bring some potential contributors upstream in advance of summit, as well as to work to create more clear guidelines for track chairs. # Executive Power I had a question at [this morning's office hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-20.log.html#t2018-03-20T09:00:00), related to some work in the API-SIG that hasn't had a lot of traction, about how best to explain how executive power is gained and spent in a community where we intentionally spread power around a lot. As with communication above, this is a topic that comes up a fair amount, and investigating the underlying patterns can be instructive. My initial reaction on the topic was the fairly standard (but in different words): If this is important to you, step up and make it happen. I think, however, that when we discuss these things we fail to take enough account of the nature of OpenStack as a professional open source environment.
Usually, nonhierarchical, consensual collaborations are found in environments where members represent their own interests. In OpenStack our interactions are sometimes made more complex (and alienating) by virtue of needing to represent the interests of a company or other financial interest (including the interest of keeping our nice job) while at the same time not having the recourse of being able to complain to someone's boss when they are difficult (because that boss is part of a different hierarchy than the one you operate in). We love (rightfully so) the grand project which is OpenStack, and want to preserve and extend as much as possible the beliefs in things that make it feel unique, like "influence tokens". But we must respect that these things are collectively agreed hallucinations that require regular care and feeding, and balance them against the surrounding context which is not operating with those agreements. Further, those of us who have leeway to spend time building influence tokens are operating from a position of privilege. One of the ways we sustain that position is by behaving as if those tokens are more readily available to more people than they really are. /me wipes brow # TC Elections Coming The next round of TC elections will be coming up in late April. If you're thinking about it, but feel like you need more information about what it might entail, please feel free to contact me. I'm sure most of the other TC members would be happy to share their thoughts as well. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From melwittt at gmail.com Tue Mar 20 23:44:57 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 20 Mar 2018 16:44:57 -0700 Subject: [openstack-dev] [nova] Review runways this cycle Message-ID: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> Hello Stackers, As mentioned in the earlier "Rocky PTG summary - miscellaneous topics from Friday" email, this cycle we're going to experiment with a "runways" system for focusing review on approved blueprints in time-boxes. The goal here is to use a bit more structure and process in order to focus review and complete merging of approved work more quickly and reliably. We were thinking of starting the runways process after the spec review freeze (which is April 19) so that reviewers won't be split between spec reviews and reviews of work in runways. The process and instructions are explained in detail on this etherpad, which will also serve as the place we queue and track blueprints for runways: https://etherpad.openstack.org/p/nova-runways-rocky Please bear with us as this is highly experimental and we will be giving it a go knowing it's imperfect and adjusting the process iteratively as we learn from it. Do check out the etherpad and ask questions on this thread or on IRC and we'll do our best to answer them. Cheers, -melanie From melwittt at gmail.com Tue Mar 20 23:47:58 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 20 Mar 2018 16:47:58 -0700 Subject: [openstack-dev] [nova] Rocky spec review day Message-ID: Hi everybody, The past several cycles, we've had a spec review day in the cycle where reviewers focus on specs and iterating quickly with spec authors for the day. Spec freeze is April 19 so I wanted to get some input from all of you about what day would work best for a spec review day. I was thinking that 2-3 weeks ahead of spec freeze would be appropriate, so that would be March 27 (next week) or April 3 if we do it on a Tuesday. 
Please let me know what you think and suggest other days that might work better. Best, -melanie From mriedemos at gmail.com Wed Mar 21 00:12:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 20 Mar 2018 19:12:58 -0500 Subject: [openstack-dev] [nova] Rocky PTG summary - miscellaneous topics from Friday In-Reply-To: References: Message-ID: <398a65bc-c8b3-a3dd-7971-8d7e32a2a80f@gmail.com> On 3/20/2018 5:57 PM, melanie witt wrote: >     * For rebuild, we're going to defer the instance.save() until > conductor has passed scheduling and before it casts to compute in order > to address the issue of rolling back instance values if something fails > during rebuild scheduling I got to thinking about why the API does the instance.save() before casting to conductor, and realized that if we changed that, the POST response for rebuild will be different, because the handler code looks up the updated instance from the DB to form the response body. So if we move the save() to conductor, the response body will change and that's a behavior change, unless there is another way to handle this without duplicating a bunch of logic. >   *  XenAPI: support non file system based SR types - e.g. LVM, ISCSI >     * Currently xenapi is only file system-based, cannot yet support > LVM, ISCSI that are supported by XenServer >     * We agreed that a specless blueprint is fine for this: > https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement > This blueprint isn't approved yet. Is someone going to bring it up in the nova meeting, or are we just going to approve it since there was agreement to do so at the PTG? >   * Block device mapping creation races during attach volume >     * We agreed to create a nova-manage command to do BDM clean up and > then add a unique constraint in S >     * mriedem will restore the device name spec and someone else can > pick it up The spec is now restored: https://review.openstack.org/#/c/452546/ But I don't know who was going to take it over (dansmith?). >   * Validate policy when creating a server group >     * We can create a server group that have no policies (empty > policies) currently. We can create a server with it, but all related > scheduler filters return True, so it is useless >     * Spec: https://review.openstack.org/#/c/546484 >     * We agreed this should be a simple thing to do, spec review is > underway. We also said we should consider lumping in some other trivial > API cleanup into the same microversion - we have a lot of TODOs for > similar stuff like this in the API I think https://review.openstack.org/#/c/546925/ will supersede ^ so we should probably hold off on Takashi's spec until we know for sure what we're doing about the hard-affinity policy limit stuff. -- Thanks, Matt From mriedemos at gmail.com Wed Mar 21 00:29:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 20 Mar 2018 19:29:11 -0500 Subject: [openstack-dev] [nova] Rocky spec review day In-Reply-To: References: Message-ID: <76917686-697d-9471-a298-90364323c5f8@gmail.com> On 3/20/2018 6:47 PM, melanie witt wrote: > I was thinking that 2-3 weeks ahead of spec freeze would be appropriate, > so that would be March 27 (next week) or April 3 if we do it on a Tuesday. It's spring break here on April 3 so I'll be listening to screaming kids, I mean on vacation. Not that my schedule matters, just FYI.
But regardless of that, I think the earlier the better to flush out what's already there, since we've already approved quite a few blueprints this cycle (32 so far). -- Thanks, Matt From wangpeihuixyz at 126.com Wed Mar 21 00:34:37 2018 From: wangpeihuixyz at 126.com (Frank Wang) Date: Wed, 21 Mar 2018 08:34:37 +0800 (CST) Subject: [openstack-dev] =?gbk?q?=5Bneutron=5DDoes_neutron-server_support_?= =?gbk?q?the_main_backup_redundancy=A3=BF?= Message-ID: <55403b04.809.16245faa585.Coremail.wangpeihuixyz@126.com> Hi All, As far as I know, neutron-server can only be a single node. In order to improve the reliability of the system, does it support main/backup or active/active redundancy? Any comment would be appreciated. Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Mar 21 01:03:34 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 20 Mar 2018 18:03:34 -0700 Subject: [openstack-dev] [nova] Rocky PTG summary - miscellaneous topics from Friday In-Reply-To: <398a65bc-c8b3-a3dd-7971-8d7e32a2a80f@gmail.com> References: <398a65bc-c8b3-a3dd-7971-8d7e32a2a80f@gmail.com> Message-ID: <25643f27-b081-638e-7ffc-585b2c5cac1e@gmail.com> On Tue, 20 Mar 2018 19:12:58 -0500, Matt Riedemann wrote: >>   *  XenAPI: support non file system based SR types - e.g. LVM, ISCSI >>     * Currently xenapi is only file system-based, cannot yet support >> LVM, ISCSI that are supported by XenServer >>     * We agreed that a specless blueprint is fine for this: >> https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement >> > This blueprint isn't approved yet. Is someone going to bring it up in > the nova meeting, or are we just going to approve since it there was > agreement to do so at the PTG? I'll bring it up at the next meeting. I've added it to the open discussion section of the agenda. -melanie From tommylikehu at gmail.com Wed Mar 21 01:54:43 2018 From: tommylikehu at gmail.com (TommyLike Hu) Date: Wed, 21 Mar 2018 01:54:43 +0000 Subject: [openstack-dev] [cinder] Support share backup to different projects? In-Reply-To: References: Message-ID: Thanks Jay, The question is that AWS doesn't have the concept of backup; their snapshot is an incremental backup internally and will be finally stored into S3, which sounds more like a backup to us. Our snapshots cannot be used across AZs. Jay S Bryant 于2018年3月21日周三 上午4:13写道: > > > On 3/19/2018 10:55 PM, TommyLike Hu wrote: > > Now Cinder can transfer volume (with or without snapshots) to different > projects, and this make it possbile to transfer data across tenant via > volume or image. Recently we had a conversation with our customer from > Germany, they mentioned they are more pleased if we can support transfer > data accross tenant via backup not image or volume, and these below are > some of their concerns: > > 1. There is a use case that they would like to deploy their > develop/test/product systems in the same region but within different > tenants, so they have the requirment to share/transfer data across tenants. > > 2. Users are more willing to use backups to secure/store their volume data > since backup feature is more advanced in product openstack version > (incremental backups/periodic backups/etc.). > > 3. Volume transfer is not a valid option as it's in AZ and it's a > complicated process if we would like to share the data to multiple projects > (keep copy in all the tenants). > > 4.
Most of the users would like to use image for bootable volume only and > share volume data via image means the users have to maintain lots of image > copies when volume backup changed as well as the whole system needs to > differentiate bootable images and none bootable images, most important, we > can not restore volume data via image now. > > 5. The easiest way for this seems to support sharing backup to different > projects, the owner project have the full authority while shared projects > only can view/read the backups. > > 6. AWS has the similar concept, share snapshot. We can share it by modify > the snapshot's create volume permissions [1]. > > Looking forward to any like or dislike or suggestion on this idea > accroding to my feature proposal experience:) > > > Thanks > TommyLike > > > [1]: > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Tommy, > > As discussed at the PTG, this still sounds like improper usage of Backup. > Happy to hear input from others but I am having trouble getting my head > around it. > > The idea of sharing a snapshot, as you mention AWS supports sounds like it > could be a more sensible approach. Why are you not proposing that? > > Jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at benton.pub Wed Mar 21 02:33:25 2018 From: kevin at benton.pub (Kevin Benton) Date: Wed, 21 Mar 2018 02:33:25 +0000 Subject: [openstack-dev] =?utf-8?q?=5Bneutron=5DDoes_neutron-server_suppor?= =?utf-8?q?t_the_main_backup_redundancy=EF=BC=9F?= In-Reply-To: <55403b04.809.16245faa585.Coremail.wangpeihuixyz@126.com> References: <55403b04.809.16245faa585.Coremail.wangpeihuixyz@126.com> Message-ID: You can run as many neutron server processes as you want in an active/active setup. On Tue, Mar 20, 2018, 18:35 Frank Wang wrote: > Hi All, > As far as I know, neutron-server only can be a single node, In order > to improve the reliability of the system, Does it support the main backup > or active/active redundancy? Any comment would be appreciated. > > Thanks, > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From HoangCX at vn.fujitsu.com Wed Mar 21 02:54:00 2018 From: HoangCX at vn.fujitsu.com (HoangCX at vn.fujitsu.com) Date: Wed, 21 Mar 2018 02:54:00 +0000 Subject: [openstack-dev] [Neutron][vpnaas] In-Reply-To: References: Message-ID: Hi, IIUC, your use case is to connect 4 subnets from different sites (2 subnets for each site). If so, did you try with endpoint group? 
If not, please refer to the following docs for more detail about how to try it and get more understanding [1][2] [1] https://docs.openstack.org/neutron/latest/admin/vpnaas-scenario.html#using-vpnaas-with-endpoint-group-recommended [2] https://docs.openstack.org/neutron-vpnaas/latest/contributor/multiple-local-subnets.html BRs, Cao Xuan Hoang, From: vidyadhar reddy [mailto:vidyadharreddy68 at gmail.com] Sent: Tuesday, March 20, 2018 4:31 PM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [Neutron][vpnaas] Hello, i have a general question regarding the working of vpnaas, can we setup multiple vpn connections on a single router? my scenario is lets say we have two networks net 1 and net2 in two different sites respectively, each network has two subnets, two sites have one router in each, with three interfaces one for the public network and remaining two for the two subnets, can we setup a two vpnaas connections on the routers in each site to enable communication between the two subnets in each site. i have tried this setup, it didn't work for me. just wanted to know if it is a design constraint or not, i am not sure if this issue is under development, is there any development going on or is it already been solved? BR, Vidyadhar reddy peddireddy -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed Mar 21 02:54:24 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 21 Mar 2018 10:54:24 +0800 Subject: [openstack-dev] [cyborg]Team Weekly Meeting 2018.03.21 Message-ID: Hi Team, Meeting today starting UTC1400 at #openstack-cyborg, initial agenda as follows: 1. Sub-team lead progress update 2. rocky spec/patch review: https://review.openstack.org/#/q/status:open+project:openstack/cyborg -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Wed Mar 21 03:54:53 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Wed, 21 Mar 2018 03:54:53 +0000 Subject: [openstack-dev] [congress] No meeting on 3/23 Message-ID: IRC weekly meeting resumes on 3/30. -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Wed Mar 21 04:39:35 2018 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 21 Mar 2018 15:39:35 +1100 Subject: [openstack-dev] [infra][dib] Gate "out of disk" errors and diskimage-builder 2.12.0 Message-ID: <3008b3e9-47c2-077c-7acd-5a850b004e21@redhat.com> Hi, We had a small issue with dib's 2.12.0 release that means it creates the root partition with the wrong partition type [1]. The result is that a very old check in sfdisk fails, and growpart then can not expand the disk -- which means you may have seen jobs that usually work fine run out of disk space. This slipped by because our functional testing doesn't test growpart; an oversight we will correct in due course. The bad images should have been removed, so a recheck should work. We will prepare dib 2.12.1 with the fix. As usual there are complications, since the dib gate is broken due to unrelated triple-o issues [2].
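If you have an already-built 2.12.0 image you need to limp along with, a manual repair should also be possible. This is only a sketch, assuming an MBR-partitioned image with the root partition as partition 1 on /dev/vda -- adjust device names for your environment:

  # print the current partition type id; a Linux root should be 83
  sudo sfdisk --part-type /dev/vda 1
  # set it back to 83 so sfdisk's old sanity check passes again
  sudo sfdisk --part-type /dev/vda 1 83
  # after that, growing the partition and filesystem works as usual
  sudo growpart /dev/vda 1
  sudo resize2fs /dev/vda1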
In the mean time, probably avoid 2.12.0 if you can. Thanks, -i [1] https://review.openstack.org/554771 [2] https://review.openstack.org/554705 From jungleboyj at gmail.com Wed Mar 21 05:04:56 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 21 Mar 2018 00:04:56 -0500 Subject: [openstack-dev] [cinder] Support share backup to different projects? In-Reply-To: References: Message-ID: Tommy, I am still not sure that this is going to move the team to a different decision. Now that you have more information you can propose it as a topic in tomorrow's team meeting if you wish. Jay On 3/20/2018 8:54 PM, TommyLike Hu wrote: > > Thanks Jay, >     The question is AWS doesn't have the concept of backup and their > snapshot is incremental backup internally and will be finllay stored > into S3 which is more sound like backup for us. Our snapshot can not > be used across AZ. > > Jay S Bryant >于2018年3月21日周三 上午4:13写道: > > > > On 3/19/2018 10:55 PM, TommyLike Hu wrote: >> Now Cinder can transfer volume (with or without snapshots) to >> different projects,  and this make it possbile to transfer data >> across tenant via volume or image. Recently we had a conversation >> with our customer from Germany, they mentioned they are more >> pleased if we can support transfer data accross tenant via backup >> not image or volume, and these below are some of their concerns: >> >> 1. There is a use case that they would like to deploy their >> develop/test/product systems in the same region but within >> different tenants, so they have the requirment to share/transfer >> data across tenants. >> >> 2. Users are more willing to use backups to secure/store their >> volume data since backup feature is more advanced in product >> openstack version (incremental backups/periodic backups/etc.). >> >> 3. Volume transfer is not a valid option as it's in AZ and it's a >> complicated process if we would like to share the data to >> multiple projects (keep copy in all the tenants). >> >> 4. Most of the users would like to use image for bootable volume >> only and share volume data via image means the users have to >> maintain lots of image copies when volume backup changed as well >> as the whole system needs to differentiate bootable images and >> none bootable images, most important, we can not restore volume >> data via image now. >> >> 5. The easiest way for this seems to support sharing backup to >> different projects, the owner project have the full authority >> while shared projects only can view/read the backups. >> >> 6. AWS has the similar concept, share snapshot. We can share it >> by modify the snapshot's create volume permissions [1]. >> >> Looking forward to any like or dislike or suggestion on this idea >> accroding to my feature proposal experience:) >> >> >> Thanks >> TommyLike >> >> >> [1]: >> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Tommy, > > As discussed at the PTG, this still sounds like improper usage of > Backup.  Happy to hear input from others but I am having trouble > getting my head around it. > > The idea of sharing a snapshot, as you mention AWS supports sounds > like it could be a more sensible approach.  Why are you not > proposing that? 
> > Jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lvmxhster at gmail.com Wed Mar 21 05:37:07 2018 From: lvmxhster at gmail.com (=?UTF-8?B?5bCR5ZCI5Yav?=) Date: Wed, 21 Mar 2018 13:37:07 +0800 Subject: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com> Message-ID: 2018-03-07 10:36 GMT+08:00 Alex Xu : > > > 2018-03-07 10:21 GMT+08:00 Alex Xu : > >> >> >> 2018-03-06 22:45 GMT+08:00 Mooney, Sean K : >> >>> >>> >>> >>> >>> *From:* Matthew Booth [mailto:mbooth at redhat.com] >>> *Sent:* Saturday, March 3, 2018 4:15 PM >>> *To:* OpenStack Development Mailing List (not for usage questions) < >>> openstack-dev at lists.openstack.org> >>> *Subject:* Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple >>> functions >>> >>> >>> >>> On 2 March 2018 at 14:31, Jay Pipes wrote: >>> >>> On 03/02/2018 02:00 PM, Nadathur, Sundar wrote: >>> >>> Hello Nova team, >>> >>> During the Cyborg discussion at Rocky PTG, we proposed a flow for >>> FPGAs wherein the request spec asks for a device type as a resource class, >>> and optionally a function (such as encryption) in the extra specs. This >>> does not seem to work well for the usage model that I’ll describe below. >>> >>> An FPGA device may implement more than one function. For example, it may >>> implement both compression and encryption. Say a cluster has 10 devices of >>> device type X, and each of them is programmed to offer 2 instances of >>> function A and 4 instances of function B. More specifically, the device may >>> implement 6 PCI functions, with 2 of them tied to function A, and the other >>> 4 tied to function B. So, we could have 6 separate instances accessing >>> functions on the same device. >>> >>> >>> >>> Does this imply that Cyborg can't reprogram the FPGA at all? >>> >>> *[Mooney, Sean K] cyborg is intended to support fixed function >>> acclerators also so it will not always be able to program the accelerator. >>> In this case where an fpga is preprogramed with a multi function bitstream >>> that is statically provisioned cyborge will not be able to reprogram the >>> slot if any of the fuctions from that slot are already allocated to an >>> instance. In this case it will have to treat it like a fixed function >>> device and simply allocate a unused vf of the corret type if available. * >>> >>> >>> >>> >>> >>> In the current flow, the device type X is modeled as a resource class, >>> so Placement will count how many of them are in use. A flavor for ‘RC >>> device-type-X + function A’ will consume one instance of the RC >>> device-type-X. But this is not right because this precludes other >>> functions on the same device instance from getting used. 
>>> >>> One way to solve this is to declare functions A and B as resource >>> classes themselves and have the flavor request the function RC. Placement >>> will then correctly count the function instances. However, there is still a >>> problem: if the requested function A is not available, Placement will >>> return an empty list of RPs, but we need some way to reprogram some device >>> to create an instance of function A. >>> >>> >>> Clearly, nova is not going to be reprogramming devices with an instance >>> of a particular function. >>> >>> Cyborg might need to have a separate agent that listens to the nova >>> notifications queue and upon seeing an event that indicates a failed build >>> due to lack of resources, then Cyborg can try and reprogram a device and >>> then try rebuilding the original request. >>> >>> >>> >>> It was my understanding from that discussion that we intend to insert >>> Cyborg into the spawn workflow for device configuration in the same way >>> that we currently insert resources provided by Cinder and Neutron. So while >>> Nova won't be reprogramming a device, it will be calling out to Cyborg to >>> reprogram a device, and waiting while that happens. >>> >>> My understanding is (and I concede some areas are a little hazy): >>> >>> * The flavors says device type X with function Y >>> >>> * Placement tells us everywhere with device type X >>> >>> * A weigher orders these by devices which already have an available >>> function Y (where is this metadata stored?) >>> >>> * Nova schedules to host Z >>> >>> * Nova host Z asks cyborg for a local function Y and blocks >>> >>> * Cyborg hopefully returns function Y which is already available >>> >>> * If not, Cyborg reprograms a function Y, then returns it >>> >>> Can anybody correct me/fill in the gaps? >>> >>> *[Mooney, Sean K] that correlates closely to my recollection also. As >>> for the metadata I think the weigher may need to call to cyborg to retrieve >>> this as it will not be available in the host state object.* >>> >> Is it the nova scheduler weigher or we want to support weigh on >> placement? Function is traits as I think, so can we have preferred_traits? >> I remember we talk about that parameter in the past, but we don't have good >> use-case at that time. This is good use-case. >> > > If we call the Cyborg from the nova scheduler weigher, that will slow down > the scheduling a lot also. > I'm not sure how big the performance loss would be. But having the nova scheduler weigher call the Cyborg API once (to get all the accelerator info) seems acceptable. I'm not sure how many placement API calls happen during one scheduling pass. Is there any performance figure for one placement API call? It would help us evaluate the performance cost of a Cyborg API call.
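To make the cost discussion concrete, here is a rough sketch of the weigher idea -- purely illustrative, since the Cyborg client call it needs does not exist yet and is stubbed out below as an assumption:

  from nova.scheduler import weights

  def get_free_functions_by_host(function_name):
      # Placeholder for a single Cyborg API call that returns a dict of
      # hostname -> number of free instances of function_name. The real
      # client interface is still under discussion, so this is an
      # assumption, not an existing API.
      raise NotImplementedError

  class FreeFunctionWeigher(weights.BaseHostWeigher):
      """Prefer hosts that already expose a free instance of the
      requested function, so reprogramming can be avoided."""

      def weigh_objects(self, weighed_obj_list, weight_properties):
          # One Cyborg round trip per scheduling request, not per host.
          # 'function-A' is hardcoded only for illustration; it would
          # come from the flavor/request spec in practice.
          self._free = get_free_functions_by_host('function-A')
          return super(FreeFunctionWeigher, self).weigh_objects(
              weighed_obj_list, weight_properties)

      def _weigh_object(self, host_state, weight_properties):
          # Hosts with more free instances of the function score higher.
          return float(self._free.get(host_state.host, 0))

With something like this the weigher pays for a single Cyborg API call per request, which should be the same order of overhead as the placement calls the scheduler already makes.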
>> >>> Matt >>> >>> >>> >>> -- >>> >>> Matthew Booth >>> >>> Red Hat OpenStack Engineer, Compute DFG >>> >>> >>> >>> Phone: +442070094448 <+44%2020%207009%204448> (UK) >>> >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tommylikehu at gmail.com Wed Mar 21 06:12:40 2018 From: tommylikehu at gmail.com (TommyLike Hu) Date: Wed, 21 Mar 2018 06:12:40 +0000 Subject: [openstack-dev] [cinder] Support share backup to different projects? In-Reply-To: References: Message-ID: Sure! I will bring this to our weekly meeting when more feedbacks are gathered : ) Jay S Bryant 于2018年3月21日周三 下午1:05写道: > Tommy, > > I am still not sure that this is going to move the team to a different > decision. > > Now that you have more information you can propose it as a topic in > tomorrow's team meeting if you wish. > > Jay > > On 3/20/2018 8:54 PM, TommyLike Hu wrote: > > > Thanks Jay, > The question is AWS doesn't have the concept of backup and their > snapshot is incremental backup internally and will be finllay stored into > S3 which is more sound like backup for us. Our snapshot can not be used > across AZ. > > Jay S Bryant 于2018年3月21日周三 上午4:13写道: > >> >> >> On 3/19/2018 10:55 PM, TommyLike Hu wrote: >> >> Now Cinder can transfer volume (with or without snapshots) to different >> projects, and this make it possbile to transfer data across tenant via >> volume or image. Recently we had a conversation with our customer from >> Germany, they mentioned they are more pleased if we can support transfer >> data accross tenant via backup not image or volume, and these below are >> some of their concerns: >> >> 1. There is a use case that they would like to deploy their >> develop/test/product systems in the same region but within different >> tenants, so they have the requirment to share/transfer data across tenants. >> >> 2. Users are more willing to use backups to secure/store their volume >> data since backup feature is more advanced in product openstack version >> (incremental backups/periodic backups/etc.). >> >> 3. Volume transfer is not a valid option as it's in AZ and it's a >> complicated process if we would like to share the data to multiple projects >> (keep copy in all the tenants). >> >> 4. Most of the users would like to use image for bootable volume only and >> share volume data via image means the users have to maintain lots of image >> copies when volume backup changed as well as the whole system needs to >> differentiate bootable images and none bootable images, most important, we >> can not restore volume data via image now. >> >> 5. The easiest way for this seems to support sharing backup to different >> projects, the owner project have the full authority while shared projects >> only can view/read the backups. >> >> 6. AWS has the similar concept, share snapshot. We can share it by modify >> the snapshot's create volume permissions [1]. 
>> >> Looking forward to any like or dislike or suggestion on this idea >> accroding to my feature proposal experience:) >> >> >> Thanks >> TommyLike >> >> >> [1]: >> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> Tommy, >> >> As discussed at the PTG, this still sounds like improper usage of >> Backup. Happy to hear input from others but I am having trouble getting my >> head around it. >> >> The idea of sharing a snapshot, as you mention AWS supports sounds like >> it could be a more sensible approach. Why are you not proposing that? >> >> Jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Wed Mar 21 07:00:12 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 21 Mar 2018 00:00:12 -0700 Subject: [openstack-dev] [Nova] [Cyborg] Separate spec for compute node flows? In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com> Message-ID: Hi all,     The Cyborg Nova scheduling specification addresses the scheduling aspects alone. There needs to be a separate spec to address: * Cyborg/Nova interactions in the compute node, incl. the newly proposed os-acc library. * Programming, including fetching bitstreams from Glance. * Bitstream metadata. Shall I send such a spec while the first one is still in review? Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From lihiwish at gmail.com Wed Mar 21 07:46:46 2018 From: lihiwish at gmail.com (Lihi Wishnitzer) Date: Wed, 21 Mar 2018 09:46:46 +0200 Subject: [openstack-dev] [Openstack-Dev] [Neutron] [DragonFlow] Automatic Neighbour Discovery responder for IPv6 In-Reply-To: References: Message-ID: Hi Vivek, Originally we planned to support only in-cloud VMs, therefore, we do not use these bits in Dragonflow. We believe we managed to follow the standard without using these bits. If this is a bug, we will try to fix it. 
Regards, Lihi On Tue, Mar 20, 2018 at 8:45 AM, N Vivekanandan wrote: > Hi DragonFlow Team, > > > > We noticed that you are adding support for automatic responder for > neighbor solicitation via OpenFlow Rules here: > > https://review.openstack.org/#/c/412208/ > > > > Can you please let us know with latest OVS release are you using to test > this feature? > > > > We are pursuing Automatic NS Responder in OpenDaylight Controller > implementation, and we noticed that there are no NXM extensions to manage > the ‘R’ bit and ’S’ bit correctly. > > > > From the RFC: https://tools.ietf.org/html/rfc4861 > > > > R Router flag. When set, the R-bit indicates that > > the sender is a router. The R-bit is used by > > Neighbor Unreachability Detection to detect a > > router that changes to a host. > > > > S Solicited flag. When set, the S-bit indicates that > > the advertisement was sent in response to a > > Neighbor Solicitation from the Destination address. > > The S-bit is used as a reachability confirmation > > for Neighbor Unreachability Detection. It MUST NOT > > be set in multicast advertisements or in > > unsolicited unicast advertisements. > > > > We noticed that this dragonflow rule is being programmed for automatic > response generation for NS: > > icmp6,ipv6_dst=1::1,icmp_type=135 actions=load:0x88->NXM_NX_ > ICMPV6_TYPE[],move:NXM_NX_IPV6_SRC[]->NXM_NX_IPV6_DST[],mod_dl_src:00:11: > 22:33:44:55,load:0->NXM_NX_ND_SLL[],IN_PORT > > above line from spec https://docs.openstack.org/ > dragonflow/latest/specs/ipv6.html > > > > However, from the flow rule by dragonflow for automatic response above, we > couldn’t notice that R and S bits of the NS Response is being managed. > > > > Can you please clarify if you don’t intend to use ‘R’ and ‘S’ bits at all > in dragonflow implementation? > > Or you intend to use them but you weren’t able to get NXM extensions for > the same with OVS and so wanted to start ahead without managing those bits > (as per RFC)? > > > > Thanks in advance for your help. > > > > -- > > Thanks, > > > > Vivek > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed Mar 21 07:54:01 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 21 Mar 2018 15:54:01 +0800 Subject: [openstack-dev] [Nova] [Cyborg] Separate spec for compute node flows? In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED5D61D1B@fmsmsx122.amr.corp.intel.com> <1CC272501B5BC543A05DB90AA509DED5D61F40@fmsmsx122.amr.corp.intel.com> <4B1BB321037C0849AAE171801564DFA6889FBB8E@IRSMSX107.ger.corp.intel.com> Message-ID: Hi Sundar, Zhuli will work on os-acc spec, and Li Liu will work on the glance and metadata one, as we assigned during ptg. But you are very welcomed to reach out to them and work together if you have the bandwidth :) On Wed, Mar 21, 2018 at 3:00 PM, Nadathur, Sundar wrote: > Hi all, > > The Cyborg Nova scheduling specification > > addresses the scheduling aspects alone. There needs to be a separate spec > to address: > * Cyborg/Nova interactions in the compute node, incl. the newly proposed > os-acc library. > * Programming, including fetching bitstreams from Glance. > * Bitstream metadata. 
> > Shall I send such a spec while the first one is still in review? > Regards, > Sundar > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasca at redhat.com Wed Mar 21 09:37:23 2018 From: rasca at redhat.com (Raoul Scarazzini) Date: Wed, 21 Mar 2018 10:37:23 +0100 Subject: [openstack-dev] [tripleo] The Weekly Owl - 13th Edition In-Reply-To: References: Message-ID: On 20/03/2018 20:16, Emilien Macchi wrote: > On Tue, Mar 20, 2018 at 9:01 AM, Emilien Macchi > wrote: > +--> Matt is John and ruck is John. Please let them know any new CI > issue. > so I double checked and Matt isn't John but in fact he's the rover ;-)  But Rover is Rover or not? -- Raoul Scarazzini rasca at redhat.com From majopela at redhat.com Wed Mar 21 10:14:47 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Wed, 21 Mar 2018 10:14:47 +0000 Subject: [openstack-dev] =?utf-8?q?=5Bneutron=5DDoes_neutron-server_suppor?= =?utf-8?q?t_the_main_backup_redundancy=EF=BC=9F?= In-Reply-To: References: <55403b04.809.16245faa585.Coremail.wangpeihuixyz@126.com> Message-ID: You can run as many as you want, generally an haproxy is used in front of them to balance load across neutron servers. Also, keep in mind, that the db backend is a single mysql, you can also distribute that with galera. That is the configuration you will get by default when you deploy in HA with RDO/TripleO or OSP/Director. On Wed, Mar 21, 2018 at 3:34 AM Kevin Benton wrote: > You can run as many neutron server processes as you want in an > active/active setup. > > On Tue, Mar 20, 2018, 18:35 Frank Wang wrote: > >> Hi All, >> As far as I know, neutron-server only can be a single node, In order >> to improve the reliability of the system, Does it support the main backup >> or active/active redundancy? Any comment would be appreciated. >> >> Thanks, >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sfinucan at redhat.com Wed Mar 21 10:49:02 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Wed, 21 Mar 2018 10:49:02 +0000 Subject: [openstack-dev] Following the new PTI for document build, broken local builds Message-ID: <1521629342.8587.20.camel@redhat.com> tl;dr: Make sure you stop using pbr's autodoc feature before converting them to the new PTI for docs. There have been a lot of patches converting projects to use the new Project Testing Interface for building docs [1]. Generally, these make the following changes: 1. Move any requirements for building docs or release notes from 'test- requirements.txt' to 'doc/requirements.txt' 2. Modify 'tox.ini' to call 'sphinx-build' instead of 'python setup.py build_sphinx' Once done, the idea is that the gate will be able to start building docs by calling the 'sphinx-build' executable directly instead of using the 'build_sphinx' setuptools command. Unfortunately, this doesn't always do what you think and has resulted in a few now-broken projects (mostly oslo). As noted by Monty in a prior openstack-dev post [2], some projects rely on a pbr extension to the 'build_sphinx' setuptools command which can automatically run the 'sphinx-apidoc' tool before building docs. This is enabled by configuring some settings in the '[pbr]' section of the 'setup.cfg' file [3]. To ensure this continued working, the zuul jobs definitions [4] check for the presence of these settings and build docs using the legacy 'build_sphinx' command if found. **At no point do the jobs call the tox job**. As a result, if you convert a project to use 'sphinx-build' in 'tox.ini' without resolving the autodoc issues, you lose the ability to build docs locally. I've gone through and proposed a couple of reverts to fix projects we've already broken. However, going forward, there are two things people should do to prevent issues like this popping up. * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections from 'setup.cfg' in any patches that aim to convert a project to use the new PTI. This will ensure the gate catches any potential issues.  * In addition, if your project uses the pbr autodoc feature, you should either (a) remove these docs from your documentation tree or (b) migrate to something else like the 'sphinx.ext.autosummary' extension [5]. I aim to post instructions on the latter shortly. If anyone has any questions on the above, feel free to reply here or contact me on IRC (stephenfin). Cheers, Stephen [1] https://review.openstack.org/#/q/topic:updated-pti+(status:open+OR+status:merged) [2] http://lists.openstack.org/pipermail/openstack-dev/2017-December/125710.html [3] https://docs.openstack.org/pbr/latest/user/using.html#pbr-setup-cfg [4] https://github.com/openstack-infra/zuul-jobs/blob/d75f5d2b/roles/sphinx/tasks/main.yaml [5] http://www.sphinx-doc.org/en/stable/ext/autosummary.html From geguileo at redhat.com Wed Mar 21 11:14:56 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 21 Mar 2018 12:14:56 +0100 Subject: [openstack-dev] [cinder] Support share backup to different projects? In-Reply-To: References: Message-ID: <20180321111456.gp7pymagsts4i4mm@localhost> On 20/03, Jay S Bryant wrote: > > > On 3/19/2018 10:55 PM, TommyLike Hu wrote: > > Now Cinder can transfer volume (with or without snapshots) to different > > projects,  and this make it possbile to transfer data across tenant via > > volume or image. 
Recently we had a conversation with our customer from > > Germany, they mentioned they are more pleased if we can support transfer > > data accross tenant via backup not image or volume, and these below are > > some of their concerns: > > > > 1. There is a use case that they would like to deploy their > > develop/test/product systems in the same region but within different > > tenants, so they have the requirment to share/transfer data across > > tenants. > > > > 2. Users are more willing to use backups to secure/store their volume > > data since backup feature is more advanced in product openstack version > > (incremental backups/periodic backups/etc.). > > > > 3. Volume transfer is not a valid option as it's in AZ and it's a > > complicated process if we would like to share the data to multiple > > projects (keep copy in all the tenants). > > > > 4. Most of the users would like to use image for bootable volume only > > and share volume data via image means the users have to maintain lots of > > image copies when volume backup changed as well as the whole system > > needs to differentiate bootable images and none bootable images, most > > important, we can not restore volume data via image now. > > > > 5. The easiest way for this seems to support sharing backup to different > > projects, the owner project have the full authority while shared > > projects only can view/read the backups. > > > > 6. AWS has the similar concept, share snapshot. We can share it by > > modify the snapshot's create volume permissions [1]. > > > > Looking forward to any like or dislike or suggestion on this idea > > accroding to my feature proposal experience:) > > > > > > Thanks > > TommyLike > > > > > > [1]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Tommy, > > As discussed at the PTG, this still sounds like improper usage of Backup. > Happy to hear input from others but I am having trouble getting my head > around it. > > The idea of sharing a snapshot, as you mention AWS supports sounds like it > could be a more sensible approach. Why are you not proposing that? > > Jay > Hi, I agree with Jay that this sounds like an improper use of Backups, and I believe that this feature, just like trying to transfer snapshots, would incur a lot of code changes as well as an ever greater number of bugs, because the ownership structure in Cinder is hierarchical and well defined. So if you transferred a snapshot then you would lose that snapshot information on the source volume, which means that we could not prevent a volume deletion with a snapshot, or we could prevent it but would either have to prevent the deletion from happening (creating a terrible user experience since the user can't delete the volume now because somebody else still has one of its snapshots) or we have to implement some kind of "trash" mechanism to postpone cleanup until all the snapshots have been deleted, which would make our quota code more complex as well as make our stats reporting and scheduling diverge from what the user thinks has actually happened (they deleted a bunch of volumes but the data has not been freed from the backend). As for backups, you have an even worse situation because of our incremental backups, since transferring ownership of an incremental backup will create deletion issues similar to the snapshot case, but we also require access to all the incremental backups to restore a volume.
As for backups, you have an even worse situation because of our incremental backups, since transferring ownership of an incremental backup would create deletion issues similar to the snapshots', but we also require access to all the incremental backups to restore a volume. So the only alternative would be to only allow transferring a full backup, and this would carry all the incremental backups with it.

All in all I think this would be an abuse of Backups, and as stated by TommyLike we already have mechanisms to do this via images and volume transfers.

Although I have to admit that, after giving this some thought, there is a very good case where it wouldn't be an abuse and where we should allow transferring full backups together with all their incremental backups, and that is when you transfer a volume. If we transfer a volume with all its snapshots, it makes sense that we should also allow transferring its backups; after all, the original source of the backups no longer belongs to the owner of the backups.

To summarize, if we are talking about transferring only full backups with all their dependent incremental backups, then I probably won't oppose the change.

Cheers,
Gorka.

From lijie at unitedstack.com Wed Mar 21 11:34:30 2018
From: lijie at unitedstack.com (李杰)
Date: Wed, 21 Mar 2018 19:34:30 +0800
Subject: [openstack-dev] [nova] about rebuild instance booted from volume
In-Reply-To: <93666d2a-c543-169c-fe07-499e5340622b@gmail.com>
References: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> <93666d2a-c543-169c-fe07-499e5340622b@gmail.com>
Message-ID: 

So what should we do then about rebuilding the volume-backed server? Wait until Cinder can re-image a volume?

------------------ Original ------------------
From: "Matt Riedemann";
Date: Friday, March 16, 2018, 6:35 AM
To: "OpenStack Development Mailing List";
Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume

On 3/15/2018 5:29 PM, Dan Smith wrote:
> Yep, for sure. I think if there are snapshots, we have to refuse to do the thing. My comment was about the "does nova have authority to destroy the root volume during a rebuild" and I think it does, if delete_on_termination=True, and if there are no snapshots.

Agree with this. Things do get a bit weird with delete_on_termination and whether nova 'owns' the volume. delete_on_termination is False by default, even if you're doing boot from volume with source_type of 'image' or 'snapshot' where nova creates the volume for you. If a user really cared about preserving the volume, they'd probably pre-create it (with their favorite volume type, since you can't tell nova the volume type to use) and pass it to nova with delete_on_termination=False explicitly.

Given the defaults, I'm not sure how many people are going to specify delete_on_termination=True, thinking about the implications, which then means they can't rebuild their volume-backed instance later because nova can't / won't delete the volume.

If we can solve this without deleting the volume at all and just re-image it, then it's a non-issue.

--

Thanks,

Matt

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From e0ne at e0ne.info Wed Mar 21 12:43:53 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 21 Mar 2018 14:43:53 +0200 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: HI Team, >From my perspective, I'm OK both with #2 and #3 options. I agree that #4 could be too complicated for us. Anyway, we've got this topic on the meeting agenda [1] so we'll discuss it there too. I'll share our decision after the meeting. [1] https://wiki.openstack.org/wiki/Meetings/Horizon Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki wrote: > Hi Kaz and Ivan, > > Yeah, it is worth discussed officially in the horizon team meeting or the > mailing list thread to get a consensus. > Hopefully you can add this topic to the horizon meeting agenda. > > After sending the previous mail, I noticed anther option. I see there are > several options now. > (1) Keep xstatic-core and horizon-core same. > (2) Add specific members to xstatic-core > (3) Add specific horizon-plugin core to xstatic-core > (4) Split core membership into per-repo basis (perhaps too complicated!!) > > My current vote is (2) as xstatic-core needs to understand what is xstatic > and how it is maintained. > > Thanks, > Akihiro > > > 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : > >> Hi Akihiro, >> >> >> Thanks for your comment. >> The background of my request to add us to xstatic-core comes from >> Ivan's comment in last PTG's etherpad for heat-dashboard discussion. >> >> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion >> Line135 >> , >> "we can share ownership if needed - e0ne" >> >> Just in case, could you guys confirm unified opinion on this matter as >> Horizon team ? >> >> Frankly speaking I'm feeling the benefit to make us xstatic-core >> because it's easier & smoother to manage what we are taking for >> heat-dashboard. >> On the other hand, I can understand what Akihiro you are saying, the >> newly added repos belong to Horizon project & being managed by not >> Horizon core is not consistent. >> Also having exception might make unexpected confusion in near future. >> >> Eventually we will follow your opinion, let me hear Horizon team's >> conclusion. >> >> Regards, >> Kaz >> >> >> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >> > Hi Kaz, >> > >> > These repositories are under horizon project. It looks better to keep >> the >> > current core team. >> > It potentially brings some confusion if we treat some horizon plugin >> team >> > specially. >> > Reviewing xstatic repos would be a small burden, wo I think it would >> work >> > without problem even if only horizon-core can approve xstatic reviews. >> > >> > >> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara : >> >> >> >> Hi Ivan, Horizon folks, >> >> >> >> >> >> Now totally 8 xstatic-** repos for heat-dashboard have been landed. >> >> >> >> In project-config for them, I've set same acl-config as the existing >> >> xstatic repos. >> >> It means only "xstatic-core" can manage the newly created repos on >> gerrit. >> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as >> >> what horizon-core is doing ? >> >> >> >> xstatic-core >> >> https://review.openstack.org/#/admin/groups/385,members >> >> >> >> heat-dashboard-core >> >> https://review.openstack.org/#/admin/groups/1844,members >> >> >> >> Of course, we will surely touch only what we made, just would like to >> >> manage them smoothly by ourselves. 
>> >> In case we need to touch the other ones, will ask Horizon team for >> help. >> >> >> >> Thanks in advance. >> >> >> >> Regards, >> >> Kaz >> >> >> >> >> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : >> >> > Hi Horizon Team, >> >> > >> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin option, >> >> > and submitted a patch for it. >> >> > Could you please help to review the patch. >> >> > >> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >> >> > https://review.openstack.org/#/c/552259/ >> >> > >> >> > Thank you very much. >> >> > >> >> > Best Regards, >> >> > Xinni >> >> > >> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >> >> > wrote: >> >> >> >> >> >> Hi Kaz, >> >> >> >> >> >> Thanks for cleaning this up. I put +1 on both of these patches >> >> >> >> >> >> Regards, >> >> >> Ivan Kolodyazhny, >> >> >> http://blog.e0ne.info/ >> >> >> >> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara < >> ksnhr.tech at gmail.com> >> >> >> wrote: >> >> >>> >> >> >>> Hi Ivan & Horizon folks, >> >> >>> >> >> >>> >> >> >>> Now we are submitting a couple of patches to have the new xstatic >> >> >>> modules. >> >> >>> Let me request you to have review the following patches. >> >> >>> We need Horizon PTL's +1 to move these forward. >> >> >>> >> >> >>> project-config >> >> >>> https://review.openstack.org/#/c/551978/ >> >> >>> >> >> >>> governance >> >> >>> https://review.openstack.org/#/c/551980/ >> >> >>> >> >> >>> Thanks in advance:) >> >> >>> >> >> >>> Regards, >> >> >>> Kaz >> >> >>> >> >> >>> >> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >> >> >>> : >> >> >>> > Yes, please do that. We can then discuss in the review about >> >> >>> > technical >> >> >>> > details. >> >> >>> > >> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge < >> xinni.ge1990 at gmail.com> >> >> >>> > wrote: >> >> >>> >> >> >> >>> >> Hi, Akihiro >> >> >>> >> >> >> >>> >> Thanks for the quick reply. >> >> >>> >> >> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not >> be >> >> >>> >> modified. >> >> >>> >> It is much better to enhance horizon plugin settings, >> >> >>> >> and I think maybe there could be one option like >> >> >>> >> ADD_XSTATIC_MODULES. >> >> >>> >> This option adds the plugin's xstatic files in STATICFILES_DIRS. >> >> >>> >> I am considering to add a bug report to describe it at first, >> and >> >> >>> >> give >> >> >>> >> a >> >> >>> >> patch later maybe. >> >> >>> >> Is that ok with the Horizon team? >> >> >>> >> >> >> >>> >> Best Regards. >> >> >>> >> Xinni >> >> >>> >> >> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki < >> amotoki at gmail.com> >> >> >>> >> wrote: >> >> >>> >>> >> >> >>> >>> Hi Xinni, >> >> >>> >>> >> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge : >> >> >>> >>> > Hello Horizon Team, >> >> >>> >>> > >> >> >>> >>> > I would like to hear about your opinions about how to add new >> >> >>> >>> > xstatic >> >> >>> >>> > modules to horizon settings. >> >> >>> >>> > >> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party files issue, >> >> >>> >>> > thanks >> >> >>> >>> > for >> >> >>> >>> > your advices in Dublin PTG, we are now removing them and >> >> >>> >>> > referencing as >> >> >>> >>> > new >> >> >>> >>> > xstatic-* libs. >> >> >>> >>> >> >> >>> >>> Thanks for moving this forward. 
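As a rough illustration of the idea in this sub-thread: a plugin "enabled" file could carry something like the following. The ADD_XSTATIC_MODULES name comes from the proposal above, but its exact format and the helper below are assumptions pending the review, not a settled interface:

    # Hypothetical Horizon plugin "enabled" file entry; the module
    # name is a placeholder.
    ADD_XSTATIC_MODULES = ['xstatic.pkg.example_lib']

    # Sketch of how settings code could fold those into
    # STATICFILES_DIRS; xstatic packages expose BASE_DIR pointing at
    # their bundled static files.
    import importlib

    def xstatic_staticfiles_dirs(modules):
        dirs = []
        for dotted in modules:
            mod = importlib.import_module(dotted)
            prefix = 'horizon/lib/' + dotted.rsplit('.', 1)[-1]
            dirs.append((prefix, mod.BASE_DIR))
        return dirs

As noted in the thread, the settings handling would also need to merge duplicate entries when multiple plugins request the same xstatic module.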
>> >> >>> >>> >> >> >>> >>> > So we installed the new xstatic files (not uploaded as >> openstack >> >> >>> >>> > official >> >> >>> >>> > repos yet) in our development environment now, but hesitate >> to >> >> >>> >>> > decide >> >> >>> >>> > how to >> >> >>> >>> > add the new installed xstatic lib path to STATICFILES_DIRS in >> >> >>> >>> > openstack_dashboard.settings so that the static files could >> be >> >> >>> >>> > automatically >> >> >>> >>> > collected by *collectstatic* process. >> >> >>> >>> > >> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >> >> >>> >>> > openstack_dashboard/utils/settings.py and the relevant >> static >> >> >>> >>> > fils >> >> >>> >>> > are >> >> >>> >>> > added >> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon plugin >> >> >>> >>> > dashboard. >> >> >>> >>> > We may want new plugin setting keywords ( something similar >> to >> >> >>> >>> > ADD_JS_FILES) >> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update >> >> >>> >>> > STATICFILES_DIRS). >> >> >>> >>> >> >> >>> >>> IMHO it is better to allow horizon plugins to add xstatic >> modules >> >> >>> >>> through horizon plugin settings. I don't think it is a good >> idea >> >> >>> >>> to >> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon plugin >> >> >>> >>> usages. It makes difficult to track why and where a xstatic >> module >> >> >>> >>> in >> >> >>> >>> BASE_XSTATIC_MODULES is used. >> >> >>> >>> Multiple horizon plugins can add a same entry, so horizon code >> to >> >> >>> >>> handle plugin settings should merge multiple entries to a >> single >> >> >>> >>> one >> >> >>> >>> hopefully. >> >> >>> >>> My vote is to enhance the horizon plugin settings. >> >> >>> >>> >> >> >>> >>> Akihiro >> >> >>> >>> >> >> >>> >>> > >> >> >>> >>> > Looking forward to hearing any suggestions from you guys, and >> >> >>> >>> > Best Regards, >> >> >>> >>> > >> >> >>> >>> > Xinni Ge >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > ____________________________________________________________ >> ______________ >> >> >>> >>> > OpenStack Development Mailing List (not for usage questions) >> >> >>> >>> > Unsubscribe: >> >> >>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> >> >>> >>> > >> >> >>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-dev >> >> >>> >>> > >> >> >>> >>> >> >> >>> >>> >> >> >>> >>> >> >> >>> >>> >> >> >>> >>> ____________________________________________________________ >> ______________ >> >> >>> >>> OpenStack Development Mailing List (not for usage questions) >> >> >>> >>> Unsubscribe: >> >> >>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-dev >> >> >>> >> >> >> >>> >> >> >> >>> >> >> >> >>> >> >> >> >>> >> -- >> >> >>> >> 葛馨霓 Xinni Ge >> >> >>> >> >> >> >>> >> >> >> >>> >> >> >> >>> >> ____________________________________________________________ >> ______________ >> >> >>> >> OpenStack Development Mailing List (not for usage questions) >> >> >>> >> Unsubscribe: >> >> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-dev >> >> >>> >> >> >> >>> > >> >> >>> > >> >> >>> > >> >> >>> > >> >> >>> > ____________________________________________________________ >> ______________ >> >> >>> > OpenStack Development Mailing List (not for usage questions) >> >> >>> > 
Unsubscribe: >> >> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-dev >> >> >>> > >> >> >>> >> >> >>> >> >> >>> >> >> >>> ____________________________________________________________ >> ______________ >> >> >>> OpenStack Development Mailing List (not for usage questions) >> >> >>> Unsubscribe: >> >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> ____________________________________________________________ >> ______________ >> >> >> OpenStack Development Mailing List (not for usage questions) >> >> >> Unsubscribe: >> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> > >> >> > >> >> > >> >> > -- >> >> > 葛馨霓 Xinni Ge >> >> > >> >> > >> >> > ____________________________________________________________ >> ______________ >> >> > OpenStack Development Mailing List (not for usage questions) >> >> > Unsubscribe: >> >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > >> >> >> >> ____________________________________________________________ >> ______________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Wed Mar 21 13:12:44 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 21 Mar 2018 08:12:44 -0500 Subject: [openstack-dev] [nova] Rocky spec review day In-Reply-To: <76917686-697d-9471-a298-90364323c5f8@gmail.com> References: <76917686-697d-9471-a298-90364323c5f8@gmail.com> Message-ID: +1 for the-earlier-the-better, for the additional reason that, if we don't finish, we can do another one in time for spec freeze. And I, for one, wouldn't be offended if we could "officially start development" (i.e. focus on patches, start runways, etc.) before the mystical but arbitrary spec freeze date. On 03/20/2018 07:29 PM, Matt Riedemann wrote: > On 3/20/2018 6:47 PM, melanie witt wrote: >> I was thinking that 2-3 weeks ahead of spec freeze would be >> appropriate, so that would be March 27 (next week) or April 3 if we do >> it on a Tuesday. 
> > It's spring break here on April 3 so I'll be listening to screaming > kids, I mean on vacation. Not that my schedule matters, just FYI. > > But regardless of that, I think the earlier the better to flush out > what's already there, since we've already approved quite a few > blueprints this cycle (32 to so far). > From ksnhr.tech at gmail.com Wed Mar 21 13:29:20 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Wed, 21 Mar 2018 22:29:20 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi Ivan, Akihiro, Thanks for your kind arrangement. Looking forward to hearing your decision soon. Regards, Kaz 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : > HI Team, > > From my perspective, I'm OK both with #2 and #3 options. I agree that #4 > could be too complicated for us. Anyway, we've got this topic on the meeting > agenda [1] so we'll discuss it there too. I'll share our decision after the > meeting. > > [1] https://wiki.openstack.org/wiki/Meetings/Horizon > > > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki wrote: >> >> Hi Kaz and Ivan, >> >> Yeah, it is worth discussed officially in the horizon team meeting or the >> mailing list thread to get a consensus. >> Hopefully you can add this topic to the horizon meeting agenda. >> >> After sending the previous mail, I noticed anther option. I see there are >> several options now. >> (1) Keep xstatic-core and horizon-core same. >> (2) Add specific members to xstatic-core >> (3) Add specific horizon-plugin core to xstatic-core >> (4) Split core membership into per-repo basis (perhaps too complicated!!) >> >> My current vote is (2) as xstatic-core needs to understand what is xstatic >> and how it is maintained. >> >> Thanks, >> Akihiro >> >> >> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : >>> >>> Hi Akihiro, >>> >>> >>> Thanks for your comment. >>> The background of my request to add us to xstatic-core comes from >>> Ivan's comment in last PTG's etherpad for heat-dashboard discussion. >>> >>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion >>> Line135, "we can share ownership if needed - e0ne" >>> >>> Just in case, could you guys confirm unified opinion on this matter as >>> Horizon team ? >>> >>> Frankly speaking I'm feeling the benefit to make us xstatic-core >>> because it's easier & smoother to manage what we are taking for >>> heat-dashboard. >>> On the other hand, I can understand what Akihiro you are saying, the >>> newly added repos belong to Horizon project & being managed by not >>> Horizon core is not consistent. >>> Also having exception might make unexpected confusion in near future. >>> >>> Eventually we will follow your opinion, let me hear Horizon team's >>> conclusion. >>> >>> Regards, >>> Kaz >>> >>> >>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >>> > Hi Kaz, >>> > >>> > These repositories are under horizon project. It looks better to keep >>> > the >>> > current core team. >>> > It potentially brings some confusion if we treat some horizon plugin >>> > team >>> > specially. >>> > Reviewing xstatic repos would be a small burden, wo I think it would >>> > work >>> > without problem even if only horizon-core can approve xstatic reviews. >>> > >>> > >>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara : >>> >> >>> >> Hi Ivan, Horizon folks, >>> >> >>> >> >>> >> Now totally 8 xstatic-** repos for heat-dashboard have been landed. 
>>> >> >>> >> In project-config for them, I've set same acl-config as the existing >>> >> xstatic repos. >>> >> It means only "xstatic-core" can manage the newly created repos on >>> >> gerrit. >>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as >>> >> what horizon-core is doing ? >>> >> >>> >> xstatic-core >>> >> https://review.openstack.org/#/admin/groups/385,members >>> >> >>> >> heat-dashboard-core >>> >> https://review.openstack.org/#/admin/groups/1844,members >>> >> >>> >> Of course, we will surely touch only what we made, just would like to >>> >> manage them smoothly by ourselves. >>> >> In case we need to touch the other ones, will ask Horizon team for >>> >> help. >>> >> >>> >> Thanks in advance. >>> >> >>> >> Regards, >>> >> Kaz >>> >> >>> >> >>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : >>> >> > Hi Horizon Team, >>> >> > >>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin >>> >> > option, >>> >> > and submitted a patch for it. >>> >> > Could you please help to review the patch. >>> >> > >>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >>> >> > https://review.openstack.org/#/c/552259/ >>> >> > >>> >> > Thank you very much. >>> >> > >>> >> > Best Regards, >>> >> > Xinni >>> >> > >>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >>> >> > wrote: >>> >> >> >>> >> >> Hi Kaz, >>> >> >> >>> >> >> Thanks for cleaning this up. I put +1 on both of these patches >>> >> >> >>> >> >> Regards, >>> >> >> Ivan Kolodyazhny, >>> >> >> http://blog.e0ne.info/ >>> >> >> >>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >>> >> >> >>> >> >> wrote: >>> >> >>> >>> >> >>> Hi Ivan & Horizon folks, >>> >> >>> >>> >> >>> >>> >> >>> Now we are submitting a couple of patches to have the new xstatic >>> >> >>> modules. >>> >> >>> Let me request you to have review the following patches. >>> >> >>> We need Horizon PTL's +1 to move these forward. >>> >> >>> >>> >> >>> project-config >>> >> >>> https://review.openstack.org/#/c/551978/ >>> >> >>> >>> >> >>> governance >>> >> >>> https://review.openstack.org/#/c/551980/ >>> >> >>> >>> >> >>> Thanks in advance:) >>> >> >>> >>> >> >>> Regards, >>> >> >>> Kaz >>> >> >>> >>> >> >>> >>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >>> >> >>> : >>> >> >>> > Yes, please do that. We can then discuss in the review about >>> >> >>> > technical >>> >> >>> > details. >>> >> >>> > >>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >>> >> >>> > >>> >> >>> > wrote: >>> >> >>> >> >>> >> >>> >> Hi, Akihiro >>> >> >>> >> >>> >> >>> >> Thanks for the quick reply. >>> >> >>> >> >>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not >>> >> >>> >> be >>> >> >>> >> modified. >>> >> >>> >> It is much better to enhance horizon plugin settings, >>> >> >>> >> and I think maybe there could be one option like >>> >> >>> >> ADD_XSTATIC_MODULES. >>> >> >>> >> This option adds the plugin's xstatic files in >>> >> >>> >> STATICFILES_DIRS. >>> >> >>> >> I am considering to add a bug report to describe it at first, >>> >> >>> >> and >>> >> >>> >> give >>> >> >>> >> a >>> >> >>> >> patch later maybe. >>> >> >>> >> Is that ok with the Horizon team? >>> >> >>> >> >>> >> >>> >> Best Regards. 
>>> >> >>> >> Xinni >>> >> >>> >> >>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >>> >> >>> >> >>> >> >>> >> wrote: >>> >> >>> >>> >>> >> >>> >>> Hi Xinni, >>> >> >>> >>> >>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge : >>> >> >>> >>> > Hello Horizon Team, >>> >> >>> >>> > >>> >> >>> >>> > I would like to hear about your opinions about how to add >>> >> >>> >>> > new >>> >> >>> >>> > xstatic >>> >> >>> >>> > modules to horizon settings. >>> >> >>> >>> > >>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party files >>> >> >>> >>> > issue, >>> >> >>> >>> > thanks >>> >> >>> >>> > for >>> >> >>> >>> > your advices in Dublin PTG, we are now removing them and >>> >> >>> >>> > referencing as >>> >> >>> >>> > new >>> >> >>> >>> > xstatic-* libs. >>> >> >>> >>> >>> >> >>> >>> Thanks for moving this forward. >>> >> >>> >>> >>> >> >>> >>> > So we installed the new xstatic files (not uploaded as >>> >> >>> >>> > openstack >>> >> >>> >>> > official >>> >> >>> >>> > repos yet) in our development environment now, but hesitate >>> >> >>> >>> > to >>> >> >>> >>> > decide >>> >> >>> >>> > how to >>> >> >>> >>> > add the new installed xstatic lib path to STATICFILES_DIRS >>> >> >>> >>> > in >>> >> >>> >>> > openstack_dashboard.settings so that the static files could >>> >> >>> >>> > be >>> >> >>> >>> > automatically >>> >> >>> >>> > collected by *collectstatic* process. >>> >> >>> >>> > >>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >>> >> >>> >>> > openstack_dashboard/utils/settings.py and the relevant >>> >> >>> >>> > static >>> >> >>> >>> > fils >>> >> >>> >>> > are >>> >> >>> >>> > added >>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon plugin >>> >> >>> >>> > dashboard. >>> >> >>> >>> > We may want new plugin setting keywords ( something similar >>> >> >>> >>> > to >>> >> >>> >>> > ADD_JS_FILES) >>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update >>> >> >>> >>> > STATICFILES_DIRS). >>> >> >>> >>> >>> >> >>> >>> IMHO it is better to allow horizon plugins to add xstatic >>> >> >>> >>> modules >>> >> >>> >>> through horizon plugin settings. I don't think it is a good >>> >> >>> >>> idea >>> >> >>> >>> to >>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon >>> >> >>> >>> plugin >>> >> >>> >>> usages. It makes difficult to track why and where a xstatic >>> >> >>> >>> module >>> >> >>> >>> in >>> >> >>> >>> BASE_XSTATIC_MODULES is used. >>> >> >>> >>> Multiple horizon plugins can add a same entry, so horizon code >>> >> >>> >>> to >>> >> >>> >>> handle plugin settings should merge multiple entries to a >>> >> >>> >>> single >>> >> >>> >>> one >>> >> >>> >>> hopefully. >>> >> >>> >>> My vote is to enhance the horizon plugin settings. 
>>> >> >>> >>> >>> >> >>> >>> Akihiro >>> >> >>> >>> >>> >> >>> >>> > >>> >> >>> >>> > Looking forward to hearing any suggestions from you guys, >>> >> >>> >>> > and >>> >> >>> >>> > Best Regards, >>> >> >>> >>> > >>> >> >>> >>> > Xinni Ge >>> >> >>> >>> > >>> >> >>> >>> > >>> >> >>> >>> > >>> >> >>> >>> > >>> >> >>> >>> > >>> >> >>> >>> > __________________________________________________________________________ >>> >> >>> >>> > OpenStack Development Mailing List (not for usage questions) >>> >> >>> >>> > Unsubscribe: >>> >> >>> >>> > >>> >> >>> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>> >>> > >>> >> >>> >>> > >>> >> >>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >>> >>> > >>> >> >>> >>> >>> >> >>> >>> >>> >> >>> >>> >>> >> >>> >>> >>> >> >>> >>> >>> >> >>> >>> __________________________________________________________________________ >>> >> >>> >>> OpenStack Development Mailing List (not for usage questions) >>> >> >>> >>> Unsubscribe: >>> >> >>> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>> >>> >>> >> >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> -- >>> >> >>> >> 葛馨霓 Xinni Ge >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> __________________________________________________________________________ >>> >> >>> >> OpenStack Development Mailing List (not for usage questions) >>> >> >>> >> Unsubscribe: >>> >> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>> >> >>> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >>> >> >>> >> >>> > >>> >> >>> > >>> >> >>> > >>> >> >>> > >>> >> >>> > >>> >> >>> > __________________________________________________________________________ >>> >> >>> > OpenStack Development Mailing List (not for usage questions) >>> >> >>> > Unsubscribe: >>> >> >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>> > >>> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >>> > >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> __________________________________________________________________________ >>> >> >>> OpenStack Development Mailing List (not for usage questions) >>> >> >>> Unsubscribe: >>> >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >>> >> >> >>> >> >> >>> >> >> >>> >> >> >>> >> >> __________________________________________________________________________ >>> >> >> OpenStack Development Mailing List (not for usage questions) >>> >> >> Unsubscribe: >>> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >>> >> > >>> >> > >>> >> > >>> >> > -- >>> >> > 葛馨霓 Xinni Ge >>> >> > >>> >> > >>> >> > >>> >> > __________________________________________________________________________ >>> >> > OpenStack Development Mailing List (not for usage questions) >>> >> > Unsubscribe: >>> >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> > >>> >> >>> >> >>> >> __________________________________________________________________________ >>> >> OpenStack Development Mailing List (not for usage questions) >>> >> Unsubscribe: >>> >> 
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> > >>> > >>> > >>> > __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sbauza at redhat.com Wed Mar 21 13:33:46 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 21 Mar 2018 14:33:46 +0100 Subject: [openstack-dev] [nova] Rocky spec review day In-Reply-To: References: <76917686-697d-9471-a298-90364323c5f8@gmail.com> Message-ID: On Wed, Mar 21, 2018 at 2:12 PM, Eric Fried wrote: > +1 for the-earlier-the-better, for the additional reason that, if we > don't finish, we can do another one in time for spec freeze. > > +1 for Wed 27th March. > And I, for one, wouldn't be offended if we could "officially start > development" (i.e. focus on patches, start runways, etc.) before the > mystical but arbitrary spec freeze date. > > Sure, but given we have a lot of specs to review, TBH it'll be possible for me to look at implementation patches only close to the 1st milestone. > On 03/20/2018 07:29 PM, Matt Riedemann wrote: > > On 3/20/2018 6:47 PM, melanie witt wrote: > >> I was thinking that 2-3 weeks ahead of spec freeze would be > >> appropriate, so that would be March 27 (next week) or April 3 if we do > >> it on a Tuesday. > > > > It's spring break here on April 3 so I'll be listening to screaming > > kids, I mean on vacation. Not that my schedule matters, just FYI. > > > > But regardless of that, I think the earlier the better to flush out > > what's already there, since we've already approved quite a few > > blueprints this cycle (32 to so far). > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Wed Mar 21 13:59:10 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 21 Mar 2018 13:59:10 +0000 Subject: [openstack-dev] Adding "not docs" banner to specs website? 
In-Reply-To: <1521490029-sup-9732@lrrr.local>
References: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> <1521490029-sup-9732@lrrr.local>
Message-ID: 

> We want them all to use the openstackdocstheme so you could look into creating a "subclass" of that one with the extra content in the header, then ensure all of the specs repos use it. We would have to land a small patch to trigger a rebuild, but the patch switching them from oslosphinx to openstackdocstheme would serve for that and a small change to the readme or another file would do it for any that are already using the theme.

Thanks Doug, I'll investigate this route more when I have some free time to do so. :)

// jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tommylikehu at gmail.com Wed Mar 21 14:14:12 2018
From: tommylikehu at gmail.com (TommyLike Hu)
Date: Wed, 21 Mar 2018 14:14:12 +0000
Subject: [openstack-dev] [cinder] Support share backup to different projects?
In-Reply-To: <20180321111456.gp7pymagsts4i4mm@localhost>
References: <20180321111456.gp7pymagsts4i4mm@localhost>
Message-ID: 

Hey Gorka,
    Thanks for your input :) I need to clarify that our idea is to share the backup resource with other tenants, which is different from a transfer: the original tenant still fully controls the backup resource, while the tenants it has been shared with only have the ability to see and read the content of that backup.

Gorka Eguileor wrote on Wed, Mar 21, 2018 at 7:15 PM:

> On 20/03, Jay S Bryant wrote:
> > On 3/19/2018 10:55 PM, TommyLike Hu wrote:
> > > Now Cinder can transfer volume (with or without snapshots) to different projects, and this makes it possible to transfer data across tenants via volume or image. Recently we had a conversation with our customer from Germany. They mentioned they would be more pleased if we could support transferring data across tenants via backup, not image or volume. Below are some of their concerns:
> > >
> > > 1. There is a use case where they would like to deploy their develop/test/product systems in the same region but within different tenants, so they have the requirement to share/transfer data across tenants.
> > >
> > > 2. Users are more willing to use backups to secure/store their volume data, since the backup feature is more advanced in production OpenStack versions (incremental backups/periodic backups/etc.).
> > >
> > > 3. Volume transfer is not a valid option as it is scoped to an AZ, and it is a complicated process if we would like to share the data with multiple projects (keeping a copy in all the tenants).
> > >
> > > 4. Most of the users would like to use images for bootable volumes only, and sharing volume data via images means the users have to maintain lots of image copies whenever a volume backup changes; the whole system also needs to differentiate bootable and non-bootable images. Most important, we can not restore volume data via image now.
> > >
> > > 5. The easiest way for this seems to be supporting sharing a backup to different projects: the owner project has full authority, while shared projects can only view/read the backups.
> > >
> > > 6. AWS has a similar concept, the shared snapshot. We can share one by modifying the snapshot's create volume permissions [1].
> > >
> > > Looking forward to any like or dislike or suggestion on this idea, based on my feature-proposal experience :)
> > >
> > > Thanks
> > > TommyLike
> > >
> > > [1]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
> > >
> > > __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > Tommy,
> >
> > As discussed at the PTG, this still sounds like improper usage of Backup. Happy to hear input from others, but I am having trouble getting my head around it.
> >
> > The idea of sharing a snapshot, which as you mention AWS supports, sounds like it could be a more sensible approach. Why are you not proposing that?
> >
> > Jay
>
> Hi,
>
> I agree with Jay that this sounds like an improper use of Backups, and I believe that this feature, just like trying to transfer snapshots, would incur a lot of code changes as well as an even greater number of bugs, because the ownership structure in Cinder is hierarchical and well defined.
>
> So if you transferred a snapshot, you would lose that snapshot information on the source volume, which means that we could not prevent a volume deletion with a snapshot. Or we could prevent it, but would then either have to block the deletion from happening (creating a terrible user experience, since the user can't delete the volume now because somebody else still has one of its snapshots) or implement some kind of "trash" mechanism to postpone cleanup until all the snapshots have been deleted, which would make our quota code more complex as well as make our stats reporting and scheduling diverge from what the user thinks has actually happened (they deleted a bunch of volumes, but the data has not been freed from the backend).
>
> As for backups, you have an even worse situation because of our incremental backups, since transferring ownership of an incremental backup would create deletion issues similar to the snapshots', but we also require access to all the incremental backups to restore a volume. So the only alternative would be to only allow transferring a full backup, and this would carry all the incremental backups with it.
>
> All in all I think this would be an abuse of Backups, and as stated by TommyLike we already have mechanisms to do this via images and volume transfers.
>
> Although I have to admit that, after giving this some thought, there is a very good case where it wouldn't be an abuse and where we should allow transferring full backups together with all their incremental backups, and that is when you transfer a volume. If we transfer a volume with all its snapshots, it makes sense that we should also allow transferring its backups; after all, the original source of the backups no longer belongs to the owner of the backups.
>
> To summarize, if we are talking about transferring only full backups with all their dependent incremental backups, then I probably won't oppose the change.
>
> Cheers,
> Gorka.
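For reference, the AWS mechanism cited in [1] above boils down to a single API call. A minimal boto3 sketch, where the snapshot and account IDs are placeholders and credentials/region are assumed to be configured:

    # Share an EBS snapshot with another AWS account by granting it
    # create-volume permission on the snapshot.
    import boto3

    ec2 = boto3.client('ec2')
    ec2.modify_snapshot_attribute(
        SnapshotId='snap-0123456789abcdef0',
        Attribute='createVolumePermission',
        OperationType='add',
        UserIds=['123456789012'],  # account the snapshot is shared with
    )

The owner keeps full control of the snapshot; the other account can only read it and create volumes from it, which is the read-only sharing semantic being proposed here for backups.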
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Wed Mar 21 14:19:59 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 21 Mar 2018 07:19:59 -0700 Subject: [openstack-dev] [nova] Rocky spec review day In-Reply-To: (Sylvain Bauza's message of "Wed, 21 Mar 2018 14:33:46 +0100") References: <76917686-697d-9471-a298-90364323c5f8@gmail.com> Message-ID: >> And I, for one, wouldn't be offended if we could "officially start >> development" (i.e. focus on patches, start runways, etc.) before the >> mystical but arbitrary spec freeze date. Yeah, I agree. I see runways as an attempt to add pressure to the earlier part of the cycle, where we're ignoring things that have been ready but aren't super high priority because "we have plenty of time." The later part of the cycle is when we start having to make hard decisions on things to de-focus, and where focus on the important core changes goes up naturally anyway. Personally, I think we're already kinda late in the cycle to be going on this, as I would have hoped to exit PTG with a plan to start operating in the new process immediately. Maybe I'm in the minority there, but I think that if we start this process late in the middle of a cycle, we'll probably need to adjust the prioritization of things in the queue more strictly, and remember that when retrospecting on the process for next cycle. > Sure, but given we have a lot of specs to review, TBH it'll be > possible for me to look at implementation patches only close to the > 1st milestone. I'm not sure I get this. We can't not review code while we review specs for weeks on end. We've already approved 75% of the blueprints (in number) that we completed in queens. One of the intended outcomes of this effort was to complete a higher percentage of what we approved, so we're not lying to contributors and so we have more focused review of things so they actually get completed instead of half-landed. To that end, I would kind of expect that we need to constantly be throttling (or maybe re-starting) spec review/approval rates to keep the queue full enough so we don't run dry, but without just ending up with a thousand approved things that we'll never get to. Anyway, just MHO. Obviously this will be an experiment and we won't get it right the first time. --Dan From ifat.afek at nokia.com Wed Mar 21 14:37:04 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Wed, 21 Mar 2018 14:37:04 +0000 Subject: [openstack-dev] [vitrage] Nominating Dong Wenjuan for Vitrage core Message-ID: <22A44F0C-E13C-4A2C-9946-E8F60E5CF597@nokia.com> Hi, I would like to nominate Dong Wenjuan for Vitrage core. Wenjuan has been contributing to Vitrage for a long time, since Newton version. She implemented several important features and has a deep knowledge of Vitrage architecture. I’m sure she can be a great addition to our team. Thanks, Ifat. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pabelanger at redhat.com Wed Mar 21 14:49:22 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 21 Mar 2018 10:49:22 -0400 Subject: [openstack-dev] Poll: S Release Naming In-Reply-To: <20180313235859.GA14573@localhost.localdomain> References: <20180313235859.GA14573@localhost.localdomain> Message-ID: <20180321144922.GA2922@localhost.localdomain> On Tue, Mar 13, 2018 at 07:58:59PM -0400, Paul Belanger wrote: > Greetings all, > > It is time again to cast your vote for the naming of the S Release. This time > is little different as we've decided to use a public polling option over per > user private URLs for voting. This means, everybody should proceed to use the > following URL to cast their vote: > > https://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_40b95cb2be3fcdf1&akey=8cfdc1f5df5fe4d3 > > Because this is a public poll, results will currently be only viewable by myself > until the poll closes. Once closed, I'll post the URL making the results > viewable to everybody. This was done to avoid everybody seeing the results while > the public poll is running. > > The poll will officially end on 2018-03-21 23:59:59[1], and results will be > posted shortly after. > > [1] http://git.openstack.org/cgit/openstack/governance/tree/reference/release-naming.rst > --- > > According to the Release Naming Process, this poll is to determine the > community preferences for the name of the R release of OpenStack. It is > possible that the top choice is not viable for legal reasons, so the second or > later community preference could wind up being the name. > > Release Name Criteria > > Each release name must start with the letter of the ISO basic Latin alphabet > following the initial letter of the previous release, starting with the > initial release of "Austin". After "Z", the next name should start with > "A" again. > > The name must be composed only of the 26 characters of the ISO basic Latin > alphabet. Names which can be transliterated into this character set are also > acceptable. > > The name must refer to the physical or human geography of the region > encompassing the location of the OpenStack design summit for the > corresponding release. The exact boundaries of the geographic region under > consideration must be declared before the opening of nominations, as part of > the initiation of the selection process. > > The name must be a single word with a maximum of 10 characters. Words that > describe the feature should not be included, so "Foo City" or "Foo Peak" > would both be eligible as "Foo". > > Names which do not meet these criteria but otherwise sound really cool > should be added to a separate section of the wiki page and the TC may make > an exception for one or more of them to be considered in the Condorcet poll. > The naming official is responsible for presenting the list of exceptional > names for consideration to the TC before the poll opens. 
> > Exact Geographic Region > > The Geographic Region from where names for the S release will come is Berlin > > Proposed Names > > Spree (a river that flows through the Saxony, Brandenburg and Berlin states of > Germany) > > SBahn (The Berlin S-Bahn is a rapid transit system in and around Berlin) > > Spandau (One of the twelve boroughs of Berlin) > > Stein (Steinstraße or "Stein Street" in Berlin, can also be conveniently > abbreviated as 🍺) > > Steglitz (a locality in the South Western part of the city) > > Springer (Berlin is headquarters of Axel Springer publishing house) > > Staaken (a locality within the Spandau borough) > > Schoenholz (A zone in the Niederschönhausen district of Berlin) > > Shellhaus (A famous office building) > > Suedkreuz ("southern cross" - a railway station in Tempelhof-Schöneberg) > > Schiller (A park in the Mitte borough) > > Saatwinkel (The name of a super tiny beach, and its surrounding neighborhood) > (The adjective form, Saatwinkler is also a really cool bridge but > that form is too long) > > Sonne (Sonnenallee is the name of a large street in Berlin crossing the former > wall, also translates as "sun") > > Savigny (Common place in City-West) > > Soorstreet (Street in Berlin restrict Charlottenburg) > > Solar (Skybar in Berlin) > > See (Seestraße or "See Street" in Berlin) > A friendly reminder, the naming poll will be closing later today (2018-03-21 23:59:59 UTC). If you haven't done so, please take a moment to vote. Thanks, Paul From eyalb1 at gmail.com Wed Mar 21 14:57:04 2018 From: eyalb1 at gmail.com (Eyal B) Date: Wed, 21 Mar 2018 16:57:04 +0200 Subject: [openstack-dev] [vitrage] Nominating Dong Wenjuan for Vitrage core Message-ID: +2 On 21 March 2018 at 16:37, Afek, Ifat (Nokia - IL/Kfar Sava) < ifat.afek at nokia.com> wrote: > Hi, > > > > I would like to nominate Dong Wenjuan for Vitrage core. > > Wenjuan has been contributing to Vitrage for a long time, since Newton > version. She implemented several important features and has a deep > knowledge of Vitrage architecture. I’m sure she can be a great addition to > our team. > > > > Thanks, > > Ifat. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Mar 21 14:57:17 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 21 Mar 2018 09:57:17 -0500 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1521629342.8587.20.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> Message-ID: <20180321145716.GA23250@sm-xps> On Wed, Mar 21, 2018 at 10:49:02AM +0000, Stephen Finucane wrote: > tl;dr: Make sure you stop using pbr's autodoc feature before converting > them to the new PTI for docs. > > [snip] > > I've gone through and proposed a couple of reverts to fix projects > we've already broken. However, going forward, there are two things > people should do to prevent issues like this popping up. > Unfortunately this will not work to just revert the changes. That may fix things locally, but they will not pass in gate by going back to the old way. Any cases of this will have to actually be updated to not use the unsupported pieces you point out. 
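For anyone doing this conversion, the end state generally looks something like the following tox environment (a sketch; exact flags and paths vary per project):

    [testenv:docs]
    deps = -r{toxinidir}/doc/requirements.txt
    commands = sphinx-build -W -b html doc/source doc/build/html

With the '[build_sphinx]' and '[pbr]' sections dropped from 'setup.cfg', both the gate and local 'tox -e docs' runs then go through the same 'sphinx-build' entry point.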
But the doc builds will still need to be done the way they are now, as that is what the PTI requires at this point. > * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections > from 'setup.cfg' in any patches that aim to convert a project to use > the new PTI. This will ensure the gate catches any potential > issues.  > * In addition, if your project uses the pbr autodoc feature, you > should either (a) remove these docs from your documentation tree or > (b) migrate to something else like the 'sphinx.ext.autosummary' > extension [5]. I aim to post instructions on the latter shortly. > From pkovar at redhat.com Wed Mar 21 15:04:26 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 21 Mar 2018 16:04:26 +0100 Subject: [openstack-dev] [docs] Documentation meeting canceled Message-ID: <20180321160426.6401ea994f65957f96cb5466@redhat.com> Hi all, Apologies but have to cancel today's docs meeting due to a meeting conflict. If you want to talk to the docs team, we're in #openstack-doc, as always! Thanks, pk From sean.mcginnis at gmx.com Wed Mar 21 15:05:13 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 21 Mar 2018 10:05:13 -0500 Subject: [openstack-dev] [tc] [all] TC Report 18-12 In-Reply-To: References: Message-ID: <20180321150512.GB23250@sm-xps> On Tue, Mar 20, 2018 at 11:24:19PM +0000, Chris Dent wrote: > > HTML: https://anticdent.org/tc-report-18-12.html > > This week's TC Report goes off in the weeds a bit with the editorial > commentary from yours truly. I had trouble getting started, so had > to push myself through some thinking by writing stuff that at least > for the last few weeks I wouldn't normally be including in the > summaries. After getting through it, I realized that the reason I > was struggling is because I haven't been including these sorts of > things. Including them results in a longer and more meandering report > but it is more authentically my experience, which was my original > intention. > ++ Thanks for doing this Chris! From dmsimard at redhat.com Wed Mar 21 15:51:43 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Wed, 21 Mar 2018 11:51:43 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-12 In-Reply-To: References: Message-ID: In case people have missed it, Jim Blair sent an email recently to shed some light on where Zuul is headed [1]. [1]: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128396.html David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Tue, Mar 20, 2018 at 7:24 PM, Chris Dent wrote: > > HTML: https://anticdent.org/tc-report-18-12.html > > This week's TC Report goes off in the weeds a bit with the editorial > commentary from yours truly. I had trouble getting started, so had > to push myself through some thinking by writing stuff that at least > for the last few weeks I wouldn't normally be including in the > summaries. After getting through it, I realized that the reason I > was struggling is because I haven't been including these sorts of > things. Including them results in a longer and more meandering report > but it is more authentically my experience, which was my original > intention. 
> > # Zuul Extraction and the Difficult Nature of Communication > > Last [Tuesday > Morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-13.log.html#t2018-03-13T17:22:38) > we had some initial discussion about Zuul being extracted from > OpenStack governance as a precursor to becoming part of the CI/CD > strategic area being born elsewhere in the OpenStack Foundation. > > Then on > [Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:08:06) > we revisited the topic, especially as it related to how we > communicate change in the community and how we invite participation > in making decisions about change. In this case by "community" we're > talking about anything under the giant umbrella of "stuff associated > with the OpenStack Foundation". > > Plenty of people expressed that though they were not surprised by > the change, it was because they are insiders and could understand > how some, who are not, might be surprised by what seemed like a big > change. This led to addressing the immediate shortcomings and > clarifying the history of the event. > > There was also > [concern](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:27:22) > that some of the reluctance to talk openly about the change appeared > to stem from needing to preserve the potency of a Foundation marketing > release. > > I [expressed some > frustration](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:36:50): > "...as usual, we're getting caught up in > details of a particular event (one that in the end we're all happy > to see happen), rather than the general problem we saw with it > (early transparency etc). Solving the immediate problem is easy, but > since we _keep doing it_, we've got a general issues to resolve." > > We went round and round about the various ways in which we have tried > and failed to do good communication in the past, and while we make > some progress, we fail to establish a pattern. As Doug [pointed > out](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-15.log.html#t2018-03-15T15:41:33), > no method can be 100% successful, but if we pick a method and stick to > it, people can learn that method. > > We have a cycle where we not only sometimes communicate poorly but > we also communicate poorly about that poor communication. So when I > come round to another week of writing this report, and am reminded > that these issues persist and I am once again communicating about > them, it's frustrating. Communicating, a lot, is generally a good > thing, but if things don't change as a result, that can be a strain. > If I'm still writing these things in a year's time, and we haven't > managed to achieve at least a bit more grace, consistency, and > transparency in the ways that we share information within and > between groups (including, and maybe especially, the Foundation > executive wing) in the wider community, it will be a shame and I will > have a sad. > > In a somewhat related and good sign, there is [great > thread](http://lists.openstack.org/pipermail/openstack-operators/2018-March/014994.html) > on the operators list that raises the potential of merging the Ops > Meeting and the PTG into some kind of "OpenStack Community Working > Gathering". 
> > # Encouraging Upstream Contribution > > On > [Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-16.log.html#t2018-03-16T14:29:21), > tbarron raised some interesting questions about how the summit talk > selection process might relate to the [four > opens](https://governance.openstack.org/tc/reference/opens.html). The > talk eventually led to a positive plan to try bring some potential > contributors upstream in advance of summit as, well as to work to > create more clear guidelines for track chairs. > > # Executive Power > > I had a question at [this morning's office > hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-20.log.html#t2018-03-20T09:00:00), > related to some work in the API-SIG that hasn't had a lot of traction, > about how best to explain how executive power is gained and spent > in a community where we intentionally spread power around a lot. As > with communication above, this is a topic that comes up a fair > amount, and investigating the underlying patterns can be > instructive. > > My initial reaction on the topic was the fairly standard (but in > different words): If this is important to you, step up and make it > happen. > > I think, however, that when we discuss these things we fail to take > enough account of the nature of OpenStack as a professional open > source environment. Usually, nonhierarchical, consensual > collaborations are found in environments where members represent > their own interests. > > In OpenStack our interactions are sometimes made more complex (and > alienating) by virtue of needing to represent the interests of a > company or other financial interest (including the interest of > keeping our nice job) while at the same time not having the recourse > of being able to complain to someone's boss when they are difficult > (because that boss is part of a different hierarchy than the one you > operate in). We love (rightfully so) the grand project which is > OpenStack, and want to preserve and extend as much as possible the > beliefs in things that make it feel unique, like "influence tokens". > But we must respect that these things are collectively agreed > hallucinations that require regular care and feeding, and balance > them against the surrounding context which is not operating with > those agreements. > > Further, those of us who have leeway to spend time building > influence tokens are operating from a position of privilege. One of > the ways we sustain that position is by behaving as if those tokens > are more readily available to more people than they really are. > > /me wipes brow > > # TC Elections Coming > > The next round of TC elections will be coming up in late April. If > you're thinking about it, but feel like you need more information > about what it might entail, please feel free to contact me. I'm sure > most of the other TC members would be happy to share their thoughts > as well. 
>
> --
> Chris Dent  ٩◔̯◔۶  https://anticdent.org/
> freenode: cdent  tw: @anticdent
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From lvmxhster at gmail.com  Wed Mar 21 15:56:04 2018
From: lvmxhster at gmail.com (=?UTF-8?B?5bCR5ZCI5Yav?=)
Date: Wed, 21 Mar 2018 23:56:04 +0800
Subject: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an
 accelerators info list for more than one host?
Message-ID:

Following up on today's IRC discussion: there is a question about the
weigher. Can Cyborg support a "get list" API that returns accelerator
info for more than one host when weighing?

Sorry, I did not attend the PTG. Was a conclusion reached that the
scheduler weigher will call into the Cyborg REST API for each host
instead of making one REST API call for all hosts? If so, is there some
reason for that?

INFO: http://eavesdrop.openstack.org/meetings/openstack_cyborg/2018/openstack_cyborg.2018-03-21-14.00.log.html

BR
Shaohe Feng

From ed at leafe.com  Wed Mar 21 16:11:18 2018
From: ed at leafe.com (Ed Leafe)
Date: Wed, 21 Mar 2018 11:11:18 -0500
Subject: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an
 accelerators info list for more than one host?
In-Reply-To:
References:
Message-ID: <26CEBF5E-CC1B-4C7E-A8DD-0961DB22EC51@leafe.com>

On Mar 21, 2018, at 10:56 AM, 少合冯 wrote:
>
> Sorry, I did not attend the PTG. Was a conclusion reached that the
> scheduler weigher will call into the Cyborg REST API for each host
> instead of making one REST API call for all hosts? If so, is there
> some reason for that?

By default, hosts are weighed one by one.
You can subclass the BaseWeigher (in nova/weights.py) to weigh all
objects at once.

-- Ed Leafe

From kgiusti at gmail.com  Wed Mar 21 16:14:54 2018
From: kgiusti at gmail.com (Ken Giusti)
Date: Wed, 21 Mar 2018 12:14:54 -0400
Subject: [openstack-dev] [oslo][all] Deprecation Notice: Pika driver for
 oslo.messaging
Message-ID:

Folks,

Last year at the Boston summit the Oslo team decided to deprecate
support for the Pika transport in oslo.messaging, with removal planned
for Rocky [0].

This was announced on the operators list last May [1]. No objections
have been raised to date. We're not aware of any deployments using this
transport, and its removal is not anticipated to affect anyone.

This is notice that the removal is currently underway [2].

Thanks,

[0] https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations
[1] http://lists.openstack.org/pipermail/openstack-operators/2017-May/013579.html
[2] https://review.openstack.org/#/c/536960/

-- Ken Giusti (kgiusti at gmail.com)

From lvmxhster at gmail.com  Wed Mar 21 16:35:26 2018
From: lvmxhster at gmail.com (=?UTF-8?B?5bCR5ZCI5Yav?=)
Date: Thu, 22 Mar 2018 00:35:26 +0800
Subject: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an
 accelerators info list for more than one host?
In-Reply-To: <26CEBF5E-CC1B-4C7E-A8DD-0961DB22EC51@leafe.com>
References: <26CEBF5E-CC1B-4C7E-A8DD-0961DB22EC51@leafe.com>
Message-ID:

2018-03-22 0:11 GMT+08:00 Ed Leafe:

> By default, hosts are weighed one by one. You can subclass the
> BaseWeigher (in nova/weights.py) to weigh all objects at once.

Does that mean it requires calling Cyborg for each host's accelerators
one by one? Pseudo code as follows:

for host in hosts:
    accelerator = cyborg.http_get_accelerator(host)
    do_weight_by_accelerator

instead of calling Cyborg once for all hosts, with pseudo code as
follows:

accelerators = cyborg.http_get_accelerator(hosts)
for acc in accelerators:
    do_weight_by_accelerator

From e0ne at e0ne.info  Wed Mar 21 16:40:30 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 21 Mar 2018 18:40:30 +0200
Subject: [openstack-dev] [horizon] Do we want new meeting time?
Message-ID:

Hi team,

As was discussed at the PTG, we usually get very few participants in
our weekly meetings. I hope this is mostly because the meeting time is
not comfortable for many of us.

Let's try to re-schedule the Horizon weekly meetings and get more
attendees there. I've created a doodle for it [1]. Please vote for the
time that is best for you.

[1] https://doodle.com/poll/ei5gstt73d8v3a35

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

From ed at leafe.com  Wed Mar 21 16:50:33 2018
From: ed at leafe.com (Ed Leafe)
Date: Wed, 21 Mar 2018 11:50:33 -0500
Subject: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an
 accelerators info list for more than one host?
In-Reply-To:
References: <26CEBF5E-CC1B-4C7E-A8DD-0961DB22EC51@leafe.com>
Message-ID: <654BA627-BE09-4F28-86F0-48B0FA2FD62A@leafe.com>

On Mar 21, 2018, at 11:35 AM, 少合冯 wrote:
>
>> By default, hosts are weighed one by one. You can subclass the
>> BaseWeigher (in nova/weights.py) to weigh all objects at once.
>
> Does that mean it requires calling Cyborg for each host's accelerators
> one by one?
> Pseudo code as follows:
>
> for host in hosts:
>     accelerator = cyborg.http_get_accelerator(host)
>     do_weight_by_accelerator
>
> instead of calling Cyborg once for all hosts, with pseudo code as
> follows:
>
> accelerators = cyborg.http_get_accelerator(hosts)
> for acc in accelerators:
>     do_weight_by_accelerator

What it means is that if you override the weigh_objects() method of the
BaseWeigher class, you can make a single call to Cyborg with a list of
all the hosts. That call could then create a list of weights for all
the hosts and return that. So if you have 100 hosts, you don't need to
make 100 calls to Cyborg; only 1.

-- Ed Leafe

From lvmxhster at gmail.com  Wed Mar 21 16:59:07 2018
From: lvmxhster at gmail.com (=?UTF-8?B?5bCR5ZCI5Yav?=)
Date: Thu, 22 Mar 2018 00:59:07 +0800
Subject: [openstack-dev] [Nova] [Cyborg] why cyborg can not support an
 accelerators info list for more than one host?
In-Reply-To: <654BA627-BE09-4F28-86F0-48B0FA2FD62A@leafe.com>
References: <26CEBF5E-CC1B-4C7E-A8DD-0961DB22EC51@leafe.com>
 <654BA627-BE09-4F28-86F0-48B0FA2FD62A@leafe.com>
Message-ID:

got it, thanks.

2018-03-22 0:50 GMT+08:00 Ed Leafe:

> What it means is that if you override the weigh_objects() method of
> the BaseWeigher class, you can make a single call to Cyborg with a
> list of all the hosts. That call could then create a list of weights
> for all the hosts and return that. So if you have 100 hosts, you
> don't need to make 100 calls to Cyborg; only 1.
>
> -- Ed Leafe
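To make the weigh_objects() approach concrete, here is a minimal sketch
of such a weigher. Only BaseWeigher and its weigh_objects() hook come
from nova/weights.py; the CyborgClient helper and its get_accelerators()
call are hypothetical stand-ins for whatever batched REST endpoint
Cyborg ends up exposing.

from nova import weights


class AcceleratorWeigher(weights.BaseWeigher):
    """Weigh all hosts using one batched call to Cyborg."""

    def weigh_objects(self, weighed_obj_list, weight_properties):
        hosts = [weighed.obj.host for weighed in weighed_obj_list]
        # Hypothetical client call: a single HTTP request returning a
        # mapping of host name -> list of accelerators, rather than one
        # request per host.
        accel_by_host = CyborgClient().get_accelerators(hosts)
        # Placeholder policy: more accelerators means a higher weight.
        return [float(len(accel_by_host.get(weighed.obj.host, [])))
                for weighed in weighed_obj_list]

    def _weigh_object(self, host_state, weight_properties):
        # Never called here, since weigh_objects() above computes all
        # weights in one pass; defined only to satisfy the abstract
        # base class.
        raise NotImplementedError()

With 100 hosts this issues a single request to Cyborg instead of 100,
which is exactly the trade-off discussed above.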
From ehabkost at redhat.com  Wed Mar 21 18:00:41 2018
From: ehabkost at redhat.com (Eduardo Habkost)
Date: Wed, 21 Mar 2018 15:00:41 -0300
Subject: [openstack-dev] [libvirt] [virt-tools-list] Project for profiles
 and defaults for libvirt domains
In-Reply-To: <20180320151012.GU4530@redhat.com>
References: <20180320142031.GB23007@wheatley>
 <20180320151012.GU4530@redhat.com>
Message-ID: <20180321180041.GA4245@localhost.localdomain>

On Tue, Mar 20, 2018 at 03:10:12PM +0000, Daniel P. Berrangé wrote:
> On Tue, Mar 20, 2018 at 03:20:31PM +0100, Martin Kletzander wrote:
> > 1) Default devices/values
> >
> > Libvirt itself must default to whatever values there were before any
> > particular element was introduced due to the fact that it strives to
> > keep the guest ABI stable. That means, for example, that it can't just
> > add -vmcoreinfo option (for KASLR support) or magically add the pvpanic
> > device to all QEMU machines, even though it would be useful, as that
> > would change the guest ABI.
> >
> > For default values this is even more obvious. Let's say someone figures
> > out some "pretty good" default values for various HyperV enlightenment
> > feature tunables. Libvirt can't magically change them, but each one of
> > the projects building on top of it doesn't want to keep that list
> > updated and take care of setting them in every new XML. Some projects
> > don't even expose those to the end user as a knob, while others might.
>
> This gets very tricky, very fast.
>
> Let's say that you have an initial good set of hyperv config
> tunables. Now some time passes and it is decided that there is a
> different, better set of config tunables. If the module that is
> providing this policy to apps like OpenStack just updates itself
> to provide this new policy, this can cause problems with the
> existing deployed applications in a number of ways.
>
> First the new config probably depends on specific versions of
> libvirt and QEMU, and you can't mandate to consuming apps which
> versions they must be using. [...]

This is true.

> [...] So you need a matrix of libvirt +
> QEMU + config option settings.

But this is not. If config options need support on the lower levels of
the stack (libvirt and/or QEMU and/or KVM and/or host hardware), it
already has to be represented by libvirt host capabilities somehow, so
management layers know it's available.

This means any new config generation system can (and must) use host(s)
capabilities as input before generating the configuration.

> Even if you have the matching libvirt & QEMU versions, it is not
> safe to assume the application will want to use the new policy.
> An application may need live migration compatibility with older
> versions. Or it may need to retain guaranteed ABI compatibility
> with the way the VM was previously launched and be using transient
> guests, generating the XML fresh each time.

Why is that a problem? If you want live migration or ABI guarantees,
you simply don't use this system to generate a new configuration. The
same way you don't use the "pc" machine-type if you want to ensure
compatibility with existing VMs.

> The application will have knowledge about when it wants to use new
> vs old hyperv tunable policy, but exposing that to your policy module
> is very tricky because it is inherently application specific logic
> largely determined by the way the application code is written.

We have a huge set of features where this is simply not a problem. For
most virtual hardware features, enabling them is not even a policy
decision: it's just about telling the guest that the feature is now
available. QEMU has been enabling new features in the "pc" machine-type
for years.

Now, why can't higher layers in the stack do something similar?

The proposal is equivalent to what already happens when people use the
"pc" machine-type in their configurations, but:
1) the new defaults/features wouldn't be hidden behind an opaque
   machine-type name, and would appear in the domain XML explicitly;
2) the higher layers won't depend on QEMU introducing a new
   machine-type just to have new features enabled by default;
3) features that depend on host capabilities but are available on all
   hosts in a cluster can now be enabled automatically if desired
   (which is something QEMU can't do because it doesn't have enough
   information about the other hosts).

Choosing reasonable defaults might not be a trivial problem, but the
current approach of pushing the responsibility to management layers
doesn't improve the situation.

[...]
> > 2) Policies
[...]
> > 3) Abstracting the XML
[...]
> > 4) Identifying devices properly
[...]
> > 5) Generating the right XML snippet for device hot-(un)plug
[...]

These parts are trickier and I need to read the discussion more
carefully before replying.

-- Eduardo

From ehabkost at redhat.com  Wed Mar 21 19:34:23 2018
From: ehabkost at redhat.com (Eduardo Habkost)
Date: Wed, 21 Mar 2018 16:34:23 -0300
Subject: [openstack-dev] [libvirt] [virt-tools-list] Project for profiles
 and defaults for libvirt domains
In-Reply-To: <20180321183952.GX8551@redhat.com>
References: <20180320142031.GB23007@wheatley>
 <20180320151012.GU4530@redhat.com>
 <20180321180041.GA4245@localhost.localdomain>
 <20180321183952.GX8551@redhat.com>
Message-ID: <20180321193423.GA3417@localhost.localdomain>

On Wed, Mar 21, 2018 at 06:39:52PM +0000, Daniel P. Berrangé wrote:
> On Wed, Mar 21, 2018 at 03:00:41PM -0300, Eduardo Habkost wrote:
> > On Tue, Mar 20, 2018 at 03:10:12PM +0000, Daniel P. Berrangé wrote:
> > > On Tue, Mar 20, 2018 at 03:20:31PM +0100, Martin Kletzander wrote:
> > > > 1) Default devices/values
> > > >
> > > > Libvirt itself must default to whatever values there were before any
> > > > particular element was introduced due to the fact that it strives to
> > > > keep the guest ABI stable.
> > > > That means, for example, that it can't just add -vmcoreinfo
> > > > option (for KASLR support) or magically add the pvpanic device
> > > > to all QEMU machines, even though it would be useful, as that
> > > > would change the guest ABI.
> > > >
> > > > For default values this is even more obvious. Let's say someone
> > > > figures out some "pretty good" default values for various HyperV
> > > > enlightenment feature tunables. Libvirt can't magically change
> > > > them, but each one of the projects building on top of it doesn't
> > > > want to keep that list updated and take care of setting them in
> > > > every new XML. Some projects don't even expose those to the end
> > > > user as a knob, while others might.
> > >
> > > This gets very tricky, very fast.
> > >
> > > Let's say that you have an initial good set of hyperv config
> > > tunables. Now some time passes and it is decided that there is a
> > > different, better set of config tunables. If the module that is
> > > providing this policy to apps like OpenStack just updates itself
> > > to provide this new policy, this can cause problems with the
> > > existing deployed applications in a number of ways.
> > >
> > > First the new config probably depends on specific versions of
> > > libvirt and QEMU, and you can't mandate to consuming apps which
> > > versions they must be using. [...]
> >
> > This is true.
> >
> > > [...] So you need a matrix of libvirt +
> > > QEMU + config option settings.
> >
> > But this is not. If config options need support on the lower
> > levels of the stack (libvirt and/or QEMU and/or KVM and/or host
> > hardware), it already has to be represented by libvirt host
> > capabilities somehow, so management layers know it's available.
> >
> > This means any new config generation system can (and must) use
> > host(s) capabilities as input before generating the
> > configuration.
>
> I don't think it is that simple. The capabilities reflect what the
> current host is capable of only, not whether it is desirable to
> actually use them. Just because a host reports that it has q35-2.11.0
> machine type doesn't mean that it should be used. The mgmt app may
> only wish to use that if it is available on all hosts in a particular
> grouping. The config generation library can't query every host directly
> to determine this. The mgmt app may have a way to collate capabilities
> info from hosts, but it is probably then stored in an app specific
> format and data source, or it may just end up being a global config
> parameter to the mgmt app per host.

In other words, you need host capabilities from all hosts as input when
generating a new config XML. We already have a format to represent host
capabilities defined by libvirt; users of the new system would just
need to reproduce the data they got from libvirt and give it to the
config generator. Not completely trivial, but maybe worth the effort if
you want to benefit from work done by other people to find good
defaults?

> There have been a number of times where a feature is available in
> libvirt and/or QEMU, and the mgmt app may still not wish to use it
> because it is known broken / incompatible with certain usage
> patterns. So the mgmt app would require an arbitrarily newer
> libvirt/qemu before considering using it, regardless of whether host
> capabilities report it is available.

If this happens sometimes, why is it better for the teams maintaining
management layers to duplicate the work of finding what works, instead
of solving the problem only once?
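As a rough illustration of "host capabilities from all hosts as input":
the sketch below gathers the host CPU feature list from each node using
the standard libvirt Python bindings and intersects them. The URIs and
the intersection-only policy are invented for the example; a real
generator would also have to compare CPU models, machine types, and
software versions.

import xml.etree.ElementTree as ET

import libvirt


def host_cpu_features(uri):
    # getCapabilities() returns the host capabilities XML document,
    # which lists the host CPU's features for this one machine.
    conn = libvirt.open(uri)
    try:
        caps = ET.fromstring(conn.getCapabilities())
        return {feat.get('name')
                for feat in caps.findall('./host/cpu/feature')}
    finally:
        conn.close()


# Only features present on every host are safe to enable by default in
# a cluster that needs live migration between arbitrary hosts.
uris = ['qemu+ssh://host1/system', 'qemu+ssh://host2/system']
common = set.intersection(*(host_cpu_features(u) for u in uris))

A config generator fed this intersection could then emit the matching
<feature policy='require' name='...'/> elements in the generated domain
XML, which is essentially point 3) of the proposal quoted above.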
> > > Even if you have the matching libvirt & QEMU versions, it is not
> > > safe to assume the application will want to use the new policy.
> > > An application may need live migration compatibility with older
> > > versions. Or it may need to retain guaranteed ABI compatibility
> > > with the way the VM was previously launched and be using transient
> > > guests, generating the XML fresh each time.
> >
> > Why is that a problem? If you want live migration or ABI
> > guarantees, you simply don't use this system to generate a new
> > configuration. The same way you don't use the "pc" machine-type
> > if you want to ensure compatibility with existing VMs.
>
> In many mgmt apps, every VM potentially needs live migration, so
> unless I'm misunderstanding, you're effectively saying don't ever
> use this config generator in these apps.

If you only need live migration, you can choose between:
a) not using it;
b) using an empty host capability list as input when generating the
   XML (maybe this would be completely useless, but it's still an
   option);
c) using only host _software_ capabilities as input, if you control
   the software that runs on all hosts;
d) using an intersection of the software+host capabilities of all
   hosts as input.

If you care about 100% static guest ABI (not just live migration), you
either generate the XML once and save it for later, or you don't use
the config generation system. (IOW, the same limitations as the "pc"
machine-type alias.)

> > > The application will have knowledge about when it wants to use new
> > > vs old hyperv tunable policy, but exposing that to your policy module
> > > is very tricky because it is inherently application specific logic
> > > largely determined by the way the application code is written.
> >
> > We have a huge set of features where this is simply not a
> > problem. For most virtual hardware features, enabling them is
> > not even a policy decision: it's just about telling the guest
> > that the feature is now available. QEMU has been enabling new
> > features in the "pc" machine-type for years.
> >
> > Now, why can't higher layers in the stack do something similar?
> >
> > The proposal is equivalent to what already happens when people
> > use the "pc" machine-type in their configurations, but:
> > 1) the new defaults/features wouldn't be hidden behind an opaque
> >    machine-type name, and would appear in the domain XML
> >    explicitly;
> > 2) the higher layers won't depend on QEMU introducing a new
> >    machine-type just to have new features enabled by default;
> > 3) features that depend on host capabilities but are available on
> >    all hosts in a cluster can now be enabled automatically if
> >    desired (which is something QEMU can't do because it doesn't
> >    have enough information about the other hosts).
> >
> > Choosing reasonable defaults might not be a trivial problem, but
> > the current approach of pushing the responsibility to management
> > layers doesn't improve the situation.
>
> The simple cases have been added to the "pc" machine type, but
> more complex cases have not been dealt with as they often require
> contextual knowledge of either the host setup or the guest OS
> choice.

Exactly. But in how many of those cases does the decision require
knowledge that is specific to the management stack being used (like the
ones you listed below), and how many are decisions that could be made
by simply looking at the host software/hardware and guest OS? I am
under the impression that we have a reasonable number of cases of the
latter.
The ones I remember all relate to CPU configuration:
* Automatically enabling useful CPU features when they are available
  on all hosts;
* Always enabling check='full' by default.

Do we have other examples?

> We had a long debate over the best aio=threads,native setting for
> OpenStack. Understanding the right defaults required knowledge about
> the various different ways that Nova would setup its storage stack.
> We certainly know enough now to be able to provide good recommendations
> for the choice, with perf data to back it up, but interpreting those
> recommendations still requires the app specific knowledge about its
> storage mgmt approach, so ends up being code dev work.
>
> Another case is the pvpanic device - while in theory that could
> have been enabled by default for all guests, by QEMU or a config
> generator library, doing so is not useful on its own. The hard
> bit of the work is adding code to the mgmt app to choose the
> action for when pvpanic triggers, and code to handle the results
> of that action.
>
> Regards,
> Daniel
> --
> |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o- https://fstop138.berrange.com :|
> |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

-- Eduardo

From doug at doughellmann.com  Wed Mar 21 20:02:06 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 21 Mar 2018 16:02:06 -0400
Subject: [openstack-dev] [all][requirements] a plan to stop syncing
 requirements into projects
In-Reply-To: <1521110096-sup-3634@lrrr.local>
References: <1521110096-sup-3634@lrrr.local>
Message-ID: <1521662425-sup-1628@lrrr.local>

Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400:
>
> TL;DR
> -----
>
> Let's stop copying exact dependency specifications into all our
> projects to allow them to reflect the actual versions of things
> they depend on. The constraints system in pip makes this change
> safe. We still need to maintain some level of compatibility, so the
> existing requirements-check job (run for changes to requirements.txt
> within each repo) will change a bit rather than going away completely.
> We can enable unit test jobs to verify the lower constraint settings
> at the same time that we're doing the other work.

The new job definition is in https://review.openstack.org/555034 and I
have updated the oslo.config patch I mentioned before to use the new
job instead of one defined in the oslo.config repo (see
https://review.openstack.org/550603). I'll wait for that job patch to
be reviewed and approved before I start adding the job to a bunch of
other repositories.

Doug

From mordred at inaugust.com  Wed Mar 21 20:44:39 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 21 Mar 2018 15:44:39 -0500
Subject: [openstack-dev] [sdk] git repo rename and storyboard migration
Message-ID: <2104b95d-fb8b-7486-6c1c-8330296fd23b@inaugust.com>

Hey everybody!

This upcoming Friday we're scheduled to complete the transition from
python-openstacksdk to openstacksdk. This was started a while back (Tue
Jun 16 12:05:38 2015, to be exact) by changing the name of what gets
published to PyPI. Renaming the repo gets those two back in line (and
removes a hack in devstack that deals with them not being the same).

Since this is a repo rename, it means that local git remotes will need
to be updated. This can be done either by changing urls in .git/config -
or by just re-cloning.
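For anyone updating an existing clone in place rather than re-cloning,
something along these lines should work (assuming the repo stays under
the openstack/ namespace after the rename):

# Point the existing remote at the renamed repository:
git remote set-url origin https://git.openstack.org/openstack/openstacksdk
# Verify the new URL took effect:
git remote -v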
Once that's done, we'll be in a position to migrate to storyboard.
shade is already over there, which means we're currently split between
storyboard and launchpad for the openstacksdk team repos.

diablo_rojo has done a test migration and we're good to go there - so
I'm thinking either Friday post-repo rename - or sometime early next
week. Any thoughts or opinions?

This will migrate bugs from launchpad for python-openstacksdk and
os-client-config.

Thanks!
Monty

From mordred at inaugust.com  Wed Mar 21 20:49:07 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 21 Mar 2018 15:49:07 -0500
Subject: [openstack-dev] [devstack] stable/queens: How to configure devstack
 to use openstacksdk===0.11.3 and os-service-types===1.1.0
In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E96280FC1@ALA-MBD.corp.ad.wrs.com>
References: <47EFB32CD8770A4D9590812EE28C977E96280FC1@ALA-MBD.corp.ad.wrs.com>
Message-ID: <3773bbba-964d-51d6-42d8-d081546a39fc@inaugust.com>

On 03/16/2018 09:29 AM, Kwan, Louie wrote:
> In the stable/queens branch, since openstacksdk0.11.3 and os-service-types1.1.0 are described in openstack's upper-constraints.txt,
>
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297
>
> If I do
>
>> git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
>
> And then stack.sh
>
> We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0
>
> Having said that, we need the older version, how to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

Would you mind sharing why you need the older versions?

os-service-types is explicitly designed such that the latest version
should always be correct. If there is something in 1.2.0 that has
broken you in some way that you need an older version, that's a
problem and we should look into it.

The story is intended to be similar for sdk moving forward ... but
we're still pre-1.0, so that makes sense at the moment. I'm still
interested in what specific issue you had, just to make sure we're
aware of issues people are having.

Thanks!
Monty

From iwienand at redhat.com  Wed Mar 21 21:11:41 2018
From: iwienand at redhat.com (Ian Wienand)
Date: Thu, 22 Mar 2018 08:11:41 +1100
Subject: [openstack-dev] [tripleo][infra][dib] Gate "out of disk" errors
 and diskimage-builder 2.12.0
In-Reply-To: <3008b3e9-47c2-077c-7acd-5a850b004e21@redhat.com>
References: <3008b3e9-47c2-077c-7acd-5a850b004e21@redhat.com>
Message-ID: <45324f81-2aaa-232c-604a-7aee714b7292@redhat.com>

On 03/21/2018 03:39 PM, Ian Wienand wrote:
> We will prepare dib 2.12.1 with the fix. As usual there are
> complications, since the dib gate is broken due to unrelated triple-o
> issues [2]. In the mean time, probably avoid 2.12.0 if you can.

> [2] https://review.openstack.org/554705

Since we are having issues getting this verified due to some
instability in the tripleo gate, I've proposed a temporary removal of
the jobs for dib in [1].

[1] https://review.openstack.org/555037

From kennelson11 at gmail.com  Wed Mar 21 22:14:47 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Wed, 21 Mar 2018 22:14:47 +0000
Subject: [openstack-dev] [PTLS] Project Updates & Project Onboarding
Message-ID:

Hello!

Project Updates[1] & Project Onboarding[2] sessions are now live on the
schedule! We did as best we could to keep project onboarding sessions
adjacent to project update slots.
Though, given the differences in duration and the number of each we
have per day, that got increasingly difficult as the days went on;
hopefully what is there will work for everyone.

If there are any speakers you need added to your slots, or any
conflicts you need addressed, feel free to email
speakersupport at openstack.org and they should be able to help you
out.

Thanks!

-Kendall Nelson (diablo_rojo)

[1] https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Update
[2] https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Onboarding

From gmann at ghanshyammann.com  Wed Mar 21 23:57:30 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 22 Mar 2018 08:57:30 +0900
Subject: [openstack-dev] [nova] Rocky spec review day
In-Reply-To:
References: <76917686-697d-9471-a298-90364323c5f8@gmail.com>
Message-ID:

On Wed, Mar 21, 2018 at 10:33 PM, Sylvain Bauza wrote:
>
> On Wed, Mar 21, 2018 at 2:12 PM, Eric Fried wrote:
>>
>> +1 for the-earlier-the-better, for the additional reason that, if we
>> don't finish, we can do another one in time for spec freeze.
>
> +1 for Wed 27th March.
>
>> And I, for one, wouldn't be offended if we could "officially start
>> development" (i.e. focus on patches, start runways, etc.) before the
>> mystical but arbitrary spec freeze date.
>
> Sure, but given we have a lot of specs to review, TBH it'll be possible
> for me to look at implementation patches only close to the 1st milestone.
>
>> On 03/20/2018 07:29 PM, Matt Riedemann wrote:
>> > On 3/20/2018 6:47 PM, melanie witt wrote:
>> >> I was thinking that 2-3 weeks ahead of spec freeze would be
>> >> appropriate, so that would be March 27 (next week) or April 3 if we do
>> >> it on a Tuesday.

+1 for either one. I think we had enough time to update/push specs
after the PTG, and doing it 2-3 weeks ahead of spec freeze is always
helpful.

>> >
>> > It's spring break here on April 3 so I'll be listening to screaming
>> > kids, I mean on vacation. Not that my schedule matters, just FYI.
>> >
>> > But regardless of that, I think the earlier the better to flush out
>> > what's already there, since we've already approved quite a few
>> > blueprints this cycle (32 so far).

-gmann

From pabelanger at redhat.com  Thu Mar 22 00:32:38 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Wed, 21 Mar 2018 20:32:38 -0400
Subject: [openstack-dev] OpenStack "S" Release Naming Preliminary Results
Message-ID: <20180322003238.GB14691@localhost.localdomain>

Hello all!

We decided to run a public poll this time around; we'll likely discuss
the process during a TC meeting, but we'd love to hear your feedback.

The raw results are below - however ... **PLEASE REMEMBER** that these
now have to go through legal vetting.
So it is too soon to say 'OpenStack Solar' is our next release, given
that previous polls have had some issues with the top choice. In any
case, the names will be sent off to legal for vetting. As soon as we
have a final winner, I'll let you all know.

https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_40b95cb2be3fcdf1&rkey=c04ca6bca83a1427

Result

1. Solar (Condorcet winner: wins contests with all other choices)
2. Stein loses to Solar by 159–138
3. Spree loses to Solar by 175–122, loses to Stein by 148–141
4. Sonne loses to Solar by 190–99, loses to Spree by 174–97
5. Springer loses to Solar by 214–60, loses to Sonne by 147–103
6. Spandau loses to Solar by 195–88, loses to Springer by 125–118
7. See loses to Solar by 203–61, loses to Spandau by 121–111
8. Schiller loses to Solar by 207–70, loses to See by 112–106
9. SBahn loses to Solar by 212–74, loses to Schiller by 111–101
10. Staaken loses to Solar by 219–59, loses to SBahn by 115–89
11. Shellhaus loses to Solar by 213–61, loses to Staaken by 94–85
12. Steglitz loses to Solar by 216–50, loses to Shellhaus by 90–83
13. Saatwinkel loses to Solar by 219–55, loses to Steglitz by 96–57
14. Savigny loses to Solar by 219–51, loses to Saatwinkel by 77–76
15. Schoenholz loses to Solar by 221–46, loses to Savigny by 78–70
16. Suedkreuz loses to Solar by 220–50, loses to Schoenholz by 68–67
17. Soorstreet loses to Solar by 226–32, loses to Suedkreuz by 75–58

- Paul

From kobayashi.hiroaki at lab.ntt.co.jp  Thu Mar 22 00:42:22 2018
From: kobayashi.hiroaki at lab.ntt.co.jp (Hiroaki Kobayashi)
Date: Thu, 22 Mar 2018 09:42:22 +0900
Subject: [openstack-dev] [Blazar] Nominating Bertrand Souville to Blazar core
In-Reply-To: <84717E64-73B9-481C-AADD-8FE1D5412128@uchicago.edu>
References: <68dbaf1f-6546-9efe-e98b-8ffd23c1b117@lab.ntt.co.jp>
 <84717E64-73B9-481C-AADD-8FE1D5412128@uchicago.edu>
Message-ID: <6e641f5b-64d2-263d-41c9-dc7738fcca1c@lab.ntt.co.jp>

>> Hi Blazar folks,
>>
>> I'd like to nominate Bertrand Souville to blazar core team. He has been
>> involved in the project since the Ocata release. He has worked on NFV
>> usecase, gap analysis and feedback in OPNFV and ETSI NFV as well as in
>> Blazar itself. Additionally, he has reviewed not only Blazar repository
>> but Blazar related repository with nice long-term perspective.
>>
>> I believe he would make the project much nicer.
>>
>> best regards,
>> Masahito
>
> +1

+1

From rochelle.grober at huawei.com  Thu Mar 22 01:05:42 2018
From: rochelle.grober at huawei.com (Rochelle Grober)
Date: Thu, 22 Mar 2018 01:05:42 +0000
Subject: [openstack-dev] Adding "not docs" banner to specs website?
In-Reply-To: <1521490029-sup-9732@lrrr.local>
References: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org>
 <1521490029-sup-9732@lrrr.local>
Message-ID:

It could be *really* useful if you could include the date (month/year
would be good enough) of the last significant patch (not including the
reformat to openstackdocstheme). That could give folks a great stick in
the mud for what "past" is for the spec. It might even incent some to
see if there are newer, conflicting or enhancing specs or docs to
reference.

--Rocky

> Doug Hellmann wrote:
> > Excerpts from Jim Rollenhagen's message of 2018-03-19 19:06:38 +0000:
> > On Mon, Mar 19, 2018 at 3:46 PM, Jeremy Stanley wrote:
> > > On 2018-03-19 14:57:58 +0000 (+0000), Jim Rollenhagen wrote:
> > > [...]
> > > > What do folks think about a banner at the top of the specs website
> > > > (or each individual spec) that points this out? I'm happy to do
> > > > the work if we agree it's a good thing to do.
> > > [...]
> > >
> > > Sounds good in principle, but the execution may take a bit of work.
> > > Specs sites are independently generated Sphinx documents stored in
> > > different repositories managed by different teams, and don't
> > > necessarily share a common theme or configuration.
I'm happy to do > > > > the work if we agree it's a good thing to do. > > > [...] > > > > > > Sounds good in principle, but the execution may take a bit of work. > > > Specs sites are independently generated Sphinx documents stored in > > > different repositories managed by different teams, and don't > > > necessarily share a common theme or configuration. > > > > > > Huh, I had totally thought there was a theme for the specs site that > > most/all projects use. I may try to accomplish this anyway, but will > > likely be more work that I thought. I'll poke around at options (small > > sphinx plugin, etc). > > We want them all to use the openstackdocstheme so you could look into > creating a "subclass" of that one with the extra content in the header, then > ensure all of the specs repos use it. We would have to land a small patch to > trigger a rebuild, but the patch switching them from oslosphinx to > openstackdocstheme would serve for that and a small change to the readme > or another file would do it for any that are already using the theme. > > Doug > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From wangpeihuixyz at 126.com Thu Mar 22 02:25:11 2018 From: wangpeihuixyz at 126.com (Frank Wang) Date: Thu, 22 Mar 2018 10:25:11 +0800 (CST) Subject: [openstack-dev] =?gbk?q?=5Bneutron=5DDoes_neutron-server_support_?= =?gbk?q?the_main_backup_redundancy=A3=BF?= In-Reply-To: References: <55403b04.809.16245faa585.Coremail.wangpeihuixyz@126.com> Message-ID: <48414ae5.286b.1624b863cea.Coremail.wangpeihuixyz@126.com> Thanks for your response, another question is Does the compute nodes or agents know how many neutron-servers running? I mean If there was a server corrupt, they will automatically connect to other servers? Thanks, At 2018-03-21 18:14:47, "Miguel Angel Ajo Pelayo" wrote: You can run as many as you want, generally an haproxy is used in front of them to balance load across neutron servers. Also, keep in mind, that the db backend is a single mysql, you can also distribute that with galera. That is the configuration you will get by default when you deploy in HA with RDO/TripleO or OSP/Director. On Wed, Mar 21, 2018 at 3:34 AM Kevin Benton wrote: You can run as many neutron server processes as you want in an active/active setup. On Tue, Mar 20, 2018, 18:35 Frank Wang wrote: Hi All, As far as I know, neutron-server only can be a single node, In order to improve the reliability of the system, Does it support the main backup or active/active redundancy? Any comment would be appreciated. Thanks, __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
From melwittt at gmail.com  Thu Mar 22 04:48:40 2018
From: melwittt at gmail.com (melanie witt)
Date: Wed, 21 Mar 2018 21:48:40 -0700
Subject: [openstack-dev] [nova] Rocky spec review day
In-Reply-To:
References:
Message-ID: <960aea25-5e90-1423-ad51-c8016de3e967@gmail.com>

On Tue, 20 Mar 2018 16:47:58 -0700, Melanie Witt wrote:
> The past several cycles, we've had a spec review day in the cycle where
> reviewers focus on specs and iterating quickly with spec authors for the
> day. Spec freeze is April 19 so I wanted to get some input from all of
> you about what day would work best for a spec review day.
>
> I was thinking that 2-3 weeks ahead of spec freeze would be appropriate,
> so that would be March 27 (next week) or April 3 if we do it on a Tuesday.

Thanks to all who replied on the thread. There was consensus that
earlier is better, so let's do the spec review day next week: Tuesday
March 27.

Best,
-melanie

From j.harbott at x-ion.de  Thu Mar 22 07:51:44 2018
From: j.harbott at x-ion.de (Jens Harbott)
Date: Thu, 22 Mar 2018 08:51:44 +0100
Subject: [openstack-dev] [sdk] git repo rename and storyboard migration
In-Reply-To: <2104b95d-fb8b-7486-6c1c-8330296fd23b@inaugust.com>
References: <2104b95d-fb8b-7486-6c1c-8330296fd23b@inaugust.com>
Message-ID:

2018-03-21 21:44 GMT+01:00 Monty Taylor:
> Hey everybody!
>
> This upcoming Friday we're scheduled to complete the transition from
> python-openstacksdk to openstacksdk. [...]
>
> Once that's done, we'll be in a position to migrate to storyboard.
> shade is already over there, which means we're currently split between
> storyboard and launchpad for the openstacksdk team repos.
>
> diablo_rojo has done a test migration and we're good to go there - so
> I'm thinking either Friday post-repo rename - or sometime early next
> week. Any thoughts or opinions?
>
> This will migrate bugs from launchpad for python-openstacksdk and
> os-client-config.

IMO this list is still much too long [0] and I expect it will make
dealing with the long backlog even more tedious if the bugs are moved.
Also there are lots of issues that intersect between sdk and
python-openstackclient, so moving both at the same time would also
sound reasonable.

[0] https://storyboard.openstack.org/#!/story/list?status=active&tags=blocking-storyboard-migration

From slawek at kaplonski.pl  Thu Mar 22 07:52:08 2018
From: slawek at kaplonski.pl (Sławomir Kapłoński)
Date: Thu, 22 Mar 2018 08:52:08 +0100
Subject: [openstack-dev] [neutron] Does neutron-server support the main
 backup redundancy?
In-Reply-To: <48414ae5.286b.1624b863cea.Coremail.wangpeihuixyz@126.com>
References: <55403b04.809.16245faa585.Coremail.wangpeihuixyz@126.com>
 <48414ae5.286b.1624b863cea.Coremail.wangpeihuixyz@126.com>
Message-ID:

Hi,

Neutron agents communicate with the server through RPC messages, so
agents don't need to know how many servers are on the other end of the
queue to consume messages. If you start a new neutron-server, it will
connect to the same rabbitmq and the same DB as the other servers and
will start processing messages.
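In other words, scaling out is mostly a matter of every neutron-server
pointing at the same message bus and database. Schematically, and with
placeholder hostnames and credentials, each server's neutron.conf would
contain something like:

[DEFAULT]
# All neutron-servers and agents share one (possibly clustered) RabbitMQ.
transport_url = rabbit://openstack:secret@rabbit-host:5672/

[database]
# And one (possibly Galera-replicated) MySQL database.
connection = mysql+pymysql://neutron:secret@db-host/neutron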
If your new server should also process API requests, you should put
your neutron servers behind some load balancer and add the new server
as a backend in your farm.

—
Best regards
Slawek Kaplonski
slawek at kaplonski.pl

> On 22.03.2018, at 03:25, Frank Wang wrote:
>
> Thanks for your response. Another question: do the compute nodes or
> agents know how many neutron-servers are running? I mean, if a server
> goes down, will they automatically connect to the other servers?
>
> Thanks,
>
> At 2018-03-21 18:14:47, "Miguel Angel Ajo Pelayo" wrote:
> You can run as many as you want; generally an haproxy is used in
> front of them to balance load across the neutron servers.
>
> Also, keep in mind that the DB backend is a single MySQL; you can
> also distribute that with Galera.
>
> That is the configuration you will get by default when you deploy in
> HA with RDO/TripleO or OSP/Director.
>
> On Wed, Mar 21, 2018 at 3:34 AM Kevin Benton wrote:
> You can run as many neutron server processes as you want in an
> active/active setup.
>
> On Tue, Mar 20, 2018, 18:35 Frank Wang wrote:
> Hi All,
> As far as I know, neutron-server can only be a single node. In order
> to improve the reliability of the system, does it support main/backup
> or active/active redundancy? Any comment would be appreciated.
>
> Thanks,

From slawek at kaplonski.pl  Thu Mar 22 08:12:52 2018
From: slawek at kaplonski.pl (Sławomir Kapłoński)
Date: Thu, 22 Mar 2018 09:12:52 +0100
Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug
 option at runtime
Message-ID:

Hi,

I took care of the implementation of [1] in Neutron and I have a couple
of questions about this goal.

1. Should we only change "restart_method" to mutate, as described in
[2]? I already did something like that in [3] - is that what is
expected?

2. How can I check that this change works and that config options are
actually mutable? For now, when I change any config option for any of
the neutron agents and send SIGHUP to it, the agent is in fact
"restarted" and the config is reloaded even with the old restart
method.

3. Should we also add automatic tests for such a change? Are there
maybe examples of such tests in other projects?

[1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
[2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html
[3] https://review.openstack.org/#/c/554259/

—
Best regards
Slawek Kaplonski
slawek at kaplonski.pl
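On question 1, the pattern from [2] boils down to two pieces:
registering options with mutable=True, and launching the service with
restart_method='mutate', so that SIGHUP triggers mutate_config_files()
instead of a full restart. A minimal sketch (the service subclass and
the option name are placeholders, not Neutron code):

from oslo_config import cfg
from oslo_service import service

CONF = cfg.CONF
CONF.register_opts([
    # Only options registered with mutable=True are updated on SIGHUP;
    # everything else keeps its old value until a real restart.
    cfg.BoolOpt('chatty_probes', default=False, mutable=True),
])


class MyService(service.Service):
    # A do-nothing service; a real agent would override start()/stop().
    pass


launcher = service.launch(CONF, MyService(), restart_method='mutate')
launcher.wait()

The goal itself relies on oslo.log registering the debug option as
mutable, which recent oslo.log releases do.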
From kevin at benton.pub  Thu Mar 22 08:48:57 2018
From: kevin at benton.pub (Kevin Benton)
Date: Thu, 22 Mar 2018 03:48:57 -0500
Subject: [openstack-dev] [neutron] Prevent ARP spoofing
In-Reply-To:
References:
Message-ID:

I understand. I think you will likely need to run a bit of custom code
for Pike if you want to override the default behavior of the port
security extension. You should be able to use something like the
following (untested) code as a service plugin. Install it somewhere on
the server and then put the import path to it in the service_plugins
configuration.

from neutron_lib.api.definitions import port_security
from neutron_lib.callbacks import events
from neutron_lib.callbacks import registry
from neutron_lib.callbacks import resources
from neutron_lib.services import base


@registry.has_registry_receivers
class NetdefaultPortSecurity(base.ServicePluginBase):

    @registry.receives(resources.NETWORK, [events.BEFORE_CREATE])
    def force_default_portsec_false(self, rtype, event, trigger, context,
                                    network, **kwargs):
        network[port_security.PORTSECURITY] = False

    @classmethod
    def get_plugin_type(cls):
        return 'portsecdefaultoverride'

    def get_plugin_description(self):
        return "workaround"

To have this fixed in future versions I suggest filing an RFE to allow
all security to be disabled completely if the port security extension
isn't loaded.

On Mon, Mar 19, 2018 at 9:34 AM, Vadim Ponomarev wrote:

> If I understood correctly, you talk about rules which are generated by
> security_group extension as default from the fixed_ips +
> allowed_address_pairs list. In our openstack installation we disabled the
> security_group and the allowed_address_pairs extensions to simplify the
> configuration the HA clusters.
>
> Currently we configure the neutron as follows:
> 1. prevent_arp_spoofing = False
> 2. disable security_group extension
> 3. disable allowed_address_pairs extension
>
> Actually, if port_security will be like a "central regulator" which
> controll all mechanisms, it's perfectly in our case. But, we will lose
> flexibility, because we can't changed default value for this option. And,
> even if we disable the port_security extension in the neutron, the prevent
> ARP-spoofing mechanism will work as default [1].
>
> It's very important question, how do we may disable globally the prevent
> ARP spoofing in the Pike release? To create all networks without specifying
> an option port_security_enabled=False.
>
> Changes that were proposed by Tatiana, just let save current flexability.
>
> [1] https://github.com/openstack/neutron/blob/stable/pike/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L907
>
> 2018-03-19 16:24 GMT+03:00 Kevin Benton:
>
>> Disabling ARP spoofing protection alone will not let the standby
>> instance source traffic using the active instance's IP. IP filtering
>> rules independent of ARP enforcement rules ensure the source IP is in
>> the fixed_ips or allowed_address_pairs.
>>
>> Are you already using allowed address pairs to add the shared IP to
>> both?
>>
>> On Mon, Mar 19, 2018, 22:13 Vadim Ponomarev wrote:
>>
>>> Yes, there's really a need for mechanisms of high availability like
>>> corosync, vrrp etc.
Another simple example: we have two servers with the >>> active/standby HA configuration (for example keepalived + haproxy) and we >>> have third-party monitoring system for these servers. The monitoring system >>> gets some load metrics and when one of the servers is unavailable, the >>> monitoring system scales architecture (adds new server to cluster) in this >>> way saving the HA architecture. In your case, this monitoring system must >>> do the following steps: create new instance, add new instance's MAC address >>> to allowed_address_pairs and only after that reconfigure all other nodes. >>> Otherwise cluster will not work. The solution to the problem is simple - >>> disable the prevent ARP spoofing mechnism. >>> >>> Ok, we may used port_security options for this network with the HA >>> cluster. For this case we must reconfigure our monitoring systems, create >>> allowed_address_pairs for all current servers and (it's hardest) train our >>> users how that done. >>> >>> Currently, we don't use the prevent ARP spoofing option >>> (prevent_arp_spoofing = False) and honestly I don't understand why this >>> option is enabled as default in private networks. Each such network belongs >>> to one user, who controls all instances. Who would decide to perform a MITM >>> attack in his own network? >>> >>> 2018-03-19 12:53 GMT+03:00 Kevin Benton : >>> >>>> Do you need to spoof arbitrary addresses? If not (i.e. a set you know >>>> ahead of time), you can put entries in the allowed_address_pairs field of >>>> the port that will allow you to send traffic using other MAC/IPs. >>>> >>>> On Mar 19, 2018 8:42 PM, "Vadim Ponomarev" >>>> wrote: >>>> >>>> Hi, >>>> >>>> I support, that is a problem. It's unclear, how after removing the >>>> option prevent_arp_spoofing, I can manage the prevent ARP spoofing >>>> mechanism. Example: I use security groups but I don't want to use ARP >>>> spoofing protection. How do I can disable the protection? >>>> >>>> 2018-03-14 10:26 GMT+03:00 Tatiana Kholkina : >>>> >>>>> Sure, there is an ability to enable ARP spoofing for the port/network, >>>>> but it is impossible to make it enabled by default for all ports. >>>>> It looks a bit complicated to me and I think it would be better to >>>>> have an ability to set default port security via config file. >>>>> >>>>> Best regards, >>>>> Tatiana >>>>> >>>>> 2018-03-13 15:10 GMT+03:00 Claudiu Belu >>>>> : >>>>> >>>>>> Hi, >>>>>> >>>>>> Indeed ARP spoofing is prevented by default, but AFAIK, if you want >>>>>> it enabled for a port / network, you can simply disable the security groups >>>>>> on that neutron network / port. >>>>>> >>>>>> Best regards, >>>>>> >>>>>> Claudiu Belu >>>>>> >>>>>> ------------------------------ >>>>>> *From:* Татьяна Холкина [holkina at selectel.ru] >>>>>> *Sent:* Tuesday, March 13, 2018 12:54 PM >>>>>> *To:* openstack-dev at lists.openstack.org >>>>>> *Subject:* [openstack-dev] [neutron] Prevent ARP spoofing >>>>>> >>>>>> Hi, >>>>>> I'm using an ocata release of OpenStack where the option >>>>>> prevent_arp_spoofing can be managed via conf. But later in pike it was >>>>>> removed and it was decided to prevent spoofing by default. >>>>>> There are cases where security features should be disabled. As I can >>>>>> see now we can use a port_security option for these cases. But this option >>>>>> should be set for a particular port or network on create. The default value >>>>>> is set to True [1] and itt is impossible to change it. 
I'd like to >>>>>> suggest to get default value for port_security [2] from config option. >>>>>> It would be nice to know your opinion. >>>>>> >>>>>> [1] https://github.com/openstack/neutron-lib/blob/stable/que >>>>>> ens/neutron_lib/api/definitions/port_security.py#L21 >>>>>> [2] https://github.com/openstack/neutron/blob/stable/queens/ >>>>>> neutron/objects/extensions/port_security.py#L24 >>>>>> >>>>>> Best regards, >>>>>> Tatiana >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> >>>> -- >>>> Best regards, >>>> Vadim Ponomarev >>>> Developer of network automation department at Selectel Ltd. >>>> >>>> ---- >>>> This message may contain confidential information that can't be >>>> distributed without the consent of the sender or the authorized person Selectel >>>> Ltd. >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> >>> -- >>> Best regards, >>> Vadim Ponomarev >>> Developer of network automation department at Selectel Ltd. >>> >>> ---- >>> This message may contain confidential information that can't be >>> distributed without the consent of the sender or the authorized person Selectel >>> Ltd. >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Best regards, > Vadim Ponomarev > Developer of network automation department at Selectel Ltd. > > ---- > This message may contain confidential information that can't be > distributed without the consent of the sender or the authorized person Selectel > Ltd. 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From ponomarev at selectel.ru  Thu Mar 22 09:20:22 2018
From: ponomarev at selectel.ru (Vadim Ponomarev)
Date: Thu, 22 Mar 2018 12:20:22 +0300
Subject: [openstack-dev] [neutron] Prevent ARP spoofing
In-Reply-To:
References:
Message-ID:

Kevin, thank you for your help. We already have a similar patch for the
Pike release. We started this discussion to draw attention to the
degradation of flexibility, and to discuss the need to create an RFE.

2018-03-22 11:48 GMT+03:00 Kevin Benton:

> I understand. I think you will likely need to run a bit of custom code
> for Pike if you want to override the default behavior of the port
> security extension. You should be able to use something like the
> following (untested) code as a service plugin. Install it somewhere on
> the server and then put the import path to it in the service_plugins
> configuration.
>
> from neutron_lib.api.definitions import port_security
> from neutron_lib.callbacks import events
> from neutron_lib.callbacks import registry
> from neutron_lib.callbacks import resources
> from neutron_lib.services import base
>
>
> @registry.has_registry_receivers
> class NetdefaultPortSecurity(base.ServicePluginBase):
>
>     @registry.receives(resources.NETWORK, [events.BEFORE_CREATE])
>     def force_default_portsec_false(self, rtype, event, trigger, context,
>                                     network, **kwargs):
>         network[port_security.PORTSECURITY] = False
>
>     @classmethod
>     def get_plugin_type(cls):
>         return 'portsecdefaultoverride'
>
>     def get_plugin_description(self):
>         return "workaround"
>
> To have this fixed in future versions I suggest filing an RFE to allow
> all security to be disabled completely if the port security extension
> isn't loaded.
>
> On Mon, Mar 19, 2018 at 9:34 AM, Vadim Ponomarev wrote:
>
>> If I understood correctly, you talk about rules which are generated by
>> security_group extension as default from the fixed_ips +
>> allowed_address_pairs list. In our openstack installation we disabled the
>> security_group and the allowed_address_pairs extensions to simplify the
>> configuration the HA clusters.
>>
>> Currently we configure the neutron as follows:
>> 1. prevent_arp_spoofing = False
>> 2. disable security_group extension
>> 3. disable allowed_address_pairs extension
>>
>> Actually, if port_security will be like a "central regulator" which
>> controll all mechanisms, it's perfectly in our case. But, we will lose
>> flexibility, because we can't changed default value for this option. And,
>> even if we disable the port_security extension in the neutron, the prevent
>> ARP-spoofing mechanism will work as default [1].
>> >> [1] https://github.com/openstack/neutron/blob/stable/pike/ne >> utron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L907 >> >> 2018-03-19 16:24 GMT+03:00 Kevin Benton : >> >>> Disabling ARP spoofing protection alone will not let the standby >>> instance source traffic using the active instance's IP. IP filtering rules >>> independent of ARP enforcement rules ensure the source IP is in the >>> fixed_ips or allowed_address_pairs. >>> >>> Are you already using allowed address pairs to add the shared IP to both? >>> >>> On Mon, Mar 19, 2018, 22:13 Vadim Ponomarev >>> wrote: >>> >>>> Yes, there's really a need for mechanisms of high availability like >>>> corosync, vrrp etc. Another simple example: we have two servers with the >>>> active/standby HA configuration (for example keepalived + haproxy) and we >>>> have third-party monitoring system for these servers. The monitoring system >>>> gets some load metrics and when one of the servers is unavailable, the >>>> monitoring system scales architecture (adds new server to cluster) in this >>>> way saving the HA architecture. In your case, this monitoring system must >>>> do the following steps: create new instance, add new instance's MAC address >>>> to allowed_address_pairs and only after that reconfigure all other nodes. >>>> Otherwise cluster will not work. The solution to the problem is simple - >>>> disable the prevent ARP spoofing mechnism. >>>> >>>> Ok, we may used port_security options for this network with the HA >>>> cluster. For this case we must reconfigure our monitoring systems, create >>>> allowed_address_pairs for all current servers and (it's hardest) train our >>>> users how that done. >>>> >>>> Currently, we don't use the prevent ARP spoofing option >>>> (prevent_arp_spoofing = False) and honestly I don't understand why this >>>> option is enabled as default in private networks. Each such network belongs >>>> to one user, who controls all instances. Who would decide to perform a MITM >>>> attack in his own network? >>>> >>>> 2018-03-19 12:53 GMT+03:00 Kevin Benton : >>>> >>>>> Do you need to spoof arbitrary addresses? If not (i.e. a set you know >>>>> ahead of time), you can put entries in the allowed_address_pairs field of >>>>> the port that will allow you to send traffic using other MAC/IPs. >>>>> >>>>> On Mar 19, 2018 8:42 PM, "Vadim Ponomarev" >>>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> I support, that is a problem. It's unclear, how after removing the >>>>> option prevent_arp_spoofing, I can manage the prevent ARP spoofing >>>>> mechanism. Example: I use security groups but I don't want to use ARP >>>>> spoofing protection. How do I can disable the protection? >>>>> >>>>> 2018-03-14 10:26 GMT+03:00 Tatiana Kholkina : >>>>> >>>>>> Sure, there is an ability to enable ARP spoofing for the >>>>>> port/network, but it is impossible to make it enabled by default for all >>>>>> ports. >>>>>> It looks a bit complicated to me and I think it would be better to >>>>>> have an ability to set default port security via config file. >>>>>> >>>>>> Best regards, >>>>>> Tatiana >>>>>> >>>>>> 2018-03-13 15:10 GMT+03:00 Claudiu Belu >>>>> >: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Indeed ARP spoofing is prevented by default, but AFAIK, if you want >>>>>>> it enabled for a port / network, you can simply disable the security groups >>>>>>> on that neutron network / port. 
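For an existing port, that looks something like the following (an
untested sketch; note that Neutron will refuse to turn port security off
while the port still has security groups or allowed address pairs, so
they have to be cleared in the same update):

    from neutronclient.v2_0 import client

    # 'sess' is an assumed authenticated keystoneauth1 session and
    # 'port_id' an assumed UUID of the port to open up.
    neutron = client.Client(session=sess)
    neutron.update_port(
        port_id,
        body={'port': {'security_groups': [],
                       'allowed_address_pairs': [],
                       'port_security_enabled': False}})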
>>>>>>> >>>>>>> Best regards, >>>>>>> >>>>>>> Claudiu Belu >>>>>>> >>>>>>> ------------------------------ >>>>>>> *From:* Татьяна Холкина [holkina at selectel.ru] >>>>>>> *Sent:* Tuesday, March 13, 2018 12:54 PM >>>>>>> *To:* openstack-dev at lists.openstack.org >>>>>>> *Subject:* [openstack-dev] [neutron] Prevent ARP spoofing >>>>>>> >>>>>>> Hi, >>>>>>> I'm using an ocata release of OpenStack where the option >>>>>>> prevent_arp_spoofing can be managed via conf. But later in pike it was >>>>>>> removed and it was decided to prevent spoofing by default. >>>>>>> There are cases where security features should be disabled. As I can >>>>>>> see now we can use a port_security option for these cases. But this option >>>>>>> should be set for a particular port or network on create. The default value >>>>>>> is set to True [1] and itt is impossible to change it. I'd like to >>>>>>> suggest to get default value for port_security [2] from config option. >>>>>>> It would be nice to know your opinion. >>>>>>> >>>>>>> [1] https://github.com/openstack/neutron-lib/blob/stable/que >>>>>>> ens/neutron_lib/api/definitions/port_security.py#L21 >>>>>>> [2] https://github.com/openstack/neutron/blob/stable/queens/ >>>>>>> neutron/objects/extensions/port_security.py#L24 >>>>>>> >>>>>>> Best regards, >>>>>>> Tatiana >>>>>>> >>>>>>> ____________________________________________________________ >>>>>>> ______________ >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>>> enstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>>> >>>>>>> >>>>>> >>>>>> ____________________________________________________________ >>>>>> ______________ >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>>> enstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Best regards, >>>>> Vadim Ponomarev >>>>> Developer of network automation department at Selectel Ltd. >>>>> >>>>> ---- >>>>> This message may contain confidential information that can't be >>>>> distributed without the consent of the sender or the authorized person Selectel >>>>> Ltd. >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.op >>>>> enstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> >>>> >>>> >>>> -- >>>> Best regards, >>>> Vadim Ponomarev >>>> Developer of network automation department at Selectel Ltd. >>>> >>>> ---- >>>> This message may contain confidential information that can't be >>>> distributed without the consent of the sender or the authorized person Selectel >>>> Ltd. 
>>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Best regards, >> Vadim Ponomarev >> Developer of network automation department at Selectel Ltd. >> >> ---- >> This message may contain confidential information that can't be >> distributed without the consent of the sender or the authorized person Selectel >> Ltd. >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best regards, Vadim Ponomarev Developer of network automation department at Selectel Ltd. ---- This message may contain confidential information that can't be distributed without the consent of the sender or the authorized person Selectel Ltd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Mar 22 09:54:41 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 22 Mar 2018 10:54:41 +0100 Subject: [openstack-dev] [cinder] Support share backup to different projects? In-Reply-To: References: <20180321111456.gp7pymagsts4i4mm@localhost> Message-ID: <20180322095441.g3zroutldid3poax@localhost> On 21/03, TommyLike Hu wrote: > Hey Gorka, > Thanks for your input:) I think I need to clarify that our idea is to > share the backup resource to another tenants, that is different with > transfer as the original tenant still can fully control the backup > resource, and the tenants that have been shared to only have the ability to > see and read the content of that backup. > Hi TommyLike, Thanks for correcting my misconception of the feature. Sean brought this up in yesterday's meeting and, after explaining to me that we were talking about sharing and not transferring, we agreed that this is not an acceptable feature. I am OK with transferring backups, but certainly not sharing them. For sharing data you would use Glance images, and maybe you can even use the "image_upload_use_cinder_backend" feature to get what you want, though I have never used it myself. Or if the client wants to use the volumes to boot from them they could attach the "golden volume" to a nova instance, create a "nova instance snapshot" which will result in a Glance image placeholder for a Cinder snapshot. But as you can see all the solutions I can think of go through Glance images, which is the right way to share volume data between users. 
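As a rough illustration of that image-based flow (an untested sketch
with openstacksdk; it assumes the volume has already been uploaded to an
image, for example with "openstack image create --volume", and that the
consumer's project ID is known):

    import openstack

    conn = openstack.connect(cloud='mycloud')  # assumed clouds.yaml entry

    # Make the "golden" image shareable and invite the other project.
    image = conn.image.find_image('golden-volume-image')  # assumed name
    conn.image.update_image(image, visibility='shared')
    conn.image.add_member(image, member='CONSUMER_PROJECT_ID')

    # The consumer then accepts the membership from their own session,
    # e.g. conn.image.update_member('CONSUMER_PROJECT_ID', image,
    # status='accepted'), after which they can create volumes from it.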
I think our efforts would be better spent removing the bottlenecks we
currently have on all create volume from source operations, if what the
customer is really trying to do is work around them via backups.

Cheers,
Gorka.


> Gorka Eguileor wrote on Wed, Mar 21, 2018 at 7:15 PM:
>
> > On 20/03, Jay S Bryant wrote:
> > >
> > >
> > > On 3/19/2018 10:55 PM, TommyLike Hu wrote:
> > > > Now Cinder can transfer a volume (with or without snapshots) to
> > > > different projects, and this makes it possible to transfer data
> > > > across tenants via a volume or an image. Recently we had a
> > > > conversation with our customer from Germany; they mentioned they
> > > > would be more pleased if we could support transferring data across
> > > > tenants via backups, not images or volumes. These below are some of
> > > > their concerns:
> > > >
> > > > 1. There is a use case where they would like to deploy their
> > > > develop/test/product systems in the same region but within different
> > > > tenants, so they have the requirement to share/transfer data across
> > > > tenants.
> > > >
> > > > 2. Users are more willing to use backups to secure/store their volume
> > > > data since the backup feature is more advanced in the production
> > > > OpenStack version (incremental backups/periodic backups/etc.).
> > > >
> > > > 3. Volume transfer is not a valid option as it is tied to an AZ, and
> > > > it's a complicated process if we would like to share the data with
> > > > multiple projects (keeping a copy in all the tenants).
> > > >
> > > > 4. Most of the users would like to use images for bootable volumes
> > > > only, and sharing volume data via images means the users have to
> > > > maintain lots of image copies whenever the volume backup changes, as
> > > > well as the whole system needing to differentiate bootable images
> > > > from non-bootable images; most importantly, we cannot restore volume
> > > > data via an image now.
> > > >
> > > > 5. The easiest way for this seems to be supporting sharing a backup
> > > > with different projects: the owner project has full authority while
> > > > shared projects can only view/read the backups.
> > > >
> > > > 6. AWS has a similar concept, the shared snapshot. We can share it by
> > > > modifying the snapshot's create volume permissions [1].
> > > >
> > > > Looking forward to any like or dislike or suggestion on this idea,
> > > > according to my feature proposal experience :)
> > > >
> > > >
> > > > Thanks
> > > > TommyLike
> > > >
> > > >
> > > > [1]:
> > https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
> > > >
> > > >
> > > > __________________________________________________________________________
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > Tommy,
> > >
> > > As discussed at the PTG, this still sounds like improper usage of
> > > Backup. Happy to hear input from others but I am having trouble getting
> > > my head around it.
> > >
> > > The idea of sharing a snapshot, as you mention AWS supports, sounds
> > > like it could be a more sensible approach. Why are you not proposing
> > > that?
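For reference, the existing volume transfer mechanism that keeps coming
up in this thread is roughly the following (an untested sketch with
python-cinderclient; 'sess' is an assumed authenticated keystoneauth1
session and 'volume_id' an assumed volume UUID):

    from cinderclient import client

    cinder = client.Client('2', session=sess)

    # Owner side: create a transfer and hand the id + auth_key over
    # to the recipient out of band.
    transfer = cinder.transfers.create(volume_id)
    print(transfer.id, transfer.auth_key)

    # Recipient side, using their own authenticated client:
    # cinder.transfers.accept(transfer.id, transfer.auth_key)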
> > > > > > Jay > > > > > > > Hi, > > > > I agree with Jay that this sounds like an improper use of Backups, and I > > believe that this feature, just like trying to transfer snapshots, would > > incur in a lot of code changes as well as an ever greater number of > > bugs, because the ownership structure in Cinder is hierarchical and well > > defined. > > > > So if you transferred a snapshot then you would lose that snapshot > > information on the source volume, which means that we could not prevent > > a volume deletion with a snapshot, or we could prevent it but would > > either have to prevent the deletion from happening (creating a terrible > > user experience since the user can't delete the volume now because > > somebody else still has one of its snapshots) or we have to implement > > some kind of "trash" mechanism to postpone cleanup until all the > > snapshots have been deleted, which would make our quota code more > > complex as well as make our stats reporting and scheduling diverge from > > what the user think has actually happened (they deleted a bunch of > > volumes but the data has not been freed from the backend). > > > > As for backups, you have an even worse situation because of our > > incremental backups, since transferring ownership of an incremental > > backup will create similar deletion issues as the snapshots but we also > > require access to all all incremental snapshots to restore a volume. So > > the only alternative would be to only allow transferring a full Backup > > and this would carry all the incremental backups with it. > > > > All in all I think this would be an abuse of the Backups, and as stated > > by TommyLike we already have mechanisms to do this via images and volume > > transfers. > > > > Although I have to admit that after giving this some thought there is a > > very good case where it wouldn't be an abuse and where we should allow > > transferring full backups together with all their incremental backups, > > and that is when you transfer a volume. If we transfer a volume with > > all its snapshots, it makes sense that we should also allow transferring > > its backups, after all the original source of the backups no longer > > belongs to the owner of the backups. > > > > To summarize, if we are talking about transferring only full backups > > with all their dependent incremental backup then I probably won't oppose > > the change. > > > > Cheers, > > Gorka. > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zijian1012 at 163.com Thu Mar 22 10:14:48 2018 From: zijian1012 at 163.com (zijian1012 at 163.com) Date: Thu, 22 Mar 2018 18:14:48 +0800 Subject: [openstack-dev] Is openstacksdk backward compatible? 
Message-ID: <2018032218134790221828@163.com>

Hello, everyone

The OpenStack version I deployed is Newton, and now I am ready to use
openstacksdk for development. I noticed that the latest openstacksdk
version is 0.12, and this makes me a bit confused: how do I know if this
SDK (0.12) is compatible with my OpenStack (Newton)?

I also noticed that the python-xxxclient versions and OpenStack versions
correspond strictly, such as "python-novaclient/tree/newton-eol" and
"python-neutronclient/tree/newton-eol". So, should I use python-xxxclient
instead of openstacksdk?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sfinucan at redhat.com  Thu Mar 22 10:43:45 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Thu, 22 Mar 2018 10:43:45 +0000
Subject: [openstack-dev] Following the new PTI for document build, broken
	local builds
In-Reply-To: <20180321145716.GA23250@sm-xps>
References: <1521629342.8587.20.camel@redhat.com>
 <20180321145716.GA23250@sm-xps>
Message-ID: <1521715425.17048.8.camel@redhat.com>

On Wed, 2018-03-21 at 09:57 -0500, Sean McGinnis wrote:
> On Wed, Mar 21, 2018 at 10:49:02AM +0000, Stephen Finucane wrote:
> > tl;dr: Make sure you stop using pbr's autodoc feature before converting
> > them to the new PTI for docs.
> >
> > [snip]
> >
> > I've gone through and proposed a couple of reverts to fix projects
> > we've already broken. However, going forward, there are two things
> > people should do to prevent issues like this popping up.
>
> Unfortunately this will not work to just revert the changes. That may fix
> things locally, but they will not pass in gate by going back to the old way.
>
> Any cases of this will have to actually be updated to not use the unsupported
> pieces you point out. But the doc builds will still need to be done the way
> they are now, as that is what the PTI requires at this point.

That's unfortunate. What we really need is a migration path from the
'pbr' way of doing things to something else. I see three possible
avenues at this point in time:

1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar
   things to 'sphinx-apidoc' but it takes the form of an extension.
   From my brief experiments, the output generated from this is
   radically different and far less comprehensive than what 'sphinx-
   apidoc' generates. However, it supports templating so we could
   probably configure this somehow and add our own special directive
   somewhere like 'openstackdocstheme'
2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
   against upstream Sphinx [1]. This essentially does what the PBR
   extension does but moves configuration into 'conf.py'. However, this
   is currently held up as I can't adequately explain the differences
   between this and 'sphinx.ext.autosummary' (there's definite overlap
   but I don't understand 'autosummary' well enough to compare them).
3. Modify the upstream jobs that detect the pbr integration and have
   them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
   technically appealing approach as it still leaves us unable to build
   stuff locally and adds yet more "magic" to the gate, but it does let
   us progress.

Try as I may, I don't really have the bandwidth to work on this for
another few weeks so I'd appreciate help from anyone with sufficient
Sphinx-fu to come up with a long-term solution to this issue.
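For anyone who wants to experiment with option 1 in the meantime, the
conf.py side of it would be something like this rough, untested sketch
(the rst documents then need '.. autosummary::' directives listing the
modules to document):

    # conf.py (sketch)
    extensions = [
        'sphinx.ext.autodoc',
        'sphinx.ext.autosummary',
    ]

    # Have Sphinx generate stub pages for entries listed in
    # 'autosummary' directives, so a plain 'sphinx-build' run
    # produces the API reference without a separate apidoc step.
    autosummary_generate = True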
Cheers, Stephen [1] https://github.com/sphinx-doc/sphinx/pull/4101/files > > * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections > > from 'setup.cfg' in any patches that aim to convert a project to use > > the new PTI. This will ensure the gate catches any potential > > issues. > > * In addition, if your project uses the pbr autodoc feature, you > > should either (a) remove these docs from your documentation tree or > > (b) migrate to something else like the 'sphinx.ext.autosummary' > > extension [5]. I aim to post instructions on the latter shortly. From mordred at inaugust.com Thu Mar 22 12:01:02 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 22 Mar 2018 07:01:02 -0500 Subject: [openstack-dev] Is openstacksdk backward compatible? In-Reply-To: <2018032218134790221828@163.com> References: <2018032218134790221828@163.com> Message-ID: <95c72666-1ead-3425-a59b-8d0c128133e0@inaugust.com> On 03/22/2018 05:14 AM, zijian1012 at 163.com wrote: > Hello, everyone > The openstack version I deployed was Newton,  now > I am ready to use openstacksdk for development, *, > *I noticed that the latest *openstacksdk *version is 0.12 > /0.12.0>,this makes me a bit confused, How do I know if this sdk(0.12) > is compatible with my openstack(Newton)? openstacksdk should work with all existing versions of OpenStack. If you ever find a scenario when latest openstacksdk does not work with the cloud you are using, it is a bug and we will fix it as quickly as possible. We'll hopefully be releasing a 1.0 of openstacksdk soon, however 0.12 should be safe for you to use. > I also noticed that the > python-xxxclient version and openstack version are strictly corresponding, > such as "python-novaclient/tree/newton-eol", > "python-neutronclient/tree/newton-eol". So, should I use > python-xxxclient instead of openstacksdk? Please do not use python-xxxclient for any new work. Doing so will cause you nothing but pain. Let us know if you have any issues with openstacksdk ... #openstack-sdks in IRC is a great place to find people. Monty From mordred at inaugust.com Thu Mar 22 12:29:55 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 22 Mar 2018 07:29:55 -0500 Subject: [openstack-dev] [sdk] git repo rename and storyboard migration In-Reply-To: References: <2104b95d-fb8b-7486-6c1c-8330296fd23b@inaugust.com> Message-ID: On 03/22/2018 02:51 AM, Jens Harbott wrote: > 2018-03-21 21:44 GMT+01:00 Monty Taylor : >> Hey everybody! >> >> This upcoming Friday we're scheduled to complete the transition from >> python-openstacksdk to openstacksdk. This was started a while back (Tue Jun >> 16 12:05:38 2015 to be exact) by changing the name of what gets published to >> PyPI. Renaming the repo is to get those two back inline (and remove a hack >> in devstack to deal with them not being the same) >> >> Since this is a repo rename, it means that local git remotes will need to be >> updated. This can be done either via changing urls in .git/config - or by >> just re-cloning. >> >> Once that's done, we'll be in a position to migrate to storyboard. shade is >> already over there, which means we're currently split between storyboard and >> launchpad for the openstacksdk team repos. >> >> diablo_rojo has done a test migration and we're good to go there - so I'm >> thinking either Friday post-repo rename - or sometime early next week. Any >> thoughts or opinions? >> >> This will migrate bugs from launchpad for python-openstacksdk and >> os-client-config. 
> > IMO this list is still much too long [0] and I expect it will make > dealing with the long backlog even more tedious if the bugs are moved. storyboard is certainly not perfect, but there are also great features it does have to help deal with backlog. We can set up a board, like we did for zuulv3: https://storyboard.openstack.org/#!/board/41 Jim also wrote 'boartty' which is like gertty but for doing storyboard things. Which is to say - it's got issues, but it's also got a bunch of positives too. > Also there are lots of issues that intersect between sdk and > python-openstackclient, so moving both at the same time would also > sound reasonable. I could see waiting until we move python-openstackclient. However, we've got the issue already with shade bugs being in storyboard already and sdk bugs being in launchpad. With shade moving to having its implementation be in openstacksdk, over this cycle I expect the number of bugs people report against shade wind up actually being against openstacksdk to increase quite a bit. Maybe we should see if the python-openstackclient team wants to migrate too? What do people think? Monty From whayutin at redhat.com Thu Mar 22 12:33:31 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 22 Mar 2018 08:33:31 -0400 Subject: [openstack-dev] [tripleo][infra][dib] Gate "out of disk" errors and diskimage-builder 2.12.0 In-Reply-To: <45324f81-2aaa-232c-604a-7aee714b7292@redhat.com> References: <3008b3e9-47c2-077c-7acd-5a850b004e21@redhat.com> <45324f81-2aaa-232c-604a-7aee714b7292@redhat.com> Message-ID: On Wed, Mar 21, 2018 at 5:11 PM, Ian Wienand wrote: > On 03/21/2018 03:39 PM, Ian Wienand wrote: > >> We will prepare dib 2.12.1 with the fix. As usual there are >> complications, since the dib gate is broken due to unrelated triple-o >> issues [2]. In the mean time, probably avoid 2.12.0 if you can. >> > > [2] https://review.openstack.org/554705 >> > > Since we have having issues getting this verified due to some > instability in the tripleo gate, I've proposed a temporary removal of > the jobs for dib in [1]. > > [1] https://review.openstack.org/555037 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Thanks Ian! I'm not sure if the build job had enough visibility to everyone, trying to correct that now. tripleo-buildimage-overcloud-full-centos-7 gate status is available at http://cistatus.tripleo.org:8000/ Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Thu Mar 22 12:42:39 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 22 Mar 2018 21:42:39 +0900 Subject: [openstack-dev] [sdk] git repo rename and storyboard migration In-Reply-To: References: <2104b95d-fb8b-7486-6c1c-8330296fd23b@inaugust.com> Message-ID: 2018-03-22 21:29 GMT+09:00 Monty Taylor : > On 03/22/2018 02:51 AM, Jens Harbott wrote: > >> 2018-03-21 21:44 GMT+01:00 Monty Taylor : >> >>> Hey everybody! >>> >>> This upcoming Friday we're scheduled to complete the transition from >>> python-openstacksdk to openstacksdk. This was started a while back (Tue >>> Jun >>> 16 12:05:38 2015 to be exact) by changing the name of what gets >>> published to >>> PyPI. 
Renaming the repo is to get those two back inline (and remove a >>> hack >>> in devstack to deal with them not being the same) >>> >>> Since this is a repo rename, it means that local git remotes will need >>> to be >>> updated. This can be done either via changing urls in .git/config - or by >>> just re-cloning. >>> >>> Once that's done, we'll be in a position to migrate to storyboard. shade >>> is >>> already over there, which means we're currently split between storyboard >>> and >>> launchpad for the openstacksdk team repos. >>> >>> diablo_rojo has done a test migration and we're good to go there - so I'm >>> thinking either Friday post-repo rename - or sometime early next week. >>> Any >>> thoughts or opinions? >>> >>> This will migrate bugs from launchpad for python-openstacksdk and >>> os-client-config. >>> >> >> IMO this list is still much too long [0] and I expect it will make >> dealing with the long backlog even more tedious if the bugs are moved. >> > > storyboard is certainly not perfect, but there are also great features it > does have to help deal with backlog. We can set up a board, like we did for > zuulv3: > > https://storyboard.openstack.org/#!/board/41 > > Jim also wrote 'boartty' which is like gertty but for doing storyboard > things. > > Which is to say - it's got issues, but it's also got a bunch of positives > too. > > Also there are lots of issues that intersect between sdk and >> python-openstackclient, so moving both at the same time would also >> sound reasonable. >> > > I could see waiting until we move python-openstackclient. However, we've > got the issue already with shade bugs being in storyboard already and sdk > bugs being in launchpad. With shade moving to having its implementation be > in openstacksdk, over this cycle I expect the number of bugs people report > against shade wind up actually being against openstacksdk to increase quite > a bit. > > Maybe we should see if the python-openstackclient team wants to migrate > too? > Although I have limited experience on storyboard, I think it is ready for our bug tracking. As Jens mentioned, not a small number of bugs are referred to from both OSC and SDK. One good news on OSC launchpad bug is that we do not use tag aggressively. If Dean is okay, I believe we can migrate to storyboard. Akihiro > > What do people think? > > Monty > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Thu Mar 22 12:54:11 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 22 Mar 2018 07:54:11 -0500 Subject: [openstack-dev] [sdk] git repo rename and storyboard migration In-Reply-To: References: <2104b95d-fb8b-7486-6c1c-8330296fd23b@inaugust.com> Message-ID: On Thu, Mar 22, 2018 at 7:42 AM, Akihiro Motoki wrote: > 2018-03-22 21:29 GMT+09:00 Monty Taylor : >> I could see waiting until we move python-openstackclient. However, we've >> got the issue already with shade bugs being in storyboard already and sdk >> bugs being in launchpad. With shade moving to having its implementation be >> in openstacksdk, over this cycle I expect the number of bugs people report >> against shade wind up actually being against openstacksdk to increase quite >> a bit. 
>> >> Maybe we should see if the python-openstackclient team wants to migrate >> too? > > Although I have limited experience on storyboard, I think it is ready for > our bug tracking. > As Jens mentioned, not a small number of bugs are referred to from both OSC > and SDK. > One good news on OSC launchpad bug is that we do not use tag aggressively. > If Dean is okay, I believe we can migrate to storyboard. I am all in favor of migrating OSC to use to Storyboard, however I am totally unable to give it any time in the near future. If Akhiro or anyone else wants to take on that task, you will have my support and as much help as I am able to give. dt -- Dean Troyer dtroyer at gmail.com From ehabkost at redhat.com Thu Mar 22 13:11:52 2018 From: ehabkost at redhat.com (Eduardo Habkost) Date: Thu, 22 Mar 2018 10:11:52 -0300 Subject: [openstack-dev] [kubevirt-dev] Re: [libvirt] [virt-tools-list] Project for profiles and defaults for libvirt domains In-Reply-To: <20180322095612.GC3583@redhat.com> References: <20180320142031.GB23007@wheatley> <20180320151012.GU4530@redhat.com> <20180321180041.GA4245@localhost.localdomain> <20180321183952.GX8551@redhat.com> <20180321193423.GA3417@localhost.localdomain> <20180322095612.GC3583@redhat.com> Message-ID: <20180322131152.GG3417@localhost.localdomain> On Thu, Mar 22, 2018 at 09:56:12AM +0000, Daniel P. Berrangé wrote: > On Wed, Mar 21, 2018 at 04:34:23PM -0300, Eduardo Habkost wrote: > > On Wed, Mar 21, 2018 at 06:39:52PM +0000, Daniel P. Berrangé wrote: > > > On Wed, Mar 21, 2018 at 03:00:41PM -0300, Eduardo Habkost wrote: > > > > On Tue, Mar 20, 2018 at 03:10:12PM +0000, Daniel P. Berrangé wrote: > > > > > On Tue, Mar 20, 2018 at 03:20:31PM +0100, Martin Kletzander wrote: > > > > > > 1) Default devices/values > > > > > > > > > > > > Libvirt itself must default to whatever values there were before any > > > > > > particular element was introduced due to the fact that it strives to > > > > > > keep the guest ABI stable. That means, for example, that it can't just > > > > > > add -vmcoreinfo option (for KASLR support) or magically add the pvpanic > > > > > > device to all QEMU machines, even though it would be useful, as that > > > > > > would change the guest ABI. > > > > > > > > > > > > For default values this is even more obvious. Let's say someone figures > > > > > > out some "pretty good" default values for various HyperV enlightenment > > > > > > feature tunables. Libvirt can't magically change them, but each one of > > > > > > the projects building on top of it doesn't want to keep that list > > > > > > updated and take care of setting them in every new XML. Some projects > > > > > > don't even expose those to the end user as a knob, while others might. > > > > > > > > > > This gets very tricky, very fast. > > > > > > > > > > Lets say that you have an initial good set of hyperv config > > > > > tunables. Now sometime passes and it is decided that there is a > > > > > different, better set of config tunables. If the module that is > > > > > providing this policy to apps like OpenStack just updates itself > > > > > to provide this new policy, this can cause problems with the > > > > > existing deployed applications in a number of ways. > > > > > > > > > > First the new config probably depends on specific versions of > > > > > libvirt and QEMU, and you can't mandate to consuming apps which > > > > > versions they must be using. [...] > > > > > > > > This is true. > > > > > > > > > [...] 
So you need a matrix of libvirt + > > > > > QEMU + config option settings. > > > > > > > > But this is not. If config options need support on the lower > > > > levels of the stack (libvirt and/or QEMU and/or KVM and/or host > > > > hardware), it already has to be represented by libvirt host > > > > capabilities somehow, so management layers know it's available. > > > > > > > > This means any new config generation system can (and must) use > > > > host(s) capabilities as input before generating the > > > > configuration. > > > > > > I don't think it is that simple. The capabilities reflect what the > > > current host is capable of only, not whether it is desirable to > > > actually use them. Just because a host reports that it has q35-2.11.0 > > > machine type doesn't mean that it should be used. The mgmt app may > > > only wish to use that if it is available on all hosts in a particular > > > grouping. The config generation library can't query every host directly > > > to determine this. The mgmt app may have a way to collate capabilities > > > info from hosts, but it is probably then stored in a app specific > > > format and data source, or it may just ends up being a global config > > > parameter to the mgmt app per host. > > > > In other words, you need host capabilities from all hosts as > > input when generating a new config XML. We already have a format > > to represent host capabilities defined by libvirt, users of the > > new system would just need to reproduce the data they got from > > libvirt and give it to the config generator. > > Things aren't that simple - when openstack reports info from each host > it doesn't do it in any libvirt format - it uses an arbitrary format it > defines itself. Going from libvirt host capabilities to the app specific > format and back to libvirt host capabilities will loose information. > Then you also have matter of hosts coming & going over time, so fragile > to assume that the set of host capabilities you currently see are > representative of the steady state you desire. Well, then the management layer should stop losing useful data. ;) (But I understand that this is not that simple) > > > Not completely trivial, but maybe worth the effort if you want to > > benefit from work done by other people to find good defaults? > > Perhaps, but there's many ways to share the work of figuring out > good defaults. Beyond what's represented in libosinfo database, > no one has even tried to document what current desirable defaults > are. Jumping straight from no documented best practice, to lets > build a API is a big ask, particularly when the suggestion involves > major architectural changes to any app that wants to use it. > > For most immediate benefit actually documenting some best practice > would be the most tangible win for application developers, as they > can much more easily adapt existing code to follow it. ALso expanding > range of info we record in libosinfo would be beneficial, since there > is still plenty of OS specific data not captured. Not to mention that > most applications aren't even leveraging much of the stuff already > available. > This is a good point. > > > > There have been a number of times where a feature is available in > > > libvirt and/or QEMU, and the mgmt app still doesn't yet may still > > > not wish to use it because it is known broken / incompatible with > > > certain usage patterns. 
So the mgmt app would require an arbitrarily > > > newer libvirt/qemu before considering using it, regardless of > > > whether host capabilities report it is available. > > > > If this happens sometimes, why is it better for the teams > > maintaining management layers to duplicate the work of finding > > what works, instead of solving the problem only once? > > This point was in relation to my earlier thread where I said that > it would be neccessary to maintain a matrix of policy vs QEMU and > libvirt versions, not merely relying on host capabilities. I see what you mean. But any component in the system needs to keep a matrix of QEMU and libvirt versions, I'd argue that the APIs provided by QEMU & libvirt are broken and need to be fixed. If this happens with QEMU, I ask for everybody involved to please ask QEMU developers for help, so we can at least document the issue, and find a better way to detect if a given feature is working. (This request applies even if our effort is focused towards documenting best practices and not an API.) > > > > > Now, why can't higher layers in the stack do something similar? > > > > > > > > The proposal is equivalent to what already happens when people > > > > use the "pc" machine-type in their configurations, but: > > > > 1) the new defaults/features wouldn't be hidden behind a opaque > > > > machine-type name, and would appear in the domain XML > > > > explicitly; > > > > 2) the higher layers won't depend on QEMU introducing a new > > > > machine-type just to have new features enabled by default; > > > > 3) features that depend on host capabilities but are available on > > > > all hosts in a cluster can now be enabled automatically if > > > > desired (which is something QEMU can't do because it doesn't > > > > have enough information about the other hosts). > > > > > > > > Choosing reasonable defaults might not be a trivial problem, but > > > > the current approach of pushing the responsibility to management > > > > layers doesn't improve the situation. > > > > > > The simple cases have been added to the "pc" machine type, but > > > more complex cases have not been dealt with as they often require > > > contextual knowledge of either the host setup or the guest OS > > > choice. > > > > Exactly. But on how many of those cases the decision requires > > knowledge that is specific to the management stack being used > > (like the ones you listed below), and how many are decisions that > > could be made by simply looking at the host software/hardware and > > guest OS? I am under the impression that we have a reasonable > > number of case of the latter. > > Anything todo with virtual hardware that is guest OS dependant > should be in scope of libosinfo project / database. > > For other things, I think it would be useful if we at least started > to document some recommended best practices, so we have a better idea > of what we're trying to address. It would also give apps an idea of > what they're missing right now letting them fix gaps, if desired. Agreed, though I expect any documented best practices will also end up including guest-specific recommendations (which may or may not be already in the libosinfo database). > > > The ones I remember are all relate to CPU configuration: > > * Automatically enabling useful CPU features when they are > > available on all hosts; > > This is really hard todo in an automated fashion because it > relies on having an accessible global view of all hosts, that is > accurate. 
I can easily see a situation where you have 20 hosts, 5 > old CPUs, 15 new CPUs, and the old ones are coincidentally offline > for maintenance or software upgrade. Meanwhile you spawn a guest, > and check available host capabilities and never see the info from > older CPUs, so automatically enable a bunch of features that we > really did not want. It is more reliable if you just declare this > in the application config file, and have a mgmt tool that can > do distributed updates of the config file when needed. This surprises me a bit. I really expect any management layer to have an accurate and updated global view of all hosts, even if they are temporarily offline. And if new hosts are expected to be added in the future, the system should have a very clearly defined expectation[1] of what are the minimal capabilities required for new hosts. Otherwise we will be pushing complex decisions to human operators (which will probably end up making worse mistakes because they may not understand how everything works internally). > > > * Always enabling check='full' by default. > > > > Do we have other examples? > > I'm sure we can find plenty, but its a matter of someone doing the > work to investigate & pull together docs. Agreed that documenting this stuff is the most important step right now. > > > > We had a long debate over the best aio=threads,native setting for > > > OpenStack. Understanding the right defaults required knowledge about > > > the various different ways that Nova would setup its storage stack. > > > We certainly know enough now to be able to provide good recommendations > > > for the choice, with perf data to back it up, but interpreting those > > > recommendations still requires the app specific knowledge about its > > > storage mgmt approach, so ends up being code dev work. > > > > > > Another case is the pvpanic device - while in theory that could > > > have been enabled by default for all guests, by QEMU or a config > > > generator library, doing so is not useful on its own. The hard > > > bit of the work is adding code to the mgmt app to choose the > > > action for when pvpanic triggers, and code to handle the results > > > of that action. > > Regards, > Daniel > -- > |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| > |: https://libvirt.org -o- https://fstop138.berrange.com :| > |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :| -- Eduardo From sean.mcginnis at gmx.com Thu Mar 22 13:37:12 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 22 Mar 2018 08:37:12 -0500 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1521715425.17048.8.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> Message-ID: <20180322133712.GA19274@sm-xps> > > > > Unfortunately this will not work to just revert the changes. That may fix > > things locally, but they will not pass in gate by going back to the old way. > > > > Any cases of this will have to actually be updated to not use the unsupported > > pieces you point out. But the doc builds will still need to be done the way > > they are now, as that is what the PTI requires at this point. > > That's unfortunate. What we really need is a migration path from the > 'pbr' way of doing things to something else. I see three possible > avenues at this point in time: > > 1. Start using 'sphinx.ext.autosummary'. 
Apparently this can do similar > things to 'sphinx-apidoc' but it takes the form of an extension. > From my brief experiments, the output generated from this is > radically different and far less comprehensive than what 'sphinx- > apidoc' generates. However, it supports templating so we could > probably configure this somehow and add our own special directive > somewhere like 'openstackdocstheme' > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back > against upstream Sphinx [1]. This essentially does what the PBR > extension does but moves configuration into 'conf.py'. However, this > is currently held up as I can't adequately explain the differences > between this and 'sphinx.ext.autosummary' (there's definite overlap > but I don't understand 'autosummary' well enough to compare them). > 3. Modify the upstream jobs that detect the pbr integration and have > them run 'sphinx-apidoc' before 'sphinx-build'. This is the least > technically appealing approach as it still leaves us unable to build > stuff locally and adds yet more "magic" to the gate, but it does let > us progress. > > Try as I may, I don't really have the bandwidth to work on this for > another few weeks so I'd appreciate help from anyone with sufficient > Sphinx-fu to come up with a long-term solution to this issue. > > Cheers, > Stephen > I think we could probably go with 1 until and if 2 becomes an option. It does change output quite a bit. I played around with 3, but I think we will have enough differences between projects as to _where_ specifically this generated content needs to be placed that it will make that approach a little more brittle. All that said, I think once we decide on a path and get something out there, there does seem to be a group of folks that are more than willing to follow that established pattern and desseminate it out to all other projects. We just need to decide I guess. Your sphinx-fu is probably stronger than mine, so hopefully someone else with more experience can chime in here. ;) Sean From jon at csail.mit.edu Thu Mar 22 14:03:05 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Thu, 22 Mar 2018 10:03:05 -0400 Subject: [openstack-dev] [Openstack-operators] OpenStack "S" Release Naming Preliminary Results In-Reply-To: <20180322003238.GB14691@localhost.localdomain> References: <20180322003238.GB14691@localhost.localdomain> Message-ID: <20180322140305.GI21100@csail.mit.edu> On Wed, Mar 21, 2018 at 08:32:38PM -0400, Paul Belanger wrote: :6. Spandau loses to Solar by 195–88, loses to Springer by 125–118 Given this is at #6 and formal vetting is yet to come it's probably not much of an issue, but "Spandau's" first association for many will be Nazi war criminals via Spandau Prison https://en.wikipedia.org/wiki/Spandau_Prison So best avoided to say the least. -Jon From davanum at gmail.com Thu Mar 22 14:05:33 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Thu, 22 Mar 2018 10:05:33 -0400 Subject: [openstack-dev] [k8s] Hosting location for OpenStack Kubernetes Provider In-Reply-To: References: Message-ID: FYI, New repo is ready! https://github.com/kubernetes/cloud-provider-openstack We could use a lot of help. So please join us. -- Dims On Tue, Mar 13, 2018 at 1:54 PM, Chris Hoge wrote: > At the PTG in Dublin, SIG-K8s started working towards migrating the > external Kubernetes OpenStack cloud provider[1] work to be an OpenStack > project. 
Coincident with that, an upstream patch[2] was proposed by
> WG-Cloud-Provider to create upstream Kubernetes repositories for the
> various cloud providers.
>
> I want to begin a conversation about where we want this provider code to
> live and how we want to manage it. Three main options are to:
>
> 1) Host the provider code within the OpenStack ecosystem. The advantages
> are that we can follow OpenStack community development practices, and
> we have a good list of people signed up to help maintain it. We would
> also have easier access to infra test resources. The downside is we pull
> the code further away from the Kubernetes community, possibly making it
> more difficult for end users to find and use in a way that is consistent
> with other external providers.
>
> 2) Host the provider code within the Kubernetes ecosystem. The advantage
> is that the code will be in a well-defined and well-known place, and
> members of the Kubernetes community who want to participate will be able
> to continue to use the community practices. We would still be able to
> take advantage of infra resources, but it would require more setup to
> trigger and report on jobs.
>
> 3) Host in OpenStack, and mirror in a Kubernetes repository. We would
> need to work with the K8s team to make sure this is an acceptable option,
> but it would allow for a hybrid development model that could satisfy the
> needs of members of both communities. This would require a commitment
> from the K8s-SIG-OpenStack/OpenStack-SIG-K8s team to handle tickets
> and pull requests that come into the Kubernetes-hosted repository.
>
> My personal opinion is that we should take advantage of the Kubernetes
> hosting, and migrate the project to one of the repositories listed in
> the WG-Cloud-Provider patch. This wouldn't preclude moving it into
> OpenStack infra hosting at some point in the future and possibly
> adopting the hybrid approach down the line after more communication with
> K8s infrastructure leaders.
>
> There is a sense of urgency, as Dims has asked that we relieve him of
> the responsibility of hosting the external provider work in his personal
> GitHub repository.
>
> Please chime in with your opinions on this here so that we can work out
> where the appropriate hosting for this project should be.
>
> Thanks,
> Chris Hoge
> K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead
>
> [1] https://github.com/dims/openstack-cloud-controller-manager
> [2] https://github.com/kubernetes/community/pull/1862
> [3] https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Davanum Srinivas :: https://twitter.com/dims

From bodenvmw at gmail.com  Thu Mar 22 14:42:07 2018
From: bodenvmw at gmail.com (Boden Russell)
Date: Thu, 22 Mar 2018 08:42:07 -0600
Subject: [openstack-dev] [horizon][neutron] tools/tox_install changes -
	breakage with constraints
In-Reply-To: <20180315005859.GE25428@thor.bakeyournoodle.com>
References: <6de436b7-7d3c-71d9-d765-44ec94d7fe3d@suse.com>
 <029f600d-d141-acbe-00c8-b9bbf5ac2058@suse.com>
 <9d699d7f-25f0-6d57-b915-e5517d730d4e@suse.com>
 <20180315005859.GE25428@thor.bakeyournoodle.com>
Message-ID:

On 3/14/18 6:59 PM, Tony Breeds wrote:
> On Thu, Mar 15, 2018 at 07:16:11AM +0900, Akihiro Motoki wrote:
>> (1) it makes it difficult to run tests in a local environment
>> We have only the released version of neutron/horizon on PyPI. It means
>> the PyPI version (i.e. queens) is installed when we run tox in our local
>> development. Most neutron stadium projects and horizon plugins depend
>> on the latest master. Test runs in a local environment will be broken. We
>> need to install the latest neutron/horizon manually. This confuses
>> most developers. We need to ensure that tox can run successfully in the
>> same manner in our CI and local environments.
>
> This is an issue I agree and one we need to think about but it will be
> somewhat mitigated for local development by pbr siblings[1]
>
> In the short term, developers can do something like:
>
>     for env in pep8 py35 py27 ; do
>         tox -e $env --notest
>         .tox/$env/bin/pip install -e /path/to/{horizon,neutron}
>         tox -e $env
>     done
>
> Which is far from ideal but gives us a little breathing room to decide
> if we need to revert and try again in a while or persist with the plan
> as it stands.
>
> pbr siblings won't fix all the issues we have and still makes consumption
> of neutron and horizon (and plugins / stadium projects) difficult outside
> of test.

Unless I'm missing something, devstack is also impacted in these
scenarios and doesn't account for installing master branches of select
dependencies. As a result we are seeing failures in our external CI
jobs (that use devstack) due to invalid package versions.

Is the proper way to address this to specify the _REPO and _BRANCH in
our project's devstack lib script(s) as needed so that devstack will
grab master for them?

Thanks

From gkotton at vmware.com  Thu Mar 22 14:50:43 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Thu, 22 Mar 2018 14:50:43 +0000
Subject: [openstack-dev] [requirements][l2gw] Latest requirements
Message-ID:

Hi,
We have a problem with the l2gw tags. The requirements file indicates
that we should be pulling in >=12.0.0 and in fact it's pulling
networking_l2gw-2016.1.0.
Please see
http://207.189.188.190/logs/neutron/554292/4/tempest-api-vmware-tvd-t/logs/stack.sh.log.txt.gz#_2018-03-22_08_56_03_905
Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From mkletzan at redhat.com Thu Mar 22 14:54:01 2018 From: mkletzan at redhat.com (Martin Kletzander) Date: Thu, 22 Mar 2018 15:54:01 +0100 Subject: [openstack-dev] [virt-tools-list] Project for profiles and defaults for libvirt domains In-Reply-To: <20180320151012.GU4530@redhat.com> References: <20180320142031.GB23007@wheatley> <20180320151012.GU4530@redhat.com> Message-ID: <20180322145401.GD19999@wheatley> [ I fixed up ovirt-devel at redhat.com to be devel at ovirt.org since the former is deprecated. I'm also not trimming down much of the reply so that they can get the whole picture. Sorry for the confusion ] On Tue, Mar 20, 2018 at 03:10:12PM +0000, Daniel P. Berrangé wrote: >On Tue, Mar 20, 2018 at 03:20:31PM +0100, Martin Kletzander wrote: >> 1) Default devices/values >> >> Libvirt itself must default to whatever values there were before any >> particular element was introduced due to the fact that it strives to >> keep the guest ABI stable. That means, for example, that it can't just >> add -vmcoreinfo option (for KASLR support) or magically add the pvpanic >> device to all QEMU machines, even though it would be useful, as that >> would change the guest ABI. >> >> For default values this is even more obvious. Let's say someone figures >> out some "pretty good" default values for various HyperV enlightenment >> feature tunables. Libvirt can't magically change them, but each one of >> the projects building on top of it doesn't want to keep that list >> updated and take care of setting them in every new XML. Some projects >> don't even expose those to the end user as a knob, while others might. > >This gets very tricky, very fast. > >Lets say that you have an initial good set of hyperv config >tunables. Now sometime passes and it is decided that there is a >different, better set of config tunables. If the module that is >providing this policy to apps like OpenStack just updates itself >to provide this new policy, this can cause problems with the >existing deployed applications in a number of ways. > >First the new config probably depends on specific versions of >libvirt and QEMU, and you can't mandate to consuming apps which >versions they must be using. So you need a matrix of libvirt + >QEMU + config option settings. > >Even if you have the matching libvirt & QEMU versions, it is not >safe to assume the application will want to use the new policy. >An application may need live migration compatibility with older >versions. Or it may need to retain guaranteed ABI compatibility >with the way the VM was previously launched and be using transient >guests, generating the XML fresh each time. > >The application will have knowledge about when it wants to use new >vs old hyperv tunable policy, but exposing that to your policy module >is very tricky because it is inherantly application specific logic >largely determined by the way the application code is written. > The idea was for updating XML based on policy, which is something you want for new machines. You should then keep the XML per domain and only do changes to if requested by the user or when libvirt fills in new values in a guest ABI compatible fashion. > >> One more thing could be automatically figuring out best values based on >> libosinfo-provided data. >> >> 2) Policies >> >> Lot of the time there are parts of the domain definition that need to be >> added, but nobody really cares about them. 
Sometimes it's enough to >> have few templates, another time you might want to have a policy >> per-scenario and want to combine them in various ways. For example with >> the data provided by point 1). >> >> For example if you want PCI-Express, you need the q35 machine type, but >> you don't really want to care about the machine type. Or you want to >> use SPICE, but you don't want to care about adding QXL. >> >> What if some of these policies could be specified once (using some DSL >> for example), and used by virtuned to merge them in a unified and >> predictable way? >> >> 3) Abstracting the XML >> >> This is probably just usable for stateless apps, but it might happen >> that some apps don't really want to care about the XML at all. They >> just want an abstract view of the domain, possibly add/remove a device >> and that's it. We could do that as well. I can't really tell how much >> of a demand there is for it, though. > >It is safe to say that applications do not want to touch XML at all. >Any non-trivial application has created an abstraction around XML, >so that they have an API to express what they want, rather than >manipulating of strings to format/parse XML. > Sure, this was just meant to be a question as to whether it's worth pursuing or not. You make a good point on why it is not (at least for existing apps). However, since this was optional, the way this would look without the XML abstraction is that both input and output would be valid domain definitions, ultimately resulting in something similar to virt-xml with the added benefit of applying a policy from a file/string either supplied by the application itself. Whether that policy was taken from a common repository of such knowledge is orthogonal to this idea. Since you would work with the same data, the upgrade could be incremental as you'd only let virtuned fill in values for new options and could slowly move on to using it for some pre-existing ones. None of the previous approaches did this, if I'm not mistaken. Of course it gets more difficult when you need to expose all the bits libvirt does and keep them in sync (as you write below). [...] >If there was something higher level that gets more interesting, >but the hard bit is that you still need a way to get at all the >low level bits becuase a higher level abstracted API will never >cover every niche use case. > Oh, definitely not every, but I see two groups of projects that have a lot in common between themselves and between the groups as well. On the other hand just templating and defaults is something that's easy enough to do that it's not worth outsourcing that into another one's codebase. >> 4) Identifying devices properly >> >> In contrast to the previous point, stateful apps might have a problem >> identifying devices after hotplug. For example, let's say you don't >> care about the addresses and leave that up to libvirt. You hotplug a >> device into the domain and dump the new XML of it. Depending on what >> type of device it was, you might need to identify it based on different >> values. It could be for disks, for >> interfaces etc. For some devices it might not even be possible and you >> need to remember the addresses of all the previous devices and then >> parse them just to identify that one device and then throw them away. >> >> With new enough libvirt you could use the user aliases for that, but >> turns out it's not that easy to use them properly anyway. Also the >> aliases won't help users identify that device inside the guest. 
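To make the alias-based lookup concrete: with new enough libvirt it can
be as small as this (an untested sketch with libvirt-python; the 'ua-'
prefix is mandatory for user-defined aliases, and the domain and file
names here are made up):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('demo')  # assumed domain name

    # Hotplug a disk and tag it with a user alias up front.
    disk_xml = """
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/data.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <alias name='ua-data-disk'/>
    </disk>"""
    dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    # Later, find the device again purely by its user alias.
    tree = ET.fromstring(dom.XMLDesc(0))
    for disk in tree.findall('./devices/disk'):
        alias = disk.find('alias')
        if alias is not None and alias.get('name') == 'ua-data-disk':
            print(disk.find('target').get('dev'))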
> >NB, relating between host device config and guest visible device >config is a massive problem space in its own right, and not very >easy to address. In OpenStack we ended up defining a concept of >"device tagging" via cloud-init metadata, where openstack allows >users to set opaque string tags against devices their VM has. >OpenStack then generates a metadata file that records various >pieces of identifying hardware attributes (PCI address, MAC >addr, disk serial, etc) alongside the user tag. This metadata >file is exposed to the guest with the hope that there's enough >info to allow the user to decide which device is to be used for >which purpose > This is a good point, but I was mostly thinking about identifying devices from the host POV between two different XMLs (pre- and post- some XML-modifying action, like hotplug). >https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/virt-device-role-tagging.html >https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/networking_guide/use-tagging > >> >> We really should've gone with a new attribute for the user alias instead >> of using an existing one, given how many problems that is causing. >> >> >> 5) Generating the right XML snippet for device hot-(un)plug >> >> This is kind of related to some previous points. >> >> When hot-plugging a device and creating an XML snippet for it, you want >> to keep the defaults from point 1) and policies from 2) in mind. Or >> something related to the already existing domain which you can describe >> systematically. And adding something for identification (see previous >> point). >> >> Doing the hot-unplug is easy depending on how much information about >> that device is saved by your application. The less you save about the >> device (or show to the user in a GUI, if applicable) the harder it might >> be to generate an XML that libvirt will accept. Again, some problems >> with this should be fixed in libvirt, some of them are easy to >> work around. But having a common ground that takes care of this should >> help some projects. >> >> Hot-unplug could be implemented just based on the alias. This is >> something that would fit into libvirt as well. >> >> ======================================================================== >> >> To mention some pre-existing solutions: >> >> - I understand OpenStack has some really sensible and wisely chosen >> and/or tested default values. > >In terms of default devices and OS specific choices, OpenStack's >decisions have been largely inspired by previous work in oVirt >and / or virt-manager. So there's obviously overlap in the >conceptual area, but there's also plenty that is very specific >to OpenStack - untangling the two to extract the common bits from >the app specific bits is hard. > It definitely is, but do you think it's so difficult that it's not worth pursuing? I did a tiny PoC based on the code from virt-manager, which was trivial mainly thanks to the XMLBuilder for the domain objects. Maybe exposing an easy way to work with the XML would be enough for some projects. A little birdie from oVirt told me that they would like some sort of thing that does what you can achieve with virt-xml if we, for example, made it work on pure XML definitions without connecting to libvirt. >> - I know KubeVirt has VirtualMachinePresets. That is something closely >> related to points 1) and 2). Also their abstraction of the XML might >> be usable for point 3).
>> >> - There was an effort on creating policy based configuration of libvirt >> objects called libvirt-designer. This is closely related to points 2) >> and 3). Unfortunately there was not much going on lately and part of >> the virt-manager repository currently has more features implemented with >> the same ideas in mind, just not exported for public use. > >This is the same kind of problem we faced wrt libvirt-gconfig and >libvirt-gobject usage from virt-manager - it has an extensive code >base that already works, and rewriting it to use something new >is a lot of work for no short-term benefit. libvirt-gconfig/gobject >were supposed to be the "easy" bits for virt-manager to adopt, as >they don't really include much logic that would step on virt-manager's >toes. libvirt-designer was going to be a very opinionated library >and in retrospect that makes it even harder to consider adopting >it for usage in virt-manager, as it'll have a significant likelihood >of making functionally significant changes in behaviour. > The initial idea (which I forgot to mention) was that all the decisions libvirt currently does (so that it keeps the guest ABI stable) would be moved into data (let's say some DSL) and it could then be switched or adjusted if that's not what the mgmt app wants (on a per-definition basis, of course). I didn't feel very optimistic about the upstream acceptance for that idea, so I figured that there could be something that lives beside libvirt, helps with some policies if requested and then the resulting XML could be fed into libvirt for determining the rest. >There's also the problem with use of native libraries that would >impact many apps. We only got OpenStack to grudgingly allow the By native do you mean actual binary libraries, or native to the OpenStack code as in a python module? Because what I had in mind for this project was a python module with an optional wrapper for a REST API. >use of libosinfo native library via GObject Introspection, by >promising to do work to turn the osinfo database into an approved >stable format which OpenStack could then consume directly, dropping >the native API usage :-( Incidentally, the former was done (formal >spec for the DB format), but the latter was not yet (direct DB usage >by OpenStack) > > >BTW, I don't like that I'm being so negative to your proposal :-( >I used to hope that we would be able to build higher level APIs on >top of libvirt to reduce the overlap between different applications >reinventing the wheel. Even the simplest bits we tried like the >gconfig/gobject API are barely used. libvirt-designer is basically >a failure. Though admittedly it didn't have enough development resource >applied to make it compelling, in retrospect adoption was always going >to be a hard sell except in greenfield developments. > I'm glad for the knowledge you provided. So maybe instead of focusing on de-duplication of existing codebases we could _at least_ aim at future mgmt apps. OTOH improving documentation on how to properly build higher level concepts on top of libvirt would benefit them as well. >Libosinfo is probably the bit we've had most success with, and has >most promise for the future, particularly now that we formally allow >apps to read the osinfo database directly and bypass the API. It is >quite easy to fit into existing application codebases which helps a lot.
>Even there I'm still disappointed that we only have GNOME Boxes using >the kickstart generator part of osinfo - oVirt and Oz both still have >their own kickstart generator code for automating OS installs. > >In general though, I fear anything API based is going to be a really >hard sell to get wide adoption for based on what we've seen before. > >I think the biggest bang-for-buck is identifying more areas where we >can turn code into data. There's definitely scope for recording more >types of information in the osinfo database. There might also be >scope for defining entirely new databases to complement the osinfo >data, if something looks out of scope for libosinfo. > >Regards, >Daniel >-- >|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| >|: https://libvirt.org -o- https://fstop138.berrange.com :| >|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :| -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Digital signature URL: From matt at nycresistor.com Thu Mar 22 15:37:34 2018 From: matt at nycresistor.com (Matt Joyce) Date: Thu, 22 Mar 2018 11:37:34 -0400 Subject: [openstack-dev] [Openstack-operators] OpenStack "S" Release Naming Preliminary Results In-Reply-To: <20180322140305.GI21100@csail.mit.edu> References: <20180322003238.GB14691@localhost.localdomain> <20180322140305.GI21100@csail.mit.edu> Message-ID: +1 On Thu, Mar 22, 2018 at 10:03 AM, Jonathan Proulx wrote: > On Wed, Mar 21, 2018 at 08:32:38PM -0400, Paul Belanger wrote: > > :6. Spandau loses to Solar by 195–88, loses to Springer by 125–118 > > Given this is at #6 and formal vetting is yet to come it's probably > not much of an issue, but "Spandau's" first association for many will > be Nazi war criminals via Spandau Prison > https://en.wikipedia.org/wiki/Spandau_Prison > > So best avoided to say the least. > > -Jon > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Mar 22 15:38:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 22 Mar 2018 10:38:38 -0500 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> Message-ID: <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> On 3/20/2018 6:44 PM, melanie witt wrote: > We were thinking of starting the runways process after the spec review > freeze (which is April 19) so that reviewers won't be split between spec > reviews and reviews of work in runways. I'm going to try and reign in the other thread [1] and bring it back here. The request to discuss this in the nova meeting was discussed, log starts here [2]. For me personally, I'm OK with starting runways now, before the spec freeze, if we also agree to move the spec freeze out past milestone 1, so either milestone 2 or feature freeze (~milestone 3). This is because we have far fewer people that actually review specs, and I personally don't want to have the expectation/pressure to be reviewing, at a high level anyway, both specs before the deadline along with runways at the same time. 
Moving out the spec freeze also means that assuming runways works and we're good about flushing things through the queue, we can seed more blueprints based on specs we approve later. Given we already have 33 approved but not yet completed blueprints, we have plenty of content to keep us busy with runways for the next couple of months. If we start runways now and don't move out the spec freeze, I'm OK with that but I personally will probably be devoting more of my time to reviewing specs than stuff in the runways. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128603.html [2] http://eavesdrop.openstack.org/meetings/nova/2018/nova.2018-03-22-14.00.log.html#l-205 -- Thanks, Matt From sean.mcginnis at gmx.com Thu Mar 22 15:39:22 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 22 Mar 2018 10:39:22 -0500 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <20180322133712.GA19274@sm-xps> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> Message-ID: <20180322153921.GA29460@sm-xps> > > > > That's unfortunate. What we really need is a migration path from the > > 'pbr' way of doing things to something else. I see three possible > > avenues at this point in time: > > > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar > > things to 'sphinx-apidoc' but it takes the form of an extension. > > From my brief experiments, the output generated from this is > > radically different and far less comprehensive than what 'sphinx- > > apidoc' generates. However, it supports templating so we could > > probably configure this somehow and add our own special directive > > somewhere like 'openstackdocstheme' > > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back > > against upstream Sphinx [1]. This essentially does what the PBR > > extension does but moves configuration into 'conf.py'. However, this > > is currently held up as I can't adequately explain the differences > > between this and 'sphinx.ext.autosummary' (there's definite overlap > > but I don't understand 'autosummary' well enough to compare them). > > 3. Modify the upstream jobs that detect the pbr integration and have > > them run 'sphinx-apidoc' before 'sphinx-build'. This is the least > > technically appealing approach as it still leaves us unable to build > > stuff locally and adds yet more "magic" to the gate, but it does let > > us progress. > > > > Try as I may, I don't really have the bandwidth to work on this for > > another few weeks so I'd appreciate help from anyone with sufficient > > Sphinx-fu to come up with a long-term solution to this issue. > > > > Cheers, > > Stephen > > > > I think we could probably go with 1 until and if 2 becomes an option. It does > change output quite a bit. > > I played around with 3, but I think we will have enough differences between > projects as to _where_ specifically this generated content needs to be placed > that it will make that approach a little more brittle. > One other thing that comes to mind - I think most service projects, if they are even using this, could probably just drop it. I've found the generated "API" documentation for service modules to be of very limited use. That would at least narrow things down to lib projects. So this would still be an issue for the oslo libs for sure. In that case, you do want that module API documentation in most cases.
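For what it's worth, one way a lib project could keep that generated content while staying on plain 'sphinx-build' (so local builds keep working) is to hook 'sphinx-apidoc' from conf.py. A hedged sketch only; 'mylib' and the paths here are placeholders, not a tested recipe:

    # in doc/source/conf.py
    import os
    import subprocess

    def run_apidoc(app):
        # Regenerate the module docs on every build so that a bare
        # 'sphinx-build' is enough, both locally and in the gate.
        here = os.path.dirname(os.path.abspath(__file__))
        subprocess.check_call(
            ['sphinx-apidoc', '-o', os.path.join(here, 'api'),
             os.path.join(here, '..', '..', 'mylib')])

    def setup(app):
        app.connect('builder-inited', run_apidoc)

That is basically option 3 done inside the project instead of in the job definition, with the same caveat about every project needing to decide where the output lands.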
But personally, I would encourage service projects to get around this issue by just not doing it. It would appear that would take care of a large chunk of the current usage: http://codesearch.openstack.org/?q=autodoc_index_modules&i=nope&files=setup.cfg&repos= From john at johngarbutt.com Thu Mar 22 15:40:20 2018 From: john at johngarbutt.com (John Garbutt) Date: Thu, 22 Mar 2018 15:40:20 +0000 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> Message-ID: Hi So I am really excited for us to try runways out, ASAP. On 20 March 2018 at 23:44, melanie witt wrote: > We were thinking of starting the runways process after the spec review > freeze (which is April 19) so that reviewers won't be split between spec > reviews and reviews of work in runways. > I think spec reviews, blueprint reviews, and code review topics could all get a runway slot. What if we had these queues: Backlog Queue, Blueprint Runway, Approved Queue, Code Runway Currently all approved blueprints would sit in the Approved queue. As described, you leave the runway and go back in the queue if progress stalls. Basically abandon the spec freeze. Control with runways instead. The process and instructions are explained in detail on this etherpad, > which will also serve as the place we queue and track blueprints for > runways: > > https://etherpad.openstack.org/p/nova-runways-rocky I like its simplicity. If progress stalls you are booted out of a run way slot back to the queue. Having said all that, I am basically happy with anything that gets us trying this out ASAP. Thanks, johnthetubaguy -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Thu Mar 22 15:48:18 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 22 Mar 2018 10:48:18 -0500 Subject: [openstack-dev] [infra][releases][requirements][l2gw] Latest requirements In-Reply-To: References: Message-ID: <20180322154818.kj5xzpe3z4sbqycu@gentoo.org> On 18-03-22 14:50:43, Gary Kotton wrote: > Hi, > We have a problem with the l2gw tags. The requirements file indicates that we should be pulling in >=12.0.0 and in fact its pulling networking_l2gw-2016.1.0 > Please see http://207.189.188.190/logs/neutron/554292/4/tempest-api-vmware-tvd-t/logs/stack.sh.log.txt.gz#_2018-03-22_08_56_03_905 > Thanks > Gary It sounds like those versions need to be unpublished. This will likely involve both infra and the releases team (not requirements). I've tagged them to get their attention (and mentioned this in the releases irc channel). -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu Mar 22 15:53:37 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 22 Mar 2018 10:53:37 -0500 Subject: [openstack-dev] [infra][releases][requirements][l2gw] Latest requirements In-Reply-To: <20180322154818.kj5xzpe3z4sbqycu@gentoo.org> References: <20180322154818.kj5xzpe3z4sbqycu@gentoo.org> Message-ID: <20180322155337.GA30365@sm-xps> On Thu, Mar 22, 2018 at 10:48:18AM -0500, Matthew Thode wrote: > On 18-03-22 14:50:43, Gary Kotton wrote: > > Hi, > > We have a problem with the l2gw tags. 
The requirements file indicates that we should be pulling in >=12.0.0 and in fact its pulling networking_l2gw-2016.1.0 > > Please see http://207.189.188.190/logs/neutron/554292/4/tempest-api-vmware-tvd-t/logs/stack.sh.log.txt.gz#_2018-03-22_08_56_03_905 > > Thanks > > Gary > > It sounds like those versions need to be unpublished. This will likely > involve both infra and the releases team (not requirements). I've > tagged them to get their attention (and mentioned this in the releases > irc channel). > > -- > Matthew Thode (prometheanfire) I know we've had to take down some of these older ones in the past. I think this is a clear case where we should just unpublish it from pypi. This is quite old now, so I highly doubt anyone out there still needs access to this. Especially via pypi. But I think if we wait maybe a day or so to see if anyone brings up a reason to not take it down and don't hear anything, then we should go ahead with its removal. Although, if this is a blocker for the project for now, I would be fine with just taking it down now and working out any issues directly with anyone that does need access to it for some reason. Sean From miguel at mlavalle.com Thu Mar 22 16:25:54 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 22 Mar 2018 11:25:54 -0500 Subject: [openstack-dev] Adding Brian Haley to the Neutron Drivers team Message-ID: Hi Neutrinos, As we all know, the Neutron Drivers team plays a crucial role in helping the community to evolve the OpenStack Networking architecture to meet the needs of our current and future users [1]. To strengthen this team, I have decided to add Brian Haley to it. Brian has two decades of experience in open source networking technology. He joined the OpenStack community contributing code to nova-network in the Diablo release. His first Neutron commit (known as Quantum at the time) dates back to March of 2013 [2]. Since then, his many contributions include the implementation and evolution of our L3 and DVR code. He is one of our most active core reviewers and a prolific contributor of code to our reference implementation. I am very confident that Brian will be a continuous source of knowledge, experience and valuable insight to the Drivers team. Best regards Miguel [1] https://docs.openstack.org/neutron/pike/contributor/policies/neutron-teams.html#drivers-team [2] https://review.openstack.org/#/c/25564/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Mar 22 16:28:39 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 22 Mar 2018 16:28:39 +0000 Subject: [openstack-dev] [sdk] git repo rename and storyboard migration In-Reply-To: References: <2104b95d-fb8b-7486-6c1c-8330296fd23b@inaugust.com> Message-ID: I can run test migrations today for the rest of the OSC launchpad projects just to make sure it all goes smoothly and report back. -Kendall (diablo_rojo) On Thu, 22 Mar 2018, 5:54 am Dean Troyer, wrote: > On Thu, Mar 22, 2018 at 7:42 AM, Akihiro Motoki wrote: > > 2018-03-22 21:29 GMT+09:00 Monty Taylor : > >> I could see waiting until we move python-openstackclient. However, we've > >> got the issue already with shade bugs being in storyboard already and > sdk > >> bugs being in launchpad. With shade moving to having its implementation > be > >> in openstacksdk, over this cycle I expect the number of bugs people > report > >> against shade wind up actually being against openstacksdk to increase > quite > >> a bit. 
> >> > >> Maybe we should see if the python-openstackclient team wants to migrate > >> too? > > > > Although I have limited experience on storyboard, I think it is ready for > > our bug tracking. > > As Jens mentioned, not a small number of bugs are referred to from both > OSC > > and SDK. > > One good news on OSC launchpad bug is that we do not use tag > aggressively. > > If Dean is okay, I believe we can migrate to storyboard. > > I am all in favor of migrating OSC to use to Storyboard, however I am > totally unable to give it any time in the near future. If Akhiro or > anyone else wants to take on that task, you will have my support and > as much help as I am able to give. > > dt > > -- > > Dean Troyer > dtroyer at gmail.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence at ljalbinson.com Thu Mar 22 16:52:19 2018 From: lawrence at ljalbinson.com (Lawrence J. Albinson) Date: Thu, 22 Mar 2018 16:52:19 +0000 Subject: [openstack-dev] [os-tempest][libvirt] Message-ID: <4FB5ED42-0B89-466F-B1EB-7A8D4CA0BA51@ljalbinson.com> Dear colleagues, I have recently moved my environment over to openstack-ansible 16.0.8 and when I run tempest testing as the last step of the deployment it is failing with the following errors logged in daemon.log on the compute server. Before I use up more time looking into the details of libvirt, does anyone have a quick answer to why this might be happening? (Please note that the self same environment was working with Ocata.) 
Mar 22 16:41:07 compute1 libvirtd[2954]: 2018-03-22 16:41:07.897+0000: 3094: error : virProcessKillPainfully:401 : Failed to terminate process 3798 with SIGKILL: Device or resource busy Mar 22 16:41:22 compute1 libvirtd[2954]: 2018-03-22 16:41:22.930+0000: 3095: error : virProcessKillPainfully:401 : Failed to terminate process 3798 with SIGKILL: Device or resource busy Mar 22 16:41:37 compute1 libvirtd[2954]: 2018-03-22 16:41:37.964+0000: 3092: error : virProcessKillPainfully:401 : Failed to terminate process 3798 with SIGKILL: Device or resource busy Mar 22 16:41:37 compute1 libvirtd[2954]: 2018-03-22 16:41:37.980+0000: 3101: warning : qemuGetProcessInfo:1436 : cannot parse process status data Mar 22 16:41:37 compute1 libvirtd[2954]: 2018-03-22 16:41:37.980+0000: 3101: error : virProcessGetAffinity:506 : cannot get CPU affinity of process 3828: No such process Mar 22 16:42:37 compute1 libvirtd[2954]: 2018-03-22 16:42:37.528+0000: 3093: warning : qemuGetProcessInfo:1436 : cannot parse process status data Mar 22 16:42:37 compute1 libvirtd[2954]: 2018-03-22 16:42:37.528+0000: 3093: error : virProcessGetAffinity:506 : cannot get CPU affinity of process 3828: No such process Mar 22 16:43:38 compute1 libvirtd[2954]: 2018-03-22 16:43:38.744+0000: 3094: warning : qemuGetProcessInfo:1436 : cannot parse process status data Mar 22 16:43:38 compute1 libvirtd[2954]: 2018-03-22 16:43:38.744+0000: 3094: error : virProcessGetAffinity:506 : cannot get CPU affinity of process 3828: No such process Mar 22 16:44:38 compute1 libvirtd[2954]: 2018-03-22 16:44:38.281+0000: 3092: warning : qemuGetProcessInfo:1436 : cannot parse process status data Mar 22 16:44:38 compute1 libvirtd[2954]: 2018-03-22 16:44:38.281+0000: 3092: error : virProcessGetAffinity:506 : cannot get CPU affinity of process 3828: No such process Mar 22 16:45:38 compute1 libvirtd[2954]: 2018-03-22 16:45:38.243+0000: 3100: warning : qemuGetProcessInfo:1436 : cannot parse process status data Mar 22 16:45:38 compute1 libvirtd[2954]: 2018-03-22 16:45:38.243+0000: 3100: error : virProcessGetAffinity:506 : cannot get CPU affinity of process 3828: No such process Mar 22 16:46:39 compute1 libvirtd[2954]: 2018-03-22 16:46:39.221+0000: 3095: warning : qemuGetProcessInfo:1436 : cannot parse process status data Mar 22 16:46:39 compute1 libvirtd[2954]: 2018-03-22 16:46:39.221+0000: 3095: error : virProcessGetAffinity:506 : cannot get CPU affinity of process 3828: No such process Mar 22 16:46:57 compute1 libvirtd[2954]: 2018-03-22 16:46:57.604+0000: 3096: error : virProcessKillPainfully:401 : Failed to terminate process 3798 with SIGKILL: Device or resource busy Mar 22 16:47:12 compute1 libvirtd[2954]: 2018-03-22 16:47:12.648+0000: 3093: error : virProcessKillPainfully:401 : Failed to terminate process 3798 with SIGKILL: Device or resource busy Kind regards, Lawrence From melwittt at gmail.com Thu Mar 22 17:10:03 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 22 Mar 2018 10:10:03 -0700 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> Message-ID: On Thu, 22 Mar 2018 10:38:38 -0500, Matt Riedemann wrote: > On 3/20/2018 6:44 PM, melanie witt wrote: >> We were thinking of starting the runways process after the spec review >> freeze (which is April 19) so that reviewers won't be split between spec >> reviews and reviews of work in runways. 
> > I'm going to try and reign in the other thread [1] and bring it back here. > > The request to discuss this in the nova meeting was discussed, log > starts here [2]. > > For me personally, I'm OK with starting runways now, before the spec > freeze, if we also agree to move the spec freeze out past milestone 1, > so either milestone 2 or feature freeze (~milestone 3). > > This is because we have far fewer people that actually review specs, and > I personally don't want to have the expectation/pressure to be > reviewing, at a high level anyway, both specs before the deadline along > with runways at the same time. > > Moving out the spec freeze also means that assuming runways works and > we're good about flushing things through the queue, we can seed more > blueprints based on specs we approve later. > > Given we already have 33 approved but not yet completed blueprints, we > have plenty of content to keep us busy with runways for the next couple > of months. > > If we start runways now and don't move out the spec freeze, I'm OK with > that but I personally will probably be devoting more of my time to > reviewing specs than stuff in the runways. I kind of like the idea of moving spec freeze out to near milestone 3, that is, for the most part not have a freeze, and start runways now. My thinking is, as was mentioned in the other thread, we've already approved ~75% of the number of blueprints we approved last cycle and last cycle we had a 79% completion percentage of approved blueprints. I'd be inclined to mostly stop approving new things now until we've completed some via runways. I think the only concern around moving spec freeze out would be that I thought the original purpose of the spec freeze was to set expectations early about what was approved and not approved instead of having folks potentially in the situation where it's technically "maybe" for a large chunk of the cycle. I'm not sure which most people prefer -- would you rather know early and definitively whether your blueprint is approved/not approved or would you rather have the opportunity to get approval during a larger window in the cycle and not know definitively early on? Can anyone else chime in here? -melanie From msm at redhat.com Thu Mar 22 17:14:37 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 22 Mar 2018 13:14:37 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Another jovial meeting of the API-SIG was convened today. We began with a few housekeeping notes and then moved into a discussion of the api-ref work and how we might continue to assist Graham Hayes with the os-api-ref changes[7] that will output a machine-readable format for the API schemas. The group generally agrees that the SIG should continue to assist in moving these changes forward, there are concerns about the inclusion of microversions and how best to capture these, but for the time being the SIG will engage with the review and the parties interested in seeing this work through to completion. We then talked about the new reviews which have been added recently to the queue. The SIG has decided to freeze the proposed guidance[8] on exposing microversions in SDKs created by Dmitry Tantsur. There is also a review[9] from Ed Leafe to split up the current HTTP guidance document into separate documents to help improve the readability and presence of this material, this should be ready for freeze soon. 
Lastly, we discussed a new bug[10] concerning how services should be referenced in error messages. The current guidance specifies that a service name should be used and the proposed bug by Chris Dent is that the service type should be used instead. Chris has also created a review[11] to address this but the SIG would like more opinions on this change and how it might impact consuming the error messages. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None this week. # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814 # Guidelines Currently Under Review [3] * Break up the HTTP guideline into smaller documents https://review.openstack.org/#/c/554234/ * Add guidance on needing cache-control headers https://review.openstack.org/550468 * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://review.openstack.org/#/c/528801/ [8] https://review.openstack.org/#/c/532814/ [9] https://review.openstack.org/#/c/554234/ [10] https://bugs.launchpad.net/openstack-api-wg/+bug/1756464 [11] https://review.openstack.org/#/c/554921/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gkotton at vmware.com Thu Mar 22 17:16:30 2018 From: gkotton at vmware.com (Gary Kotton) Date: Thu, 22 Mar 2018 17:16:30 +0000 Subject: [openstack-dev] Adding Brian Haley to the Neutron Drivers team In-Reply-To: References: Message-ID: +1 From: Miguel Lavalle Reply-To: OpenStack List Date: Thursday, March 22, 2018 at 6:26 PM To: OpenStack List Subject: [openstack-dev] Adding Brian Haley to the Neutron Drivers team Hi Neutrinos, As we all know, the Neutron Drivers team plays a crucial role in helping the community to evolve the OpenStack Networking architecture to meet the needs of our current and future users [1]. To strengthen this team, I have decided to add Brian Haley to it. Brian has two decades of experience in open source networking technology. He joined the OpenStack community contributing code to nova-network in the Diablo release. His first Neutron commit (known as Quantum at the time) dates back to March of 2013 [2]. Since then, his many contributions include the implementation and evolution of our L3 and DVR code. He is one of our most active core reviewers and a prolific contributor of code to our reference implementation. I am very confident that Brian will be a continuous source of knowledge, experience and valuable insight to the Drivers team. Best regards Miguel [1] https://docs.openstack.org/neutron/pike/contributor/policies/neutron-teams.html#drivers-team [2] https://review.openstack.org/#/c/25564/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From slawek at kaplonski.pl Thu Mar 22 17:37:14 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdlayBLYXDFgm/FhHNraQ==?=) Date: Thu, 22 Mar 2018 18:37:14 +0100 Subject: [openstack-dev] Adding Brian Haley to the Neutron Drivers team In-Reply-To: References: Message-ID: <88D14C51-46E3-4923-9243-FDE9F3DD3632@kaplonski.pl> I’m not part of drivers team but big +1 for Brian from me :) — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Miguel Lavalle w dniu 22.03.2018, o godz. 17:25: > > Hi Neutrinos, > > As we all know, the Neutron Drivers team plays a crucial role in helping the community to evolve the OpenStack Networking architecture to meet the needs of our current and future users [1]. To strengthen this team, I have decided to add Brian Haley to it. Brian has two decades of experience in open source networking technology. He joined the OpenStack community contributing code to nova-network in the Diablo release. His first Neutron commit (known as Quantum at the time) dates back to March of 2013 [2]. Since then, his many contributions include the implementation and evolution of our L3 and DVR code. He is one of our most active core reviewers and a prolific contributor of code to our reference implementation. I am very confident that Brian will be a continuous source of knowledge, experience and valuable insight to the Drivers team. > > Best regards > > Miguel > > > [1] https://docs.openstack.org/neutron/pike/contributor/policies/neutron-teams.html#drivers-team > [2] https://review.openstack.org/#/c/25564/ > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From melwittt at gmail.com Thu Mar 22 17:38:26 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 22 Mar 2018 10:38:26 -0700 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> Message-ID: <08f54869-4ca6-fe3a-ceb9-3f5feda81bda@gmail.com> On Thu, 22 Mar 2018 15:40:20 +0000, John Garbutt wrote: > On 20 March 2018 at 23:44, melanie witt > wrote: > > We were thinking of starting the runways process after the spec > review freeze (which is April 19) so that reviewers won't be split > between spec reviews and reviews of work in runways. > > > I think spec reviews, blueprint reviews, and code review topics could > all get a runway slot. > > What if we had these queues: > Backlog Queue, Blueprint Runway, Approved Queue, Code Runway > > Currently all approved blueprints would sit in the Approved queue. > As described, you leave the runway and go back in the queue if progress > stalls. That might be a bit more complex than what I was thinking for our first go at trying this process. Is "Backlog Queue" the list of unapproved blueprints (specs) waiting for dedicated runway review and "Blueprint Runway" are the specs guaranteed to receive review during the two week time-box? I hadn't been thinking about putting spec reviews into runways as well. I had been focusing on only the review of implementations of approved specs. > Basically abandon the spec freeze. Control with runways instead. I'm open to this idea, just had a concern as described in my last reply to this thread, on whether getting rid of spec freeze may remove early expectation setting for people (if they prefer that). I'm okay with going with the majority vote about it. -melanie From sean.mcginnis at gmx.com Thu Mar 22 17:39:02 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 22 Mar 2018 12:39:02 -0500 Subject: [openstack-dev] [release] Release countdown for week R-22, March 26-30 Message-ID: <20180322173902.GA5279@sm-xps> Welcome back to our regular release countdown email. Development Focus ----------------- Teams should be focusing on planning what can be done for Rocky. General Information ------------------- All teams should review their release liaison information and make sure it is up to date [1]. [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons While reviewing liaisons, this would also be a good time to make sure your declared release model matches the project's plans for Rocky (e.g. [2]). This should be done prior to the first milestone and can be done by proposing a change to the Rocky deliverable file for the project(s) affected [3]. [2] https://github.com/openstack/releases/blob/e0a63f7e896abdf4d66fb3ebeaacf4e17f688c38/deliverables/queens/glance.yaml#L5 [3] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/rocky Teams should be collecting Forum topic ideas. Most project-specific etherpads, along with all the details on the process, can be found in Mike's posting to the openstack-dev mailing list [4]. Please take some time to think of any topics that would help make the Forum a successful and productive event. [4] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127944.html If you haven't seen it yet, the initial "S" naming poll results are in [5]. 
Keep in mind, this is just the initial ranking of choices and must go through vetting for trademark and other issues that may result voting winners from being disqualified. Expect to hear more finalized details soon. [5] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128626.html Upcoming Deadlines & Dates -------------------------- Rocky-1 milestone: April 19 (R-19 week) Forum at OpenStack Summit in Vancouver: May 21-24 -- Sean McGinnis (smcginnis) From fungi at yuggoth.org Thu Mar 22 17:51:58 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 22 Mar 2018 17:51:58 +0000 Subject: [openstack-dev] [infra][releases][requirements][l2gw] Latest requirements In-Reply-To: <20180322155337.GA30365@sm-xps> References: <20180322154818.kj5xzpe3z4sbqycu@gentoo.org> <20180322155337.GA30365@sm-xps> Message-ID: <20180322175158.j5bz4phvz5nfo32w@yuggoth.org> On 2018-03-22 10:53:37 -0500 (-0500), Sean McGinnis wrote: [...] > Although, if this is a blocker for the project for now, I would be fine with > just taking it down now and working out any issues directly with anyone that > does need access to it for some reason. And would still be available from https://tarballs.openstack.org/ if anyone _does_ need a copy. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at fried.cc Thu Mar 22 18:26:48 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 22 Mar 2018 13:26:48 -0500 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> Message-ID: <326c0e4d-5360-21de-eabe-1ecb98c8b946@fried.cc> > I think the only concern around moving spec freeze out would be that I > thought the original purpose of the spec freeze was to set expectations > early about what was approved and not approved instead of having folks > potentially in the situation where it's technically "maybe" for a large > chunk of the cycle. I'm not sure which most people prefer -- would you > rather know early and definitively whether your blueprint is > approved/not approved or would you rather have the opportunity to get > approval during a larger window in the cycle and not know definitively > early on? Can anyone else chime in here? This is a fair point. Putting specs into runways doesn't imply (re)moving spec freeze IMO. It's just a way to get us using runways RIGHT NOW, so that folks with ready specs can get reviewed sooner, know whether they're approved sooner, write their code sooner, and get their *code* into an earlier runway. A spec in a runway would be treated like anything else: reviewers focus on it and the author needs to be available to respond quickly to feedback. I would expect the ratio of specs:code in runways to start off high and dwindle rapidly as we approach spec freeze. It's worth pointing out that there's not an expectation for people to work more/harder when runways are in play. Just that it increases the chances of more people looking at the same things at the same time; and allows us to bring focus to things that might otherwise languish in ignoreland. 
efried From whayutin at redhat.com Thu Mar 22 18:31:39 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 22 Mar 2018 14:31:39 -0400 Subject: [openstack-dev] [tripleo][infra][dib] Gate "out of disk" errors and diskimage-builder 2.12.0 In-Reply-To: References: <3008b3e9-47c2-077c-7acd-5a850b004e21@redhat.com> <45324f81-2aaa-232c-604a-7aee714b7292@redhat.com> Message-ID: On Thu, Mar 22, 2018 at 8:33 AM, Wesley Hayutin wrote: > > > On Wed, Mar 21, 2018 at 5:11 PM, Ian Wienand wrote: > >> On 03/21/2018 03:39 PM, Ian Wienand wrote: >> >>> We will prepare dib 2.12.1 with the fix. As usual there are >>> complications, since the dib gate is broken due to unrelated triple-o >>> issues [2]. In the mean time, probably avoid 2.12.0 if you can. >>> >> >> [2] https://review.openstack.org/554705 >>> >> >> Since we have having issues getting this verified due to some >> instability in the tripleo gate, I've proposed a temporary removal of >> the jobs for dib in [1]. >> >> [1] https://review.openstack.org/555037 >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > Thanks Ian! > I'm not sure if the build job had enough visibility to everyone, trying to > correct that now. > > tripleo-buildimage-overcloud-full-centos-7 gate status is available at > http://cistatus.tripleo.org:8000/ > Thanks > > FYI.. we are moving the 3nodes-multinode job to non-voting [1] and out of the gate until we resolve the issues there. This should allow Ian's patches to land for DIB, and separately allow Alex's patch to fix openvswitch https://review.openstack.org/#/c/555056/ Thanks [1] https://review.openstack.org/#/c/555305/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Mar 22 19:03:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 22 Mar 2018 15:03:53 -0400 Subject: [openstack-dev] Adding "not docs" banner to specs website? In-Reply-To: References: <20180319154633.rwyt73b5llt4jfx6@yuggoth.org> <1521490029-sup-9732@lrrr.local> Message-ID: <1521745370-sup-5339@lrrr.local> Excerpts from Rochelle Grober's message of 2018-03-22 01:05:42 +0000: > It could be *really* useful if you could include the date (month/year > would be good enough)of the last significant patch (not including > the reformat to Openstackdocstheme). That could give folks a great > stick in the mud for what "past" is for the spec. It might even > incent some to see if there are newer, conflicting or enhancing > specs or docs to reference. > > --Eoxky The docs theme includes this information just below the title on each page. See https://docs.openstack.org/reno/latest/ and look for "UPDATED" for an example. Doug From kennelson11 at gmail.com Thu Mar 22 19:11:05 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 22 Mar 2018 19:11:05 +0000 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> <1516189284-sup-1775@fewbar.com> Message-ID: Sounds like we have fungi set to run the migration of tripleO bugs with the 'ui' tag for tomorrow after he gets done with the ironic migration. So excited to have you guys start moving over! 
Any idea what squad will want to go next/when they might want to go? No rush, I'm curious more than anything. -Kendall (diablo_rojo) On Sat, Mar 17, 2018 at 1:17 AM Emilien Macchi wrote: > On Fri, Mar 16, 2018 at 11:15 PM, Jason E. Rist wrote: >> >> I just tried this but I think I might be doing something wrong... >> >> http://storyboard.macchi.pro:9000/ > > > Sorry I removed the VM a few weeks ago (I needed to clear up some > resources for my dev env). > > >> This URL mentioned in the previous storyboard evaluation email does not >> seem to work. >> >> >> http://lists.openstack.org/pipermail/openstack-dev/2018-January/126258.html >> >> Are you still evaluating this? Is the UI squad still expected to >> contribute? Do we have a better place to go for storyboard usage? I >> just ran into a bug and thought to myself "hey, I'll go drop this at the >> storyboard spot, since that's what had been the plan" but avast, I could >> not continue. >> >> Can you enlighten me to the status? >> > > The latest update is from March 2nd on this thread, where we're working > with Kendall to migrate all bugs with "ui" tag in launchpad/tripleo into > storyboard. It hasn't happened yet but I think it will over the next days > or so. > Once we get there, we'll provide more guidance on the plan. > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu Mar 22 19:19:03 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 22 Mar 2018 19:19:03 +0000 (GMT) Subject: [openstack-dev] [api] [heat] microversion_parse.middleware.MicroversionMiddleware In-Reply-To: References: Message-ID: On Tue, 6 Mar 2018, Chris Dent wrote: > Last week at the PTG, during the API-SIG session, there was > discussion of extracting the microversion handling middleware that > is used in the placement service into the relatively small > microversion-parse library. This is so people who want to adopt > microversions (or change their implementation) can share some code. > > This evening I've got a working version of that, and would like some > feedback (and a few other things as well). > > The code is in a stack starting with https://review.openstack.org/#/c/495356/ This has merged and has been released as microversion-parse 0.2.1: https://pypi.org/project/microversion_parse/ > As a sort of proof, there's also a nova patchset which shows the > removed code. If you install the above stack into the checked out > nova patchset, it works as expected. That nova change is at > https://review.openstack.org/#/c/550265/ This has now been changed away from being a DNM/WIP and depends on a requirements change https://review.openstack.org/#/c/555332/ These warts still remain > * It wants to use webob, because that's how it started out. This is > pretty easy to fix with one challenge being managing error > formatting. > > * At the moment it is not yet set up to align with deployment > strategies such as paste (it uses old school wsgi initialization > and wrapping). Also pretty easy to fix. but I wanted to get something out for people to experiment with. The nova patchset is probably as a good a starting point for understanding how to use it as anything. 
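Pending a proper demo, the shape of the wiring is roughly this. A minimal sketch, assuming (from my reading of the nova patchset, so double check against the source) that the constructor takes the WSGI app, a service type, and the list of supported versions:

    from wsgiref.simple_server import make_server

    from microversion_parse import middleware

    def app(environ, start_response):
        # A do-nothing application; the middleware does the microversion
        # header parsing and error handling around it.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello\n']

    wsgi_app = middleware.MicroversionMiddleware(app, 'compute', ['1.0', '1.1'])

    if __name__ == '__main__':
        # Old school serving, to match the old school wrapping noted above.
        make_server('', 8000, wsgi_app).serve_forever()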
Eventually it would probably make sense to create a little demo WSGI app that uses it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From melwittt at gmail.com Thu Mar 22 19:59:02 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 22 Mar 2018 12:59:02 -0700 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: <326c0e4d-5360-21de-eabe-1ecb98c8b946@fried.cc> References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> <326c0e4d-5360-21de-eabe-1ecb98c8b946@fried.cc> Message-ID: <31e6300b-ab75-ffcf-ecce-5b116b45ee52@gmail.com> On Thu, 22 Mar 2018 13:26:48 -0500, Eric Fried wrote: > >> I think the only concern around moving spec freeze out would be that I >> thought the original purpose of the spec freeze was to set expectations >> early about what was approved and not approved instead of having folks >> potentially in the situation where it's technically "maybe" for a large >> chunk of the cycle. I'm not sure which most people prefer -- would you >> rather know early and definitively whether your blueprint is >> approved/not approved or would you rather have the opportunity to get >> approval during a larger window in the cycle and not know definitively >> early on? Can anyone else chime in here? > > This is a fair point. > > Putting specs into runways doesn't imply (re)moving spec freeze IMO. > It's just a way to get us using runways RIGHT NOW, so that folks with > ready specs can get reviewed sooner, know whether they're approved > sooner, write their code sooner, and get their *code* into an earlier > runway. > > A spec in a runway would be treated like anything else: reviewers focus > on it and the author needs to be available to respond quickly to feedback. > > I would expect the ratio of specs:code in runways to start off high and > dwindle rapidly as we approach spec freeze. This would seem to have the same effect as waiting until after spec freeze to start using runways for focusing on reviews of implementations. And (MHO) I'm not sure we need help in reviewing more specs. While it's true that not a lot of people review specs, we do still also repeatedly approve more specs than we can complete in a cycle, every cycle. I'd rather keep things simpler and not add spec reviews into the mix at this time. Maybe a good compromise would be to start runways now and move spec freeze out to r-2 (Jun 7). That way we have less pressure on spec review earlier on, more time to review the current queue of approved implementations via runways, and a chance to approve more specs along the way if we find we're flushing the queue down enough. What does everyone think about that? > It's worth pointing out that there's not an expectation for people to > work more/harder when runways are in play. Just that it increases the > chances of more people looking at the same things at the same time; and > allows us to bring focus to things that might otherwise languish in > ignoreland. Yes, this. The goal IMHO is to make an improvement over what we usually do, which is having our review efforts scattered across various approved implementations. If we could focus a bit and increase the chances that we're reviewing the same things at the same time, I think we might have a better completion percentage on approved blueprints in the cycle. 
-melanie From doug at doughellmann.com Thu Mar 22 20:16:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 22 Mar 2018 16:16:06 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1521662425-sup-1628@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <1521662425-sup-1628@lrrr.local> Message-ID: <1521749386-sup-1944@lrrr.local> Excerpts from Doug Hellmann's message of 2018-03-21 16:02:06 -0400: > Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400: > > > > TL;DR > > ----- > > > > Let's stop copying exact dependency specifications into all our > > projects to allow them to reflect the actual versions of things > > they depend on. The constraints system in pip makes this change > > safe. We still need to maintain some level of compatibility, so the > > existing requirements-check job (run for changes to requirements.txt > > within each repo) will change a bit rather than going away completely. > > We can enable unit test jobs to verify the lower constraint settings > > at the same time that we're doing the other work. > > The new job definition is in https://review.openstack.org/555034 and I > have updated the oslo.config patch I mentioned before to use the new job > instead of one defined in the oslo.config repo (see > https://review.openstack.org/550603). > > I'll wait for that job patch to be reviewed and approved before I start > adding the job to a bunch of other repositories. > > Doug The job definition for openstack-tox-lower-constraints [1] was approved today (thanks AJaegar and pabelenger). I have started proposing the patches to add that job to the repos listed in openstack/requirements/projects.txt using the topic "requirements-stop-syncing" [2]. I hope to have the rest of those proposed by the end of the day tomorrow, but since they have to run in batches I don't know if that will be possible. The patch to remove the update proposal job is ready for review [3]. As is the patch to allow project requirements to diverge by changing the rules in the requirements-check job [4]. We ran into a snag with a few of the jobs for projects that rely on having service projects installed. There have been a couple of threads about that recently, but Monty has promised to start another one to provide all of the necessary context so we can fix the issues and move ahead. Doug [1] https://review.openstack.org/555034 [2] https://review.openstack.org/#/q/topic:requirements-stop-syncing+(status:open+OR+status:merged) [3] https://review.openstack.org/555426 [4] https://review.openstack.org/555402 From mriedemos at gmail.com Thu Mar 22 20:18:39 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 22 Mar 2018 15:18:39 -0500 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: <31e6300b-ab75-ffcf-ecce-5b116b45ee52@gmail.com> References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> <326c0e4d-5360-21de-eabe-1ecb98c8b946@fried.cc> <31e6300b-ab75-ffcf-ecce-5b116b45ee52@gmail.com> Message-ID: On 3/22/2018 2:59 PM, melanie witt wrote: > Maybe a good compromise would be to start runways now and move spec > freeze out to r-2 (Jun 7). That way we have less pressure on spec review > earlier on, more time to review the current queue of approved > implementations via runways, and a chance to approve more specs along > the way if we find we're flushing the queue down enough. This is what I'd prefer to see. 
-- Thanks, Matt From openstack at fried.cc Thu Mar 22 20:44:02 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 22 Mar 2018 15:44:02 -0500 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> <326c0e4d-5360-21de-eabe-1ecb98c8b946@fried.cc> <31e6300b-ab75-ffcf-ecce-5b116b45ee52@gmail.com> Message-ID: <6c73dd5b-662f-8e52-6057-b70f3c197f0b@fried.cc> WFM. November Oscar Victor Alpha you are cleared for takeoff. On 03/22/2018 03:18 PM, Matt Riedemann wrote: > On 3/22/2018 2:59 PM, melanie witt wrote: >> Maybe a good compromise would be to start runways now and move spec >> freeze out to r-2 (Jun 7). That way we have less pressure on spec >> review earlier on, more time to review the current queue of approved >> implementations via runways, and a chance to approve more specs along >> the way if we find we're flushing the queue down enough. > > This is what I'd prefer to see. > From fungi at yuggoth.org Thu Mar 22 20:52:59 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 22 Mar 2018 20:52:59 +0000 Subject: [openstack-dev] [infra][releases][requirements][l2gw] Latest requirements In-Reply-To: <20180322175158.j5bz4phvz5nfo32w@yuggoth.org> References: <20180322154818.kj5xzpe3z4sbqycu@gentoo.org> <20180322155337.GA30365@sm-xps> <20180322175158.j5bz4phvz5nfo32w@yuggoth.org> Message-ID: <20180322205259.uhpkzeuy6k5hri23@yuggoth.org> As requested, I have deleted the following from PyPI (they can still be found on tarballs.openstack.org if needed): networking_l2gw 2015.1.1 networking_l2gw 2016.1.0 The "highest" release version available on PyPI is now networking_l2gw 12.0.1. Let me know if you spot any similar date-based versions we need to clean up and I'm happy to take care of it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Thu Mar 22 21:12:47 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 22 Mar 2018 16:12:47 -0500 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: <31e6300b-ab75-ffcf-ecce-5b116b45ee52@gmail.com> References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> <326c0e4d-5360-21de-eabe-1ecb98c8b946@fried.cc> <31e6300b-ab75-ffcf-ecce-5b116b45ee52@gmail.com> Message-ID: <24d0dd21-7e5e-aa9b-37b6-66a3004d412a@gmail.com> On 3/22/2018 2:59 PM, melanie witt wrote: > And (MHO) I'm not sure we need help in reviewing more specs. I wholly disagree here. If you're on the core team, or want to be on the core team, you should be reviewing specs, because those are the things that lay out the high level design and thinking about what eventually comes out in the code. If there are core team members that aren't involved in the specs review process, I certainly hope they are going back to do their homework on the agreed-to design *before* digging into code review. There are some specs that are pretty simple/mechanical changes, but there are others that take quite a bit of time ironing out details and edge cases, and sometimes changes in the initial design, such that it's important to have that context in mind when you're reviewing the code. There have been plenty of times I've gone through a lengthy spec review process and then during implementation review I find things and say, "wait, in the spec we said...". 
If you're not involved in both, you're likely to miss those things. At the least it gets the context in your head so you're not starting from scratch. Maybe you were just saying, "we don't need to review more specs because we already have enough approved specs to get through the related code changes", and that's fair, I've said the same before, but those are two different things. -- Thanks, Matt From melwittt at gmail.com Thu Mar 22 21:33:44 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 22 Mar 2018 14:33:44 -0700 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: <24d0dd21-7e5e-aa9b-37b6-66a3004d412a@gmail.com> References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> <5c04a5a0-b31c-d8a7-9f5e-c75cefe00cb6@gmail.com> <326c0e4d-5360-21de-eabe-1ecb98c8b946@fried.cc> <31e6300b-ab75-ffcf-ecce-5b116b45ee52@gmail.com> <24d0dd21-7e5e-aa9b-37b6-66a3004d412a@gmail.com> Message-ID: <1c0a3391-5f1c-caf5-85b5-d027a71a919f@gmail.com> On Thu, 22 Mar 2018 16:12:47 -0500, Matt Riedemann wrote: > On 3/22/2018 2:59 PM, melanie witt wrote: >> And (MHO) I'm not sure we need help in reviewing more specs. > > I wholly disagree here. If you're on the core team, or want to be on the > core team, you should be reviewing specs, because those are the things > that lay out the high level design and thinking about what eventually > comes out in the code. > > If there are core team members that aren't involved in the specs review > process, I certainly hope they are going back to do their homework on > the agreed-to design *before* digging into code review. > > There are some specs that are pretty simple/mechanical changes, but > there are others that take quite a bit of time ironing out details and > edge cases, and sometimes changes in the initial design, such that it's > important to have that context in mind when you're reviewing the code. > > There have been plenty of times I've gone through a lengthy spec review > process and then during implementation review I find things and say, > "wait, in the spec we said...". If you're not involved in both, you're > likely to miss those things. > > At the least it gets the context in your head so you're not starting > from scratch. > > Maybe you were just saying, "we don't need to review more specs because > we already have enough approved specs to get through the related code > changes", and that's fair, I've said the same before, but those are two > different things. Yes, the last paragraph is what I meant. I said that in response to the idea of putting unapproved specs into the runways right now to get review on them. I was saying we probably already have so many approved specs that we're at risk of not being able to review and merge all of the related code in time. (At this moment there are 41 blueprints approved for Rocky). So I'm not sure we need to be getting more focus on reviewing more specs so we can approve more of them than we already have at this point, IMHO. -melanie From rosmaita.fossdev at gmail.com Fri Mar 23 00:55:16 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 22 Mar 2018 20:55:16 -0400 Subject: [openstack-dev] [glance] python-glanceclient release status Message-ID: As promised at today's Glance meeting, here's an update on the release status for the glanceclient bugfix release for stable/queens. 
There's another bug I think needs to be addressed: https://bugs.launchpad.net/python-glanceclient/+bug/1758149 I've got a patch up so I can get feedback (particularly on the error messages): https://review.openstack.org/555550 I'll be adding more tests to the patch (it needs them). It's already Friday UTC, so we won't be releasing 2.10.0 until Monday at the earliest. Here are the backports that still require approval: https://review.openstack.org/#/c/555277/ https://review.openstack.org/#/c/555390/ https://review.openstack.org/#/c/555436/ and a cherry-pick of 555550 when that's done. cheers, brian From zijian1012 at 163.com Fri Mar 23 01:55:56 2018 From: zijian1012 at 163.com (zijian1012 at 163.com) Date: Fri, 23 Mar 2018 09:55:56 +0800 Subject: [openstack-dev] Is openstacksdk backward compatible? Message-ID: <2018032309545626368630@163.com> Thanks for the answer. openstacksdk provides a more uniform interface, which is better. -------------- next part -------------- An HTML attachment was scrubbed... URL: From HoangCX at vn.fujitsu.com Fri Mar 23 02:30:27 2018 From: HoangCX at vn.fujitsu.com (HoangCX at vn.fujitsu.com) Date: Fri, 23 Mar 2018 02:30:27 +0000 Subject: [openstack-dev] [Neutron][vpnaas] In-Reply-To: References: Message-ID: <5f6e1fd0ebf34c3d99719121e704b7df@G07SGEXCMSGPS06.g07.fujitsu.local> Hi, Yes, I think it is possible. You need to create new endpoint groups which include the current subnet and the new subnet. Then just update the ipsec connections with the corresponding endpoints. Best regards, Hoang From: vidyadhar reddy [mailto:vidyadharreddy68 at gmail.com] Sent: Thursday, March 22, 2018 4:06 PM To: Cao, Xuan Hoang Subject: Re: [openstack-dev] [Neutron][vpnaas] Hello Hoang, I have tried the subnet grouping and it seems to be working, but that method only helps when we have all n subnets before setting up the VPNaaS. I just wanted to know if there is any method to add subnets to an existing VPNaaS connection, so that the newly added subnets can communicate over the connection which is already there. Best Regards, vidyadhar reddy On Wed, Mar 21, 2018 at 3:54 AM, HoangCX at vn.fujitsu.com > wrote: Hi, IIUC, your use case is to connect 4 subnets from different sites (2 subnets for each site). If so, did you try with endpoint groups? If not, please refer to the following docs for more detail about how to try them and get more understanding [1][2] [1] https://docs.openstack.org/neutron/latest/admin/vpnaas-scenario.html#using-vpnaas-with-endpoint-group-recommended [2] https://docs.openstack.org/neutron-vpnaas/latest/contributor/multiple-local-subnets.html BRs, Cao Xuan Hoang, From: vidyadhar reddy [mailto:vidyadharreddy68 at gmail.com] Sent: Tuesday, March 20, 2018 4:31 PM To: openstack-dev at lists.openstack.org Subject: [openstack-dev] [Neutron][vpnaas] Hello, I have a general question regarding the working of VPNaaS: can we set up multiple VPN connections on a single router? My scenario: say we have two networks, net1 and net2, in two different sites respectively, and each network has two subnets. Each of the two sites has one router with three interfaces, one for the public network and the remaining two for the two subnets. Can we set up two VPNaaS connections on the routers in each site to enable communication between the two subnets in each site? I have tried this setup and it didn't work for me. I just wanted to know whether it is a design constraint or not. I am not sure if this issue is under development; is there any development going on, or has it already been solved?
BR, Vidyadhar reddy peddireddy -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Fri Mar 23 03:30:43 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 23 Mar 2018 11:30:43 +0800 Subject: [openstack-dev] [nova] EC2 cleanup ? Message-ID: seems we have a EC2 implementation in api layer and deprecated since Mitaka, maybe eligible to be removed this cycle? Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Fri Mar 23 03:47:36 2018 From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=) Date: Fri, 23 Mar 2018 11:47:36 +0800 Subject: [openstack-dev] [nova] about rebuild instance booted from volume Message-ID: Hi,all This is the spec about rebuild a instance booted from volume, anyone who is interested in booted from volume can help to review this. Any suggestion is welcome.Thank you very much! The link is here. Re:the rebuild spec:https://review.openstack.org/#/c/532407 Best Regards Lijie -------------- next part -------------- An HTML attachment was scrubbed... URL: From hejianle at unitedstack.com Fri Mar 23 04:02:10 2018 From: hejianle at unitedstack.com (=?utf-8?B?5L2V5YGl5LmQ?=) Date: Fri, 23 Mar 2018 12:02:10 +0800 Subject: [openstack-dev] [nova] Does Cell v2 support for muti-cell deployment in Pike? Message-ID: Hi, Does Cell v2 support for multi-cell deployment in pike? Is there any good document about the deployment? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Fri Mar 23 04:27:23 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Thu, 22 Mar 2018 21:27:23 -0700 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: References: Message-ID: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> Hi all,     There seems to be a possibility of a race condition in the Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to the proposed Cyborg/Nova spec for details.) Consider the scenario where the flavor specifies a resource class for a device type, and also specifies a function (e.g. encrypt) in the extra specs. The Nova scheduler would only track the device type as a resource, and Cyborg needs to track the availability of functions. Further, to keep it simple, say all the functions exist all the time (no reprogramming involved). To recap, here is the scheduler flow for this case: * A request spec with a flavor comes to Nova conductor/scheduler. The flavor has a device type as a resource class, and a function in the extra specs. * Placement API returns the list of RPs (compute nodes) which contain the requested device types (but not necessarily the function). * Cyborg will provide a custom filter which queries Cyborg DB. This needs to check which hosts contain the needed function, and filter out the rest. * The scheduler selects one node from the filtered list, and the request goes to the compute node. For the filter to work, the Cyborg DB needs to maintain a table with triples of (host, function type, #free units). The filter checks if a given host has one or more free units of the requested function type. 
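As a rough illustration, the check the filter performs amounts to something like this (a hypothetical Python sketch; the names and data structures are made up for illustration and are not actual Cyborg code):

    # Hypothetical sketch: conceptually, the Cyborg DB maps
    # (host, function_type) -> number of free units.
    function_inventory = {
        ("compute-1", "encrypt"): 1,
        ("compute-2", "encrypt"): 0,
    }

    def host_passes(host, function_type):
        # Keep the host only if it has at least one free unit of the
        # requested function type.
        return function_inventory.get((host, function_type), 0) > 0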
But, to keep the # free units up to date, Cyborg on the selected compute node needs to notify the Cyborg API to decrement the # free units when an instance is spawned, and to increment them when resources are released. Therein lies the catch: this loop from the compute node to the controller is susceptible to race conditions. For example, if two simultaneous requests each ask for function A, and there is only one unit of that available, the Cyborg filter will approve both, both may land on the same host, and one will fail. This is because Cyborg on the controller does not decrement resource usage due to one request before processing the next request. This is similar to this previous Nova scheduling issue. That was solved by having the scheduler claim a resource in Placement for the selected node. I don't see an analog for Cyborg, since it would not know which node is selected. Thanks in advance for suggestions and solutions. Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From 935540343 at qq.com Fri Mar 23 07:36:12 2018 From: 935540343 at qq.com (=?ISO-8859-1?B?X18gbWFuZ28u?=) Date: Fri, 23 Mar 2018 15:36:12 +0800 Subject: [openstack-dev] [nova] Refused to connect port 8774. Message-ID: I ran the openstack compute service list command and got the following error: # openstack compute service list Unable to establish connection to http://controller:8774/v2.1/os-services: HTTPConnectionPool(host='controller', port=8774): Max retries exceeded with url: /v2.1/os-services (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] \xe6\x8b\x92\xe7\xbb\x9d\xe8\xbf\x9e\xe6\x8e\xa5',)) Port 8774 is not responding, and restarting the nova-api service doesn't help. Is there any way to solve this problem? Thank you.
This should enable new resources to step up if they want to continue maintaining stable branches beyond the minimal period guaranteed by the stable maintenance team: https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html The second major item approved this week is the long-awaited clarification on acceptable locations for interoperability tests, in the age of add-on trademark programs. The adopted resolution relaxes the potential locations and got support from all parties involved: https://governance.openstack.org/tc/resolutions/20180307-trademark-program-test-location.html In addition to those two, another notable change this week is the removal of Zuul and Nodepool from OpenStack project governance, as they are looking into forming a separately-branded project supported by the OpenStack Foundation. For more details, you can read Jim Blair's email at: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128396.html == Voting in progress == Melvin and I posted a resolution proposing a minimal governance model for SIGs. It proposes that any escalated conflict inside a SIG or between SIGs should be arbitrated by a SIGs admin group formed of one TC member and one UC member, with the Foundation executive director breaking ties in case of need. A similar resolution was adopted by the UC. This resolution has majority support already and will be approved on Tuesday, unless new objections are posted. So please review at: https://review.openstack.org/#/c/554254/ Tony proposed an update to the new projects requirements list, to match our current guidelines in terms of IRC meetings. This resolution also has majority support already and will be approved on Tuesday, unless new objections are posted: https://review.openstack.org/#/c/552728/ == Under discussion == Jeffrey Zhang's proposal about splitting the Kolla-kubernetes team out of the Kolla/Kolla-ansible team is still under discussion, with questions about the effect of the change on the Kolla-k8s team. A thread on the mailing-list would be good to make sure this discussion gets wider input from affected parties. Please chime in on: https://review.openstack.org/#/c/552531/ Discussion is also ongoing on the Adjutant project team addition. Concerns were raised about the scope of Adjutant, as well as fears that it would hurt interoperability between OpenStack deployments. A deeper analysis and discussion needs to happen before the TC can make a final call on this one. You can jump in the discussion here: https://review.openstack.org/#/c/553643/ == TC member actions/focus/discussions for the coming week(s) == For the coming week I expect discussions to continue around the Kolla split and the Adjutant team addition. I'll be spending time organizing the Kubernetes/OpenStack community discussions that will happen around KubeCon EU in Copenhagen in May. We'll continue brainstorming ideas for topics for the Forum in Vancouver. You can add ideas to: https://etherpad.openstack.org/p/YVR-forum-TC-sessions == Office hours == To be more inclusive of all timezones and more mindful of people for whom English is not the primary language, the Technical Committee dropped its dependency on weekly meetings.
So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays Feel free to add your own office hour conversation starter at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Cheers, -- Thierry Carrez (ttx) From manoj_ms at hotmail.com Fri Mar 23 09:42:45 2018 From: manoj_ms at hotmail.com (manoj kumar) Date: Fri, 23 Mar 2018 09:42:45 +0000 Subject: [openstack-dev] [nova] Refused to connect port 8774. In-Reply-To: References: Message-ID: I would check for: 1) Telnet to controller on port 8774. 2) check if controller service is listening on 8774 Sent from my iPhone > On 23-Mar-2018, at 1:07 PM, __ mango. <935540343 at qq.com> wrote: > > > I run the openstack compute service list with the following error: > > # openstack compute service list > Unable to establish connection to http://controller:8774/v2.1/os-services: > HTTPConnectionPool(host='controller', port=8774): > Max retries exceeded with url: /v2.1/os-services (Caused by NewConnectionError(': > Failed to establish a new connection: [Errno 111] \xe6\x8b\x92\xe7\xbb\x9d\xe8\xbf\x9e\xe6\x8e\xa5',)) > > My port 8774 didn't work and restart the nova- API doesn't work. > Is there any way to solve this problem? thank you. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amotoki at gmail.com Fri Mar 23 10:12:04 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 23 Mar 2018 19:12:04 +0900 Subject: [openstack-dev] [sdk] git repo rename and storyboard migration In-Reply-To: References: <2104b95d-fb8b-7486-6c1c-8330296fd23b@inaugust.com> Message-ID: As we talked in #opensatck-sdks channel yesterday, I can help storyboard migration on OSC bugs. Dean and Monty looks fine with the migration. We can migrate OSC bugs to storyboard along with openstack SDK storyboard migration. Thanks, Akihiro 2018-03-23 1:28 GMT+09:00 Kendall Nelson : > I can run test migrations today for the rest of the OSC launchpad projects > just to make sure it all goes smoothly and report back. > > -Kendall (diablo_rojo) > > > On Thu, 22 Mar 2018, 5:54 am Dean Troyer, wrote: > >> On Thu, Mar 22, 2018 at 7:42 AM, Akihiro Motoki >> wrote: >> > 2018-03-22 21:29 GMT+09:00 Monty Taylor : >> >> I could see waiting until we move python-openstackclient. However, >> we've >> >> got the issue already with shade bugs being in storyboard already and >> sdk >> >> bugs being in launchpad. With shade moving to having its >> implementation be >> >> in openstacksdk, over this cycle I expect the number of bugs people >> report >> >> against shade wind up actually being against openstacksdk to increase >> quite >> >> a bit. >> >> >> >> Maybe we should see if the python-openstackclient team wants to migrate >> >> too? >> > >> > Although I have limited experience on storyboard, I think it is ready >> for >> > our bug tracking. >> > As Jens mentioned, not a small number of bugs are referred to from both >> OSC >> > and SDK. >> > One good news on OSC launchpad bug is that we do not use tag >> aggressively. >> > If Dean is okay, I believe we can migrate to storyboard. >> >> I am all in favor of migrating OSC to use to Storyboard, however I am >> totally unable to give it any time in the near future. 
If Akhiro or >> anyone else wants to take on that task, you will have my support and >> as much help as I am able to give. >> >> dt >> >> -- >> >> Dean Troyer >> dtroyer at gmail.com >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Mar 23 10:33:32 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 23 Mar 2018 11:33:32 +0100 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <20180322153921.GA29460@sm-xps> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> <20180322133712.GA19274@sm-xps> <20180322153921.GA29460@sm-xps> Message-ID: <9f31cf5a-afa6-6148-87cf-d6745c11ac0c@redhat.com> On 03/22/2018 04:39 PM, Sean McGinnis wrote: >>> >>> That's unfortunate. What we really need is a migration path from the >>> 'pbr' way of doing things to something else. I see three possible >>> avenues at this point in time: >>> >>> 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar >>> things to 'sphinx-apidoc' but it takes the form of an extension. >>> From my brief experiments, the output generated from this is >>> radically different and far less comprehensive than what 'sphinx- >>> apidoc' generates. However, it supports templating so we could >>> probably configure this somehow and add our own special directive >>> somewhere like 'openstackdocstheme' >>> 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back >>> against upstream Sphinx [1]. This essentially does what the PBR >>> extension does but moves configuration into 'conf.py'. However, this >>> is currently held up as I can't adequately explain the differences >>> between this and 'sphinx.ext.autosummary' (there's definite overlap >>> but I don't understand 'autosummary' well enough to compare them). >>> 3. Modify the upstream jobs that detect the pbr integration and have >>> them run 'sphinx-apidoc' before 'sphinx-build'. This is the least >>> technically appealing approach as it still leaves us unable to build >>> stuff locally and adds yet more "magic" to the gate, but it does let >>> us progress. >>> >>> Try as I may, I don't really have the bandwidth to work on this for >>> another few weeks so I'd appreciate help from anyone with sufficient >>> Sphinx-fu to come up with a long-term solution to this issue. >>> >>> Cheers, >>> Stephen >>> >> >> I think we could probably go with 1 until and if 2 becomes an option. It does >> change output quite a bit. >> >> I played around with 3, but I think we will have enough differences between >> projects as to _where_ specifically this generated content needs to be placed >> that it will make that approach a little more brittle. >> > > One other things that comes to mind - I think most service projects, if they > are even using this, could probably just drop it. 
I've found the generated > "API" documentation for service modules to be of very limited use. > > That would at least narrow things down to lib projects. So this would still be > an issue for the oslo libs for sure. In that case, you do want that module API > documentation in most cases. This is also an issue for clients. I would kindly ask people doing this work to stop proposing patches that just remove the API reference without any replacement. > > But personally, I would encourage service projects to get around this issue by > just not doing it. It would appear that would take care of a large chunk of the > current usage: > > http://codesearch.openstack.org/?q=autodoc_index_modules&i=nope&files=setup.cfg&repos= > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mordred at inaugust.com Fri Mar 23 13:03:22 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 23 Mar 2018 08:03:22 -0500 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1521715425.17048.8.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> Message-ID: On 03/22/2018 05:43 AM, Stephen Finucane wrote: > > That's unfortunate. What we really need is a migration path from the > 'pbr' way of doing things to something else. I see three possible > avenues at this point in time: > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar > things to 'sphinx-apidoc' but it takes the form of an extension. > From my brief experiments, the output generated from this is > radically different and far less comprehensive than what 'sphinx- > apidoc' generates. However, it supports templating so we could > probably configure this somehow and add our own special directive > somewhere like 'openstackdocstheme' > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back > against upstream Sphinx [1]. This essentially does what the PBR > extension does but moves configuration into 'conf.py'. However, this > is currently held up as I can't adequately explain the differences > between this and 'sphinx.ext.autosummary' (there's definite overlap > but I don't understand 'autosummary' well enough to compare them). > 3. Modify the upstream jobs that detect the pbr integration and have > them run 'sphinx-apidoc' before 'sphinx-build'.
This is the least > technically appealing approach as it still leaves us unable to build > stuff locally and adds yet more "magic" to the gate, but it does let > us progress. I'd suggest a #4: Take the sphinx.ext.apidoc extension and make it a standalone extension people can add to doc/requirements.txt and conf.py. That way we don't have to convince the sphinx folks to land it. I'd been thinking for a while "we should just write a sphinx extension with the pbr logic in it" - but hadn't gotten around to doing anything about it. If you've already written that extension - I think we're in potentially great shape! > Try as I may, I don't really have the bandwidth to work on this for > another few weeks so I'd appreciate help from anyone with sufficient > Sphinx-fu to come up with a long-term solution to this issue. > Cheers, > Stephen > > [1] https://github.com/sphinx-doc/sphinx/pull/4101/files > >>> * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections >>> from 'setup.cfg' in any patches that aim to convert a project to use >>> the new PTI. This will ensure the gate catches any potential >>> issues. >>> * In addition, if your project uses the pbr autodoc feature, you >>> should either (a) remove these docs from your documentation tree or >>> (b) migrate to something else like the 'sphinx.ext.autosummary' >>> extension [5]. I aim to post instructions on the latter shortly. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dms at danplanet.com Fri Mar 23 13:43:59 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 23 Mar 2018 06:43:59 -0700 Subject: [openstack-dev] [nova] Does Cell v2 support for muti-cell deployment in Pike? In-Reply-To: (=?utf-8?B?IuS9lQ==?= =?utf-8?B?5YGl5LmQIidz?= message of "Fri, 23 Mar 2018 12:02:10 +0800") References: Message-ID: > Does Cell v2 support for multi-cell deployment in pike? Is there any > good document about the deployment? In the release notes of Pike: https://docs.openstack.org/releasenotes/nova/pike.html is this under 16.0.0 Prelude: Nova now supports a Cells v2 multi-cell deployment. The default deployment is a single cell. There are known limitations with multiple cells. Refer to the Cells v2 Layout page for more information about deploying multiple cells. There are some links to documentation in that paragraph which should be helpful. --Dan From alee at redhat.com Fri Mar 23 14:05:39 2018 From: alee at redhat.com (Ade Lee) Date: Fri, 23 Mar 2018 10:05:39 -0400 Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27 In-Reply-To: <20180314223600.GA7757@localhost.localdomain> References: <20180305234513.GA26473@localhost.localdomain> <20180313145426.GA14285@localhost.localdomain> <859c08e739614d2b89ac44087e6df8fa@G07SGEXCMSGPS06.g07.fujitsu.local> <20180314192040.GA22694@localhost.localdomain> <20180314204407.GA31026@localhost.localdomain> <20180314223600.GA7757@localhost.localdomain> Message-ID: <1521813939.3755.14.camel@redhat.com> The failing tests have been addressed in a dependent patch. As soon as that patch merges, we'll merge your patch. 
Ade On Wed, 2018-03-14 at 18:36 -0400, Paul Belanger wrote: > On Wed, Mar 14, 2018 at 04:44:07PM -0400, Paul Belanger wrote: > > On Wed, Mar 14, 2018 at 03:20:40PM -0400, Paul Belanger wrote: > > > On Wed, Mar 14, 2018 at 03:53:59AM +0000, namnh at vn.fujitsu.com > > > wrote: > > > > Hello Paul, > > > > > > > > I am Nam from Barbican team. I would like to notify a problem > > > > when using fedora-27. > > > > > > > > Currently, fedora-27 is using mariadb at 10.2.12. But there is > > > > a bug in this version and it is the main reason for failure > > > > Barbican database upgrading [1], the bug was fixed at 10.2.13 > > > > [2]. Would you mind updating the version of mariadb before > > > > removing fedora-26. > > > > > > > > [1] https://bugs.launchpad.net/barbican/+bug/1734329 > > > > [2] https://jira.mariadb.org/browse/MDEV-13508 > > > > > > > > > > Looking at https://apps.fedoraproject.org/packages/mariadb seems > > > 10.2.13 has > > > already been updated. Let me recheck the patch and see if it will > > > use the newer > > > version. > > > > > > > Okay, it looks like our AFS mirrors for fedora our out of sync, > > I've proposed a > > patch to fix that[3]. Once landed, I'll recheck the job. > > > > Okay, database looks to be fixed, but there are tests failing[4]. > I'll defer > back to you to continue work on the migration. > > [4] http://logs.openstack.org/20/547120/2/check/barbican-dogtag-devst > ack-functional-fedora-27/4cd64e0/job-output.txt.gz#_2018-03- > 14_22_29_49_400822 > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Fri Mar 23 14:15:55 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 23 Mar 2018 14:15:55 +0000 (GMT) Subject: [openstack-dev] [nova] [placement] placement update 18-12 Message-ID: Another week, another pile of code and specs to write and review. This week will be a "contract" style update: No new links to code and specs in the listings sections. Next week will be an "expand", when there will be. Perhaps this can help make sure that in progress stuff doesn't get lost in the face of the latest thing? Dunno, worth trying. # Most Important While work has started on some of the already approved specs, there are still a fair few under review, and a couple yet to be written. Given the number of specs we've got going it's entirely likely we've bitten off more than we can chew, but we'll see. Getting specs landed early makes it easier to get the functionality merged sooner, so: review some specs. In active code reviews, the update provider tree and nested providers in allocation candidates work remains crucial foundations for nearly everything required on the nova side. # What's Changed 'member_of' has been added to GET /allocation_candidates making it possible to do a sort of pre-filter based on aggregate membership. There are some potential improvements to this, being discussed in an ammendment to the spec: https://review.openstack.org/#/c/555413/ Placement service exceptions have been moved to nova/api/openstack/placement/exception.py # Questions [Add yours here?] 
# Bugs * Placement related bugs not yet in progress: https://goo.gl/TgiPXb 15, same as last week * In progress placement bugs: https://goo.gl/vzGGDQ 13, +2 on last week # Specs (There are more than this, but this is a contract week.) * https://review.openstack.org/#/c/549067/ VMware: place instances on resource pool (using update_provider_tree) * https://review.openstack.org/#/c/418393/ Provide error codes for placement API * https://review.openstack.org/#/c/545057/ mirror nova host aggregates to placement API * https://review.openstack.org/#/c/552924/ Proposes NUMA topology with RPs * https://review.openstack.org/#/c/544683/ Account for host agg allocation ratio in placement * https://review.openstack.org/#/c/552927/ Spec for isolating configuration of placement database * https://review.openstack.org/#/c/552105/ Support default allocation ratios * https://review.openstack.org/#/c/438640/ Spec on preemptible servers # Main Themes ## Update Provider Tree The ability of virt drivers to represent what resource providers they know about--whether that be numa, or clustered resources--is supported by the update_provider_tree method. Part of it is done, but some details remain: https://review.openstack.org/#/q/topic:bp/update-provider-tree There's new stuff in here for the add/remove traits and aggregates stuff discussed above. ## Nested providers in allocation candidates This is making progress but during review we identified potential inconsistencies of the semantics of the various filtering mechanisms. Jay soldiers on at: https://review.openstack.org/#/q/topic:bp/nested-resource-providers https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates ## Request Filters A generic mechanism to allow the scheduler to futher refine the query made to /allocation_candidates to account for things like aggregates. https://review.openstack.org/#/q/topic:bp/placement-req-filter ## Mirror nova host aggregates to placement This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates. https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates It's part of what will make the req filters above useful. ## Forbidden Traits A way of expressing "I'd like resources that do _not_ have trait X". I've started this, but it is mostly just feeling around at this point: https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits ## Consumer Generations Edleafe will start the ball rolling on this and I (cdent) will be his virtual pair. # Extraction There's now a specless blueprint on which to hang extraction related changes: https://blueprints.launchpad.net/nova/+spec/placement-extract See: https://review.openstack.org/#/q/topic:bp/placement-extract for changes in progress, including moving some tests. A spec was requested to explain the issues surrounding the optional placement database connection. That's here: https://review.openstack.org/#/c/552927/ This is _very_ useful when doing experiments for finding the important boundaries between nova and placement and also has no impact on a configuration that doesn't use it. Some of those experiments are in a blog post series ending with https://anticdent.org/placement-container-playground-5.html The other major need for the extraction work is creating an os-resources-classes library. Are there any volunteers for this? # Other This is not everything, because this is contract week. 
That is: considering reviewing this older stuff, get it out of the way and done, off the radar, etc. * https://review.openstack.org/#/c/546660/ Purge comp_node and res_prvdr records during deletion of cells/hosts * https://review.openstack.org/#/c/547812/ Migrate legacy-osc-placement-dsvm-functional job in-tree * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky A huge pile of improvements to osc-placement * https://review.openstack.org/#/c/546713/ Add compute capabilities traits (to os-traits) * https://review.openstack.org/#/c/524425/ General policy sample file for placement * https://review.openstack.org/#/c/546177/ Provide framework for setting placement error codes * https://review.openstack.org/#/c/527791/ Get resource provider by uuid or name (osc-placement) * https://review.openstack.org/#/c/533195/ Fix comments in get_all_with_shared() * https://review.openstack.org/#/q/topic:bug/1732731 Fixes related to shared providers * https://review.openstack.org/#/c/513264/ Add more functional test for placement.usage * https://review.openstack.org/#/c/477478/ placement: Make API history doc more consistent # End Next week will be an expand. We win a prize if we can add things without making this message longer. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From dougal at redhat.com Fri Mar 23 14:55:08 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 23 Mar 2018 14:55:08 +0000 Subject: [openstack-dev] [mistral] [ptl] PTL vacation from March 26 - March 30 Message-ID: I'll be out for the dates in the subject, so all of next week. Renat Akhmerov (rakhmerov on IRC) will be standing in for anything that comes up. Cheers, Dougal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Mar 23 15:13:43 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 23 Mar 2018 10:13:43 -0500 Subject: [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: Message-ID: <77c3e0b1-1d98-9f9d-6ecc-74b56f394ba7@gmail.com> On 3/22/2018 10:47 PM, 李杰 wrote: >             This is the spec about  rebuild a instance booted from > volume, anyone who is interested in >       booted from volume can help to review this. Any suggestion is > welcome.Thank you very much! >       The link is here. >       Re:the rebuild spec:https://review.openstack.org/#/c/532407 Once again, there are already existing threads about this topic, please don't continue to try and start new threads or send new reminders about it. You can reply on the existing discussion thread if you have new info. -- Thanks, Matt From mriedemos at gmail.com Fri Mar 23 15:16:08 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 23 Mar 2018 10:16:08 -0500 Subject: [openstack-dev] [nova] EC2 cleanup ? In-Reply-To: References: Message-ID: On 3/22/2018 10:30 PM, Chen CH Ji wrote: > seems we have a EC2 implementation in api layer and deprecated since > Mitaka, maybe eligible to be removed this cycle? That is easier said than done. There have been a couple of related attempts in the past: https://review.openstack.org/#/c/266425/ https://review.openstack.org/#/c/282872/ I don't remember exactly where those fell down, but it's worth looking at this first before trying to do this again. 
-- Thanks, Matt From mriedemos at gmail.com Fri Mar 23 15:35:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 23 Mar 2018 10:35:15 -0500 Subject: [openstack-dev] [nova] about rebuild instance booted from volume In-Reply-To: References: <6E229F29-BAFE-480A-A359-4BECEFE47B65@cern.ch> <93666d2a-c543-169c-fe07-499e5340622b@gmail.com> Message-ID: On 3/21/2018 6:34 AM, 李杰 wrote: > So what should we do then about rebuild the volume backed server?Until > the cinder could re-image a volume? I've added the spec to the 'stuck reviews' section of the nova meeting agenda so it can at least get some discussion there next week. https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting -- Thanks, Matt From emilien at redhat.com Fri Mar 23 15:40:41 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 23 Mar 2018 08:40:41 -0700 Subject: [openstack-dev] [tripleo] storyboard evaluation In-Reply-To: References: <20180116162932.urmfaviw7b3ihnel@yuggoth.org> <0e787b3e-22f2-6ffd-6c1b-b95c51349302@openstack.org> <1516189284-sup-1775@fewbar.com> Message-ID: On Thu, Mar 22, 2018 at 12:11 PM, Kendall Nelson wrote: > Sounds like we have fungi set to run the migration of tripleO bugs with > the 'ui' tag for tomorrow after he gets done with the ironic migration. So > excited to have you guys start moving over! > Cool, please let us know (ping me as well) when it's done. > Any idea what squad will want to go next/when they might want to go? No > rush, I'm curious more than anything. > Good question! TBH I don't know yet, we'll see how it goes with UI squad. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Mar 23 16:23:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 23 Mar 2018 12:23:49 -0400 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> Message-ID: <1521822188-sup-9616@lrrr.local> Excerpts from Monty Taylor's message of 2018-03-23 08:03:22 -0500: > On 03/22/2018 05:43 AM, Stephen Finucane wrote: > > > > That's unfortunate. What we really need is a migration path from the > > 'pbr' way of doing things to something else. I see three possible > > avenues at this point in time: > > > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar > > things to 'sphinx-apidoc' but it takes the form of an extension. > > From my brief experiments, the output generated from this is > > radically different and far less comprehensive than what 'sphinx- > > apidoc' generates. However, it supports templating so we could > > probably configure this somehow and add our own special directive > > somewhere like 'openstackdocstheme' > > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back > > against upstream Sphinx [1]. This essentially does what the PBR > > extension does but moves configuration into 'conf.py'. However, this > > is currently held up as I can't adequately explain the differences > > between this and 'sphinx.ext.autosummary' (there's definite overlap > > but I don't understand 'autosummary' well enough to compare them). > > 3. Modify the upstream jobs that detect the pbr integration and have > > them run 'sphinx-apidoc' before 'sphinx-build'. 
This is the least > > technically appealing approach as it still leaves us unable to build > > stuff locally and adds yet more "magic" to the gate, but it does let > > us progress. > > I'd suggest a #4: > > Take the sphinx.ext.apidoc extension and make it a standalone extension > people can add to doc/requirements.txt and conf.py. That way we don't > have to convince the sphinx folks to land it. > > I'd been thinking for a while "we should just write a sphinx extension > with the pbr logic in it" - but hadn't gotten around to doing anything > about it. If you've already written that extension - I think we're in > potentially great shape! That also has the benefit that we don't have to wait for a new sphinx release to start using it. Doug From wdec.ietf at gmail.com Fri Mar 23 16:43:35 2018 From: wdec.ietf at gmail.com (Wojciech Dec) Date: Fri, 23 Mar 2018 17:43:35 +0100 Subject: [openstack-dev] [TripleO] Alternative to empty string for default values in Heat Message-ID: Hi All, I'm converting a few heat service templates that have been working ok with puppet3 modules to run with Puppet 4, and am wondering if there is a way to pass an "undefined" default via heat to allow "default" values (eg params.pp) of the puppet modules to be used? The previous (puppet 3 working) way of passing an empty string in heat doesn't work, since Puppet 4 interprets this now as the actual setting. Thanks, Wojciech. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Fri Mar 23 16:54:40 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Fri, 23 Mar 2018 17:54:40 +0100 Subject: [openstack-dev] [TripleO] Alternative to empty string for default values in Heat In-Reply-To: References: Message-ID: <7dade586-6e98-7721-b2b1-1605d0e0875d@redhat.com> On 03/23/2018 05:43 PM, Wojciech Dec wrote: > Hi All, > > I'm converting a few heat service templates that have been working ok > with puppet3 modules to run with Puppet 4, and am wondering if there is > a way to pass an "undefined" default via heat to allow "default" values > (eg params.pp) of the puppet modules to be used? > The previous (puppet 3 working) way of passing an empty string in heat > doesn't work, since Puppet 4 interprets this now as the actual setting. yaml allows use of ~ to represent null it looks like in a hiera lookup that is resolved as the "nil" value, not sure if that is enough to make the default values for a class to apply -- Giulio Fidente GPG KEY: 08D733BA From sfinucan at redhat.com Fri Mar 23 17:25:42 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 23 Mar 2018 17:25:42 +0000 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1521822188-sup-9616@lrrr.local> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> <1521822188-sup-9616@lrrr.local> Message-ID: <1521825942.4447.26.camel@redhat.com> On Fri, 2018-03-23 at 12:23 -0400, Doug Hellmann wrote: > Excerpts from Monty Taylor's message of 2018-03-23 08:03:22 -0500: > > On 03/22/2018 05:43 AM, Stephen Finucane wrote: > > > > > > That's unfortunate. What we really need is a migration path from the > > > 'pbr' way of doing things to something else. I see three possible > > > avenues at this point in time: > > > > > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar > > > things to 'sphinx-apidoc' but it takes the form of an extension. 
> > > From my brief experiments, the output generated from this is > > > radically different and far less comprehensive than what 'sphinx- > > > apidoc' generates. However, it supports templating so we could > > > probably configure this somehow and add our own special directive > > > somewhere like 'openstackdocstheme' > > > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back > > > against upstream Sphinx [1]. This essentially does what the PBR > > > extension does but moves configuration into 'conf.py'. However, this > > > is currently held up as I can't adequately explain the differences > > > between this and 'sphinx.ext.autosummary' (there's definite overlap > > > but I don't understand 'autosummary' well enough to compare them). > > > 3. Modify the upstream jobs that detect the pbr integration and have > > > them run 'sphinx-apidoc' before 'sphinx-build'. This is the least > > > technically appealing approach as it still leaves us unable to build > > > stuff locally and adds yet more "magic" to the gate, but it does let > > > us progress. > > > > I'd suggest a #4: > > > > Take the sphinx.ext.apidoc extension and make it a standalone extension > > people can add to doc/requirements.txt and conf.py. That way we don't > > have to convince the sphinx folks to land it. > > > > I'd been thinking for a while "we should just write a sphinx extension > > with the pbr logic in it" - but hadn't gotten around to doing anything > > about it. If you've already written that extension - I think we're in > > potentially great shape! > > That also has the benefit that we don't have to wait for a new sphinx > release to start using it. I can do this. Where will it live? pbr? openstackdocstheme? Somewhere else? Stephen From doug at doughellmann.com Fri Mar 23 17:32:52 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 23 Mar 2018 13:32:52 -0400 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1521825942.4447.26.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> <1521822188-sup-9616@lrrr.local> <1521825942.4447.26.camel@redhat.com> Message-ID: <1521826340-sup-7513@lrrr.local> Excerpts from Stephen Finucane's message of 2018-03-23 17:25:42 +0000: > On Fri, 2018-03-23 at 12:23 -0400, Doug Hellmann wrote: > > Excerpts from Monty Taylor's message of 2018-03-23 08:03:22 -0500: > > > On 03/22/2018 05:43 AM, Stephen Finucane wrote: > > > > > > > > That's unfortunate. What we really need is a migration path from the > > > > 'pbr' way of doing things to something else. I see three possible > > > > avenues at this point in time: > > > > > > > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar > > > > things to 'sphinx-apidoc' but it takes the form of an extension. > > > > From my brief experiments, the output generated from this is > > > > radically different and far less comprehensive than what 'sphinx- > > > > apidoc' generates. However, it supports templating so we could > > > > probably configure this somehow and add our own special directive > > > > somewhere like 'openstackdocstheme' > > > > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back > > > > against upstream Sphinx [1]. This essentially does what the PBR > > > > extension does but moves configuration into 'conf.py'. 
However, this > > > > is currently held up as I can't adequately explain the differences > > > > between this and 'sphinx.ext.autosummary' (there's definite overlap > > > > but I don't understand 'autosummary' well enough to compare them). > > > > 3. Modify the upstream jobs that detect the pbr integration and have > > > > them run 'sphinx-apidoc' before 'sphinx-build'. This is the least > > > > technically appealing approach as it still leaves us unable to build > > > > stuff locally and adds yet more "magic" to the gate, but it does let > > > > us progress. > > > > > > I'd suggest a #4: > > > > > > Take the sphinx.ext.apidoc extension and make it a standalone extension > > > people can add to doc/requirements.txt and conf.py. That way we don't > > > have to convince the sphinx folks to land it. > > > > > > I'd been thinking for a while "we should just write a sphinx extension > > > with the pbr logic in it" - but hadn't gotten around to doing anything > > > about it. If you've already written that extension - I think we're in > > > potentially great shape! > > > > That also has the benefit that we don't have to wait for a new sphinx > > release to start using it. > > I can do this. Where will it live? pbr? openstackdocstheme? Somewhere > else? > > Stephen > I think the idea is to make a new thing. If you put it in the sphinx-contrib org on github it will be easy for other people to contribute and use it. Doug From mordred at inaugust.com Fri Mar 23 17:38:13 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 23 Mar 2018 12:38:13 -0500 Subject: [openstack-dev] [sdk] Repo rename complete Message-ID: The openstack/python-openstacksdk repo has been renamed to openstack/openstacksdk. The following patch: https://review.openstack.org/#/c/555875 Updates the .gitreview file (and other things) to point at the new repo. You'll want to update your local git remotes to pull from and submit to the correct location. There are git commands you can use - I personally just edit the .git/config file in the repo. :) Monty PS. Gerrit will not show lists of openstacksdk reviews until its online reindex has completed, which may take a few hours. From openstack at nemebean.com Fri Mar 23 17:46:43 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 23 Mar 2018 12:46:43 -0500 Subject: [openstack-dev] [TripleO] Alternative to empty string for default values in Heat In-Reply-To: <7dade586-6e98-7721-b2b1-1605d0e0875d@redhat.com> References: <7dade586-6e98-7721-b2b1-1605d0e0875d@redhat.com> Message-ID: <1205a07b-d746-ed4a-0fcb-c891ea4511a1@nemebean.com> On 03/23/2018 11:54 AM, Giulio Fidente wrote: > On 03/23/2018 05:43 PM, Wojciech Dec wrote: >> Hi All, >> >> I'm converting a few heat service templates that have been working ok >> with puppet3 modules to run with Puppet 4, and am wondering if there is >> a way to pass an "undefined" default via heat to allow "default" values >> (eg params.pp) of the puppet modules to be used? >> The previous (puppet 3 working) way of passing an empty string in heat >> doesn't work, since Puppet 4 interprets this now as the actual setting. > > yaml allows use of ~ to represent null > > it looks like in a hiera lookup that is resolved as the "nil" value, not > sure if that is enough to make the default values for a class to apply > Interesting. That would be simpler than what we've been doing, which is to use a Heat conditional to determine whether a particular piece of hieradata is populated. At least that's the method I'm aware of. 
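Roughly, the pattern looks like this (a trimmed sketch; the parameter and hiera key names here are illustrative, not copied verbatim from an actual template):

    conditions:
      workers_zero: {equals: [{get_param: NovaWorkers}, 0]}

    # ... and later, inside the service template's config_settings:
          map_merge:
            - if:
              - workers_zero
              - {}  # omit the key entirely, so the puppet default applies
              - nova::metadata::workers: {get_param: NovaWorkers}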
The workers settings are an example of this: https://github.com/openstack/tripleo-heat-templates/blob/c9310097027ed2448f721c7be1f6350ca3117d23/puppet/services/nova-metadata.yaml#L75 From arxcruz at redhat.com Fri Mar 23 17:55:56 2018 From: arxcruz at redhat.com (Arx Cruz) Date: Fri, 23 Mar 2018 18:55:56 +0100 Subject: [openstack-dev] [tripleo] TripleO CI Sprint end Message-ID: Hello,

On March 21 we came to the end of a sprint using our new team structure, and here are the highlights.

Sprint Review: Due to the outage in our infra a few weeks ago, we decided to work on the automation of all the servers used in our CI. This way, in case of any outage, we are able to tear down all the servers and bring them up again without having to configure anything manually. One can see the results of the sprint via https://tinyurl.com/yd8wmqxz

Ruck and Rover. What is Ruck and Rover? One person in our team is designated Ruck and another Rover. One is responsible for monitoring the CI, checking for failures, opening bugs, and participating in meetings, and is your focal point for any CI issues. The other person is responsible for working on these bugs and fixing problems, while the rest of the team stays focused on the sprint. For more information about our structure, check [1].

List of bugs that Ruck and Rover were working on:
- https://bugs.launchpad.net/tripleo/+bug/1756892 - dlrnapi promoter: promotion of older link > newer link, causing ocata rdo2 incorrect promotion
- https://bugs.launchpad.net/tripleo/+bug/1754036 - fs020, tempest, image corrupted after upload to glance (checksum mismatch); MTU values were not being passed to UC/OC
- https://bugs.launchpad.net/tripleo/+bug/1755485 - Barbican tempest test failing to ssh to cirros image; LP + merge skip list
- https://bugs.launchpad.net/tripleo/+bug/1755891 - OVB based jobs are not collecting logs from OC nodes; changes made in past weeks to move log collection outside upstream jobs had negative side effects
- https://bugs.launchpad.net/tripleo/+bug/1755865 - BMU job(s) failing on installation of missing package (ceph-ansible)
- https://bugs.launchpad.net/tripleo/+bug/1755478 - all BM jobs are not able to be hand edited, JJB contains deprecated element
- https://bugs.launchpad.net/tripleo/+bug/1753580 - newton, cache image script is looking in the wrong place

We also have our new Ruck and Rover for this week:
- Ruck - Rafael Folco - rfolco|ruck
- Rover - Arx Cruz - arxcruz|rover

If you have any questions and/or suggestions, please contact us.

[1] https://specs.openstack.org/openstack/tripleo-specs/specs/policy/ci-team-structure.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Fri Mar 23 19:44:56 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 23 Mar 2018 14:44:56 -0500 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> Message-ID: <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> Sundar- First thought is to simplify by NOT keeping inventory information in the cyborg db at all. The provider record in the placement service already knows the device (the provider ID, which you can look up in the cyborg db), the host (the root_provider_uuid of the provider representing the device), and the inventory, and (I hope) you'll be augmenting it with traits indicating what functions it's capable of.
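For example, something along these lines (the trait name is purely illustrative, and this assumes the osc-placement plugin's trait commands plus trait-based flavor extra specs):

    # Tag the device's resource provider with a capability trait:
    openstack --os-placement-api-version 1.6 \
        resource provider trait set --trait CUSTOM_FUNCTION_ENCRYPT $RP_UUID

    # Then request that capability from the flavor:
    openstack flavor set --property trait:CUSTOM_FUNCTION_ENCRYPT=required my-flavor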
That way, you'll always get allocation candidates with devices that *can* load the desired function; now you just have to engage your weigher to prioritize the ones that already have it loaded so you can prefer those. Am I missing something? efried On 03/22/2018 11:27 PM, Nadathur, Sundar wrote: > Hi all, >     There seems to be a possibility of a race condition in the > Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to > the proposed Cyborg/Nova spec > > for details.) > > Consider the scenario where the flavor specifies a resource class for a > device type, and also specifies a function (e.g. encrypt) in the extra > specs. The Nova scheduler would only track the device type as a > resource, and Cyborg needs to track the availability of functions. > Further, to keep it simple, say all the functions exist all the time (no > reprogramming involved). > > To recap, here is the scheduler flow for this case: > > * A request spec with a flavor comes to Nova conductor/scheduler. The > flavor has a device type as a resource class, and a function in the > extra specs. > * Placement API returns the list of RPs (compute nodes) which contain > the requested device types (but not necessarily the function). > * Cyborg will provide a custom filter which queries Cyborg DB. This > needs to check which hosts contain the needed function, and filter > out the rest. > * The scheduler selects one node from the filtered list, and the > request goes to the compute node. > > For the filter to work, the Cyborg DB needs to maintain a table with > triples of (host, function type, #free units). The filter checks if a > given host has one or more free units of the requested function type. > But, to keep the # free units up to date, Cyborg on the selected compute > node needs to notify the Cyborg API to decrement the #free units when an > instance is spawned, and to increment them when resources are released. > > Therein lies the catch: this loop from the compute node to controller is > susceptible to race conditions. For example, if two simultaneous > requests each ask for function A, and there is only one unit of that > available, the Cyborg filter will approve both, both may land on the > same host, and one will fail. This is because Cyborg on the controller > does not decrement resource usage due to one request before processing > the next request. > > This is similar to this previous Nova scheduling issue > . > That was solved by having the scheduler claim a resource in Placement > for the selected node. I don't see an analog for Cyborg, since it would > not know which node is selected. > > Thanks in advance for suggestions and solutions. > > Regards, > Sundar > > > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From colleen at gazlene.net Fri Mar 23 20:11:11 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 23 Mar 2018 21:11:11 +0100 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 19 March 2018 Message-ID: <1521835871.1619755.1314019792.16D49523@webmail.messagingengine.com> # Keystone Team Update - Week of 19 March 2018 ## News ### Spec review meeting During our Tuesday office hours we had a call to discuss some of our open specs. 
We were able to reduce some of the scope creep that had arisen in the application credentials enhancement spec[1], iron out some details in the MFA enhancement spec[2], and reaffirm our mission to keep the default roles spec[3] as simple as possible for this round of RBAC improvements. [1] https://review.openstack.org/396331 [2] https://review.openstack.org/553670 [3] https://review.openstack.org/523973 ### oslo.limit library created The oslo.limit repository has been created[4] and an Oslo spec was merged[5] to outline the purpose of the new library. [4] https://review.openstack.org/#/c/550496/ [5] https://review.openstack.org/#/c/552907/ ## Open Specs Search query: https://goo.gl/eyTktx Since last week, a new spec has been proposed to add a new static catalog backend[6]. This is work that was started last cycle but that we still need to flesh out properly. [6] https://review.openstack.org/554320 ## Recently Merged Changes Search query: https://goo.gl/hdD9Kw We merged a whopping 6 changes this week. In fairness a lot of our energy has been spent reviewing our awesome spec proposals. ## Changes that need Attention Search query: https://goo.gl/tW5PiH There are 36 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Among these are a few changes to add a lower-constraints job to our repos as part of a plan to eventually stop syncing global requirements[7], which we might want to have a quick chat about before merging. [7] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html The next deadline is the Rocky-1 milestone spec proposal freeze. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From ed at leafe.com Fri Mar 23 20:37:32 2018 From: ed at leafe.com (Ed Leafe) Date: Fri, 23 Mar 2018 15:37:32 -0500 Subject: [openstack-dev] [nova] EC2 cleanup ? In-Reply-To: References: Message-ID: <37312C30-D814-4BFB-A41F-019D56C401B4@leafe.com> On Mar 23, 2018, at 10:16 AM, Matt Riedemann wrote: > >> seems we have a EC2 implementation in api layer and deprecated since Mitaka, maybe eligible to be removed this cycle? > > That is easier said than done. There have been a couple of related attempts in the past: > > https://review.openstack.org/#/c/266425/ > > https://review.openstack.org/#/c/282872/ > > I don't remember exactly where those fell down, but it's worth looking at this first before trying to do this again. If we do, let’s also remove the unnecessary extra directory level in nova/api/openstack. There is only one Nova API, so the extra ‘openstack’ level is no longer needed. -- Ed Leafe From melwittt at gmail.com Fri Mar 23 20:57:16 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 23 Mar 2018 13:57:16 -0700 Subject: [openstack-dev] [nova] Review runways this cycle In-Reply-To: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> References: <0d35a544-8fb5-701d-f0a0-96f1a672da88@gmail.com> Message-ID: <7867e7f0-bfd8-7644-6744-4baa2dae543f@gmail.com> On Tue, 20 Mar 2018 16:44:57 -0700, Melanie Witt wrote: > As mentioned in the earlier "Rocky PTG summary - miscellaneous topics > from Friday" email, this cycle we're going to experiment with a > "runways" system for focusing review on approved blueprints in > time-boxes. 
The goal here is to use a bit more structure and process in > order to focus review and complete merging of approved work more quickly > and reliably. > > We were thinking of starting the runways process after the spec review > freeze (which is April 19) so that reviewers won't be split between spec > reviews and reviews of work in runways. > > The process and instructions are explained in detail on this etherpad, > which will also serve as the place we queue and track blueprints for > runways: > > https://etherpad.openstack.org/p/nova-runways-rocky > > Please bear with us as this is highly experimental and we will be giving > it a go knowing it's imperfect and adjusting the process iteratively as > we learn from it. Okay, based on the responses on the discussion in the last nova meeting and this ML thread, I think we have consensus to go ahead and start using runways next week after the spec review day, so: Spec review day: Tuesday March 27 Start using runways: Wednesday March 28 Please add your blueprints to the Queue if the requirements explained on the etherpad are met. And please ask questions, in #openstack-nova or on this thread, if you have any questions about the process. We will be moving spec freeze out to r-2 (June 7) to lessen pressure on spec review while runways are underway, to get more time to review the current queue of approved implementations via runways, and to give ourselves the chance to approve more specs along the way if we find we're reducing the queue enough by completing blueprints. Thanks, -melanie From zbitter at redhat.com Fri Mar 23 21:04:53 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 23 Mar 2018 17:04:53 -0400 Subject: [openstack-dev] [openstack-ops][heat][PTG] Heat PTG Summary In-Reply-To: References: Message-ID: On 15/03/18 04:01, Rico Lin wrote: > Hi Heat devs and ops > > It's a great PTG plus SnowpenStack experience. Now Rocky started. We > really need all kind of input and effort to make sure we're heading > toward the right way. > > Here is what we been discussed during PTG: > > * Future strategy for heat-tempest-plugin & functional tests > * Multi-cloud support > * Next plan for Heat Dashboard > * Race conditions for clients updating/deleting stacks > * Swift Template/file object support > * heat dashboard needs of clients > * Resuming after an engine failure > * Moving SyncPoints from DB to DLM > * toggle the debug option at runtime > * remove mox > * Allow partial success in ASG > * Client Plugins and OpenStackSDK > * Global Request Id support > * Heat KeyStone Credential issue > * (How we going to survive on the island) (No developers were eaten to bring you this summary.) > You can find *all Etherpads links* in > *https://etherpad.openstack.org/p/heat-rocky-ptg* > > We try to document down as much as we can(Thanks Zane for picking it > up), including discussion and actions. *Will try to target all actions > in Rocky*. > If you do like to input on any topic (or any topic you think we > missing), *please try to provide inputs to the etherpad* (and be kind to > leave messages in ML or meeting so we won't miss it.) 
> > *Use Cases* > If you have any use case for us (What's your usecase, what's not > working/ what's working well), > please help us and input to*https://etherpad.openstack.org/p/heat-usecases* > > > Here are *Team photos* we took: > *https://www.dropbox.com/sh/dtei3ovfi7z74vo/AADX_s3PXFiC3Fod8Yj_RO4na/Heat?dl=0* > > > > -- > May The Force of OpenStack Be With You, > */Rico Lin > /*irc: ricolin > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From hongbin034 at gmail.com Sat Mar 24 02:46:04 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Fri, 23 Mar 2018 22:46:04 -0400 Subject: [openstack-dev] [Zun] Removal of inactive cores Message-ID: Hi all, This is an announcement about a change in Zun's core team membership. The people below were removed from Zun's core reviewer team due to their inactivity over the last 180 days [1]. This change was voted on by the existing core team and was unanimously approved. I would like to thank them for their contributions to the Zun team. They are welcome to re-join the core team once they become active again in the future. - Eli Qiao - Motohiro/Yuanying Otsuka - Qiming Teng - Shubham Kumar Sharma - Sudipta Biswas - Wenzhi Yu [1] http://stackalytics.com/report/contribution/zun-group/180 Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Sun Mar 25 08:00:37 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Sun, 25 Mar 2018 08:00:37 +0000 Subject: [openstack-dev] [vitrage] Nominating Dong Wenjuan for Vitrage core In-Reply-To: References: Message-ID: <9D07C1F8-56A7-40C2-9BD1-2DB8298749E8@nokia.com> I added Dong Wenjuan to the list of core contributors. Welcome ☺ From: Eyal B Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 21 March 2018 at 16:57 To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [vitrage] Nominating Dong Wenjuan for Vitrage core +2 On 21 March 2018 at 16:37, Afek, Ifat (Nokia - IL/Kfar Sava) > wrote: Hi, I would like to nominate Dong Wenjuan for Vitrage core. Wenjuan has been contributing to Vitrage for a long time, since the Newton version. She implemented several important features and has a deep knowledge of Vitrage architecture. I’m sure she can be a great addition to our team. Thanks, Ifat. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Sun Mar 25 19:02:53 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Sun, 25 Mar 2018 15:02:53 -0400 Subject: [openstack-dev] [glance] python-glanceclient release status In-Reply-To: References: Message-ID: Adding another review to the list: https://review.openstack.org/#/c/556292/ Also, https://review.openstack.org/555550 has been updated with more tests.
Once those are approved, they'll need to be backported to stable/queens so we can release 2.10.0. On Thu, Mar 22, 2018 at 8:55 PM, Brian Rosmaita wrote: > As promised at today's Glance meeting, here's an update on the release > status for the glanceclient bugfix release for stable/queens. > > There's another bug I think needs to be addressed: > https://bugs.launchpad.net/python-glanceclient/+bug/1758149 > > I've got a patch up so I can get feedback (particularly on the error messages): > https://review.openstack.org/555550 > > I'll be adding more tests to the patch (it needs them). > > It's already Friday UTC, so we won't be releasing 2.10.0 until Monday > at the earliest. Here are the backports that still require approval: > > https://review.openstack.org/#/c/555277/ > https://review.openstack.org/#/c/555390/ > https://review.openstack.org/#/c/555436/ > > and a cherry-pick of 555550 when that's done. > > > cheers, > brian From emilien at redhat.com Sun Mar 25 19:04:52 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 25 Mar 2018 12:04:52 -0700 Subject: [openstack-dev] [tripleo] Updates on containerized undercloud In-Reply-To: References: <1518518420.15968.6.camel@redhat.com> Message-ID: This is an update on what has been achieved in the last month with regard to the Containerized Undercloud efforts in TripleO: ## CI - Running OVB (ovs-ha, fs001) with a containerized undercloud: it finally works, with some workarounds, all work in progress. Results can be seen here: https://logs.rdoproject.org/56/542556/79/openstack-check/gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master/Ze1390e6e0df54a88836d75316da4b206/console.txt.gz#_2018-03-24_06_07_23_171 List of workarounds/blockers: * we need a new release of python-openstackclient that includes https://review.openstack.org/#/c/553374/ and therefore we need https://review.openstack.org/#/c/553026/ * container workflow to be finished (sbaker is on it) (in the meantime we're loading envs in quickstart). * masquerading workaround: https://review.openstack.org/#/c/553620 (long term solution will be https://review.openstack.org/#/c/553427/ but still WIP) Once we clear the workarounds/blockers and have a clean / stable deployment, we'll switch featureset001 (ovb-ha) to deploy a containerized undercloud. The target was end of rocky-m1 and we still aim for it. - Running a CI job that tests upgrades from a non-containerized undercloud on Queens to a containerized undercloud on Rocky. Work is in progress and can be monitored here: https://review.openstack.org/#/c/553633/ Special thanks to the Upgrade squads who helped a lot on that front! ## Upgrades support We said we would provide a way to upgrade a non-containerized undercloud to a containerized undercloud by rocky-m1 and we still aim for it. This is a demo of an upgrade from Queens (non containerized) to Rocky (containerized): https://youtu.be/5gLKL3YkC2c We'll wait a bit for feedback from the demo and start documenting. Note that most of the workflow remains the same as before (we still use the openstack undercloud upgrade command). We'll also continue to push efforts to have this workflow tested by the CI job in progress. ## Other items - TripleO UI has been containerized. - Routed networks support is still in progress by Harald (we probably aim for rocky-m2 now). - We're investigating some way to validate that an upgrade to a containerized undercloud worked fine (with Ansible?). More to come.
- Containerization of Tempest so we can run Tempest against a containerized undercloud and also investigate how we could switch CI scenarios to be deployed on one node. - Port the TLS-by-default work done in instack-undercloud. Any feedback or help on testing is very welcome. All efforts can be seen here: https://trello.com/b/nmGSNPoQ/containerized-undercloud Thanks to everyone who helped in these efforts so far! On Sun, Feb 18, 2018 at 10:18 AM, Emilien Macchi wrote: > This is an update on what has been achieved this week with the regard of > Containerized Undercloud efforts in TripleO: > > TL;DR: really good efforts have been made and we can now deploy a full > (multinode) overcloud in CI. OVB testing in progress and lot of remaining > items! > > ## Bugfixes > docker-registry: add missing firewall rules - https://review.openstack. > org/#/c/545185/ > mistral-executor: mount /var/lib/mistral - https://review.openstack. > org/#/c/545143/ > docker: configure group/user for deployment_user - > https://review.openstack.org/#/c/544761/ + dependencies > Fix PublicVirtualFixedIPs in envs - https://review.openstack. > org/#/c/544744/ > Align zaqar max_messages_post_size with undercloud - > https://review.openstack.org/#/c/544756/ > undercloud_post: fix subnet name - https://review.openstack. > org/#/c/544587/ > > ## CI > We manage to run a containerized overcloud deployed by a containerized > undercloud in CI, results can be seen here: https://review. > openstack.org/#/c/542906/ > The job is running on featureser010 now (for testing purpose) but as James > mentioned in the review, we won't switch this job to run a containerized > undercloud. Note there is no impact on the job runtime. > We'll need to properly deprecate the non-containerized undercloud first > but we'll need to find a CI job that we can use for gating, so we avoid > regression during the cycle. > Now we're working on deploying featureset001 (ovb-ha), with TLS, net-iso, > Ironic/Nova/Neutron (baremetal bits) from a containerized undercloud: > https://review.openstack.org/#/c/542556/ > It's not working yet but we're working toward the blockers as they come > during testing. > > # TLS Support > All patches that were in progress have been merged, and now under testing > in ovb-ha + containerized u/c (see above). > > # UI Support > Work is still in progress, patches are ready for review, but some one them > don't pass pep8 yet. We'll hopefully fix it soon. > > # Other items > routed ctlplane networking: Harald is currently making progress on the > items, some patches are ready for review. > Create temp copy of tripleo-heat-templates before processing them: Bogdan > is working on https://review.openstack.org/#/c/542875 - the patch is > under review! > Upgrades: no work has been started so far but we'll probably discuss about > this topic during the PTG.
>>> > Done: https://trello.com/c/kFtIkto1/17-routed-ctlplane-networking >>> > >>> >>> Tanks Emilien, >>> >>> >>> I added several work items to the Trello card, and a few patches. Still >>> WiP. >>> >>> Do we have any CI that use containerized undercloud with actual Ironic >>> deployement? Or are they all using deployed-server? >>> >>> E.g do we have anything actually testing this type of change? >>> https://review.openstack.org/#/c/543582 >>> >>> I belive that would have to be an ovb job with containerized undercloud? >>> >> >> I'm working on it since last week: https://trello.com/c/uLq >> bHTip/13-switch-other-jobs-to-run-a-containerized-undercloud >> But currently trying to make things stable again, we introduce >> regressions and this is high prio now. >> -- >> Emilien Macchi >> > > > > -- > Emilien Macchi -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Sun Mar 25 19:43:50 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Sun, 25 Mar 2018 12:43:50 -0700 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> Message-ID: <15ca6afc-994d-6940-c8ec-3cc762dfd245@intel.com> On 3/23/2018 12:44 PM, Eric Fried wrote: > Sundar- > > First thought is to simplify by NOT keeping inventory information in > the cyborg db at all. The provider record in the placement service > already knows the device (the provider ID, which you can look up in the > cyborg db) the host (the root_provider_uuid of the provider representing > the device) and the inventory, and (I hope) you'll be augmenting it with > traits indicating what functions it's capable of. That way, you'll > always get allocation candidates with devices that *can* load the > desired function; now you just have to engage your weigher to prioritize > the ones that already have it loaded so you can prefer those. Eric, Thanks for the response. Traits only indicate whether a qualitative capability exists. To check if a free instance of the requested function exists in the host, we have to track the total and free counts of the needed function. Otherwise, we may pick a host because it *can* host a function, though it doesn't have a free instance of the function. IIUC, your reply seems to expect that we can always reprogram a function as needed. The specific case we are looking at here is one where no reprogramming is involved. In the terminology of Cyborg/Nova rescheduling spec, this is the pre-programmed scenario (reasons why an operator may want this are stated in the spec). However, even if reprogramming is allowed, to prioritize hosts with free instances of the needed function, we will need to count how many free instances there are. Since we said that only device types will be tracked as resource classes, and not functions, the scheduler will count available instances of device types, and Cyborg would have to count the functions separately. Please let me know if I missed something. Thanks & Regards, Sundar
>> >> Consider the scenario where the flavor specifies a resource class for a >> device type, and also specifies a function (e.g. encrypt) in the extra >> specs. The Nova scheduler would only track the device type as a >> resource, and Cyborg needs to track the availability of functions. >> Further, to keep it simple, say all the functions exist all the time (no >> reprogramming involved). >> >> To recap, here is the scheduler flow for this case: >> >> * A request spec with a flavor comes to Nova conductor/scheduler. The >> flavor has a device type as a resource class, and a function in the >> extra specs. >> * Placement API returns the list of RPs (compute nodes) which contain >> the requested device types (but not necessarily the function). >> * Cyborg will provide a custom filter which queries Cyborg DB. This >> needs to check which hosts contain the needed function, and filter >> out the rest. >> * The scheduler selects one node from the filtered list, and the >> request goes to the compute node. >> >> For the filter to work, the Cyborg DB needs to maintain a table with >> triples of (host, function type, #free units). The filter checks if a >> given host has one or more free units of the requested function type. >> But, to keep the # free units up to date, Cyborg on the selected compute >> node needs to notify the Cyborg API to decrement the #free units when an >> instance is spawned, and to increment them when resources are released. >> >> Therein lies the catch: this loop from the compute node to controller is >> susceptible to race conditions. For example, if two simultaneous >> requests each ask for function A, and there is only one unit of that >> available, the Cyborg filter will approve both, both may land on the >> same host, and one will fail. This is because Cyborg on the controller >> does not decrement resource usage due to one request before processing >> the next request. >> >> This is similar to this previous Nova scheduling issue >> . >> That was solved by having the scheduler claim a resource in Placement >> for the selected node. I don't see an analog for Cyborg, since it would >> not know which node is selected. >> >> Thanks in advance for suggestions and solutions. >> >> Regards, >> Sundar >> >> >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Sun Mar 25 20:04:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 25 Mar 2018 16:04:11 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1521749386-sup-1944@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <1521662425-sup-1628@lrrr.local> <1521749386-sup-1944@lrrr.local> Message-ID: <1522007989-sup-4653@lrrr.local> Excerpts from Doug Hellmann's message of 2018-03-22 16:16:06 -0400: > Excerpts from Doug Hellmann's message of 2018-03-21 16:02:06 -0400: > > Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400: > > > > > > TL;DR > > > ----- > > > > > > Let's stop copying exact dependency specifications into all our > > > projects to allow them to reflect the actual versions of things > > > they depend on. The constraints system in pip makes this change > > > safe. We still need to maintain some level of compatibility, so the > > > existing requirements-check job (run for changes to requirements.txt > > > within each repo) will change a bit rather than going away completely. > > > We can enable unit test jobs to verify the lower constraint settings > > > at the same time that we're doing the other work. > > > > The new job definition is in https://review.openstack.org/555034 and I > > have updated the oslo.config patch I mentioned before to use the new job > > instead of one defined in the oslo.config repo (see > > https://review.openstack.org/550603). > > > > I'll wait for that job patch to be reviewed and approved before I start > > adding the job to a bunch of other repositories. > > > > Doug > > The job definition for openstack-tox-lower-constraints [1] was approved > today (thanks AJaegar and pabelenger). > > I have started proposing the patches to add that job to the repos listed > in openstack/requirements/projects.txt using the topic > "requirements-stop-syncing" [2]. I hope to have the rest of those > proposed by the end of the day tomorrow, but since they have to run in > batches I don't know if that will be possible. > > The patch to remove the update proposal job is ready for review [3]. > > As is the patch to allow project requirements to diverge by changing the > rules in the requirements-check job [4]. > > We ran into a snag with a few of the jobs for projects that rely on > having service projects installed. There have been a couple of threads > about that recently, but Monty has promised to start another one to > provide all of the necessary context so we can fix the issues and move > ahead. > > Doug > All of the patches to define the lower-constraints test jobs have been proposed [1], and many have already been approved and merged (thank you for your quick reviews). A few of the jobs are failing because the projects depend on installing some other service from source. We will work out what to do with those when we solve that problem in a more general way. A few of the jobs failed because the dependencies were wrong. In a few cases I was able to figure out what was wrong, but I can use some help from project teams more familiar with the code bases to debug the remaining failures. In a few cases projects didn't have python 3 unit test jobs, so I configured the new job to use python 2. Teams should add a step to their python 3 migration plan to update the version of python used in the new job, when that is possible. 
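For anyone reviewing one of these patches for the first time, the repo-side change is deliberately small. A minimal sketch of the tox environment that the new openstack-tox-lower-constraints job invokes, assuming the repo has a lower-constraints.txt at its root (the exact stanza is in the proposed patch for each repo, so treat this as illustrative rather than authoritative):

    [testenv:lower-constraints]
    basepython = python3
    deps =
      -c{toxinidir}/lower-constraints.txt
      -r{toxinidir}/test-requirements.txt
      -r{toxinidir}/requirements.txt

Because the constraints file pins every dependency at the declared minimum, the unit tests run against those version floors instead of the upper-constraints used by the regular jobs.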
I believe we are now ready to proceed with updating the requirements-check job to relax the rules about which changes are allowed [2]. Doug [1] https://review.openstack.org/#/q/topic:requirements-stop-syncing+status:open [2] https://review.openstack.org/555402 From doug at doughellmann.com Sun Mar 25 20:08:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 25 Mar 2018 16:08:45 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1522007989-sup-4653@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <1521662425-sup-1628@lrrr.local> <1521749386-sup-1944@lrrr.local> <1522007989-sup-4653@lrrr.local> Message-ID: <1522008425-sup-1150@lrrr.local> Excerpts from Doug Hellmann's message of 2018-03-25 16:04:11 -0400: > A few of the jobs failed because the dependencies were wrong. In a few > cases I was able to figure out what was wrong, but I can use some help > from project teams more familiar with the code bases to debug the > remaining failures. If you need to raise the lower bounds in a requirements file, please update that file as well as lower-constraints.txt in the patch. You may need to add a Depends-On for https://review.openstack.org/555402 in order to have a version specifier that is different from the value in the global requirements list. Doug From robertc at robertcollins.net Sun Mar 25 21:46:22 2018 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 26 Mar 2018 10:46:22 +1300 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1522008425-sup-1150@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <1521662425-sup-1628@lrrr.local> <1521749386-sup-1944@lrrr.local> <1522007989-sup-4653@lrrr.local> <1522008425-sup-1150@lrrr.local> Message-ID: On 26 March 2018 at 09:08, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-03-25 16:04:11 -0400: >> A few of the jobs failed because the dependencies were wrong. In a few >> cases I was able to figure out what was wrong, but I can use some help >> from project teams more familiar with the code bases to debug the >> remaining failures. > > If you need to raise the lower bounds in a requirements file, please > update that file as well as lower-constraints.txt in the patch. You may > need to add a Depends-On for https://review.openstack.org/555402 in > order to have a version specifier that is different from the value in > the global requirements list. Nice stuff; I'm so glad to see this evolution happening. -Rob From doug at doughellmann.com Sun Mar 25 23:21:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sun, 25 Mar 2018 19:21:45 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: References: <1521110096-sup-3634@lrrr.local> <1521662425-sup-1628@lrrr.local> <1521749386-sup-1944@lrrr.local> <1522007989-sup-4653@lrrr.local> <1522008425-sup-1150@lrrr.local> Message-ID: <1522020089-sup-9597@lrrr.local> Excerpts from Robert Collins's message of 2018-03-26 10:46:22 +1300: > On 26 March 2018 at 09:08, Doug Hellmann wrote: > > Excerpts from Doug Hellmann's message of 2018-03-25 16:04:11 -0400: > >> A few of the jobs failed because the dependencies were wrong. In a few > >> cases I was able to figure out what was wrong, but I can use some help > >> from project teams more familiar with the code bases to debug the > >> remaining failures. 
> > > > If you need to raise the lower bounds in a requirements file, please > > update that file as well as lower-constraints.txt in the patch. You may > > need to add a Depends-On for https://review.openstack.org/555402 in > > order to have a version specifier that is different from the value in > > the global requirements list. > > Nice stuff; I'm so glad to see this evolution happening. > > -Rob > Thanks for laying such a firm foundation for us, Robert! Doug From 540649168 at qq.com Mon Mar 26 03:43:20 2018 From: 540649168 at qq.com (FrankenFunc) Date: Mon, 26 Mar 2018 11:43:20 +0800 Subject: [openstack-dev] An authentication error occurred while openstack dashboard logged in Message-ID: Dear Openstack-dev, I installed OpenStack according to the official documents, and found the following problem after installing Horizon at the end. The OpenStack version I installed is Newton, and the system used is CentOS 7. Here is the error log I found: /var/log/httpd/keystone.conf 2018-03-19 03:47:58.856414 mod_wsgi (pid=5286): Target WSGI script '/usr/bin/keystone-wsgi-public' cannot be loaded as Python module. 2018-03-19 03:47:58.856448 mod_wsgi (pid=5286): Exception occurred processing WSGI script '/usr/bin/keystone-wsgi-public'. 2018-03-19 03:47:58.856479 Traceback (most recent call last): 2018-03-19 03:47:58.856507 File "/usr/bin/keystone-wsgi-public", line 51, in 2018-03-19 03:47:58.856579 application = initialize_public_application() 2018-03-19 03:47:58.856604 File "/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 137, in initialize_public_application 2018-03-19 03:47:58.856635 config_files=_get_config_files()) 2018-03-19 03:47:58.856650 File "/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 56, in initialize_application 2018-03-19 03:47:58.856673 common.configure(config_files=config_files) 2018-03-19 03:47:58.856686 File "/usr/lib/python2.7/site-packages/keystone/server/common.py", line 30, in configure 2018-03-19 03:47:58.856710 keystone.conf.configure() 2018-03-19 03:47:58.856724 File "/usr/lib/python2.7/site-packages/keystone/conf/__init__.py", line 126, in configure 2018-03-19 03:47:58.856748 help='Do not monkey-patch threading system modules.')) 2018-03-19 03:47:58.856762 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2193, in __inner 2018-03-19 03:47:58.856786 result = f(self, *args, **kwargs) 2018-03-19 03:47:58.856800 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2378, in register_cli_opt 2018-03-19 03:47:58.856823 raise ArgsAlreadyParsedError("cannot register CLI option") 2018-03-19 03:47:58.856853 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option Please help. Thanks! Best regards Franken -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 4DF0CC94 at 93C2EF65.586CB85A.jpg Type: image/jpeg Size: 31294 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: B85A952F at 68EEA42E.586CB85A.jpg Type: image/jpeg Size: 28996 bytes Desc: not available URL: From vinhnt at vn.fujitsu.com Mon Mar 26 08:21:48 2018 From: vinhnt at vn.fujitsu.com (vinhnt at vn.fujitsu.com) Date: Mon, 26 Mar 2018 08:21:48 +0000 Subject: [openstack-dev] [telemetry][aodh][panko][oslo][performance] OSprofiler in Aodh & Panko Message-ID: <241160ae567845aa8e04311f06d43c7e@G07SGEXCMSGPS06.g07.fujitsu.local> Hello folks, Just a reminder. I have some patches related to OSProfiler that are ready for review in Panko and Aodh. Hope that you guys can leave a comment. These patches are: 1. Aodh: https://review.openstack.org/#/c/483268/ 2. Aodh client: https://review.openstack.org/#/c/484295/ 3. Panko: https://review.openstack.org/#/c/483848/ 4. Panko client: https://review.openstack.org/#/c/484294/ Thank you! Best regards, Vinh Nguyen Trong PODC – Fujitsu Vietnam Ltd. > -----Original Message----- > From: Nguyen, Trong Vinh > Sent: Monday, 14 August, 2017 08:24 > To: OpenStack Development Mailing List (not for usage questions) dev at lists.openstack.org>; Trong Vinh Nguyen (VinhNT at vn.Fujitsu.com) > > Subject: [openstack-dev][telemetry][aodh][panko][oslo][performance] OSprofiler in > Aodh & Panko > > Hello, > > I’m sending this email for asking about the work in integrating OSprofiler into Aodh & > Panko. > Currently, there are some patches related to this work, and they are waiting for review: > 1. Aodh: https://review.openstack.org/#/c/483268/ > 2. Aodh client: https://review.openstack.org/#/c/484295/ > 3. Panko: https://review.openstack.org/#/c/483848/ > 4. Panko client: https://review.openstack.org/#/c/484294/ > > FYI, OSprofiler provides functionality to generate a trace per request, that goes > through all involve services. > This trace can visualize flow of a request [1] [2]. > A trace from OSprofiler can help us know these things: > - Performance bottle-neck of a service > - Trouble-shooting issue in a service > - Understanding flow of a request (from cli client or other client) > - Trace can be store in persistent storage > - Visualization trace flow in many OpenTracing compatible tracer [2] (will be done > soon) > - Head, tail-based sampling for reducing overhead [3] > - Asynchronous tracing [4] > > OSprofiler has already been in most of main OpenStack services such as: Nova, > Neutron, Keystone, Glance, and Cinder... > > Hope that it will receive reviews from you all. > > Thanks! > > [1] Demo with current OSprofiler patch set in Swift: > https://tovin07.github.io/swift/swift-object-create.html > [2] A demo with OpenTracing compatible (using Uber Jaeger): > https://tovin07.github.io/opentracing/jaeger-openstack-image-list.png > [3] Tail-based coherent sampling: https://blueprints.launchpad.net/osprofiler/+spec/tail- > based-coherent-sampling > [4] Asynchronous tracing: > https://blueprints.launchpad.net/osprofiler/+spec/asynchronous-trace-collection > [5] OSprofiler documentation: https://docs.openstack.org/osprofiler/latest/ > > Best regards, > > Vinh Nguyen Trong > PODC – Fujitsu Vietnam Ltd. > From jean-philippe at evrard.me Mon Mar 26 09:36:28 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 26 Mar 2018 10:36:28 +0100 Subject: [openstack-dev] [OpenStack-Ansible] Vancouver forum etherpad Message-ID: Dear OpenStack-Ansiblers, We've got an etherpad here [1] to track all the things we want to discuss at the forum.
If you're an OpenStack-Ansible user, don't hesitate to add your session ideas, list what you liked or disliked in the last release, and share your experience! Thank you in advance. [1]: https://etherpad.openstack.org/p/YVR-openstack-ansible-brainstorming Best regards, Jean-Philippe. From scheuran at linux.vnet.ibm.com Mon Mar 26 09:41:38 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Mon, 26 Mar 2018 11:41:38 +0200 Subject: [openstack-dev] [nova][ThirdParty-CI] Nova s390x CI back online In-Reply-To: <7DDB6DAF-C75A-4AC4-8AC3-C4994311912F@linux.vnet.ibm.com> References: <7DDB6DAF-C75A-4AC4-8AC3-C4994311912F@linux.vnet.ibm.com> Message-ID: Hi, with the latest release of pyroute2 I could bring the s390x CI back online again. I am now rechecking all the missed patches… The introduction of pyroute2 in Neutron broke the Neutron DHCP and L3 agents on s390x. pyroute2 was simply missing s390x enablement. It got added with patch [1]. [1] https://github.com/svinota/pyroute2/commit/02328faf0de10073bc9e71d593b1656db62f6f6b --- Andreas Scheuring (andreas_s) On 14. Mar 2018, at 17:29, Andreas Scheuring wrote: A brief update: The root cause is that Neutron patch [1] broke Neutron DHCP and L3 agent on s390x (both use pyroute2 for network namespace management now). The issue needs to get fixed in pyroute2 itself. I opened a PR [2]. Ideally a new version gets released soon. [1] https://github.com/openstack/neutron/commit/c4d4336 [2] https://github.com/svinota/pyroute2/pull/469 --- Andreas Scheuring (andreas_s) On 13. Mar 2018, at 17:14, Andreas Scheuring > wrote: Hello, the s390x CI for nova is currently broken again. The reason seems to be a recent change that merged in neutron. I’m looking into it... --- Andreas Scheuring (andreas_s) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Mon Mar 26 09:49:23 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 26 Mar 2018 11:49:23 +0200 Subject: [openstack-dev] [oslo] Any reason why not have 'choices' parameter for ListOpt()? Message-ID: <20180326094923.aidowcambav26aop@eukaryote> Hi there, I was looking at oslo_config/cfg.py[*], and the StrOpt() class has a 'choices' parameter, to allow a sequence of valid values / tuples of valid values for descriptions. However, I don't see the same 'choices' parameter for the ListOpt() class. Out of curiosity, is there a reason to not add it for ListOpt() too? [*] https://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/cfg.py#n1271 -- /kashyap From glongwave at gmail.com Mon Mar 26 11:24:11 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Mon, 26 Mar 2018 19:24:11 +0800 Subject: Re: [openstack-dev] [oslo] Any reason why not have 'choices' parameter for ListOpt()?
In-Reply-To: <20180326094923.aidowcambav26aop@eukaryote> References: <20180326094923.aidowcambav26aop@eukaryote> Message-ID: There is no special reason that ListOpt doesn't allow that parameter; PortOpt also uses the 'choices' parameter. PortOpt and StrOpt only accept one value and use this parameter to double-check that the value is valid, i.e. that it is in 'choices'. What's your use case for ListOpt? Just to make sure the value (a list) is part of 'choices'? Maybe we need another parameter to distinguish the two. 2018-03-26 17:49 GMT+08:00 Kashyap Chamarthy : > Hi there, > > I was looking at oslo_config/cfg.py[*], and the StrOpt() class has > 'choices' parameter, to allow a sequence of valid values / tuples of > valid values for descriptions. > > However, I don't see the same 'choices' parameter for the ListOpt() > class. Out of curiosity, is there a reason to not add it for ListOpt() > too? > > [*] https://git.openstack.org/cgit/openstack/oslo.config/ > tree/oslo_config/cfg.py#n1271 > > -- > /kashyap > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From glongwave at gmail.com Mon Mar 26 12:15:53 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Mon, 26 Mar 2018 20:15:53 +0800 Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime In-Reply-To: References: Message-ID: 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński : > Hi, > > I took care of implementation of [1] in Neutron and I have couple questions to about this goal. > > 1. Should we only change "restart_method" to mutate as is described in [2] > ? I did already something like that in [3] - is it what is expected? > Yes, that's the only thing; we need to test that it works. > > 2. How I can check if this change is fine and config option are mutable exactly? For now when I change any config option for any of neutron agents and send SIGHUP to it it is in fact "restarted" and config is reloaded even with this old restart method. > Good question; we indeed thought about this question when we proposed the goal. But it seems difficult to test that automatically in consuming projects like Neutron. > > 3. Should we add any automatic tests for such change also? Any examples of such tests in other projects maybe? > There are no example tests now; we only have some unit tests in oslo.service. > > [1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html > [3] https://review.openstack.org/#/c/554259/ > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed...
URL: From slawek at kaplonski.pl Mon Mar 26 12:35:47 2018 From: slawek at kaplonski.pl (Sławomir Kapłoński) Date: Mon, 26 Mar 2018 14:35:47 +0200 Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime In-Reply-To: References: Message-ID: <530670DA-DAFB-4734-B7A1-C97F991FD9E2@kaplonski.pl> Hi, > Message written by ChangBo Guo on 26.03.2018, at 14:15: > > > 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński : > Hi, > > I took care of implementation of [1] in Neutron and I have couple questions to about this goal. > > 1. Should we only change "restart_method" to mutate as is described in [2] ? I did already something like that in [3] - is it what is expected? > > Yes , let's the only thing. we need test if that if it works . Ok, so please take a look at my patch for neutron if that is what we should do :) > > 2. How I can check if this change is fine and config option are mutable exactly? For now when I change any config option for any of neutron agents and send SIGHUP to it it is in fact "restarted" and config is reloaded even with this old restart method. > > good question, we indeed thought this question when we proposal the goal. But It seems difficult to test that consuming projects like Neutron automatically. I was asking rather about some manual test instead of an automatic one. > > 3. Should we add any automatic tests for such change also? Any examples of such tests in other projects maybe? > There is no example for tests now, we only have some unit tests in oslo.service . > > [1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html > [3] https://review.openstack.org/#/c/554259/ > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > ChangBo Guo(gcb) > Community Director @EasyStack > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Best regards Slawek Kaplonski slawek at kaplonski.pl From major at mhtx.net Mon Mar 26 13:04:01 2018 From: major at mhtx.net (Major Hayden) Date: Mon, 26 Mar 2018 08:04:01 -0500 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core Message-ID: <4fb1218e-2278-691d-287e-60ac10ab1133@mhtx.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hey there, As promised, I am stepping down from being an OpenStack-Ansible core reviewer since I am unable to meet the obligations of the role with my new job. :( Thanks to everyone who has mentored me along the way and put up with my gate job breakages. I have learned an incredible amount about OpenStack, Ansible, complex software deployments, and open source communities. I appreciate everyone's support as I worked through the creation of the ansible-hardening role as well as adding CentOS support for OpenStack-Ansible.
- -- Major Hayden -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEG/mSZJWWADNpjCUrc3BR4MEBH7EFAlq4774ACgkQc3BR4MEB H7E+gA/9HJEDibsQhdy191NbxbhF75wUup3gRDHhGPI6eFqHo/Iz8Q5Kv9Z9CXbo rkBGMebbGzoKwiLnKbFWr448azMJkj5/bTRLHb1eDQg2S2xaywP2L4e0CU+Gouto DucmGT6uLg+LKdQByYTB8VAHelub4DoxV2LhwsH+uYgWp6rZ2tB2nEIDTYQihhGx /WukfG+3zA99RZQjWRHmfnb6djB8sONzGIM8qY4qDUw9Xjp5xguHOU4+lzn4Fq6B cEpsJnztuEYnEpeTjynu4Dc8g+PX8y8fcObhcj+1D0NkZ1qW7sdX6CA64wuYOqec S552ej/fR5FPRKLHF3y8rbtNIlK5qfpNPE4UFKuVLjGSTSBz4Kp9cGn2jNCzyw5c aDQs/wQHIiUECzY+oqU1RHZJf9/Yq1VVw3vio+Dye1IMgkoaNpmX9lTcNw9wb1i7 lac+fm0e438D+c+YZAttmHBCCaVWgKdGxH7BY84FoQaXRcaJ9y3ZoDEx6Rr8poBQ pK4YjUzVP9La2f/7S1QemX2ficisCbX+MVmAX9G4Yr9U2n98aXVWFMaF4As1H+OS zm9r9saoAZr6Z8BxjROjoClrg97RN1zkPseUDwMQwlJwF3V33ye3ib1dYWRr7BSm zAht+Jih/JE6Xtp+5UEF+6TBCYFVtXO8OHzCcac14w9dy1ur900= =fx64 -----END PGP SIGNATURE----- From thierry at openstack.org Mon Mar 26 13:33:03 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 26 Mar 2018 15:33:03 +0200 Subject: [openstack-dev] [stable][release] Remove complex ACL changes around releases Message-ID: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org> Hi! TL;DR: We used to do complex things with ACLs for stable/* branches around releases. Let's stop doing that as it's not really useful anyway, and just trust the $project-stable-maint teams to do the right thing. Current situation: As we get close to the end of a release cycle, we start creating stable/$series branches to refine what is likely to become a part of the coordinated release at the end of the cycle. After release, that same stable/$series branch is used to backport fixes and issue further point releases. The rules to apply for approving changes to stable/$series differ slightly depending on whether you are pre-release or post-release. To reflect that, we use two different groups. Pre-release the branch is controlled by the $project-release group (and Release Managers) and post-release the branch is controlled by the $project-stable-maint group (and stable-maint-core). To switch between the two without blocking on an infra ACL change, the release team enters a complex dance where we initially create an ACL for stable/$series, giving control of it to a $project-release-branch group, whose membership is reset at every cycle to contain $project-release. At release time, we update $project-release-branch Gerrit group membership to contain $project-stable-maint instead. Then we get rid of the stable/$series ACL altogether. This process is a bit complex and error-prone (and we tend to have to re-learn it every cycle). It's also designed for a time when we expected completely-different people to be in -release and -stable-maint groups, while those are actually, most of the time, the same people. Furthermore, with more and more deliverables being released under the cycle-with-intermediary model, pre-release and post-release approval rules are actually more and more of the same. Proposal: By default, let's just have $project-stable-maint control stable/*. We no longer create new ACLs for stable/$series every cycle, we no longer switch from $project-release control to $project-stable-maint control. The release team no longer does anything around stable branch ACLs or groups during the release cycle. That way, the same group ends up being used to control stable/* pre-release and post-release. They were mostly the same people already: Release managers are a part of stable-maint-core, which is included in every $project-stable-maint anyway, so they retain control. 
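To make that concrete, the only stable section left in a typical project ACL file in project-config would look roughly like this (a sketch following the existing ACL conventions; the group name varies per project, so treat the names below as placeholders):

    [access "refs/heads/stable/*"]
    abandon = group Change Owner
    abandon = group $project-stable-maint
    exclusiveGroupPermissions = abandon label-Code-Review label-Workflow
    label-Code-Review = -2..+2 group $project-stable-maint
    label-Workflow = -1..+1 group $project-stable-maint

No per-series stable/$series section, no $project-release-branch group, and nothing for the release team to create or flip at release time.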
What that changes for you: If you are part of $project-release but not part of $project-stable-maint, you'll probably want to join that team. If you review pre-release changes on a stable branch for a cycle-with-milestones deliverable, you will have to remember that the rules there are slightly different from stable branch approval rules. If in doubt, do not approve, and ask. But I don't like that! I prefer tight ACLs! While we do not recommend it, every team can still specify more complex ACLs to control their stable branches. As long as the "Release Managers" group retains the ability to approve changes pre-release (and stable-maint-core retains the ability to approve changes post-release), more specific ACLs are fine. Let me know if you have any comments, otherwise we'll start using that new process for the Rocky cycle (stable/rocky branch). Thanks! -- Thierry Carrez (ttx) From thierry at openstack.org Mon Mar 26 13:59:19 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 26 Mar 2018 15:59:19 +0200 Subject: [openstack-dev] [all] Vancouver Forum - Topic submission tool is now open! Message-ID: <4e784d97-3540-262d-9be6-9a9e5bf1ebac@openstack.org> Hi everyone, The Forum in Vancouver is getting closer! As a reminder, the Forum is where we take advantage of having a large community presence at the Summit to discuss and get wide feedback on a variety of topics around the future of OpenStack and adjacent projects. Starting today, our submission tool is open for you to submit abstracts for the most popular sessions that came out of your brainstorming. All teams, work groups and SIGs should now submit their abstracts at: http://forumtopics.openstack.org/ before 11:59PM UTC on Sunday April 15th! We are looking for a good mix of project-specific, cross-project or strategic/whole-of-community discussions, and sessions that emphasize collaboration between our various types of contributors are most welcome! We assume that anything submitted to the system has achieved a good amount of discussion and consensus that it's a worthwhile topic. After submissions close, a team of representatives from the User Committee, the Technical Committee and Foundation staff will take the sessions proposed by the community and fill out the schedule. You can expect the draft schedule to be released around April 22nd. Further details about the Forum can be found at: https://wiki.openstack.org/wiki/Forum Regards, -- Thierry Carrez (ttx) From mathieu.goessens at imt-atlantique.fr Mon Mar 26 14:36:00 2018 From: mathieu.goessens at imt-atlantique.fr (Mathieu Goessens) Date: Mon, 26 Mar 2018 14:36:00 +0000 Subject: [openstack-dev] [kolla] Kolla-Ansible pip packages vulnerable to CVE-2018-1000115 Message-ID: <989a46eb-d85e-73ed-9742-1a207d7a8975@imt-atlantique.fr> Hi folks, I initially sent this mail privately, resending it to the list on request: Kolla-Ansible https://docs.openstack.org/kolla-ansible/ pip packages (recommended in the doc) are vulnerable to CVE-2018-1000115. The patch has been committed and merged in stable/queens, stable/pike, and stable/ocata: https://review.openstack.org/#/c/550686/. However, the pip stable packages are still based on 5.0.1, which does not contain the fix (6.0.0.0rc2, which contains the fix, is available in pip, but won't be installed by default because it's a prerelease). While I understand that good security practices would recommend firewalling, etc., and that the fixes are available, I believe having vulnerable packages in the default, recommended install is an important issue.
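(For operators who cannot pull a fixed package right away: the merged patch linked above boils down to disabling memcached's UDP listener, which is the amplification vector for CVE-2018-1000115. A hand-applied mitigation sketch, to be checked against the actual diff in the review; the address and port below are illustrative only:

    # In ansible/roles/memcached/templates/memcached.json.j2, add "-U 0"
    # to the memcached command so the UDP listener is never started:
    memcached -vv -l 10.0.0.10 -p 11211 -U 0

then redeploy the memcached container so the new command takes effect.)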
Moreover, I would like to suggest issuing a Security Advisory when updated packages are available, because: - pip/system won't propose upgrades by default, so users may not be aware they are vulnerable. - users can actually be hit by CVE-2018-1000115 and participate in DDoS attacks. - the DDoS traffic patterns observed in my cloud are not big bursts, but follow a classic daily pattern that could look legitimate and so could stay unnoticed for a long time (see graph, http://pix.toile-libre.org/?img=1522070903.png, mostly if not only DDoS traffic in) ------------------------------------- How to verify: git clone https://github.com/openstack/kolla-ansible ; cd kolla-ansible git checkout tags/6.0.0.0rc2 ; git log | grep "Security memcached" git checkout tags/5.0.1 ; git log | grep "Security memcached" wget https://pypi.python.org/packages/cc/f2/27d9e75f2fe142b2a73c57023b055aa9a50e49ba69d7da9c7808c4f25ac1/kolla-ansible-5.0.1.tar.gz#md5=6456618318b58d844ae57b47e34ee569 tar xvzf kolla-ansible-5.0.1.tar.gz cat kolla-ansible-5.0.1/ansible/roles/memcached/templates/memcached.json.j2 (compare with https://review.openstack.org/#/c/550686/ if needed) Cheers, -- Mathieu Goessens Research Engineer IMT Atlantique -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: 
From zhang.lei.fly at gmail.com Mon Mar 26 15:40:58 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 26 Mar 2018 23:40:58 +0800 Subject: [openstack-dev] [kolla] Kolla-Ansible pip packages vulnerable to CVE-2018-1000115 In-Reply-To: <989a46eb-d85e-73ed-9742-1a207d7a8975@imt-atlantique.fr> References: <989a46eb-d85e-73ed-9742-1a207d7a8975@imt-atlantique.fr> Message-ID: Hi Mathieu, Thanks for raising this issue. The patch is merged on all branches but not released [0]. We will publish the next release ASAP. But on the other hand, if you build an OpenStack cloud through kolla and it is accessible through the internet, you'd better use an external network (interface) for internet access. There are lots of ports enabled on the internal network, like MariaDB and Memcached. [0] https://review.openstack.org/#/q/I30acb41f1209c0d07eb58f4feec91bc53146dcea
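For illustration, that split is driven by a few settings in /etc/kolla/globals.yml along these lines (a sketch; interface names and addresses are placeholders, not defaults):

  # Keep internal service ports (MariaDB, Memcached, ...) off the internet:
  network_interface: "eth0"                 # internal/management traffic
  kolla_external_vip_interface: "eth1"      # public API endpoints only
  kolla_internal_vip_address: "10.0.0.250"
  kolla_external_vip_address: "203.0.113.250"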
On Mon, Mar 26, 2018 at 10:36 PM, Mathieu Goessens < mathieu.goessens at imt-atlantique.fr> wrote: > Hi folks, > > I initially sent this mail privately, resending it to the list on request: > > Kolla-Ansible https://docs.openstack.org/kolla-ansible/ pip packages > (recommended in the doc) are vulnerable to CVE-2018-1000115. > > The patch has been committed and merged in stable/queens, stable/pike > and stable/ocata: https://review.openstack.org/#/c/550686/. However, the pip > stable packages are still based on 5.0.1, which does not contain the fix > (6.0.0.0rc2, which contains the fix, is available in pip, but won't be > installed by default because it's a prerelease). > > While I understand that good security practices would recommend > firewalling etc., and that the fixes are available, I believe having > vulnerable packages in the default, recommended install is an important > issue. > > Moreover, I would like to suggest issuing a Security Advisory when > updated packages are available, because: > - pip/system won't propose upgrades by default, so users may not be aware > they are vulnerable. > - users can actually be hit by CVE-2018-1000115 and participate in DDoS attacks. > - the DDoS traffic patterns observed in my cloud are not big bursts, but > follow a classic daily pattern that could look legitimate and so > could stay unnoticed for a long time (see graph, > http://pix.toile-libre.org/?img=1522070903.png, mostly if not only DDoS > traffic in) > > ------------------------------------- > How to verify: > > git clone https://github.com/openstack/kolla-ansible ; cd kolla-ansible > > git checkout tags/6.0.0.0rc2 ; git log | grep "Security memcached" > > git checkout tags/5.0.1 ; git log | grep "Security memcached" > > > wget > https://pypi.python.org/packages/cc/f2/27d9e75f2fe142b2a73c57023b055a > a9a50e49ba69d7da9c7808c4f25ac1/kolla-ansible-5.0.1.tar.gz#md5= > 6456618318b58d844ae57b47e34ee569 > > tar xvzf kolla-ansible-5.0.1.tar.gz > > cat kolla-ansible-5.0.1/ansible/roles/memcached/templates/ > memcached.json.j2 > > (compare with https://review.openstack.org/#/c/550686/ if needed) > > > Cheers, > -- > Mathieu Goessens > Research Engineer > IMT Atlantique > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From kgiusti at gmail.com Mon Mar 26 15:51:42 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 26 Mar 2018 11:51:42 -0400 Subject: [openstack-dev] [all][oslo] Notice to users of the ZeroMQ transport in oslo.messaging Message-ID: Folks, It's been over a year since the last commit was made to the ZeroMQ driver in oslo.messaging. It is at the point where some of the related unit tests are beginning to fail due to bit rot. None of the current oslo.messaging contributors have a good enough understanding of the codebase to effectively fix it. Personally, I'm not sure the driver will work in production at all. Given this, it was decided in Dublin that the ZeroMQ driver no longer meets the official policy for in-tree driver support [0] and will be deprecated in Rocky. However, it would be insincere for the team to give the impression that the driver is maintained for the normal two-cycle deprecation process. Therefore the driver code will be removed in 'S'. The ZeroMQ driver is the largest body of code of any driver in the oslo.messaging repo, weighing in at over 5k lines of code. For comparison, the rabbitmq kombu driver consists of only about 2k lines of code. If any individuals are willing to commit to ownership of this codebase and keep the driver compliant with policy (see [0]), please follow up with bnemec or myself (kgiusti) on #openstack-oslo. Thanks, [0] https://docs.openstack.org/oslo.messaging/latest/contributor/supported-messaging-drivers.html -- Ken Giusti (kgiusti at gmail.com)
From doug at doughellmann.com Mon Mar 26 15:52:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 26 Mar 2018 11:52:49 -0400 Subject: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core Message-ID: <1522079471-sup-7587@lrrr.local> Ken has been managing oslo.messaging for ages now but his participation in the team has gone far beyond that single library. He regularly attends meetings, including the PTG, and has provided input into several of our team decisions recently. I think it's time we make him a full member of the oslo-core group.
Please respond here with a +1 or -1 to indicate your opinion. Thanks, Doug
From jaypipes at gmail.com Mon Mar 26 15:53:16 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 26 Mar 2018 11:53:16 -0400 Subject: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core In-Reply-To: <1522079471-sup-7587@lrrr.local> References: <1522079471-sup-7587@lrrr.local> Message-ID: Big +1. On 03/26/2018 11:52 AM, Doug Hellmann wrote: > Ken has been managing oslo.messaging for ages now but his participation > in the team has gone far beyond that single library. He regularly > attends meetings, including the PTG, and has provided input into several > of our team decisions recently. > > I think it's time we make him a full member of the oslo-core group. > > Please respond here with a +1 or -1 to indicate your opinion. > > Thanks, > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >
From davanum at gmail.com Mon Mar 26 16:02:02 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Mon, 26 Mar 2018 12:02:02 -0400 Subject: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core In-Reply-To: References: <1522079471-sup-7587@lrrr.local> Message-ID: w00t! yes please +1 On Mon, Mar 26, 2018 at 11:53 AM, Jay Pipes wrote: > Big +1. > > > On 03/26/2018 11:52 AM, Doug Hellmann wrote: >> >> Ken has been managing oslo.messaging for ages now but his participation >> in the team has gone far beyond that single library. He regularly >> attends meetings, including the PTG, and has provided input into several >> of our team decisions recently. >> >> I think it's time we make him a full member of the oslo-core group. >> >> Please respond here with a +1 or -1 to indicate your opinion. >> >> Thanks, >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims
From ekuvaja at redhat.com Mon Mar 26 16:02:50 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 26 Mar 2018 17:02:50 +0100 Subject: [openstack-dev] [glance] Priorities for WC 26th of March Message-ID: Hi all, Time for the priority update: The python-glanceclient release was postponed until this week due to some bugs Brian found and is in the process of fixing. Let's review these and see what we need for the release, so we can get it out this week. The R-1 Milestone is approaching quickly, so I would like everyone to look at our roadmap etherpad and have our R-1 targets fresh in mind: https://etherpad.openstack.org/p/glance-rocky-priorities Give your feedback on the rest of the open specs we have in flight, especially the OSSN-0075 one: https://review.openstack.org/#/c/468179 Remember that Easter is this coming weekend for those of us who are affected by it, meaning that this and next week may be short for many. Have an amazing and effective week!
- Erno jokke Kuvaja From openstack at nemebean.com Mon Mar 26 16:18:17 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 26 Mar 2018 11:18:17 -0500 Subject: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core In-Reply-To: <1522079471-sup-7587@lrrr.local> References: <1522079471-sup-7587@lrrr.local> Message-ID: +1! On 03/26/2018 10:52 AM, Doug Hellmann wrote: > Ken has been managing oslo.messaging for ages now but his participation > in the team has gone far beyond that single library. He regularly > attends meetings, including the PTG, and has provided input into several > of our team decisions recently. > > I think it's time we make him a full member of the oslo-core group. > > Please respond here with a +1 or -1 to indicate your opinion. > > Thanks, > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Mon Mar 26 16:56:47 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 26 Mar 2018 11:56:47 -0500 Subject: [openstack-dev] [All] New PBR release coming soon Message-ID: Hi, Since this will potentially affect the majority of OpenStack projects, I wanted to give everyone some advance notice. PBR[1] hasn't been released since last summer, and as a result none of the bug fixes or new features that have gone in since then are available to users. Because of some feature removals that have happened, this will be a major release and due to the number of changes since the last release there's a higher probability of issues. We want to get this potentially painful release out of the way early in the cycle and then resume regular releases going forward. If you know of any reason we shouldn't do this right now please respond ASAP. Thanks. -Ben 1: https://docs.openstack.org/pbr/latest/ From harlowja at fastmail.com Mon Mar 26 16:58:00 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Mon, 26 Mar 2018 09:58:00 -0700 Subject: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core In-Reply-To: References: <1522079471-sup-7587@lrrr.local> Message-ID: <5AB92698.6090908@fastmail.com> +1 Ben Nemec wrote: > +1! > > On 03/26/2018 10:52 AM, Doug Hellmann wrote: >> Ken has been managing oslo.messaging for ages now but his participation >> in the team has gone far beyond that single library. He regularly >> attends meetings, including the PTG, and has provided input into several >> of our team decisions recently. >> >> I think it's time we make him a full member of the oslo-core group. >> >> Please respond here with a +1 or -1 to indicate your opinion. 
>> >> Thanks, >> Doug >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From muroi.masahito at lab.ntt.co.jp Mon Mar 26 18:00:22 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Tue, 27 Mar 2018 03:00:22 +0900 Subject: [openstack-dev] [Blazar] skip weekly meeting Message-ID: Hi Blazar folks, As we discussed in the last meeting, this week's meeting is skipped because most of the team members are out of the office. best regards, Masahito
From ramamani.yeleswarapu at intel.com Mon Mar 26 18:07:57 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 26 Mar 2018 18:07:57 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Deploy Steps - https://review.openstack.org/#/c/549493/ 1x+2 and 1x-1 - Remaining Rescue patches - https://review.openstack.org/#/c/546919/ - Fix a bug for unrescuing with a whole disk image - better fix: https://review.openstack.org/#/c/499050/ - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` Updated 19-Mar-2018. - https://review.openstack.org/#/c/538119/ - Rescue mode standalone tests - https://review.openstack.org/#/c/528699/ - Tempest tests with nova (This can land after nova work is done. But, it should be ready to get the nova patch reviewed.) - Management interface boot_mode change - https://review.openstack.org/#/c/526773/ - Bios interface support - https://review.openstack.org/#/c/511162/ - https://review.openstack.org/#/c/528609/ Vendor priorities ----------------- cisco-ucs: Patches in the works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: https://review.openstack.org/#/c/530838/ - OOB Raid spec for iLO5 irmc: None - a few items are work in progress oneview: None at this time - No subteam at present. xclarity: Subproject priorities --------------------- bifrost: ironic-inspector (or its client): networking-baremetal: networking-generic-switch: sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - (TheJulia) Ironic has moved to Storyboard. Dtantsur has indicated he will update the tool that generates these stats. - Stats (diff between 12 Mar 2018 and 19 Mar 2018) - Ironic: 225 bugs (+14) + 250 wishlist items (+2). 15 new (+10), 152 in progress, 1 critical, 36 high (+3) and 26 incomplete (+2) - Inspector: 15 bugs (+1) + 26 wishlist items. 1 new (+1), 14 in progress, 0 critical, 3 high and 4 incomplete - Nova bugs with Ironic tag: 14 (-1).
1 new, 0 critical, 0 high - critical: - sushy: https://bugs.launchpad.net/sushy/+bug/1754514 (basic auth broken when SessionService is not present) - note: the increase in bug count is probably because now the dashboard tracks virtualbmc and networking-baremetal - the dashboard was abruptly deleted and needs a new home :( - use it locally with `tox -erun` if you need to - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. Priorities ========== Deploy Steps (rloo, mgoddard) ----------------------------- - status as of 26 March 2018: - spec for deployment steps framework: https://review.openstack.org/#/c/549493/ - ready for reviews, 1 +2 already (but might need another update) BIOS config framework(zshi, yolanda, mgoddard, hshiina) ------------------------------------------------------- - status as of 26 March 2018: - Spec has merged: https://review.openstack.org/#/c/496481/ - List of ordered patches: - BIOS Settings: Add DB model: https://review.openstack.org/511162 - Add bios_interface db field https://review.openstack.org/528609 - BIOS Settings: Add DB API: https://review.openstack.org/511402 - BIOS Settings: Add RPC object https://review.openstack.org/511714 - Add BIOSInterface to base driver class https://review.openstack.org/507793 - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200 - Add Node BIOS support - REST API: https://review.openstack.org/512579 Conductor Location Awareness (jroll, dtantsur) ---------------------------------------------- - no update, will write spec soonish Reference architecture guide (dtantsur, jroll) ---------------------------------------------- - status as of 26 Mar 2018: - Dublin PTG consensus was to start with small architectural building blocks. - basic architecture explanation: https://review.openstack.org/554284 MERGED - list of cases from the Denver PTG - Admin-only provisioner - small and/or rare: TODO - non-HA acceptable, noop/flat network acceptable - large and/or frequent: TODO - HA required, neutron network or noop (static) network - Bare metal cloud for end users - smaller single-site: TODO - non-HA, ironic conductors on controllers and noop/flat network acceptable - larger single-site: TODO - HA, split out ironic conductors, neutron networking, virtual media > iPXE > PXE/TFTP - split out TFTP servers if you need them? - larger multi-site: TODO - cells v2 - ditto as single-site otherwise? 
Graphical console interface (mkrai, anup-d-navare, TheJulia) ------------------------------------------------------------ - status as of 19 Mar 2018: - VNC Graphical console spec: https://review.openstack.org/#/c/306074/ - needs update, address comments - nova blueprint: https://blueprints.launchpad.net/nova/+spec/ironic-vnc-console Neutron event processing (vdrok) -------------------------------- - status as of 19 Mar 2018: - spec at https://review.openstack.org/343684 - Needs update - WIP code at https://review.openstack.org/440778 - code is being rewritten to look a bit nicer (major rewrite), spec update coming afterwards Goals ===== Updating nova virt to use REST API (TheJulia) --------------------------------------------- Status as of 26 Mar 2018: Attempting to determine a new path of least resistance. The multiple-cached-clients approach is meeting resistance in nova. Changing the microversion mid-flight is problematic. Storyboard migration (TheJulia, dtantsur) ----------------------------------------- Status as of March 26th Done! TheJulia to propose patches to docs where appropriate. dtantsur to rewrite the bug dashboard Management interface refactoring (etingof, dtantsur) ---------------------------------------------------- - Status as of March 26th: - boot mode in ManagementInterface: https://review.openstack.org/#/c/526773/ needs review Getting clean steps (rloo, TheJulia) ------------------------------------ - Status as of March 26th: - Cleanhold specification updated - https://review.openstack.org/#/c/507910/ Project vision (jroll, TheJulia) -------------------------------- - Status as of March 26th: - jroll to send email detailing the session this week, failed last week, ENOTENOUGHTIME SIGHUP support (rloo) --------------------- - Proposed for ironic by rloo -- this is done: https://review.openstack.org/474331 MERGED \o/ - TODO: - ironic-inspector - networking-baremetal Stretch Goals ============= NOTE: These items will be migrated into storyboard and will be removed from the weekly whiteboard once storyboard is in place Classic driver removal formerly Classic drivers deprecation (dtantsur) ---------------------------------------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 26 Mar 2018: - switch documentation to hardware types: - api-ref examples: TODO - update https://wiki.openstack.org/wiki/Ironic/Drivers: TODO - or should we kill it with fire in favour of the docs?
- ironic-inspector: - documentation: https://review.openstack.org/#/c/545285/ MERGED - backport: https://review.openstack.org/#/c/554586/ - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ MERGED - change the default discovery driver: https://review.openstack.org/#/c/550464/ - migration of CI to hardware types - IPA: https://review.openstack.org/553431 MERGED - ironic-lib: https://review.openstack.org/#/c/552537/ MERGED - python-ironicclient: https://review.openstack.org/552543 MERGED - python-ironic-inspector-client: https://review.openstack.org/552546 +A MERGED - virtualbmc: https://review.openstack.org/#/c/555361/ MERGED - started an ML thread tagging potentially affected projects: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html Redfish OOB inspection (etingof, deray, stendulker) --------------------------------------------------- Zuul v3 playbook refactoring (sambetts, pas-ha) ----------------------------------------------- Before Rocky ============ CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ Rebase/update required - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarballs.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Queens High Priorities ====================== Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with integrating with Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation - RFEs (Rocky) - https://bugs.launchpad.net/networking-baremetal/+bug/1749166 - TheJulia, March 19th 2018: This RFE seems not to contain detail on what is desired to be improved upon, and ultimately just seems like refactoring/improvement work and may not then need an rfe.
- https://bugs.launchpad.net/networking-baremetal/+bug/1749162 - TheJulia, March 19th 2018: This RFE makes sense, although I would classify it as a general improvement. If we wish to adhere to strict RFE approval for networking-baremetal work, then I think we should consider this approved since it is a minor enhancement to improve operation. Rescue mode (rloo, stendulker) ------------------------------ - Status as of 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. - To get the nova patch merged, we need: - release new python-ironicclient - Done - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirements (this patch needs to be posted manually) Posted https://review.openstack.org/554673 - code patch: https://review.openstack.org/#/c/416487/ Needs revision - CI is needed for the nova part to land - tiendc is working on CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2018: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ OpenStack Priorities ==================== Mox --- - TheJulia needs to just declare this done. Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this includes all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector the job is enabled by disabling swift, as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug opened upstream https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not among their priorities. - Right now the patch passes all gate jobs except the agent_- drivers. - (TheJulia) It seems we might not have py3 compatibility with swift until the T- cycle.
- updating setup.cfg (part of requirements for the goal): - ironic: https://review.openstack.org/#/c/539500/ - MERGED - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507067 - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - delayed as the HA work seems to take a different direction - (TheJulia, March 19th, 2018) Perhaps because of the different direction, we should consider ourselves done? Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh@202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - It also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. the `openstack` command does not work. - TheJulia will try to look at this this week. Drivers: -------- OneView (???) ~~~~~~~~~~~~~ - OneView presently does not have a subteam. Cisco UCS (sambetts) Last updated 2018/02/05 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Cisco CIMC driver CI back up and working on every patch - Cisco UCSM driver CI in development - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From alifshit at redhat.com Mon Mar 26 18:30:24 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 26 Mar 2018 14:30:24 -0400 Subject: [openstack-dev] [nova] EC2 cleanup ? In-Reply-To: References: Message-ID: > That is easier said than done. There have been a couple of related attempts > in the past: > > https://review.openstack.org/#/c/266425/ > > https://review.openstack.org/#/c/282872/ > > I don't remember exactly where those fell down, but it's worth looking at > this first before trying to do this again. Interesting. [1] exists, and I'm pretty sure that we ship it as part of Red Hat OpenStack (but I'm not a PM and this is not an official Red Hat stance, just me and my memory), so it works well enough. If we have things that depend on our in-tree ec2 api, maybe we need to get them moved over to [1]? [1] https://github.com/openstack/ec2-api
From ansmith at redhat.com Mon Mar 26 18:58:15 2018 From: ansmith at redhat.com (Andy Smith) Date: Mon, 26 Mar 2018 14:58:15 -0400 Subject: [openstack-dev] [tripleo][oslo] messaging services update Message-ID: Hi, I wanted to provide an update on the work that has been ongoing to enhance the deployment of messaging system backends for RPC and Notify communications. https://blueprints.launchpad.net/tripleo/+spec/tripleo-messaging The basis of the change is to introduce oslo messaging RPC and Notify services in place of the rabbitmq server settings. This will enable tripleo to continue to deploy a single messaging backend (e.g. clustered rabbitmq server) but will also provide the ability to configure separate messaging backends for each oslo messaging service.
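As an illustration, a service's oslo.messaging configuration with split backends could look something like this (a sketch; hosts and credentials are placeholders):

  [DEFAULT]
  # RPC traffic over an AMQP 1.0 dispatch router (qdrouterd)
  transport_url = amqp://rpcuser:rpcpass@qdr-host:5672/

  [oslo_messaging_notifications]
  driver = messagingv2
  # Notifications stay on a clustered RabbitMQ backend
  transport_url = rabbit://notifyuser:notifypass@rabbit-host:5672/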
The ability to separate the messaging services has been supported since the newton release and can result in increased performance, scalability and reliability of an OpenStack deployment. In addition, it facilitates the use of alternative messaging backend systems supported by oslo messaging. Summary of the changes: 1. https://review.openstack.org/#/c/508259/ tripleo-common changes to introduce separate RPC and Notify users and passwords for distinct messaging transports 2. https://review.openstack.org/#/c/522406/ This first puppet-tripleo patch supports both current master and the tripleo-heat-templates update below. The patch only works with a single messaging backend and is needed to transition through CI properly. 3. https://review.openstack.org/#/c/507963/ tripleo-heat-templates update to introduce oslo messaging services in place of rabbitmq server settings. The patch supports separate RPC and Notify communications via the full set of parameters needed to define independent transports. 4. https://review.openstack.org/#/c/510684/ This second puppet-tripleo patch supports separate RPC and Notify oslo messaging services. In addition to CI, we have performed numerous local test deployments across the combination of patch sets and messaging backends. The deployments include single rabbitmq backends as well as hybrid deployments that use qdrouterd for RPC and rabbitmq for Notifications. We seek as many comments and as much feedback as possible to ensure that the changes introduced are transparent to the core services and that there is no impact on the universal deployment and operation of the rabbitmq backend server. The goal is to land these changes in Rocky-1 so that we can maximize the amount of testing during the release cycle. Thanks, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From mriedemos at gmail.com Mon Mar 26 19:12:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 26 Mar 2018 14:12:52 -0500 Subject: [openstack-dev] [oslo] Any reason why not have 'choices' parameter for ListOpt()? In-Reply-To: References: <20180326094923.aidowcambav26aop@eukaryote> Message-ID: On 3/26/2018 6:24 AM, ChangBo Guo wrote: > What's your use case for ListOpt, just make sure the value (a list) is > part of 'choices'? Maybe we need another parameter to distinguish It came up because of this change in nova: https://review.openstack.org/#/c/534384/ We want to backport that as a bug fix which is a mitigation for performance degradation issues with QEMU patches for Spectre and Meltdown. However, in the backport we wanted to restrict the ListOpt to a single value, "pcid". The idea is to restrict the new option to a single value in stable branches. Then in master, we could remove the 'choices' kwarg so operators can define their list as they wish. If we were to implement this generically in ListOpt, I suppose 'choices' would mean that the specified list must be a subset of the defined choices list. So in the backport patch, we'd just have choices=[None, 'pcid'] and you could either specify None or 'pcid' for that option (defaulting to None). Right now the code that's relying on this option just has a hard-coded check for the value, which is OK. -- Thanks, Matt
From melwittt at gmail.com Mon Mar 26 19:28:02 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 26 Mar 2018 12:28:02 -0700 Subject: [openstack-dev] [oslo] Any reason why not have 'choices' parameter for ListOpt()?
In-Reply-To: References: <20180326094923.aidowcambav26aop@eukaryote> Message-ID: On Mon, 26 Mar 2018 14:12:52 -0500, Matt Riedemann wrote: > On 3/26/2018 6:24 AM, ChangBo Guo wrote: >> What's your use case for ListOpt, just make sure the value (a list) is >> part of 'choices'? Maybe we need another parameter to distinguish > > It came up because of this change in nova: > > https://review.openstack.org/#/c/534384/ > > We want to backport that as a bug fix which is a mitigation for > performance degradation issues with QEMU patches for Spectre and Meltdown. > > However, in the backport we wanted to restrict the ListOpt to a single > value, "pcid". The idea is to restrict the new option to a single value > in stable branches. > > Then in master, we could remove the 'choices' kwarg so operators can > define their list as they wish. > > If we were to implement this generically in ListOpt, I suppose 'choices' > would mean that the specified list must be a subset of the defined > choices list. So in the backport patch, we'd just have choices=[None, > 'pcid'] and you could either specify None or 'pcid' for that option > (defaulting to None). > > Right now the code that's relying on this option just has a hard-coded > check for the value, which is OK. I'm not sure if this helps, but we do already have some example of a ListOpt with 'choices' for the VNC auth_schemes: https://github.com/openstack/nova/blob/cd15c3d/nova/conf/vnc.py#L229 Could we do something similar for the backport of the CPU flags patch? -melanie
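For reference, the pattern melanie points at looks roughly like this with oslo.config (a sketch: the option name mirrors the nova patch under discussion, the remaining values are illustrative):

  from oslo_config import cfg
  from oslo_config import types

  # Each item of the list is validated against the per-item choices, so a
  # stable branch can pin the option down to a single permitted flag.
  opt = cfg.ListOpt(
      'cpu_model_extra_flags',
      item_type=types.String(choices=['pcid']),
      default=[],
      help='Extra CPU feature flags; only "pcid" is allowed here.')

Because every list item is checked against the per-item choices, this gives exactly the subset-of-choices behavior described above.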
From openstack at fried.cc Mon Mar 26 20:02:23 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 26 Mar 2018 15:02:23 -0500 Subject: [openstack-dev] [nova][placement] Upgrade placement first! Message-ID: <2215df39-49fc-c756-eb11-f44c565803dc@fried.cc> Since forever [0], nova has gently recommended [1] that the placement service be upgraded first. However, we've not made any serious effort to test scenarios where this isn't done. For example, we don't have grenade tests running placement at earlier levels. After a(nother) discussion [2] which touched on the impacts - real and imagined - of running new nova against old placement, we finally decided to turn the recommendation into a hard requirement [3]. This gives admins a crystal clear guideline, lets us simplify our support statement, and also means we don't have to do 406 fallback code anymore. So we can do stuff like [4], and also avoid having to write (and subsequently remove) code like that in the future. Please direct any questions to #openstack-nova Your Faithful Scribe, efried [0] Like, since upgrading placement was a thing. [1] https://docs.openstack.org/nova/latest/user/upgrade.html#rolling-upgrade-process (#2, first bullet) [2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-26.log.html#t2018-03-26T17:35:11 [3] https://review.openstack.org/556631 [4] https://review.openstack.org/556633
From doug at doughellmann.com Mon Mar 26 20:26:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 26 Mar 2018 16:26:21 -0400 Subject: [openstack-dev] [oslo] Any reason why not have 'choices' parameter for ListOpt()? In-Reply-To: References: <20180326094923.aidowcambav26aop@eukaryote> Message-ID: <1522095876-sup-3308@lrrr.local> Excerpts from Matt Riedemann's message of 2018-03-26 14:12:52 -0500: > On 3/26/2018 6:24 AM, ChangBo Guo wrote: > > What's your use case for ListOpt, just make sure the value (a list) is > > part of 'choices'? Maybe we need another parameter to distinguish > > It came up because of this change in nova: > > https://review.openstack.org/#/c/534384/ > > We want to backport that as a bug fix which is a mitigation for > performance degradation issues with QEMU patches for Spectre and Meltdown. > > However, in the backport we wanted to restrict the ListOpt to a single > value, "pcid". The idea is to restrict the new option to a single value > in stable branches. > > Then in master, we could remove the 'choices' kwarg so operators can > define their list as they wish. So we would have to backport a feature to oslo.config to allow a list to have choices? > > If we were to implement this generically in ListOpt, I suppose 'choices' > would mean that the specified list must be a subset of the defined > choices list. So in the backport patch, we'd just have choices=[None, > 'pcid'] and you could either specify None or 'pcid' for that option > (defaulting to None). What does it mean if they set the option to a list containing both values? > > Right now the code that's relying on this option just has a hard-coded > check for the value, which is OK. >
From doug at doughellmann.com Mon Mar 26 20:27:47 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 26 Mar 2018 16:27:47 -0400 Subject: [openstack-dev] [oslo] Any reason why not have 'choices' parameter for ListOpt()? In-Reply-To: References: <20180326094923.aidowcambav26aop@eukaryote> Message-ID: <1522096005-sup-1680@lrrr.local> Excerpts from melanie witt's message of 2018-03-26 12:28:02 -0700: > On Mon, 26 Mar 2018 14:12:52 -0500, Matt Riedemann wrote: > > On 3/26/2018 6:24 AM, ChangBo Guo wrote: > >> What's your use case for ListOpt, just make sure the value (a list) is > >> part of 'choices'? Maybe we need another parameter to distinguish > > > > It came up because of this change in nova: > > > > https://review.openstack.org/#/c/534384/ > > > > We want to backport that as a bug fix which is a mitigation for > > performance degradation issues with QEMU patches for Spectre and Meltdown. > > > > However, in the backport we wanted to restrict the ListOpt to a single > > value, "pcid". The idea is to restrict the new option to a single value > > in stable branches. > > > > Then in master, we could remove the 'choices' kwarg so operators can > > define their list as they wish. > > > > If we were to implement this generically in ListOpt, I suppose 'choices' > > would mean that the specified list must be a subset of the defined > > choices list. So in the backport patch, we'd just have choices=[None, > > 'pcid'] and you could either specify None or 'pcid' for that option > > (defaulting to None). > > > > Right now the code that's relying on this option just has a hard-coded > > check for the value, which is OK. > > I'm not sure if this helps, but we do already have some example of a > ListOpt with 'choices' for the VNC auth_schemes: > > https://github.com/openstack/nova/blob/cd15c3d/nova/conf/vnc.py#L229 > > Could we do something similar for the backport of the CPU flags patch? > > -melanie > Oh, look, another feature that's already in the library. :-) Doug
From doug at doughellmann.com Mon Mar 26 20:33:37 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 26 Mar 2018 16:33:37 -0400 Subject: [openstack-dev] [Release-job-failures] Tag of openstack/instack-undercloud failed In-Reply-To: References: Message-ID: <1522096324-sup-430@lrrr.local> Excerpts from zuul's message of 2018-03-26 18:11:49 +0000: > Build failed.
> > - publish-openstack-releasenotes http://logs.openstack.org/94/94bb28ae46bd263314c9d846069ca913d225e625/tag/publish-openstack-releasenotes/9440894/ : POST_FAILURE in 3m 29s > This release notes build failure is probably not a problem, but I don't recognize the cause of the error so I wanted to bring it up in case someone else did. Is the ".pike.html.AEKeun" a lock file of some sort? Or a temporary file created for some other purpose? Doug rsync: failed to set permissions on "/afs/.openstack.org/docs/releasenotes/instack-undercloud/.pike.html.AEKeun": No such file or directory (2) rsync: rename "/afs/.openstack.org/docs/releasenotes/instack-undercloud/.pike.html.AEKeun" -> "pike.html": No such file or directory (2) rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1] Traceback (most recent call last): File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 115, in main() File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 110, in main output = afs_sync(p['source'], p['target']) File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 95, in afs_sync output['output'] = subprocess.check_output(shell_cmd, shell=True) File "/usr/lib/python3.5/subprocess.py", line 626, in check_output **kwargs).stdout File "/usr/lib/python3.5/subprocess.py", line 708, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '/bin/bash -c "mkdir -p /afs/.openstack.org/docs/releasenotes/instack-undercloud/ && /usr/bin/rsync -rtp --safe-links --delete-after --out-format='<>%i %n%L' --filter='merge /tmp/tmpcoywd87i' /var/lib/zuul/builds/9440894ee812414bb2ae813da1bbdfdd/work/artifacts/ /afs/.openstack.org/docs/releasenotes/instack-undercloud/"' returned non-zero exit status 23 From mriedemos at gmail.com Mon Mar 26 20:40:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 26 Mar 2018 15:40:11 -0500 Subject: [openstack-dev] Fwd: [nova][placement] Upgrade placement first! In-Reply-To: <2215df39-49fc-c756-eb11-f44c565803dc@fried.cc> References: <2215df39-49fc-c756-eb11-f44c565803dc@fried.cc> Message-ID: <856f0bfd-8e28-fa05-de0c-297a3e49a843@gmail.com> FYI -------- Forwarded Message -------- Subject: [openstack-dev] [nova][placement] Upgrade placement first! Date: Mon, 26 Mar 2018 15:02:23 -0500 From: Eric Fried Reply-To: OpenStack Development Mailing List (not for usage questions) Organization: IBM To: OpenStack Development Mailing List (not for usage questions) Since forever [0], nova has gently recommended [1] that the placement service be upgraded first. However, we've not made any serious effort to test scenarios where this isn't done. For example, we don't have grenade tests running placement at earlier levels. After a(nother) discussion [2] which touched on the impacts - real and imagined - of running new nova against old placement, we finally decided to turn the recommendation into a hard requirement [3]. This gives admins a crystal clear guideline, this lets us simplify our support statement, and also means we don't have to do 406 fallback code anymore. So we can do stuff like [4], and also avoid having to write (and subsequently remove) code like that in the future. Please direct any questions to #openstack-nova Your Faithful Scribe, efried [0] Like, since upgrading placement was a thing. 
[1] https://docs.openstack.org/nova/latest/user/upgrade.html#rolling-upgrade-process (#2, first bullet) [2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-26.log.html#t2018-03-26T17:35:11 [3] https://review.openstack.org/556631 [4] https://review.openstack.org/556633 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aschultz at redhat.com Mon Mar 26 21:36:11 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 26 Mar 2018 15:36:11 -0600 Subject: [openstack-dev] [Release-job-failures] Tag of openstack/instack-undercloud failed In-Reply-To: <1522096324-sup-430@lrrr.local> References: <1522096324-sup-430@lrrr.local> Message-ID: On Mon, Mar 26, 2018 at 2:33 PM, Doug Hellmann wrote: > Excerpts from zuul's message of 2018-03-26 18:11:49 +0000: >> Build failed. >> >> - publish-openstack-releasenotes http://logs.openstack.org/94/94bb28ae46bd263314c9d846069ca913d225e625/tag/publish-openstack-releasenotes/9440894/ : POST_FAILURE in 3m 29s >> > > This release notes build failure is probably not a problem, but I > don't recognize the cause of the error so I wanted to bring it up > in case someone else did. > > Is the ".pike.html.AEKeun" a lock file of some sort? Or a temporary > file created for some other purpose? > I think that's part of the rsync process. From https://rsync.samba.org/how-rsync-works.html > The receiver will read from the sender data for each file identified by the file index number. It will open the local file (called the basis) and will create a temporary file. > > The receiver will expect to read non-matched data and/or to match records all in sequence for the final file contents. When non-matched data is read it will be written to the temp-file. When a block match record is received the > receiver will seek to the block offset in the basis file and copy the block to the temp-file. In this way the temp-file is built from beginning to end. > > The file's checksum is generated as the temp-file is built. At the end of the file, this checksum is compared with the file checksum from the sender. If the file checksums do not match the temp-file is deleted. If the file fails once it will > be reprocessed in a second phase, and if it fails twice an error is reported. > > After the temp-file has been completed, its ownership and permissions and modification time are set. It is then renamed to replace the basis file. 
Thanks, -Alex > Doug > > rsync: failed to set permissions on "/afs/.openstack.org/docs/releasenotes/instack-undercloud/.pike.html.AEKeun": No such file or directory (2) > rsync: rename "/afs/.openstack.org/docs/releasenotes/instack-undercloud/.pike.html.AEKeun" -> "pike.html": No such file or directory (2) > rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1] > Traceback (most recent call last): > File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 115, in > main() > File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 110, in main > output = afs_sync(p['source'], p['target']) > File "/tmp/ansible_b5fr54k3/ansible_module_zuul_afs.py", line 95, in afs_sync > output['output'] = subprocess.check_output(shell_cmd, shell=True) > File "/usr/lib/python3.5/subprocess.py", line 626, in check_output > **kwargs).stdout > File "/usr/lib/python3.5/subprocess.py", line 708, in run > output=stdout, stderr=stderr) > subprocess.CalledProcessError: Command '/bin/bash -c "mkdir -p /afs/.openstack.org/docs/releasenotes/instack-undercloud/ && /usr/bin/rsync -rtp --safe-links --delete-after --out-format='<>%i %n%L' --filter='merge /tmp/tmpcoywd87i' /var/lib/zuul/builds/9440894ee812414bb2ae813da1bbdfdd/work/artifacts/ /afs/.openstack.org/docs/releasenotes/instack-undercloud/"' returned non-zero exit status 23 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From mjturek at linux.vnet.ibm.com Mon Mar 26 21:43:09 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 26 Mar 2018 17:43:09 -0400 Subject: [openstack-dev] [ironic] Bug Day April 6th poll Message-ID: <60ded09d-9e38-d630-dc91-bf9fd47d7c48@linux.vnet.ibm.com> Hey everyone, I set up a doodle to vote on when we should hold the bug day on April 6th https://doodle.com/poll/xa999rx653pb58t6 I'm not sure how long we want the session to be so I provided 24 one-hour windows. Please vote for whichever blocks of time work best for you. Thanks! Mike Turek
From melwittt at gmail.com Mon Mar 26 22:06:36 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 26 Mar 2018 15:06:36 -0700 Subject: [openstack-dev] [nova] Rocky community goal: remove the use of mox/mox3 for testing Message-ID: <6d0bc1ef-6995-bbb6-50e8-af883e1a9b8c@gmail.com> Hey everyone, This cycle there is a community goal to remove the use of mox/mox3 for testing [0]. In nova, we're tracking our work at this blueprint: https://blueprints.launchpad.net/nova/+spec/mox-removal If you propose patches contributing to this goal, please be sure to add something like "Part of blueprint mox-removal" in the commit message of your patch so it will be tracked as part of the blueprint for Rocky. NOTE: Please avoid converting any tests related to cells v1 or nova-network as these two legacy features are either in the process of being removed or on the roadmap to being removed within the next two cycles. Tests to *avoid* converting are located: nova/tests/unit/cells/ nova/tests/unit/compute/test_compute_cells.py nova/tests/unit/network/test_manager.py Please reply with other cells v1 or nova-network test locations to avoid if I've missed any.
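For anyone new to these conversions, a rough before/after sketch (a made-up example, not taken from the nova tree; the mox3 half is shown as comments since that API is what's being removed):

  try:
      import mock  # the third-party lib OpenStack used at the time
  except ImportError:
      from unittest import mock

  class Engine(object):
      def start(self):
          return 'real'

  engine = Engine()

  # mox3 style (being removed): record expectations, replay, verify.
  #   self.mox.StubOutWithMock(engine, 'start')
  #   engine.start().AndReturn('fake')
  #   self.mox.ReplayAll()
  #   self.assertEqual('fake', engine.start())
  #   self.mox.VerifyAll()

  # mock style (the replacement): patch the method, call, then assert.
  with mock.patch.object(engine, 'start', return_value='fake') as m:
      assert engine.start() == 'fake'
  m.assert_called_once_with()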
Thanks, -melanie [0] https://storyboard.openstack.org/#!/story/2001546
From jimmy at tipit.net Mon Mar 26 22:51:41 2018 From: jimmy at tipit.net (Jimmy Mcarthur) Date: Mon, 26 Mar 2018 17:51:41 -0500 Subject: [openstack-dev] Thank you TryStack!! Message-ID: <5AB9797D.1090209@tipit.net> Hi everyone, We recently made the tough decision, in conjunction with the dedicated volunteers who run TryStack, to end the service as of March 29, 2018. For those of you who used it, thank you for being part of the TryStack community. The good news is that you can find more resources to try OpenStack at http://www.openstack.org/start, including the Passport Program, where you can test on any participating public cloud. If you are looking to test different tools or application stacks with OpenStack clouds, you should check out Open Lab. Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and the many other volunteers who have managed this valuable service for the last several years! Your contribution to OpenStack was noticed and appreciated by many in the community. Cheers, Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From melwittt at gmail.com Tue Mar 27 02:00:06 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 26 Mar 2018 19:00:06 -0700 Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core Message-ID: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> Howdy everyone, I'd like to propose that we add Eric Fried to the nova-core team. Eric has been instrumental to the placement effort with his work on nested resource providers and has been actively contributing to many other areas of openstack [0] like project-config, gerritbot, keystoneauth, devstack, os-loganalyze, and so on. He's an active reviewer in nova [1] and elsewhere in openstack and reviews in-depth, asking questions and catching issues in patches and working with authors to help get code into merge-ready state. These are qualities I look
These are qualities I look > for in a potential core reviewer. > > In addition to all that, Eric is an active participant in the project in > general, helping people with questions in the #openstack-nova IRC channel, > contributing to design discussions, helping to write up outcomes of > discussions, reporting bugs, fixing bugs, and writing tests. His > contributions help to maintain and increase the health of our project. > > To the existing core team members, please respond with your comments, +1s, > or objections within one week. > > Cheers, > -melanie > > [0] https://review.openstack.org/#/q/owner:efried > [1] http://stackalytics.com/report/contribution/nova/90 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From glongwave at gmail.com Tue Mar 27 02:19:28 2018 From: glongwave at gmail.com (ChangBo Guo) Date: Tue, 27 Mar 2018 10:19:28 +0800 Subject: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core In-Reply-To: <5AB92698.6090908@fastmail.com> References: <1522079471-sup-7587@lrrr.local> <5AB92698.6090908@fastmail.com> Message-ID: +1 2018-03-27 0:58 GMT+08:00 Joshua Harlow : > +1 > > > Ben Nemec wrote: > >> +1! >> >> On 03/26/2018 10:52 AM, Doug Hellmann wrote: >> >>> Ken has been managing oslo.messaging for ages now but his participation >>> in the team has gone far beyond that single library. He regularly >>> attends meetings, including the PTG, and has provided input into several >>> of our team decisions recently. >>> >>> I think it's time we make him a full member of the oslo-core group. >>> >>> Please respond here with a +1 or -1 to indicate your opinion. >>> >>> Thanks, >>> Doug >>> >>> ____________________________________________________________ >>> ______________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- ChangBo Guo(gcb) Community Director @EasyStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Tue Mar 27 03:04:35 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Tue, 27 Mar 2018 11:04:35 +0800 Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core In-Reply-To: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> Message-ID: +1, Eric is very active and thorough reviewer ~ Best Regards! 
Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN   Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: melanie witt 
To: "OpenStack Development Mailing List (not for usage questions)" 
Date: 03/27/2018 10:00 AM
Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core

Howdy everyone,

I'd like to propose that we add Eric Fried to the nova-core team.

Eric has been instrumental to the placement effort with his work on nested resource providers and has been actively contributing to many other areas of openstack [0] like project-config, gerritbot, keystoneauth, devstack, os-loganalyze, and so on.

He's an active reviewer in nova [1] and elsewhere in openstack and reviews in-depth, asking questions and catching issues in patches and working with authors to help get code into merge-ready state. These are qualities I look for in a potential core reviewer.

In addition to all that, Eric is an active participant in the project in general, helping people with questions in the #openstack-nova IRC channel, contributing to design discussions, helping to write up outcomes of discussions, reporting bugs, fixing bugs, and writing tests. His contributions help to maintain and increase the health of our project.

To the existing core team members, please respond with your comments, +1s, or objections within one week.

Cheers,
-melanie

[0] https://review.openstack.org/#/q/owner:efried
[1] http://stackalytics.com/report/contribution/nova/90

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: 

From johnsomor at gmail.com  Tue Mar 27 03:33:04 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Mon, 26 Mar 2018 20:33:04 -0700
Subject: [openstack-dev] [octavia] Proposing Jacky Hu (dayou) as an Octavia core reviewer
Message-ID: 

Hello Octavia community,

I would like to propose Jacky Hu (dayou) as a core reviewer on the Octavia project.

Jacky has done amazing work on Octavia dashboard, specifically updating the look and feel of our details pages to be more user friendly. Recently he has contributed support for L7 policies in the dashboard and caught us up with the wider Horizon framework advances.

Jacky has also contributed thoughtful reviews on the main Octavia project as well as contributed to the L3 Active/Active work in progress.
Jacky's review statistics are in line with the other core reviewers [1] and I feel Jacky would make a great addition to the Octavia core reviewer team. Existing Octavia core reviewers, please reply to this email with your support or concerns with adding Jacky to the core team. Michael [1] http://stackalytics.com/report/contribution/octavia-group/90 From jaypipes at gmail.com Tue Mar 27 04:40:21 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 27 Mar 2018 00:40:21 -0400 Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core In-Reply-To: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> Message-ID: <4f3b29de-8bf3-347c-b500-967e28523371@gmail.com> +1 On 03/26/2018 10:00 PM, melanie witt wrote: > Howdy everyone, > > I'd like to propose that we add Eric Fried to the nova-core team. > > Eric has been instrumental to the placement effort with his work on > nested resource providers and has been actively contributing to many > other areas of openstack [0] like project-config, gerritbot, > keystoneauth, devstack, os-loganalyze, and so on. > > He's an active reviewer in nova [1] and elsewhere in openstack and > reviews in-depth, asking questions and catching issues in patches and > working with authors to help get code into merge-ready state. These are > qualities I look for in a potential core reviewer. > > In addition to all that, Eric is an active participant in the project in > general, helping people with questions in the #openstack-nova IRC > channel, contributing to design discussions, helping to write up > outcomes of discussions, reporting bugs, fixing bugs, and writing tests. > His contributions help to maintain and increase the health of our project. > > To the existing core team members, please respond with your comments, > +1s, or objections within one week. > > Cheers, > -melanie > > [0] https://review.openstack.org/#/q/owner:efried > [1] http://stackalytics.com/report/contribution/nova/90 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From German.Eichberger at rackspace.com Tue Mar 27 05:09:10 2018 From: German.Eichberger at rackspace.com (German Eichberger) Date: Tue, 27 Mar 2018 05:09:10 +0000 Subject: [openstack-dev] [octavia] Proposing Jacky Hu (dayou) as an Octavia core reviewer In-Reply-To: References: Message-ID: +1 Really excited to work with Jacky -- German On 3/26/18, 8:33 PM, "Michael Johnson" wrote: Hello Octavia community, I would like to propose Jacky Hu (dayou) as a core reviewer on the Octavia project. Jacky has done amazing work on Octavia dashboard, specifically updating the look and feel of our details pages to be more user friendly. Recently he has contributed support for L7 policies in the dashboard and caught us up with the wider Horizon framework advances. Jacky has also contributed thoughtful reviews on the main Octavia project as well as contributed to the L3 Active/Active work in progress. Jacky's review statistics are in line with the other core reviewers [1] and I feel Jacky would make a great addition to the Octavia core reviewer team. Existing Octavia core reviewers, please reply to this email with your support or concerns with adding Jacky to the core team. 
Michael [1] http://stackalytics.com/report/contribution/octavia-group/90 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From flux.adam at gmail.com Tue Mar 27 05:31:52 2018 From: flux.adam at gmail.com (Adam Harwell) Date: Tue, 27 Mar 2018 05:31:52 +0000 Subject: [openstack-dev] [octavia] Proposing Jacky Hu (dayou) as an Octavia core reviewer In-Reply-To: References: Message-ID: +1, definitely a good contributor! Thanks especially for your work on the dashboard! On Tue, Mar 27, 2018 at 2:09 PM German Eichberger < German.Eichberger at rackspace.com> wrote: > +1 > > Really excited to work with Jacky -- > > German > > On 3/26/18, 8:33 PM, "Michael Johnson" wrote: > > Hello Octavia community, > > I would like to propose Jacky Hu (dayou) as a core reviewer on the > Octavia project. > > Jacky has done amazing work on Octavia dashboard, specifically > updating the look and feel of our details pages to be more user > friendly. Recently he has contributed support for L7 policies in the > dashboard and caught us up with the wider Horizon framework advances. > > Jacky has also contributed thoughtful reviews on the main Octavia > project as well as contributed to the L3 Active/Active work in > progress. > > Jacky's review statistics are in line with the other core reviewers > [1] and I feel Jacky would make a great addition to the Octavia core > reviewer team. > > Existing Octavia core reviewers, please reply to this email with your > support or concerns with adding Jacky to the core team. > > Michael > > [1] http://stackalytics.com/report/contribution/octavia-group/90 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flavio at redhat.com Tue Mar 27 05:52:33 2018 From: flavio at redhat.com (Flavio Percoco) Date: Tue, 27 Mar 2018 07:52:33 +0200 Subject: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core In-Reply-To: <1522079471-sup-7587@lrrr.local> References: <1522079471-sup-7587@lrrr.local> Message-ID: <20180327055233.fqrv7mjvjealwrjm@redhat.com> On 26/03/18 11:52 -0400, Doug Hellmann wrote: >Ken has been managing oslo.messaging for ages now but his participation >in the team has gone far beyond that single library. He regularly >attends meetings, including the PTG, and has provided input into several >of our team decisions recently. > >I think it's time we make him a full member of the oslo-core group. > >Please respond here with a +1 or -1 to indicate your opinion. YAY! +1 -- @flaper87 Flavio Percoco -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 862 bytes
Desc: not available
URL: 

From chdzsp at 163.com  Tue Mar 27 07:28:16 2018
From: chdzsp at 163.com (Zhong Shengping)
Date: Tue, 27 Mar 2018 15:28:16 +0800 (CST)
Subject: [openstack-dev] [puppet] retiring puppet-ganesha
Message-ID: <13ad35dd.b2bc.162665b8411.Coremail.chdzsp@163.com>

Hi,

The puppet-ganesha module has been abandoned [1]. ceph-ansible will install/configure nfs-ganesha. I suggest we retire the repo to avoid confusion and do some cleanup. I'll propose a patch that retires openstack/puppet-ganesha; feel free to give feedback on this proposal.

[1] https://review.openstack.org/#/c/476899/

--
Zhong Shengping

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sbauza at redhat.com  Tue Mar 27 08:21:06 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Tue, 27 Mar 2018 10:21:06 +0200
Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core
In-Reply-To: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
Message-ID: 

+1

On Tue, Mar 27, 2018 at 4:00 AM, melanie witt  wrote:
> Howdy everyone,
>
> I'd like to propose that we add Eric Fried to the nova-core team.
>
> Eric has been instrumental to the placement effort with his work on nested
> resource providers and has been actively contributing to many other areas
> of openstack [0] like project-config, gerritbot, keystoneauth, devstack,
> os-loganalyze, and so on.
>
> He's an active reviewer in nova [1] and elsewhere in openstack and reviews
> in-depth, asking questions and catching issues in patches and working with
> authors to help get code into merge-ready state. These are qualities I look
> for in a potential core reviewer.
>
> In addition to all that, Eric is an active participant in the project in
> general, helping people with questions in the #openstack-nova IRC channel,
> contributing to design discussions, helping to write up outcomes of
> discussions, reporting bugs, fixing bugs, and writing tests. His
> contributions help to maintain and increase the health of our project.
>
> To the existing core team members, please respond with your comments, +1s,
> or objections within one week.
>
> Cheers,
> -melanie
>
> [0] https://review.openstack.org/#/q/owner:efried
> [1] http://stackalytics.com/report/contribution/nova/90
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kchamart at redhat.com  Tue Mar 27 08:22:38 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Tue, 27 Mar 2018 10:22:38 +0200
Subject: [openstack-dev] [oslo] Any reason why not have 'choices' parameter for ListOpt()?
In-Reply-To: 
References: <20180326094923.aidowcambav26aop@eukaryote>
Message-ID: <20180327082238.opxg464hrdu7dadp@eukaryote>

On Mon, Mar 26, 2018 at 12:28:02PM -0700, melanie witt wrote:
> On Mon, 26 Mar 2018 14:12:52 -0500, Matt Riedemann wrote:
> > On 3/26/2018 6:24 AM, ChangBo Guo wrote:
> > > What's your use case for ListOpt, just make sure the value (a list) is
> > > part of 'choices'?
> > > Maybe we need another parameter to distinguish
> >
> > It came up because of this change in nova:
> >
> > https://review.openstack.org/#/c/534384/
> >
> > We want to backport that as a bug fix which is a mitigation for
> > performance degradation issues with QEMU patches for Spectre and Meltdown.
> >
> > However, in the backport we wanted to restrict the ListOpt to a single
> > value, "pcid". The idea is to restrict the new option to a single value
> > in stable branches.
> >
> > Then in master, we could remove the 'choices' kwarg so operators can
> > define their list as they wish.
> >
> > If we were to implement this generically in ListOpt, I suppose 'choices'
> > would mean that the specified list must be a subset of the defined
> > choices list. So in the backport patch, we'd just have choices=[None,
> > 'pcid'] and you can either specify None or 'pcid' for that option
> > (default to None).
> >
> > Right now the code that's relying on this option just has a hard-coded
> > check for the value, which is OK.
>
> I'm not sure if this helps, but we do already have an example of a ListOpt
> with 'choices' for the VNC auth_schemes:
>
> https://github.com/openstack/nova/blob/cd15c3d/nova/conf/vnc.py#L229
>
> Could we do something similar for the backport of the CPU flags patch?

Ah, interesting pointer. It seems to work locally, and I updated the patch with it [*]:

    [...]
    cfg.ListOpt(
        'cpu_model_extra_flags',
        item_type=types.String(
            choices=['pcid']
        ),
        default=[],
        help="""
...
"""
    [...]

Thanks, Melanie.

[*] https://review.openstack.org/#/c/534384/

--
/kashyap

From balazs.gibizer at ericsson.com  Tue Mar 27 09:01:18 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Tue, 27 Mar 2018 11:01:18 +0200
Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core
In-Reply-To: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
Message-ID: <1522141278.9496.12@smtp.office365.com>

+1

On Tue, Mar 27, 2018 at 4:00 AM, melanie witt  wrote:
> Howdy everyone,
>
> I'd like to propose that we add Eric Fried to the nova-core team.
>
> Eric has been instrumental to the placement effort with his work on
> nested resource providers and has been actively contributing to many
> other areas of openstack [0] like project-config, gerritbot,
> keystoneauth, devstack, os-loganalyze, and so on.
>
> He's an active reviewer in nova [1] and elsewhere in openstack and
> reviews in-depth, asking questions and catching issues in patches and
> working with authors to help get code into merge-ready state. These
> are qualities I look for in a potential core reviewer.
>
> In addition to all that, Eric is an active participant in the project
> in general, helping people with questions in the #openstack-nova IRC
> channel, contributing to design discussions, helping to write up
> outcomes of discussions, reporting bugs, fixing bugs, and writing
> tests. His contributions help to maintain and increase the health of
> our project.
>
> To the existing core team members, please respond with your comments,
> +1s, or objections within one week.
> Cheers,
> -melanie
>
> [0] https://review.openstack.org/#/q/owner:efried
> [1] http://stackalytics.com/report/contribution/nova/90
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From vidyadharreddy68 at gmail.com  Tue Mar 27 09:05:28 2018
From: vidyadharreddy68 at gmail.com (vidyadhar reddy)
Date: Tue, 27 Mar 2018 11:05:28 +0200
Subject: [openstack-dev] [Neutron][vpnaas]
Message-ID: 

Hello,

Can we set up multiple VPNaaS site-to-site connections on a single router? Can a single router handle two or more site-to-site connections using VPNaaS?

I have tried this setup using three clouds interconnected in a star topology using VPNaaS, and the newly created VPN site connections go to DOWN state.

https://bugs.launchpad.net/neutron/+bug/1318550

In the link above it is mentioned that this is a design constraint of VPNaaS. Though the scenario I tested is not identical to the one described there, the basic idea is the same: multiple site-to-site connections on a single router. Kindly let me know if this is fixed already.

Thanks in advance.

BR
vidyadhar reddy

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hugh at wherenow.org  Tue Mar 27 09:48:12 2018
From: hugh at wherenow.org (Hugh Saunders)
Date: Tue, 27 Mar 2018 09:48:12 +0000
Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core
In-Reply-To: <4fb1218e-2278-691d-287e-60ac10ab1133@mhtx.net>
References: <4fb1218e-2278-691d-287e-60ac10ab1133@mhtx.net>
Message-ID: 

All the best Major, thanks for all your work on OSA :)

On Mon, 26 Mar 2018 at 14:05 Major Hayden  wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Hey there,
>
> As promised, I am stepping down from being an OpenStack-Ansible core
> reviewer since I am unable to meet the obligations of the role with my new
> job. :(
>
> Thanks to everyone who has mentored me along the way and put up with my
> gate job breakages. I have learned an incredible amount about OpenStack,
> Ansible, complex software deployments, and open source communities. I
> appreciate everyone's support as I worked through the creation of the
> ansible-hardening role as well as adding CentOS support for
> OpenStack-Ansible.
> > - -- > Major Hayden > -----BEGIN PGP SIGNATURE----- > > iQIzBAEBCAAdFiEEG/mSZJWWADNpjCUrc3BR4MEBH7EFAlq4774ACgkQc3BR4MEB > H7E+gA/9HJEDibsQhdy191NbxbhF75wUup3gRDHhGPI6eFqHo/Iz8Q5Kv9Z9CXbo > rkBGMebbGzoKwiLnKbFWr448azMJkj5/bTRLHb1eDQg2S2xaywP2L4e0CU+Gouto > DucmGT6uLg+LKdQByYTB8VAHelub4DoxV2LhwsH+uYgWp6rZ2tB2nEIDTYQihhGx > /WukfG+3zA99RZQjWRHmfnb6djB8sONzGIM8qY4qDUw9Xjp5xguHOU4+lzn4Fq6B > cEpsJnztuEYnEpeTjynu4Dc8g+PX8y8fcObhcj+1D0NkZ1qW7sdX6CA64wuYOqec > S552ej/fR5FPRKLHF3y8rbtNIlK5qfpNPE4UFKuVLjGSTSBz4Kp9cGn2jNCzyw5c > aDQs/wQHIiUECzY+oqU1RHZJf9/Yq1VVw3vio+Dye1IMgkoaNpmX9lTcNw9wb1i7 > lac+fm0e438D+c+YZAttmHBCCaVWgKdGxH7BY84FoQaXRcaJ9y3ZoDEx6Rr8poBQ > pK4YjUzVP9La2f/7S1QemX2ficisCbX+MVmAX9G4Yr9U2n98aXVWFMaF4As1H+OS > zm9r9saoAZr6Z8BxjROjoClrg97RN1zkPseUDwMQwlJwF3V33ye3ib1dYWRr7BSm > zAht+Jih/JE6Xtp+5UEF+6TBCYFVtXO8OHzCcac14w9dy1ur900= > =fx64 > -----END PGP SIGNATURE----- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Mar 27 09:57:21 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 27 Mar 2018 04:57:21 -0500 Subject: [openstack-dev] [Release-job-failures] Tag of openstack/instack-undercloud failed In-Reply-To: References: <1522096324-sup-430@lrrr.local> Message-ID: <20180327095721.GA18167@sm-xps> On Mon, Mar 26, 2018 at 03:36:11PM -0600, Alex Schultz wrote: > On Mon, Mar 26, 2018 at 2:33 PM, Doug Hellmann wrote: > > Excerpts from zuul's message of 2018-03-26 18:11:49 +0000: > >> Build failed. > >> > >> - publish-openstack-releasenotes http://logs.openstack.org/94/94bb28ae46bd263314c9d846069ca913d225e625/tag/publish-openstack-releasenotes/9440894/ : POST_FAILURE in 3m 29s > >> > > > > This release notes build failure is probably not a problem, but I > > don't recognize the cause of the error so I wanted to bring it up > > in case someone else did. > > > > Is the ".pike.html.AEKeun" a lock file of some sort? Or a temporary > > file created for some other purpose? > > > > I think that's part of the rsync process. From > https://rsync.samba.org/how-rsync-works.html > Sorry, I commented in the #openstack-release channel yesterday, but I should have followed up on this email. This is indeed from the rsync job. There is a long standing issue that if multiple jobs are overlapping, we occasionally get these types of errors when two of them are trying to upload files at the same time. It is mostly harmless, and release notes will get refreshed every merge, so in this case it is safe to ignore this failure. Sean From lvmxhster at gmail.com Tue Mar 27 10:38:08 2018 From: lvmxhster at gmail.com (=?UTF-8?B?5bCR5ZCI5Yav?=) Date: Tue, 27 Mar 2018 18:38:08 +0800 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> Message-ID: As I know placement and nova scheduler dedicate to filter and weight. Placement and nova scheduler is responsible for avoiding race. Nested provider + traits should cover most scenarios. Any special case please let the nova developer and cyborg developer know, let work together to get a solution. 
I re-paste our design (for a POC) I have send it before as follow, hopeful it can helpful. We do not let cyborg do any scheduler function( include filter and weight). It just responsible to do binding for FPGA device and vm instance ( or call it FPGA devices assignment) =============================================== hi all IMHO, we can consider the upstream of image management and resource provider management, even scheduler weight. 1. image management For image management, I miss one things in the meeting. We have discussed it before. And Li Liu suggested to add a cyborg wrapper to upload the FPGA image. This is a good ideas. For example: PUT /cyborg/v1/images/{image_id}/file It will call glance upload API to upload the image. This is helpful for us to normalize the tags of image and properties. To Dutch, Li Liu, Dolpher, Sunder and other FPGA experts: How about get agreement on the standardization of glance image metadata, especially, tags and property. For the tags: IMHO, the "FPGA" is necessary, for there maybe many images managed by glance, not only fpga image but also VM image. This tag can be a filter help us to get only fpga images. The vendor name is necessary as a tag? Such as "INTEL" or "XILINX" The product model is necessary as a tag? Such as "STRATIX10" Any others should be in the image tags? For the properties : It should include the function name(this means the accelerator type). Should it also include stream id and vendor name? such as: --property vendor=xilinx --property type=crypto,transcoding Any others should be in the image properties? Li Liu is working on the spec. 2. provider management. resource class, maybe the nested provider supported. we can define them as fellow: level 1 provider resource class is CUSTOM_FPGA_, and level 2 is CUSTOM_FPGA__, level 3 is CUSTOM_FPGA___ { "CUSTOM_FPGA_VF": { "num": 3 "CUSTOM_FPGA_ XILINX _VF": { "num": 1 } "CUSTOM_FPGA_INTEL_VF": { "CUSTOM_FPGA_INTEL_STRATIX10_VF": "num": 1 } { "CUSTOM_FPGA_INTEL_STRATIX11_VF": "num": 1 } } } Not sure I understand correctly. And traits should include: CUSTOM__FUNCTION_ domain means which project to consume these traits. CYBORG or ACCELERATOR which is better? Here it means cyborg care these traits. Nova, neutron, cinder can ignore them. function, can be CRYPTO, TRANSCODING. To Jay Pipes, Dutch, Li Liu, Dolpher, Sunder and other FPGA/placement experts: Any suggestion on it? 3. scheduler weight. I think this is not the high priority at present for cyborg. Zhipeng, Li Liu, Zhuli, Dopher and I have discussed them before for the deployable model implementation. We need to add steaming or image information for deployable. Li Liu and Zhuli's design, they do have add extra info for deployable. So it can be used for steaming or image information. And cyborg API had better support filters for scheduler weighting. Such as: GET /cyborg/v1/accelerators?hosts=cyborg-1, cyborg-2, cyborg-3&function=crypto,transcoding It query all the hosts cyborg-1, cyborg-2, cyborg-3 to get all accelerators support crypto and transcoding function. Cyborg API call conductor to get the accelerators information from by these filters scheduler can leverage the the accelerators information for weighting. Maybe Cyborg API can also help to do the weighting. But I think this is not a good idea. To Sunder: I know you are interested in scheduler weight and you have some other weighting solutions. Hopeful this can useful for you. 
REF: https://etherpad.openstack.org/p/cyborg-nova-poc 2018-03-23 12:27 GMT+08:00 Nadathur, Sundar : > Hi all, > There seems to be a possibility of a race condition in the Cyborg/Nova > flow. Apologies for missing this earlier. (You can refer to the proposed > Cyborg/Nova spec > > for details.) > > Consider the scenario where the flavor specifies a resource class for a > device type, and also specifies a function (e.g. encrypt) in the extra > specs. The Nova scheduler would only track the device type as a resource, > and Cyborg needs to track the availability of functions. Further, to keep > it simple, say all the functions exist all the time (no reprogramming > involved). > > To recap, here is the scheduler flow for this case: > > - A request spec with a flavor comes to Nova conductor/scheduler. The > flavor has a device type as a resource class, and a function in the extra > specs. > - Placement API returns the list of RPs (compute nodes) which contain > the requested device types (but not necessarily the function). > - Cyborg will provide a custom filter which queries Cyborg DB. This > needs to check which hosts contain the needed function, and filter out the > rest. > - The scheduler selects one node from the filtered list, and the > request goes to the compute node. > > For the filter to work, the Cyborg DB needs to maintain a table with > triples of (host, function type, #free units). The filter checks if a given > host has one or more free units of the requested function type. But, to > keep the # free units up to date, Cyborg on the selected compute node needs > to notify the Cyborg API to decrement the #free units when an instance is > spawned, and to increment them when resources are released. > > Therein lies the catch: this loop from the compute node to controller is > susceptible to race conditions. For example, if two simultaneous requests > each ask for function A, and there is only one unit of that available, the > Cyborg filter will approve both, both may land on the same host, and one > will fail. This is because Cyborg on the controller does not decrement > resource usage due to one request before processing the next request. > > This is similar to this previous Nova scheduling issue > . > That was solved by having the scheduler claim a resource in Placement for > the selected node. I don't see an analog for Cyborg, since it would not > know which node is selected. > > Thanks in advance for suggestions and solutions. > > Regards, > Sundar > > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jesse.Pretorius at rackspace.co.uk Tue Mar 27 11:11:01 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Tue, 27 Mar 2018 11:11:01 +0000 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: <4fb1218e-2278-691d-287e-60ac10ab1133@mhtx.net> References: <4fb1218e-2278-691d-287e-60ac10ab1133@mhtx.net> Message-ID: <74A47E01-C07A-41B2-A29D-6E50195AFFEF@rackspace.co.uk> Ah Major, we shall definitely miss your readiness to help, positive attitude and deep care for setenforce 1. Oh, and then there're the gifs... so many gifs... 
While I am inclined to [1], I shall instead wish you well while you [2]. ( [1] https://media.giphy.com/media/1BXa2alBjrCXC/giphy.gif [2] https://media.giphy.com/media/G6if3AWViiNdC/giphy.gif On 3/26/18, 2:07 PM, "Major Hayden" wrote: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hey there, As promised, I am stepping down from being an OpenStack-Ansible core reviewer since I am unable to meet the obligations of the role with my new job. :( Thanks to everyone who has mentored me along the way and put up with my gate job breakages. I have learned an incredible amount about OpenStack, Ansible, complex software deployments, and open source communities. I appreciate everyone's support as I worked through the creation of the ansible-hardening role as well as adding CentOS support for OpenStack-Ansible. - -- Major Hayden -----BEGIN PGP SIGNATURE----- iQIzBAEBCAAdFiEEG/mSZJWWADNpjCUrc3BR4MEBH7EFAlq4774ACgkQc3BR4MEB H7E+gA/9HJEDibsQhdy191NbxbhF75wUup3gRDHhGPI6eFqHo/Iz8Q5Kv9Z9CXbo rkBGMebbGzoKwiLnKbFWr448azMJkj5/bTRLHb1eDQg2S2xaywP2L4e0CU+Gouto DucmGT6uLg+LKdQByYTB8VAHelub4DoxV2LhwsH+uYgWp6rZ2tB2nEIDTYQihhGx /WukfG+3zA99RZQjWRHmfnb6djB8sONzGIM8qY4qDUw9Xjp5xguHOU4+lzn4Fq6B cEpsJnztuEYnEpeTjynu4Dc8g+PX8y8fcObhcj+1D0NkZ1qW7sdX6CA64wuYOqec S552ej/fR5FPRKLHF3y8rbtNIlK5qfpNPE4UFKuVLjGSTSBz4Kp9cGn2jNCzyw5c aDQs/wQHIiUECzY+oqU1RHZJf9/Yq1VVw3vio+Dye1IMgkoaNpmX9lTcNw9wb1i7 lac+fm0e438D+c+YZAttmHBCCaVWgKdGxH7BY84FoQaXRcaJ9y3ZoDEx6Rr8poBQ pK4YjUzVP9La2f/7S1QemX2ficisCbX+MVmAX9G4Yr9U2n98aXVWFMaF4As1H+OS zm9r9saoAZr6Z8BxjROjoClrg97RN1zkPseUDwMQwlJwF3V33ye3ib1dYWRr7BSm zAht+Jih/JE6Xtp+5UEF+6TBCYFVtXO8OHzCcac14w9dy1ur900= =fx64 -----END PGP SIGNATURE----- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev ________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. From delightwook at ssu.ac.kr Tue Mar 27 11:45:55 2018 From: delightwook at ssu.ac.kr (MinWookKim) Date: Tue, 27 Mar 2018 20:45:55 +0900 Subject: [openstack-dev] [Vitrage] New proposal for analysis. Message-ID: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the 'Add action list panel for entity click action'. (https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities in several entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with the VM and Host, or when we want to check the status, we need to access the console individually for each VM and Host. 
This situation causes unnecessary work when the number of VMs and hosts increases. My new suggestion is that if we have a large number of VMs and hosts, we should not need to connect directly to each VM or host console to enter system commands. Instead, through this proposal, we can send a system command to the VMs and hosts in the cloud and simply check the results.

I have written some use cases for an efficient explanation of the function.

From an implementation perspective, the goals of the proposal are:

1. To execute commands without installing any agent/client that could cause load on the VM or host.
2. To provide a simple UI so that users or administrators can get the desired information from multiple VMs and hosts.
3. To make it possible to grasp the results at a glance.
4. To implement a component that can support many additional scenarios in plug-in form.

I would be happy if you could comment on the proposal or ask questions.

Thanks.

Best Regards,
Minwook.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: use_case1.JPG
Type: image/jpeg
Size: 179511 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: user_case2.JPG
Type: image/jpeg
Size: 157211 bytes
Desc: not available
URL: 

From sfinucan at redhat.com  Tue Mar 27 12:04:58 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Tue, 27 Mar 2018 13:04:58 +0100
Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core
In-Reply-To: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
Message-ID: <1522152298.10229.0.camel@redhat.com>

+1

On Mon, 2018-03-26 at 19:00 -0700, melanie witt wrote:
> Howdy everyone,
>
> I'd like to propose that we add Eric Fried to the nova-core team.
>
> Eric has been instrumental to the placement effort with his work on
> nested resource providers and has been actively contributing to many
> other areas of openstack [0] like project-config, gerritbot,
> keystoneauth, devstack, os-loganalyze, and so on.
>
> He's an active reviewer in nova [1] and elsewhere in openstack and
> reviews in-depth, asking questions and catching issues in patches and
> working with authors to help get code into merge-ready state. These are
> qualities I look for in a potential core reviewer.
>
> In addition to all that, Eric is an active participant in the project in
> general, helping people with questions in the #openstack-nova IRC
> channel, contributing to design discussions, helping to write up
> outcomes of discussions, reporting bugs, fixing bugs, and writing tests.
> His contributions help to maintain and increase the health of our project.
>
> To the existing core team members, please respond with your comments,
> +1s, or objections within one week.
> Cheers,
> -melanie
>
> [0] https://review.openstack.org/#/q/owner:efried
> [1] http://stackalytics.com/report/contribution/nova/90
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dms at danplanet.com  Tue Mar 27 13:39:32 2018
From: dms at danplanet.com (Dan Smith)
Date: Tue, 27 Mar 2018 06:39:32 -0700
Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core
In-Reply-To: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> (melanie witt's message of "Mon, 26 Mar 2018 19:00:06 -0700")
References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
Message-ID: 

> To the existing core team members, please respond with your comments,
> +1s, or objections within one week.

+1.

--Dan

From zigo at debian.org  Tue Mar 27 13:53:34 2018
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 27 Mar 2018 15:53:34 +0200
Subject: [openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports
Message-ID: <54dfe8cd-a268-dd67-8542-e91754bdea63@debian.org>

Hi,

As some of you already know, after some difficult time after I left my past employer, I'm back! And I don't plan on giving up, ever... :)

The repositories:
=================
It's my pleasure to announce today the general availability of Debian packages for the Queens OpenStack release. These are available in official Debian Sid (as usual), and also as unofficial Stretch backports. These packages have been tested successfully with Tempest.

Here's the address of the (unofficial) backport repositories:

deb http://stretch-queens.debian.net/debian stretch-queens-backports main
deb-src http://stretch-queens.debian.net/debian stretch-queens-backports main
deb http://stretch-queens.debian.net/debian stretch-queens-backports-nochange main
deb-src http://stretch-queens.debian.net/debian stretch-queens-backports-nochange main

The repository key is here:

wget -O - http://stretch-queens.debian.net/debian/dists/pubkey.gpg | \
    apt-key add -

Please note that stretch-queens.debian.net is just an IN CNAME pointer to the server of my new employer, Infomaniak, and that the real server name is:

stretch-queens.infomaniak.ch

So, that server is of course located in Geneva, Switzerland. Thanks to my employer for sponsoring that server, and for allowing me to build these packages during my work time.

What's new in this release
==========================
1/ Python 3
-----------
The new stuff is ... the full switch to Python 3!

As far as I understand, apart from Gentoo, no other distribution has switched to Python 3 yet. Both RDO and Ubuntu are planning to do it for Rocky (at least that's what I've been told). So once more, Debian is on the edge. :)

While there is still dual Python 2/3 support for clients (with priority to Python 3 for binaries in /usr/bin), all services have been switched to Py3.

Building the packages worked surprisingly well. I was secretly expecting more failures. The only real collateral damage is:

- manila-ui (no Py3 support upstream)

As the Horizon package switched to Python 3, it's unfortunately impossible to keep these plugins on Python 2, and therefore manila-ui is now (from a Debian packaging standpoint) RC buggy, and shall be removed from Debian Testing.

Also, Django 2 will sooner or later be the only option in Debian Sid.
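As a side note on the repositories above: once the deb lines and the key are in place, installing a Queens package from the backports should look roughly like this (only a sketch; the package name is just an example, and I'm assuming apt's -t target release matches the suite name from the deb lines):

    # The package name below is just an example; any Queens package works.
    sudo apt-get update
    sudo apt-get install -t stretch-queens-backports python3-novaclient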
Coming back to Horizon: it'd be great if Horizon's patches could be merged, and if plugins adapted ASAP.

Also, one Neutron plugin isn't released upstream yet for Queens, and since the Neutron package switched to Python 3, the old Pike plugin package is also considered RC buggy (and it doesn't build with Queens anyway):
- networking-mlnx

The fate of the above packages is currently unknown. Hopefully, there's going to be upstream work to bring them into a packageable state (which means, for today's Debian, Python 3.6 compatible); if not, there will be no choice but to remove them from Debian.

As for networking-ovs-dpdk, it needs more work on OVS itself to support dpdk, and I still haven't found the time for it yet.

As a more general thing, it'd be nice if there was Python 3.6 in the gate. Hopefully, this will happen with the Bionic release and the infra switching to it. It's been a recurring problem, though, that Debian Sid experiences issues before the other distros (ie: before Ubuntu, for example), because it gets updates first. So I'd really love to have Sid as a possible image in the infra, so we could use it for a (non-voting) gate.

2/ New debconf unified templates
--------------------------------
The Debconf templates used to be embedded within each package. This isn't the case anymore; all of them are now stored in openstack-pkg-tools if they are not service specific. Hopefully, this will help provide better coverage for translations. The postinst scripts can also optionally create the service tenant and user automatically. The system also does less by default (ie: it won't even read your configuration files if the user doesn't explicitly ask for config handling), and API endpoints can now use FQDNs and https as well.

3/ New packages/services
------------------------
We've added Cloudkitty and Vitrage. Coming soon: Octavia and Vitrage.

Unfortunately, at this point, cloudkitty-dashboard still contains non-free files (ie: embedded minified javascripts). Worse, some of them cannot even be identified (I couldn't find out what version from upstream they were). So even if this package is ready, I can't upload it to Debian in such a state.

Cheers,

Thomas Goirand (zigo)

From fungi at yuggoth.org  Tue Mar 27 14:20:44 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 27 Mar 2018 14:20:44 +0000
Subject: [openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports
In-Reply-To: <54dfe8cd-a268-dd67-8542-e91754bdea63@debian.org>
References: <54dfe8cd-a268-dd67-8542-e91754bdea63@debian.org>
Message-ID: <20180327142044.kgbyin4fze7ldkyf@yuggoth.org>

On 2018-03-27 15:53:34 +0200 (+0200), Thomas Goirand wrote:
[...]
> I'd really love to have Sid as a possible image in the infra
[...]

This has unfortunately been rotting too far down my to-do list for me to get to it. I'd love to have debian-sid nodes to test some stuff on as well--especially clients/libraries/utilities--since a lot of my (workstation and portable) systems are running it. If someone is interested in and has time to work on this, I'm happy to provide guidance and review their changes.
--
Jeremy Stanley

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tpb at dyncloud.net Tue Mar 27 14:39:56 2018 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 27 Mar 2018 10:39:56 -0400 Subject: [openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports In-Reply-To: <54dfe8cd-a268-dd67-8542-e91754bdea63@debian.org> References: <54dfe8cd-a268-dd67-8542-e91754bdea63@debian.org> Message-ID: <20180327143956.pbnzlbn267pozr7r@barron.net> On 27/03/18 15:53 +0200, Thomas Goirand wrote: >Hi, > >As some of you already know, after some difficult time after I left my >past employer, I'm back! And I don't plan on giving-up, ever... :) > >The repositories: >================= >Today, it's my pleasure to announce today the general availability of >Debian packages for the Queens OpenStack release. These are available in >official Debian Sid (as usual), and also as a Stretch (unofficial) >backports. These packages have been tested successfully with Tempest. > >Here's the address of the (unofficial) backport repositories: > >deb http://stretch-queens.debian.net/debian > stretch-queens-backports main >deb-src http://stretch-queens.debian.net/debian > stretch-queens-backports main >deb http://stretch-queens.debian.net/debian > stretch-queens-backports-nochange main >deb-src http://stretch-queens.debian.net/debian > stretch-queens-backports-nochange main > >The repository key is here: >wget -O - http://stretch-queens.debian.net/debian/dists/pubkey.gpg | \ > apt-key add > >Please note that stretch-queens.debian.net is just a IN CNAME pointer to >the server of my new employer, Infomaniak, and that the real server name is: > >stretch-queens.infomaniak.ch > >So, that server is of course located in Geneva, Switzerland. Thanks to >my employer for sponsoring that server, and allowing me to build these >packages during my work time. > >What's new in this release >========================== >1/ Python 3 >----------- >The new stuff is ... the full switch Python 3! > >As much as I understand, apart from Gentoo, no other distribution >switched to Python 3 yet. Both RDO and Ubuntu are planning to do it for >Rocky (at least that's what I've been told). So once more, Debian is on >the edge. :) > >While there is still dual Python 2/3 support for clients (with priority >to Python 3 for binaries in /usr/bin), all services have been switched >to Py3. > >Building the packages worked surprisingly well. I was secretly expecting >more failures. The only real collateral damage is: > >- manila-ui (no Py3 support upstream) Just a note of thanks for calling our attention to this issue. manila-ui had been rather neglected and is getting TLC now. We'll certainly get back to you when we've got it working with Python 3. -- Tom Barron > >As the Horizon package switched to Python 3, it's unfortunately >impossible to keep these plugins to use Python 2, and therefore, >manila-ui is now (from a Debian packaging standpoint) RC buggy, and >shall be removed from Debian Testing. > >Also, Django 2 will sooner or later be the only option in Debian Sid. >It'd be great if Horizon's patches could be merged, and plugins adapt ASAP. > >Also, a Neutron plugins isn't released upstream yet for Queens, and >since the Neutron package switched to Python 3, the old Pike plugin >packages are also considered RC buggy (and it doesn't build with Queens >anyway): >- networking-mlnx > >The faith of the above packages is currently unknown. 
Hopefully, there's >going to be upstream work to make them in a packageable state (which >means, for today's Debian, Python 3.6 compatible), if not, there will be >no choice but to remove them from Debian. > >As for networking-ovs-dpdk, it needs more work on OVS itself to support >dpdk, and I still haven't found the time for it yet. > >As a more general thing, it'd be nice if there was Python 3.6 in the >gate. Hopefully, this will happen with Bionic release and the infra >switching to it. It's been a reoccurring problem though, that Debian Sid >is always experiencing issues before the other distros (ie: before >Ubuntu, for example), because it gets updates first. So I'd really love >to have Sid as a possible image in the infra, so we could use it for >(non-voting) gate. > >2/ New debconf unified templates >-------------------------------- >The Debconf templates used to be embedded within each packages. This >isn't the case anymore, all of them are now stored in >openstack-pkg-tools if they are not service specific. Hopefully, this >will help having a better coverage for translations. The postinst >scripts can also optionally create the service tenant and user >automatically. The system also does less by default (ie: it wont even >read your configuration files if the user doesn't explicitly asks for >config handling), API endpoint can now use FQDN and https as well. > >3/ New packages/services >------------------------ >We've added Cloudkitty and Vitrage. Coming soon: Octavia and Vitrage. > >Unfortunately, at this point, cloudkitty-dashboard still contains >non-free files (ie: embedded minified javascripts). Worse, some of them >cannot even be identified (I couldn't find out what version from >upstream it was). So even if this package is ready, I can't upload it to >Debian in such state. > >Cheers, > >Thomas Goirand (zigo) > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From nmagnezi at redhat.com Tue Mar 27 14:40:05 2018 From: nmagnezi at redhat.com (Nir Magnezi) Date: Tue, 27 Mar 2018 17:40:05 +0300 Subject: [openstack-dev] [octavia] Proposing Jacky Hu (dayou) as an Octavia core reviewer In-Reply-To: References: Message-ID: +1. Well earned Jacky, Congratulations! On Tue, Mar 27, 2018 at 8:31 AM, Adam Harwell wrote: > +1, definitely a good contributor! Thanks especially for your work on the > dashboard! > > On Tue, Mar 27, 2018 at 2:09 PM German Eichberger < > German.Eichberger at rackspace.com> wrote: > >> +1 >> >> Really excited to work with Jacky -- >> >> German >> >> On 3/26/18, 8:33 PM, "Michael Johnson" wrote: >> >> Hello Octavia community, >> >> I would like to propose Jacky Hu (dayou) as a core reviewer on the >> Octavia project. >> >> Jacky has done amazing work on Octavia dashboard, specifically >> updating the look and feel of our details pages to be more user >> friendly. Recently he has contributed support for L7 policies in the >> dashboard and caught us up with the wider Horizon framework advances. >> >> Jacky has also contributed thoughtful reviews on the main Octavia >> project as well as contributed to the L3 Active/Active work in >> progress. 
>> >> Jacky's review statistics are in line with the other core reviewers >> [1] and I feel Jacky would make a great addition to the Octavia core >> reviewer team. >> >> Existing Octavia core reviewers, please reply to this email with your >> support or concerns with adding Jacky to the core team. >> >> Michael >> >> [1] http://stackalytics.com/report/contribution/octavia-group/90 >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Tue Mar 27 14:40:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Mar 2018 09:40:44 -0500 Subject: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? Message-ID: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> Sylvain has had a spec up for awhile [1] about solving an old issue where admins can rename an AZ (via host aggregate metadata changes) while it has instances in it, which likely results in at least user confusion, but probably other issues later if you try to move those instances, e.g. the request spec probably points at the original AZ name and if that's gone (renamed) the scheduler probably pukes (would need to test this). Anyway, I'm wondering if anyone relies on this behavior, or if they consider it a bug that the API allows admins to do this? I tend to consider this a bug in the API, and should just be fixed without a microversion. In other words, you shouldn't have to opt out of broken behavior using microversions. [1] https://review.openstack.org/#/c/446446/ -- Thanks, Matt From emilien at redhat.com Tue Mar 27 15:23:29 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 27 Mar 2018 08:23:29 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 14th Edition Message-ID: Note: this is the fourteenth edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128559.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Deadline for blueprints for Rocky is April 3. +--> Migration to Storyboard is still in progress for TripleO UI bugs, more updates to come... +------------------------------+ | Continuous Integration | +------------------------------+ +--> Rover is Arx and Ruck is Rafael. Please let them know any new CI issue. +--> Master promotion is 21 days, Queens is 21 days, Pike is 2 days and Ocata is 3 days. 
+--> team is working on helping the upgrade squad with upstream upgrade ci https://trello.com/c/8pbRwBps +--> tempest squad is working on containerizing tempest https://trello.com/c/066JFJjf/537-epic-containerize-tempest +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP +-------------+ | Upgrades | +-------------+ +--> Good progress on CI jobs (having successful runs of the upgrade workflow), still need reviews though. +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> A demo was done to show upgrades on containerized undercloud: https://www.youtube.com/watch?v=5gLKL3YkC2c +--> Good progress in CI with first successful OVB job running with a containerized undercloud. +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> Working on POC to refactor tasks out of t-h-t service templates into standalone Ansible roles, and then have service templates consume those roles +--> Investigated using overcloud hostnames in the ansible inventory: not possible at this time, see alternatives in the etherpad. +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status +--------------+ | Integration | +--------------+ +--> Team is working on config-download integration for ceph and multi-cluster support. +--> puppet-ceph support in THT is being removed (only ceph-ansible is supported). +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Beginning migration from Launchpad to Storyboard for UI bugs. +--> Queens testing is still ongoing. +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> New --use-hostnames option for inventory script: https://review.openstack.org/#/c/555052/ +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Testing on routed spine/leaf is going well. +--> Investigations are being done on Ansible Networking for OpenStack. +--> Work in progress to solve some conflicts between the old-style os-apply-config and new-style script for running os-net-config. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> Good progress on the standardized messaging. The workflows are looking much neater. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> Discussions around Mistral Secret Storage, see https://blueprints.launchpad.net/mistral/+spec/secure-sensitive-data +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ The Northern Hawk Owl can detect (primarily by sight) a vole to eat up to a half a mile away. Source: http://www.audubon.org/news/11-fun-facts-about-owls Stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cdent+os at anticdent.org Tue Mar 27 15:36:16 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 27 Mar 2018 16:36:16 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-13 Message-ID: HTML: https://anticdent.org/tc-report-18-13.html It's a theme of no surprise, but this report, dedicated to ensuring people are just a bit more informed, has a fair bit to report on how staying informed can be difficult. # Global Communication [Last Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-21.log.html#t2018-03-21T13:12:59), there was a wide ranging conversation about some of the difficulties that people living and working in the APAC area, especially China, experience when trying to interact with all of the OpenStack community. It's not solely a problem of time zone or language, there are also institutional limits on access to tools and networks, and different preferences. One important question that was raised is that if an organization that is a member of the OpenStack Foundation (and thus obliged, at least in spirit, to contribute upstream) is knowingly limiting their employees access to IRC, email, and other tools of the OpenStack trade, are they violating the spirit of their agreement to be open? Notably absent from this discussion were representatives of the impacted individuals or companies. So there was a lot of speculation going on. This topic was picked back up [later in the same day](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-21.log.html#t2018-03-21T21:32:00), eventually leading to the idea of extending the [contributors guide](https://docs.openstack.org/contributors/), which is mostly for humans, to include a [Contributing Organization Guide](https://etherpad.openstack.org/p/Contributing_Organization_Guide). Good idea! # Stackalytics Tired Also on Wednesday there was [discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-21.log.html#t2018-03-21T16:54:13) of how [Stackalytics](http://stackalytics.com/) may be having some accuracy problems and whatever shall we do about that? Options include: kill it, fix it, replace it with something else, or switch over to a narrative report per cycle. I'd be inclined to kill it first and see what organically emerges from the void. However we can't really do that because it's not run by OpenStack. # Adjutant Last Thursday [the application](https://review.openstack.org/#/c/553643/) of [Adjutant](https://adjutant.readthedocs.io/) to be an official OpenStack project was [discussed](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-22.log.html#t2018-03-22T15:04:01). There's still a bit of confusion about what it actually does, but to at least some people it sounds pretty cool. Debate will continue on the review and if necessary, time will be set aside at the [Forum](https://wiki.openstack.org/wiki/Forum) to discuss the project in more detail. # Elections [This morning's topic](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-27.log.html#t2018-03-27T09:05:33) was the forthcoming election for half the TC. There are at least two incumbents who will not be running for re-election. The office hours discussion had some interesting things to say about what governance is and to whom it is relevant. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jaypipes at gmail.com Tue Mar 27 15:37:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 27 Mar 2018 11:37:34 -0400 Subject: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? In-Reply-To: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> Message-ID: <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> On 03/27/2018 10:40 AM, Matt Riedemann wrote: > Sylvain has had a spec up for awhile [1] about solving an old issue > where admins can rename an AZ (via host aggregate metadata changes) > while it has instances in it, which likely results in at least user > confusion, but probably other issues later if you try to move those > instances, e.g. the request spec probably points at the original AZ name > and if that's gone (renamed) the scheduler probably pukes (would need to > test this). > > Anyway, I'm wondering if anyone relies on this behavior, or if they > consider it a bug that the API allows admins to do this? I tend to > consider this a bug in the API, and should just be fixed without a > microversion. In other words, you shouldn't have to opt out of broken > behavior using microversions. > > [1] https://review.openstack.org/#/c/446446/ Yet another flaw in the "design" of availability zones being metadata key/values on nova host aggregates. If we want to actually fix the issue once and for all, we need to make availability zones a real thing that has a permanent identifier (UUID) and store that permanent identifier in the instance (not the instance metadata). Or we can continue to paper over major architectural weaknesses like this. -jay From alexandre.van-kempen at inria.fr Tue Mar 27 15:47:54 2018 From: alexandre.van-kempen at inria.fr (avankemp) Date: Tue, 27 Mar 2018 17:47:54 +0200 Subject: [openstack-dev] [FEMDC] Wed. 28 Mar. - IRC Meeting 15:00 UTC Message-ID: <3D9A1F41-7B88-46AC-8D0D-432A3B415AB9@inria.fr> Dear all, A gentle reminder for our tomorrow meeting at 15:00 UTC A draft of the agenda is available at line 322 you are very welcome to add any item. https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018 Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Mar 27 16:20:28 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 27 Mar 2018 09:20:28 -0700 Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime In-Reply-To: <530670DA-DAFB-4734-B7A1-C97F991FD9E2@kaplonski.pl> References: <530670DA-DAFB-4734-B7A1-C97F991FD9E2@kaplonski.pl> Message-ID: Does anyone know how this will work with services that are using cotyledon instead of oslo.service (for eliminating eventlet)? Michael On Mon, Mar 26, 2018 at 5:35 AM, Sławomir Kapłoński wrote: > Hi, > > >> Wiadomość napisana przez ChangBo Guo w dniu 26.03.2018, o godz. 14:15: >> >> >> 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński : >> Hi, >> >> I took care of implementation of [1] in Neutron and I have couple questions to about this goal. >> >> 1. Should we only change "restart_method" to mutate as is described in [2] ? I did already something like that in [3] - is it what is expected? >> >> Yes , let's the only thing. we need test if that if it works . > > Ok, so please take a look at my patch for neutron if that is what we should do :) > >> >> 2. How I can check if this change is fine and config option are mutable exactly? 
For now when I change any config option for any of neutron agents and send SIGHUP to it it is in fact "restarted" and config is reloaded even with this old restart method. >> >> good question, we indeed thought this question when we proposal the goal. But It seems difficult to test that consuming projects like Neutron automatically. > > I was asking rather about some manual test instead of automatic one. > >> >> 3. Should we add any automatic tests for such change also? Any examples of such tests in other projects maybe? >> There is no example for tests now, we only have some unit tests in oslo.service . >> >> [1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html >> [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html >> [3] https://review.openstack.org/#/c/554259/ >> >> — >> Best regards >> Slawek Kaplonski >> slawek at kaplonski.pl >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> -- >> ChangBo Guo(gcb) >> Community Director @EasyStack >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From johnsomor at gmail.com Tue Mar 27 16:39:54 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 27 Mar 2018 09:39:54 -0700 Subject: [openstack-dev] [octavia] Proposing Jacky Hu (dayou) as an Octavia core reviewer In-Reply-To: References: Message-ID: That is quorum. Welcome Jacky our new core reviewer for Octavia. Michael On Tue, Mar 27, 2018 at 7:40 AM, Nir Magnezi wrote: > +1. Well earned Jacky, Congratulations! > > On Tue, Mar 27, 2018 at 8:31 AM, Adam Harwell wrote: >> >> +1, definitely a good contributor! Thanks especially for your work on the >> dashboard! >> >> On Tue, Mar 27, 2018 at 2:09 PM German Eichberger >> wrote: >>> >>> +1 >>> >>> Really excited to work with Jacky -- >>> >>> German >>> >>> On 3/26/18, 8:33 PM, "Michael Johnson" wrote: >>> >>> Hello Octavia community, >>> >>> I would like to propose Jacky Hu (dayou) as a core reviewer on the >>> Octavia project. >>> >>> Jacky has done amazing work on Octavia dashboard, specifically >>> updating the look and feel of our details pages to be more user >>> friendly. Recently he has contributed support for L7 policies in the >>> dashboard and caught us up with the wider Horizon framework advances. >>> >>> Jacky has also contributed thoughtful reviews on the main Octavia >>> project as well as contributed to the L3 Active/Active work in >>> progress. >>> >>> Jacky's review statistics are in line with the other core reviewers >>> [1] and I feel Jacky would make a great addition to the Octavia core >>> reviewer team. 
>>> >>> Existing Octavia core reviewers, please reply to this email with your >>> support or concerns with adding Jacky to the core team. >>> >>> Michael >>> >>> [1] http://stackalytics.com/report/contribution/octavia-group/90 >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Tue Mar 27 16:42:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 27 Mar 2018 11:42:04 -0500 Subject: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? In-Reply-To: <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> Message-ID: <3ce67128-07ee-559c-f54d-a0e62b2e38ee@gmail.com> On 3/27/2018 10:37 AM, Jay Pipes wrote: > If we want to actually fix the issue once and for all, we need to make > availability zones a real thing that has a permanent identifier (UUID) > and store that permanent identifier in the instance (not the instance > metadata). Aggregates have a UUID now, exposed in microversion 2.41 (you added it). Is that what you mean by AZs having a UUID, since AZs are modeled as host aggregates? One of the alternatives in the spec is not relying on name as a unique identifier and just make sure everything is held together via the aggregate UUID, which is now possible. -- Thanks, Matt From aakashkt0 at gmail.com Tue Mar 27 16:57:44 2018 From: aakashkt0 at gmail.com (Aakash Kt) Date: Tue, 27 Mar 2018 22:27:44 +0530 Subject: [openstack-dev] [openstack][charms] Openstack + OVN In-Reply-To: References: Message-ID: Hello, So an update about current status. The charm spec for charm-os-ovn has been merged (queens/backlog). I don't know what the process is after this, but I had a couple of questions for the development of the charm : - I was wondering whether I need to use the charms.openstack package? Or can I just write using the reactive framework as is? - If we do have to use charms.openstack, where can I find good documentation of the package? I searched online and could not find much to go on with. - How much time do you think this will take to develop (not including test cases) ? Do guide me on the further steps to bring this charm to completion :-) Thank you, Aakash On Mon, Mar 19, 2018 at 5:37 PM, Aakash Kt wrote: > Hi James, > > Thank you for the previous code review. 
> I have pushed another patch. Also, I do not know how to reply to your > review comments on gerrit, so I will reply to them here. > > About the signed-off-message, I did not know that it wasn't a requirement > for OpenStack, I assumed it was. I have removed it from the updated patch. > > Thank you, > Aakash > > > On Thu, Mar 15, 2018 at 11:34 AM, Aakash Kt wrote: > >> Hi James, >> >> Just a small reminder that I have pushed a patch for review, according to >> changes you suggested :-) >> >> Thanks, >> Aakash >> >> On Mon, Mar 12, 2018 at 2:38 PM, James Page >> wrote: >> >>> Hi Aakash >>> >>> On Sun, 11 Mar 2018 at 19:01 Aakash Kt wrote: >>> >>>> Hi, >>>> >>>> I had previously put in a mail about the development for openstack-ovn >>>> charm. Sorry it took me this long to get back, was involved in other >>>> projects. >>>> >>>> I have submitted a charm spec for the above charm. >>>> Here is the review link : https://review.openstack.org/#/c/551800/ >>>> >>>> Please look in to it and we can further discuss how to proceed. >>>> >>> >>> I'll feedback directly on the review. >>> >>> Thanks! >>> >>> James >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken1ohmichi at gmail.com Tue Mar 27 16:59:09 2018 From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi) Date: Tue, 27 Mar 2018 09:59:09 -0700 Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core In-Reply-To: References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> Message-ID: +1 2018-03-27 6:39 GMT-07:00 Dan Smith : >> To the existing core team members, please respond with your comments, >> +1s, or objections within one week. > > +1. > > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ifat.afek at nokia.com Tue Mar 27 17:22:59 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Tue, 27 Mar 2018 17:22:59 +0000 Subject: [openstack-dev] [Vitrage] New proposal for analysis. In-Reply-To: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> Message-ID: Hi Minwook, I think that from a user’s perspective, these are very good ideas. I have some questions regarding the UX and the implementation, since I’m trying to think what could be the best way to execute such actions from Vitrage. · I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. · Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? · Any thoughts of what component will implement those checks? Or maybe these will be just scripts? 
· It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Tuesday, 27 March 2018 at 14:45 To: "openstack-dev at lists.openstack.org" Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the ‘Add action list panel for entity click action’. (https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities in several entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with the VM and Host, or when we want to check the status, we need to access the console individually for each VM and Host. This situation causes unnecessary behavior when the number of VMs and hosts increases. My new suggestion is that if we have a large number of vm and host, we do not need to directly connect to each VM, host console to enter the system command. Instead, we can send a system command to VM and hosts in the cloud through this proposal. It is only checking results. I have written some use-cases for an efficient explanation of the function. From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any Agent / Client that can cause load on VM, Host. 2. I want to provide a simple UI so that users or administrators can get the desired information to multiple VMs and hosts. 3. I want to be able to grasp the results at a glance. 4. I want to implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Mar 27 17:28:50 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 27 Mar 2018 13:28:50 -0400 Subject: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? In-Reply-To: <3ce67128-07ee-559c-f54d-a0e62b2e38ee@gmail.com> References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> <3ce67128-07ee-559c-f54d-a0e62b2e38ee@gmail.com> Message-ID: On 03/27/2018 12:42 PM, Matt Riedemann wrote: > On 3/27/2018 10:37 AM, Jay Pipes wrote: >> If we want to actually fix the issue once and for all, we need to make >> availability zones a real thing that has a permanent identifier (UUID) >> and store that permanent identifier in the instance (not the instance >> metadata). > > Aggregates have a UUID now, exposed in microversion 2.41 (you added it). > Is that what you mean by AZs having a UUID, since AZs are modeled as > host aggregates? Kind of. AZs are not actually a "thing" in Nova, as you know. They are just a hacked-up key/value metadata item on the nova host aggregate that has some "special" meaning and tribal knowledge [0] associated with it. And, of course, nova host aggregates are not visible to normal end users, which makes the coupling of AZ to host aggregate metadata even more hacky. 
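To make the coupling concrete, here is a schematic illustration -- sample
data only, not actual Nova code or API output:

    # What an "availability zone" actually is today, schematically:
    aggregate = {
        'name': 'rack-1',
        'uuid': '1f4a9e6c-...',  # stable identifier, exposed since API microversion 2.41
        'metadata': {'availability_zone': 'az-east'},  # the AZ is just this mutable value
    }

    # Renaming the AZ means rewriting that one metadata value; any instance
    # whose request spec recorded 'az-east' is left pointing at a name that
    # no longer exists.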
> One of the alternatives in the spec is not relying on name as a unique > identifier and just make sure everything is held together via the > aggregate UUID, which is now possible. A few things are needed: 1) Stop *referring* to the AZ by name in the instance_metadata item corresponding to the availability_zone key. Instead, use the aggregate's UUID. 2) Stop using the instance_metadata to store this information. On nova boot --availability-zone=$ZONE_NAME, nova-api should look up the aggregate having the name of $ZONE_NAME, and store the internal aggregate ID in a new instance_mappings.availability_zone_id column. The UUID of this aggregate should be placed into a new build_requests.availability_zone_uuid column/attribute and passed along through the scheduler to the cell control plane, where it should be stored in the cell DB's instances table as a new field "availability_zone_uuid". In this way, outside of the Nova API database we always refer to the availability zone using the external UUID, never the name or the internal ID. 3) Add an attribute to the nova API database aggregates [1] table called "visibility" that can indicate whether the aggregate is publicly-visible or not. That way, the operator can establish groups of compute resources that can be viewed by a normal end user (availability zones, regions, sub-regions, power-domains, whatever they want). Best, -jay [0] Example of tribal knowledge is the fact that a single host aggregate cannot have the multiple availability zone metadata items associated with it: https://github.com/openstack/nova/blob/97042c0253be345beff3b99d08988cf95f60e759/nova/compute/api.py#L5066-L5083 [1] Or scrap nova host aggregates entirely and just move all this to the placement service/database, adding a name and visibility column to the placement_aggregates table... From gcerami at redhat.com Tue Mar 27 17:48:13 2018 From: gcerami at redhat.com (Gabriele Cerami) Date: Tue, 27 Mar 2018 18:48:13 +0100 Subject: [openstack-dev] [tripleo] modifications to the tech debt policy In-Reply-To: <20180216173543.srg3jh3a2zhv4t33@localhost> References: <20180216173543.srg3jh3a2zhv4t33@localhost> Message-ID: <20180327174813.asvvtxp6bml2d6vl@localhost> On 16 Feb, Gabriele Cerami wrote: > Hi, > > I started circling around technical debts a few months ago, and recently > started to propose changes in my team (CI) process on how to manage > them. > https://review.openstack.org/545392 Hi, I'd like to draw again some attention on this. I'm adding some answers to the common objection that I received after this proposal modification. If someone prefers, we can start the proposal from scratch and build it with the people interested. But I'd like to keep the discussion ongoing, and agree at least on some direction. "Why should I care if a piece of code that works is not optimal and not considering all the cases?" It's all about yours, your team's or another team's future time. "All" is a relative term. "all" the cases someone considered may not be a comprehensive set, and the implementation is really creating unnoticed technical debt if no one else addresses it. The why in general relates closely to Project Management, even if we don't have official PM roles in our projects, there are some tools, techniques, best practices and element that it's best to consider. Technical debts are the developers' contribution to the global risk assessment of a project. 
Risk management usually deals with "what if"s: it calculates the impact of
a decision that may make someone lose precious time in the future, and
prepares a response in case the "what if" becomes reality. Completely
ignoring TDs may put the project at unassessed risk, and no track will be
left of what led to a bad situation.

Again, I borrow from project management, or quality management in this
case: "Do it right the first time" is a principle that Crosby introduced
in 1979. It would be the best approach, and I think we should pursue it.
It will not always be possible, but when that happens, at least write down
somewhere why, and what the optimal solution would have been.

"Is it really that big of a problem? There's still a lot going on without
having to take care of this; I don't think TDs impact us that much or that
often, and we don't make so many of them."

Since we are not tracking TDs regularly, with all the information useful
for data collection, we really have no clear data on how much technical
debt is impacting our capacity. Also, unless a certain technical debt
required a rewrite at a later date, representing a serious problem, it's
really unlikely that anyone remembers it. Hence the need for a policy with
a precise workflow and details of what is useful to know at TD
creation/detection time.

"I agree with the premises, but this is too much work for the contributors."

Fixing stuff at a later time may cost a lot more, especially when there is
no track of what we decided to implement in a certain way, and why. I
reworked the first attempt at the proposal a bit to make it simpler; I'm
not sure whether less than this would provide enough information. All the
proposal asks beyond the existing policy is to create a bug with a
specific urgency and add two bullet points to it. The original policy
already suggested referencing the bug in the code; the proposal asks to
mark it as a TD. Maybe developers can be drawn in by more precise
indications and workflows, and a detailed motivation.

If we start creating TDs properly, even if every TD is accepted as it is,
with no further processing, we'll have a record of our decisions, and we
can start to evaluate better how much TDs are really impacting us. This
little extra information also allows for further handling if the need
arises.

Thanks.

From chris.friesen at windriver.com Tue Mar 27 17:49:08 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 27 Mar 2018 11:49:08 -0600
Subject: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it?
In-Reply-To: <3ce67128-07ee-559c-f54d-a0e62b2e38ee@gmail.com>
References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> <3ce67128-07ee-559c-f54d-a0e62b2e38ee@gmail.com>
Message-ID: <5ABA8414.8030407@windriver.com>

On 03/27/2018 10:42 AM, Matt Riedemann wrote:
> On 3/27/2018 10:37 AM, Jay Pipes wrote:
>> If we want to actually fix the issue once and for all, we need to make
>> availability zones a real thing that has a permanent identifier (UUID) and
>> store that permanent identifier in the instance (not the instance metadata).
>
> Aggregates have a UUID now, exposed in microversion 2.41 (you added it). Is that
> what you mean by AZs having a UUID, since AZs are modeled as host aggregates?
>
> One of the alternatives in the spec is not relying on name as a unique
> identifier and just make sure everything is held together via the aggregate
> UUID, which is now possible.
If we allow non-unique availability zone names, we'd need to display the availability zone UUID in horizon when selecting an availability zone. I think it'd make sense to still require the availability zone names to be unique, but internally store the availability zone UUID in the instance instead of the name. Chris From jim at jimrollenhagen.com Tue Mar 27 19:25:20 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 27 Mar 2018 15:25:20 -0400 Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime In-Reply-To: References: <530670DA-DAFB-4734-B7A1-C97F991FD9E2@kaplonski.pl> Message-ID: Glancing at the code[0], it looks like cotyledon has this built in already. Though, you'll probably want to pass in reload_method='mutate' like the oslo docs suggest[1]. [0] https://github.com/sileht/cotyledon/blob/master/cotyledon/oslo_config_glue.py#L68 [1] https://docs.openstack.org/oslo.config/latest/reference/mutable.html#calling-mutate-config-files // jim On Tue, Mar 27, 2018 at 12:20 PM, Michael Johnson wrote: > Does anyone know how this will work with services that are using > cotyledon instead of oslo.service (for eliminating eventlet)? > > Michael > > On Mon, Mar 26, 2018 at 5:35 AM, Sławomir Kapłoński > wrote: > > Hi, > > > > > >> Wiadomość napisana przez ChangBo Guo w dniu > 26.03.2018, o godz. 14:15: > >> > >> > >> 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński : > >> Hi, > >> > >> I took care of implementation of [1] in Neutron and I have couple > questions to about this goal. > >> > >> 1. Should we only change "restart_method" to mutate as is described in > [2] ? I did already something like that in [3] - is it what is expected? > >> > >> Yes , let's the only thing. we need test if that if it works . > > > > Ok, so please take a look at my patch for neutron if that is what we > should do :) > > > >> > >> 2. How I can check if this change is fine and config option are mutable > exactly? For now when I change any config option for any of neutron agents > and send SIGHUP to it it is in fact "restarted" and config is reloaded even > with this old restart method. > >> > >> good question, we indeed thought this question when we proposal > the goal. But It seems difficult to test that consuming projects like > Neutron automatically. > > > > I was asking rather about some manual test instead of automatic one. > > > >> > >> 3. Should we add any automatic tests for such change also? Any examples > of such tests in other projects maybe? > >> There is no example for tests now, we only have some unit tests > in oslo.service . 
> >>
> >> [1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
> >> [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html
> >> [3] https://review.openstack.org/#/c/554259/
> >>
> >> —
> >> Best regards
> >> Slawek Kaplonski
> >> slawek at kaplonski.pl
> >>
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >> --
> >> ChangBo Guo(gcb)
> >> Community Director @EasyStack
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > —
> > Best regards
> > Slawek Kaplonski
> > slawek at kaplonski.pl
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com Tue Mar 27 21:36:07 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Tue, 27 Mar 2018 16:36:07 -0500
Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core
In-Reply-To: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com>
Message-ID: <4467e883-5403-94fa-1830-13b437dbd7e4@gmail.com>

On 3/26/2018 9:00 PM, melanie witt wrote:
> To the existing core team members, please respond with your comments,
> +1s, or objections within one week.

+1

--

Thanks,

Matt

From zigo at debian.org Tue Mar 27 21:49:53 2018
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 27 Mar 2018 23:49:53 +0200
Subject: [openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports
In-Reply-To: <20180327142044.kgbyin4fze7ldkyf@yuggoth.org>
References: <54dfe8cd-a268-dd67-8542-e91754bdea63@debian.org> <20180327142044.kgbyin4fze7ldkyf@yuggoth.org>
Message-ID: <54532d66-c90e-4f9d-f01c-2268f2dac7eb@debian.org>

On 03/27/2018 04:20 PM, Jeremy Stanley wrote:
> On 2018-03-27 15:53:34 +0200 (+0200), Thomas Goirand wrote:
> [...]
>> I'd really love to have Sid as a possible image in the infra
> [...]
>
> This has unfortunately been rotting too far down my to do list for
> me to get to it. I'd love to have debian-sid nodes to test some
> stuff on as well--especially clients/libraries/utilities--since a
> lot of my (workstation and portable) systems are running it. If
> someone is interested in and has time to work on this, I'm happy to
> provide guidance and review their changes.

It involves fixing DIB to make it produce a Sid image, right? I tried to
make recent DIB work. I really did. But seriously, it was a horrible
experience and I gave up.
:( Cheers, Thomas Goirand (zigo) From zigo at debian.org Tue Mar 27 21:54:00 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 27 Mar 2018 23:54:00 +0200 Subject: [openstack-dev] Announcing Queens packages for Debian Sid/Buster and Stretch backports In-Reply-To: <20180327143956.pbnzlbn267pozr7r@barron.net> References: <54dfe8cd-a268-dd67-8542-e91754bdea63@debian.org> <20180327143956.pbnzlbn267pozr7r@barron.net> Message-ID: On 03/27/2018 04:39 PM, Tom Barron wrote: > On 27/03/18 15:53 +0200, Thomas Goirand wrote: >> Building the packages worked surprisingly well. I was secretly expecting >> more failures. The only real collateral damage is: >> >> - manila-ui (no Py3 support upstream) > > Just a note of thanks for calling our attention to this issue.  > manila-ui had been rather neglected and is getting TLC now. > We'll certainly get back to you when we've got it working with Python 3. > > -- Tom Barron Sure, no pb! I do understand it may take time, no worries. I just find it a bit frustrating, as I like the Manila project. Please continue to send me patches to try. Hopefully, you'll get there soon. If you wish, I can also explain to you how to build the Debian package for manila-ui if you want to try yourself in Sid. Cheers, Thomas Goirand (zigo) From MM9745 at att.com Tue Mar 27 23:49:38 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Tue, 27 Mar 2018 23:49:38 +0000 Subject: [openstack-dev] [openstack-helm] Core reviewer retirements Message-ID: <7C64A75C21BB8D43BD75BB18635E4D8965B92D9B@MOSTLS1MSGUSRFF.ITServices.sbc.com> A handful of the OpenStack-Helm core reviewers have shifted their focus over the past half year, and have not had the opportunity to maintain the same level of reviews and contributions. We've jointly agreed that it's the right time for them to retire as active core reviewers. Brandon Jozsa Larry Rensing Darla Ahlert These folks helped set the direction of OpenStack-Helm, and I can't thank them enough for their contributions and dedication. They'll always be part of the team, and circumstances permitting in the future, I hope to collaborate closely again! Thanks, Matt McEuen From bluejay.ahn at gmail.com Wed Mar 28 00:02:35 2018 From: bluejay.ahn at gmail.com (Jaesuk Ahn) Date: Wed, 28 Mar 2018 00:02:35 +0000 Subject: [openstack-dev] [openstack-helm] Core reviewer retirements In-Reply-To: <7C64A75C21BB8D43BD75BB18635E4D8965B92D9B@MOSTLS1MSGUSRFF.ITServices.sbc.com> References: <7C64A75C21BB8D43BD75BB18635E4D8965B92D9B@MOSTLS1MSGUSRFF.ITServices.sbc.com> Message-ID: I really appreciate their efforts on openstack-helm. they made this project possible. :) I also hope to collaborate together again!! I will see you around on irc or other places. ;) Thanks!! 2018년 3월 28일 (수) 오전 8:50, MCEUEN, MATT 님이 작성: > A handful of the OpenStack-Helm core reviewers have shifted their focus > over the past half year, and have not had the opportunity to maintain the > same level of reviews and contributions. We've jointly agreed that it's > the right time for them to retire as active core reviewers. > > Brandon Jozsa > Larry Rensing > Darla Ahlert > > These folks helped set the direction of OpenStack-Helm, and I can't thank > them enough for their contributions and dedication. They'll always be part > of the team, and circumstances permitting in the future, I hope to > collaborate closely again! 
> Thanks,
> Matt McEuen
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- Jaesuk Ahn, Team Lead Virtualization SW Lab, SW R&D Center SK Telecom
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tony at bakeyournoodle.com Wed Mar 28 02:14:57 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Wed, 28 Mar 2018 13:14:57 +1100
Subject: [openstack-dev] [stable][release] Remove complex ACL changes around releases
In-Reply-To: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org>
References: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org>
Message-ID: <20180328021457.GG13389@thor.bakeyournoodle.com>

On Mon, Mar 26, 2018 at 03:33:03PM +0200, Thierry Carrez wrote:
> Let me know if you have any comment, otherwise we'll start using that
> new process for the Rocky cycle (stable/rocky branch).

Sounds good to me, Thanks Thierry

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From delightwook at ssu.ac.kr Wed Mar 28 02:21:25 2018
From: delightwook at ssu.ac.kr (MinWookKim)
Date: Wed, 28 Mar 2018 11:21:25 +0900
Subject: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To: 
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr>
Message-ID: <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr>

Hello Ifat,

Thanks for your reply. :)

This is a proposal that we expect to be useful from a user's perspective.
From a manager's point of view, we need an implementation that minimizes
the overhead incurred by the proposal.

The answers to some of your questions are:

• I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage role is to be a place
where it is easy and intuitive for the user to execute external
actions/checks.

Yes, that's right. We do not need to save the results in Vitrage, because
we just need to check them. It would be possible to implement the function
directly in Vitrage-dashboard, separately from Vitrage, like the
add-action-list panel, but that does not seem sufficient to implement all
the functions. If you do not mind, we will have the following flow (a rough
sketch follows below):

1. The user requests the check action from the vitrage-dashboard
(add-action-list-panel).
2. The dashboard calls the check component through Vitrage's API handler.
3. The check component executes the command and returns the result.

This is only my opinion, so please tell us if any part is unnecessary. :)

• Do you expect the user to click an entity, select an action to run (e.g.
‘P2P check’), and wait by the open panel for the results? What if the user
switches to another menu before the check is done? What if the user asks to
run an additional check in parallel? What if the user wants to see again a
previous result?

My idea was to select the task, wait for the results in an open panel, and
then see them instantly in that panel. If we switch to another menu before
the check is complete, we will not be able to see the results. Parallel
checking is indeed an issue (it can cause excessive overhead). For earlier
results, it may be okay to cache them temporarily while the panel is open,
so previous results can be viewed until the panel is closed.
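A minimal sketch of the three-step flow above -- all names here are
hypothetical, none of this is existing Vitrage code:

    import subprocess

    class CheckComponent(object):
        """Runs whitelisted, read-only checks against a target entity."""

        CHECKS = {
            'p2p_check': ['ping', '-c', '3'],   # hypothetical example check
        }

        def run_check(self, check_name, target_address):
            cmd = self.CHECKS[check_name] + [target_address]
            out = subprocess.check_output(cmd)
            return {'check': check_name,
                    'target': target_address,
                    'result': out.decode('utf-8', 'replace')}

    # 1. dashboard action panel -> 2. Vitrage API handler -> 3. check component
    def api_handler(check_name, target_address):
        return CheckComponent().run_check(check_name, target_address)

A whitelist like CHECKS above is one way to meet the "no agent on the
VM/host" goal while keeping the executed commands under the operator's
control.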
• Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?

I think I will implement a separate component to handle these requests.

• It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.

That would be really good. It would be very useful if the entity graph
raised an alarm based on the check result. I think we can discuss that part
in detail later.

These answers are my own opinions and assumptions. If you think my
implementation approach is wrong or inefficient, please do not hesitate to
tell me.

Thanks.

Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I think that from a user’s perspective, these are very good ideas. I have
some questions regarding the UX and the implementation, since I’m trying to
think what could be the best way to execute such actions from Vitrage.

* I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage role is to be a place
where it is easy and intuitive for the user to execute external
actions/checks.
* Do you expect the user to click an entity, select an action to run (e.g.
‘P2P check’), and wait by the open panel for the results? What if the user
switches to another menu before the check is done? What if the user asks to
run an additional check in parallel? What if the user wants to see again a
previous result?
* Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?
* It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.

Best Regards,
Ifat

From: MinWookKim >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" >
Date: Tuesday, 27 March 2018 at 14:45
To: "openstack-dev at lists.openstack.org " >
Subject: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Vitrage team.

I am currently working on the Vitrage-Dashboard proposal for the ‘Add
action list panel for entity click action’.
(https://review.openstack.org/#/c/531141/)

I would like to make a new proposal based on the action list panel
mentioned above. The new proposal is to provide multidimensional analysis
capabilities in several entities that make up the infrastructure in the
entity graph.

Vitrage's entity-graph allows us to efficiently monitor alarms from various
monitoring tools. In the current state, when there is a problem with the VM
and Host, or when we want to check the status, we need to access the
console individually for each VM and Host. This situation causes
unnecessary behavior when the number of VMs and hosts increases.

My new suggestion is that if we have a large number of vm and host, we do
not need to directly connect to each VM, host console to enter the system
command. Instead, we can send a system command to VM and hosts in the cloud
through this proposal. It is only checking results.

I have written some use-cases for an efficient explanation of the function.
From an implementation perspective, the goals of the proposal are:

1. To execute commands without installing any Agent / Client that can cause
load on VM, Host.
2. I want to provide a simple UI so that users or administrators can get
the desired information from multiple VMs and hosts.
3. I want to be able to grasp the results at a glance.
4. I want to implement a component that can support many additional
scenarios in plug-in format.

I would be happy if you could comment on the proposal or ask questions.

Thanks.

Best Regards,
Minwook.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 20114 bytes
Desc: not available
URL: 

From whayutin at redhat.com Wed Mar 28 04:23:15 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Wed, 28 Mar 2018 00:23:15 -0400
Subject: [openstack-dev] [tripleo][ci] FYI.. the full tempest execution removed from promotion criteria temporarily
Message-ID: 

Greetings,

The upstream packages for master and queens have not been updated in
TripleO in 22 days. We have come very close to a package promotion a number
of times, but failed for several different reasons. In the latest issue,
the full tempest job featureset020 was discussed with both Alex and
Emilien, and we are temporarily removing it from the promotion criteria.

There are several performance issues at the moment that we are still
investigating, with regard to the number of httpd processes on the
controller and the CPU usage of the Open vSwitch agents. The full tempest
job is very useful in discovering issues like this one that may otherwise
have gone undetected. Removing it temporarily is a safe operation, because
none of the upstream TripleO check or gate jobs run full tempest.

As soon as the promotion is complete, with the containers, images, and repo
promoted, I will revert the patches that removed the full tempest run from
the criteria. Note that the tempest jobs are still running as I write this
email and may still pass; however, to ensure upstream gets promoted
packages, the job has been removed as a precaution.

Thank you
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ifat.afek at nokia.com Wed Mar 28 06:40:13 2018
From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava))
Date: Wed, 28 Mar 2018 06:40:13 +0000
Subject: [openstack-dev] [vitrage] Next two IRC meetings are canceled
Message-ID: 

Hi,

Most of the Vitrage developers will not be available today and next
Wednesday, so we’ll skip the next two IRC meetings. We will meet again on
Wednesday, April 11.

Thanks,
Ifat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jean-philippe at evrard.me Wed Mar 28 08:31:27 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 28 Mar 2018 09:31:27 +0100 Subject: [openstack-dev] [stable][release] Remove complex ACL changes around releases In-Reply-To: <20180328021457.GG13389@thor.bakeyournoodle.com> References: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org> <20180328021457.GG13389@thor.bakeyournoodle.com> Message-ID: LGTM From jean-philippe at evrard.me Wed Mar 28 08:42:54 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 28 Mar 2018 09:42:54 +0100 Subject: [openstack-dev] [openstack-ansible] Stepping down from OpenStack-Ansible core In-Reply-To: <74A47E01-C07A-41B2-A29D-6E50195AFFEF@rackspace.co.uk> References: <4fb1218e-2278-691d-287e-60ac10ab1133@mhtx.net> <74A47E01-C07A-41B2-A29D-6E50195AFFEF@rackspace.co.uk> Message-ID: Hello, Ahah, gate job breakages? You were the first to break them, but also willing to step in to fix them as soon as you knew. And that's the part I will remember the most. You will be missed, Major. Your next team is lucky to have you! It was a pleasure working with you. And the gifs, omagad! :) JP On 27 March 2018 at 12:11, Jesse Pretorius wrote: > Ah Major, we shall definitely miss your readiness to help, positive attitude and deep care for setenforce 1. Oh, and then there're the gifs... so many gifs... > > While I am inclined to [1], I shall instead wish you well while you [2]. ( > > [1] https://media.giphy.com/media/1BXa2alBjrCXC/giphy.gif > [2] https://media.giphy.com/media/G6if3AWViiNdC/giphy.gif > > > On 3/26/18, 2:07 PM, "Major Hayden" wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA256 > > Hey there, > > As promised, I am stepping down from being an OpenStack-Ansible core reviewer since I am unable to meet the obligations of the role with my new job. :( > > Thanks to everyone who has mentored me along the way and put up with my gate job breakages. I have learned an incredible amount about OpenStack, Ansible, complex software deployments, and open source communities. I appreciate everyone's support as I worked through the creation of the ansible-hardening role as well as adding CentOS support for OpenStack-Ansible. 
> > - --
> Major Hayden
> -----BEGIN PGP SIGNATURE-----
>
iQIzBAEBCAAdFiEEG/mSZJWWADNpjCUrc3BR4MEBH7EFAlq4774ACgkQc3BR4MEB
H7E+gA/9HJEDibsQhdy191NbxbhF75wUup3gRDHhGPI6eFqHo/Iz8Q5Kv9Z9CXbo
rkBGMebbGzoKwiLnKbFWr448azMJkj5/bTRLHb1eDQg2S2xaywP2L4e0CU+Gouto
DucmGT6uLg+LKdQByYTB8VAHelub4DoxV2LhwsH+uYgWp6rZ2tB2nEIDTYQihhGx
/WukfG+3zA99RZQjWRHmfnb6djB8sONzGIM8qY4qDUw9Xjp5xguHOU4+lzn4Fq6B
cEpsJnztuEYnEpeTjynu4Dc8g+PX8y8fcObhcj+1D0NkZ1qW7sdX6CA64wuYOqec
S552ej/fR5FPRKLHF3y8rbtNIlK5qfpNPE4UFKuVLjGSTSBz4Kp9cGn2jNCzyw5c
aDQs/wQHIiUECzY+oqU1RHZJf9/Yq1VVw3vio+Dye1IMgkoaNpmX9lTcNw9wb1i7
lac+fm0e438D+c+YZAttmHBCCaVWgKdGxH7BY84FoQaXRcaJ9y3ZoDEx6Rr8poBQ
pK4YjUzVP9La2f/7S1QemX2ficisCbX+MVmAX9G4Yr9U2n98aXVWFMaF4As1H+OS
zm9r9saoAZr6Z8BxjROjoClrg97RN1zkPseUDwMQwlJwF3V33ye3ib1dYWRr7BSm
zAht+Jih/JE6Xtp+5UEF+6TBCYFVtXO8OHzCcac14w9dy1ur900=
=fx64
> -----END PGP SIGNATURE-----
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From allprog at gmail.com Wed Mar 28 08:58:56 2018
From: allprog at gmail.com (=?UTF-8?B?QW5kcsOhcyBLw7Z2aQ==?=) (András Kövi)
Date: Wed, 28 Mar 2018 10:58:56 +0200
Subject: [openstack-dev] [openstack-infra] Where did the ARA logs go?
Message-ID: 

Hi,

Recently I noticed that ARA logs were published for all CI jobs. It seems
like the reports do not contain these logs any more. I tried to find out
what happened to them but couldn't find any info. Can someone please
enlighten me about this change?

Thank you,
Andras

From lvmxhster at gmail.com Wed Mar 28 09:10:22 2018
From: lvmxhster at gmail.com (=?UTF-8?B?5bCR5ZCI5Yav?=)
Date: Wed, 28 Mar 2018 17:10:22 +0800
Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow
In-Reply-To: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com>
References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com>
Message-ID: 

I have summarized some scenarios for FPGA device requests:
https://etherpad.openstack.org/p/cyborg-fpga-request-scenarios
Please add more scenarios, so we can find the exceptions that placement
cannot satisfy with filtering and weighing.

IMHO, I prefer placement to do the filtering and weighing. If we have to
let Cyborg do the filtering and weighing, the Nova scheduler just needs to
call Cyborg once for all hosts, even though we do the weighing one by one.
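One common way to close this kind of window -- sketched below against an
assumed (host, function, free_units) table, not the actual Cyborg schema or
code -- is to fold the availability check and the decrement into a single
conditional UPDATE, so a claim either succeeds atomically or fails and the
scheduler can retry elsewhere. The race described in the quoted mail below
can then no longer double-allocate the last unit:

    import sqlite3

    def claim_function_unit(conn, host, function):
        """Atomically claim one free unit; False means nothing was left."""
        cur = conn.execute(
            "UPDATE functions SET free_units = free_units - 1 "
            "WHERE host = ? AND function = ? AND free_units > 0",
            (host, function))
        conn.commit()
        return cur.rowcount == 1

    # Demo with an in-memory table holding one free 'encrypt' unit on node1:
    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE functions (host TEXT, function TEXT, free_units INT)")
    conn.execute("INSERT INTO functions VALUES ('node1', 'encrypt', 1)")
    print(claim_function_unit(conn, 'node1', 'encrypt'))  # True: claimed the last unit
    print(claim_function_unit(conn, 'node1', 'encrypt'))  # False: no free units remain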
2018-03-23 12:27 GMT+08:00 Nadathur, Sundar : > Hi all, > There seems to be a possibility of a race condition in the Cyborg/Nova > flow. Apologies for missing this earlier. (You can refer to the proposed > Cyborg/Nova spec > > for details.) > > Consider the scenario where the flavor specifies a resource class for a > device type, and also specifies a function (e.g. encrypt) in the extra > specs. The Nova scheduler would only track the device type as a resource, > and Cyborg needs to track the availability of functions. Further, to keep > it simple, say all the functions exist all the time (no reprogramming > involved). > > To recap, here is the scheduler flow for this case: > > - A request spec with a flavor comes to Nova conductor/scheduler. The > flavor has a device type as a resource class, and a function in the extra > specs. > - Placement API returns the list of RPs (compute nodes) which contain > the requested device types (but not necessarily the function). > - Cyborg will provide a custom filter which queries Cyborg DB. This > needs to check which hosts contain the needed function, and filter out the > rest. > - The scheduler selects one node from the filtered list, and the > request goes to the compute node. > > For the filter to work, the Cyborg DB needs to maintain a table with > triples of (host, function type, #free units). The filter checks if a given > host has one or more free units of the requested function type. But, to > keep the # free units up to date, Cyborg on the selected compute node needs > to notify the Cyborg API to decrement the #free units when an instance is > spawned, and to increment them when resources are released. > > Therein lies the catch: this loop from the compute node to controller is > susceptible to race conditions. For example, if two simultaneous requests > each ask for function A, and there is only one unit of that available, the > Cyborg filter will approve both, both may land on the same host, and one > will fail. This is because Cyborg on the controller does not decrement > resource usage due to one request before processing the next request. > > This is similar to this previous Nova scheduling issue > . > That was solved by having the scheduler claim a resource in Placement for > the selected node. I don't see an analog for Cyborg, since it would not > know which node is selected. > > Thanks in advance for suggestions and solutions. > > Regards, > Sundar > > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed Mar 28 10:58:02 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 28 Mar 2018 18:58:02 +0800 Subject: [openstack-dev] [cyborg]Team Weekly Meeting 2018.03.28 Message-ID: Hi Team, Weekly meeting as usual starting UTC1400 at #openstack-cyborg, initial agenda as follows: * Cyborg GPU support discussion * Clock driver introduction by ZTE team * Rocky dev discussion: https://review.openstack.org/#/q/status:open+project:openstack/cyborg -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base,
Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab,
Calit2 University of California, Irvine Email: zhipengh at uci.edu Office:
Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute
Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tobias at citynetwork.se Wed Mar 28 11:39:26 2018
From: tobias at citynetwork.se (Tobias Rydberg)
Date: Wed, 28 Mar 2018 13:39:26 +0200
Subject: [openstack-dev] [publiccloud-wg] Reminder bi-weekly meeting Public Cloud WG
Message-ID: 

Hi all,

Time again for a meeting for the Public Cloud WG - at our new time and
channel - tomorrow at 1400 UTC in #openstack-publiccloud

Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg

Cheers,
Tobias Rydberg

-- Tobias Rydberg Senior Developer Mobile: +46 733 312780
www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT
INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3945 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From doug at doughellmann.com Wed Mar 28 13:51:34 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 28 Mar 2018 09:51:34 -0400
Subject: [openstack-dev] [stable][release] Remove complex ACL changes around releases
In-Reply-To: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org>
References: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org>
Message-ID: <1522245065-sup-2271@lrrr.local>

Excerpts from Thierry Carrez's message of 2018-03-26 15:33:03 +0200:
> Hi!
>
> TL;DR:
> We used to do complex things with ACLs for stable/* branches around
> releases. Let's stop doing that as it's not really useful anyway, and
> just trust the $project-stable-maint teams to do the right thing.

+1 to not doing things we no longer consider useful

From sean.mcginnis at gmx.com Wed Mar 28 14:03:55 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 28 Mar 2018 09:03:55 -0500
Subject: [openstack-dev] [stable][release] Remove complex ACL changes around releases
In-Reply-To: <1522245065-sup-2271@lrrr.local>
References: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org> <1522245065-sup-2271@lrrr.local>
Message-ID: <20180328140355.GA22364@sm-xps>

On Wed, Mar 28, 2018 at 09:51:34AM -0400, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2018-03-26 15:33:03 +0200:
> > Hi!
> >
> > TL;DR:
> > We used to do complex things with ACLs for stable/* branches around
> > releases. Let's stop doing that as it's not really useful anyway, and
> > just trust the $project-stable-maint teams to do the right thing.
>
> +1 to not doing things we no longer consider useful
>
+1 to keeping things simple.

From ksnhr.tech at gmail.com Wed Mar 28 14:18:02 2018
From: ksnhr.tech at gmail.com (Kaz Shinohara)
Date: Wed, 28 Mar 2018 23:18:02 +0900
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To: 
References: 
Message-ID: 

Hi Ivan & Horizon folks,

AFAIK, the Horizon team concluded that you will add specific members to
xstatic-core, correct? Can I ask you to add the following members?

# All three are heat-dashboard cores.
Kazunori Shinohara / ksnhr.tech at gmail.com #myself
Xinni Ge / xinni.ge1990 at gmail.com
Keiichi Hikita / keiichi.hikita at gmail.com

Please give me a shout if we are not on the same page or if you have any
concern.
Regards, Kaz 2018-03-21 22:29 GMT+09:00 Kaz Shinohara : > Hi Ivan, Akihiro, > > > Thanks for your kind arrangement. > Looking forward to hearing your decision soon. > > Regards, > Kaz > > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : >> HI Team, >> >> From my perspective, I'm OK both with #2 and #3 options. I agree that #4 >> could be too complicated for us. Anyway, we've got this topic on the meeting >> agenda [1] so we'll discuss it there too. I'll share our decision after the >> meeting. >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon >> >> >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki wrote: >>> >>> Hi Kaz and Ivan, >>> >>> Yeah, it is worth discussed officially in the horizon team meeting or the >>> mailing list thread to get a consensus. >>> Hopefully you can add this topic to the horizon meeting agenda. >>> >>> After sending the previous mail, I noticed anther option. I see there are >>> several options now. >>> (1) Keep xstatic-core and horizon-core same. >>> (2) Add specific members to xstatic-core >>> (3) Add specific horizon-plugin core to xstatic-core >>> (4) Split core membership into per-repo basis (perhaps too complicated!!) >>> >>> My current vote is (2) as xstatic-core needs to understand what is xstatic >>> and how it is maintained. >>> >>> Thanks, >>> Akihiro >>> >>> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : >>>> >>>> Hi Akihiro, >>>> >>>> >>>> Thanks for your comment. >>>> The background of my request to add us to xstatic-core comes from >>>> Ivan's comment in last PTG's etherpad for heat-dashboard discussion. >>>> >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion >>>> Line135, "we can share ownership if needed - e0ne" >>>> >>>> Just in case, could you guys confirm unified opinion on this matter as >>>> Horizon team ? >>>> >>>> Frankly speaking I'm feeling the benefit to make us xstatic-core >>>> because it's easier & smoother to manage what we are taking for >>>> heat-dashboard. >>>> On the other hand, I can understand what Akihiro you are saying, the >>>> newly added repos belong to Horizon project & being managed by not >>>> Horizon core is not consistent. >>>> Also having exception might make unexpected confusion in near future. >>>> >>>> Eventually we will follow your opinion, let me hear Horizon team's >>>> conclusion. >>>> >>>> Regards, >>>> Kaz >>>> >>>> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >>>> > Hi Kaz, >>>> > >>>> > These repositories are under horizon project. It looks better to keep >>>> > the >>>> > current core team. >>>> > It potentially brings some confusion if we treat some horizon plugin >>>> > team >>>> > specially. >>>> > Reviewing xstatic repos would be a small burden, wo I think it would >>>> > work >>>> > without problem even if only horizon-core can approve xstatic reviews. >>>> > >>>> > >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara : >>>> >> >>>> >> Hi Ivan, Horizon folks, >>>> >> >>>> >> >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been landed. >>>> >> >>>> >> In project-config for them, I've set same acl-config as the existing >>>> >> xstatic repos. >>>> >> It means only "xstatic-core" can manage the newly created repos on >>>> >> gerrit. >>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" like as >>>> >> what horizon-core is doing ? 
>>>> >> >>>> >> xstatic-core >>>> >> https://review.openstack.org/#/admin/groups/385,members >>>> >> >>>> >> heat-dashboard-core >>>> >> https://review.openstack.org/#/admin/groups/1844,members >>>> >> >>>> >> Of course, we will surely touch only what we made, just would like to >>>> >> manage them smoothly by ourselves. >>>> >> In case we need to touch the other ones, will ask Horizon team for >>>> >> help. >>>> >> >>>> >> Thanks in advance. >>>> >> >>>> >> Regards, >>>> >> Kaz >>>> >> >>>> >> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : >>>> >> > Hi Horizon Team, >>>> >> > >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin >>>> >> > option, >>>> >> > and submitted a patch for it. >>>> >> > Could you please help to review the patch. >>>> >> > >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >>>> >> > https://review.openstack.org/#/c/552259/ >>>> >> > >>>> >> > Thank you very much. >>>> >> > >>>> >> > Best Regards, >>>> >> > Xinni >>>> >> > >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >>>> >> > wrote: >>>> >> >> >>>> >> >> Hi Kaz, >>>> >> >> >>>> >> >> Thanks for cleaning this up. I put +1 on both of these patches >>>> >> >> >>>> >> >> Regards, >>>> >> >> Ivan Kolodyazhny, >>>> >> >> http://blog.e0ne.info/ >>>> >> >> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >>>> >> >> >>>> >> >> wrote: >>>> >> >>> >>>> >> >>> Hi Ivan & Horizon folks, >>>> >> >>> >>>> >> >>> >>>> >> >>> Now we are submitting a couple of patches to have the new xstatic >>>> >> >>> modules. >>>> >> >>> Let me request you to have review the following patches. >>>> >> >>> We need Horizon PTL's +1 to move these forward. >>>> >> >>> >>>> >> >>> project-config >>>> >> >>> https://review.openstack.org/#/c/551978/ >>>> >> >>> >>>> >> >>> governance >>>> >> >>> https://review.openstack.org/#/c/551980/ >>>> >> >>> >>>> >> >>> Thanks in advance:) >>>> >> >>> >>>> >> >>> Regards, >>>> >> >>> Kaz >>>> >> >>> >>>> >> >>> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >>>> >> >>> : >>>> >> >>> > Yes, please do that. We can then discuss in the review about >>>> >> >>> > technical >>>> >> >>> > details. >>>> >> >>> > >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >>>> >> >>> > >>>> >> >>> > wrote: >>>> >> >>> >> >>>> >> >>> >> Hi, Akihiro >>>> >> >>> >> >>>> >> >>> >> Thanks for the quick reply. >>>> >> >>> >> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should not >>>> >> >>> >> be >>>> >> >>> >> modified. >>>> >> >>> >> It is much better to enhance horizon plugin settings, >>>> >> >>> >> and I think maybe there could be one option like >>>> >> >>> >> ADD_XSTATIC_MODULES. >>>> >> >>> >> This option adds the plugin's xstatic files in >>>> >> >>> >> STATICFILES_DIRS. >>>> >> >>> >> I am considering to add a bug report to describe it at first, >>>> >> >>> >> and >>>> >> >>> >> give >>>> >> >>> >> a >>>> >> >>> >> patch later maybe. >>>> >> >>> >> Is that ok with the Horizon team? >>>> >> >>> >> >>>> >> >>> >> Best Regards. >>>> >> >>> >> Xinni >>>> >> >>> >> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >>>> >> >>> >> >>>> >> >>> >> wrote: >>>> >> >>> >>> >>>> >> >>> >>> Hi Xinni, >>>> >> >>> >>> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge : >>>> >> >>> >>> > Hello Horizon Team, >>>> >> >>> >>> > >>>> >> >>> >>> > I would like to hear about your opinions about how to add >>>> >> >>> >>> > new >>>> >> >>> >>> > xstatic >>>> >> >>> >>> > modules to horizon settings. 
>>>> >> >>> >>> > >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party files >>>> >> >>> >>> > issue, >>>> >> >>> >>> > thanks >>>> >> >>> >>> > for >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing them and >>>> >> >>> >>> > referencing as >>>> >> >>> >>> > new >>>> >> >>> >>> > xstatic-* libs. >>>> >> >>> >>> >>>> >> >>> >>> Thanks for moving this forward. >>>> >> >>> >>> >>>> >> >>> >>> > So we installed the new xstatic files (not uploaded as >>>> >> >>> >>> > openstack >>>> >> >>> >>> > official >>>> >> >>> >>> > repos yet) in our development environment now, but hesitate >>>> >> >>> >>> > to >>>> >> >>> >>> > decide >>>> >> >>> >>> > how to >>>> >> >>> >>> > add the new installed xstatic lib path to STATICFILES_DIRS >>>> >> >>> >>> > in >>>> >> >>> >>> > openstack_dashboard.settings so that the static files could >>>> >> >>> >>> > be >>>> >> >>> >>> > automatically >>>> >> >>> >>> > collected by *collectstatic* process. >>>> >> >>> >>> > >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the relevant >>>> >> >>> >>> > static >>>> >> >>> >>> > fils >>>> >> >>> >>> > are >>>> >> >>> >>> > added >>>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon plugin >>>> >> >>> >>> > dashboard. >>>> >> >>> >>> > We may want new plugin setting keywords ( something similar >>>> >> >>> >>> > to >>>> >> >>> >>> > ADD_JS_FILES) >>>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update >>>> >> >>> >>> > STATICFILES_DIRS). >>>> >> >>> >>> >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add xstatic >>>> >> >>> >>> modules >>>> >> >>> >>> through horizon plugin settings. I don't think it is a good >>>> >> >>> >>> idea >>>> >> >>> >>> to >>>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon >>>> >> >>> >>> plugin >>>> >> >>> >>> usages. It makes difficult to track why and where a xstatic >>>> >> >>> >>> module >>>> >> >>> >>> in >>>> >> >>> >>> BASE_XSTATIC_MODULES is used. >>>> >> >>> >>> Multiple horizon plugins can add a same entry, so horizon code >>>> >> >>> >>> to >>>> >> >>> >>> handle plugin settings should merge multiple entries to a >>>> >> >>> >>> single >>>> >> >>> >>> one >>>> >> >>> >>> hopefully. >>>> >> >>> >>> My vote is to enhance the horizon plugin settings. 
>>>> >> >>> >>> >>>> >> >>> >>> Akihiro >>>> >> >>> >>> >>>> >> >>> >>> > >>>> >> >>> >>> > Looking forward to hearing any suggestions from you guys, >>>> >> >>> >>> > and >>>> >> >>> >>> > Best Regards, >>>> >> >>> >>> > >>>> >> >>> >>> > Xinni Ge >>>> >> >>> >> >>>> >> >>> >> -- >>>> >> >>> >> 葛馨霓 Xinni Ge
From sean.mcginnis at gmx.com Wed Mar 28 14:26:49 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 28 Mar 2018 09:26:49 -0500 Subject: Re: [openstack-dev] [openstack-infra] Where did the ARA logs go? In-Reply-To: References: Message-ID: <20180328142649.GB22364@sm-xps> On Wed, Mar 28, 2018 at 10:58:56AM +0200, András Kövi wrote: > Hi, > > Recently I noticed that ARA logs were published for all CI jobs. It > seems like the reports do not contain these logs any more. I tried > to research what happened to them but couldn't find any info. Can > someone please enlighten me about this change? > > Thank you, > Andras > I believe the ARA logs are only captured on failing jobs. From sfinucan at redhat.com Wed Mar 28 14:31:36 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Wed, 28 Mar 2018 15:31:36 +0100 Subject: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc Message-ID: <1522247496.4003.31.camel@redhat.com> As noted last week [1], we're trying to move away from pbr's autodoc feature as part of the new docs PTI. To that end, I've created sphinxcontrib-apidoc, which should do what pbr was previously doing for us via a Sphinx extension. https://pypi.org/project/sphinxcontrib-apidoc/ This works by reading some configuration from your documentation's 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no longer need pbr to do this for us. I have pushed version 0.1.0 to PyPI already but before I add this to global requirements, I'd like to ensure things are working as expected. smcginnis was kind enough to test this out on glance and it seemed to work for him, but I'd appreciate additional data points. The configuration steps for this extension are provided in the above link. To test this yourself, you simply need to do the following: 1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or doc/requirements.txt file 2.
Configure as noted above and remove the '[pbr]' and '[build_sphinx]' configuration from 'setup.cfg' 3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build' 4. Run 'tox -e docs' 5. Profit? Be sure to let me know if anyone encounters issues. If not, I'll be pushing for this to be included in global requirements so we can start the migration. Cheers, Stephen [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128594.html From gr at ham.ie Wed Mar 28 14:34:32 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 28 Mar 2018 15:34:32 +0100 Subject: [openstack-dev] [stable][release] Remove complex ACL changes around releases In-Reply-To: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org> References: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org> Message-ID: <8e7bab5c-0d01-f6cd-a624-c61768459025@ham.ie> On 26/03/2018 14:33, Thierry Carrez wrote: > Hi! > > TL;DR: > We used to do complex things with ACLs for stable/* branches around > releases. Let's stop doing that as it's not really useful anyway, and > just trust the $project-stable-maint teams to do the right thing. > > > Current situation: > > As we get close to the end of a release cycle, we start creating > stable/$series branches to refine what is likely to become a part of the > coordinated release at the end of the cycle. After release, that same > stable/$series branch is used to backport fixes and issue further point > releases. > > The rules to apply for approving changes to stable/$series differ > slightly depending on whether you are pre-release or post-release. To > reflect that, we use two different groups. Pre-release the branch is > controlled by the $project-release group (and Release Managers) and > post-release the branch is controlled by the $project-stable-maint group > (and stable-maint-core). > > To switch between the two without blocking on an infra ACL change, the > release team enters a complex dance where we initially create an ACL for > stable/$series, giving control of it to a $project-release-branch group, > whose membership is reset at every cycle to contain $project-release. At > release time, we update $project-release-branch Gerrit group membership > to contain $project-stable-maint instead. Then we get rid of the > stable/$series ACL altogether. > > This process is a bit complex and error-prone (and we tend to have to > re-learn it every cycle). It's also designed for a time when we expected > completely-different people to be in -release and -stable-maint groups, > while those are actually, most of the time, the same people. > Furthermore, with more and more deliverables being released under the > cycle-with-intermediary model, pre-release and post-release approval > rules are actually more and more of the same. > > Proposal: > > By default, let's just have $project-stable-maint control stable/*. We > no longer create new ACLs for stable/$series every cycle, we no longer > switch from $project-release control to $project-stable-maint control. > The release team no longer does anything around stable branch ACLs or > groups during the release cycle. > > That way, the same group ends up being used to control stable/* > pre-release and post-release. They were mostly the same people already: > Release managers are a part of stable-maint-core, which is included in > every $project-stable-maint anyway, so they retain control. > > What that changes for you: > > If you are part of $project-release but not part of > $project-stable-maint, you'll probably want to join that team. 
If you > review pre-release changes on a stable branch for a > cycle-with-milestones deliverable, you will have to remember that the > rules there are slightly different from stable branch approval rules. If in > doubt, do not approve, and ask. It is more complex than just "joining that team" if the project follows the stable policy. The stable team has to approve the additions, and does reject people trying to join them. I don't want to have a release where someone has to self-approve / ninja-approve patches due to cores *not* having the access rights that they previously had. > But I don't like that! I prefer tight ACLs! > > While we do not recommend it, every team can still specify more complex > ACLs to control their stable branches. As long as the "Release Managers" > group retains ability to approve changes pre-release (and > stable-maint-core retains ability to approve changes post-release), more > specific ACLs are fine. > > Let me know if you have any comment, otherwise we'll start using that > new process for the Rocky cycle (stable/rocky branch). > > Thanks ! > From yamamoto at midokura.com Wed Mar 28 14:59:39 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Wed, 28 Mar 2018 23:59:39 +0900 Subject: [openstack-dev] [tap-as-a-service] publish on pypi Message-ID: hi, i'm thinking about publishing the latest release of tap-as-a-service on pypi. background: https://review.openstack.org/#/c/555788/ iirc, the naming (tap-as-a-service vs neutron-taas) was one of the concerns when we talked about this topic last time. (long time ago. my memory is dim.) do you have any ideas or suggestions? probably i'll just use "tap-as-a-service" unless anyone has strong opinions. because: - it's the name we use the most frequently - we are not neutron (yet?) From chh_cc at 163.com Wed Mar 28 15:03:56 2018 From: chh_cc at 163.com (陈汉) Date: Wed, 28 Mar 2018 23:03:56 +0800 (CST) Subject: [openstack-dev] Queries about API Extension Message-ID: <3cd65f45.d6c8.1626d230ea9.Coremail.chh_cc@163.com> Hi all, Here are my questions: For the projects whose API parts were implemented with Pecan, is there any way (hopefully a graceful one) to extend these APIs? I mean, for example, somehow I have to add several extra attributes to the Chassis class in the ironic project. Do you guys have any better way instead of directly editing the file of chassis.py? -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Mar 28 15:13:13 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 28 Mar 2018 15:13:13 +0000 Subject: [openstack-dev] [openstack-infra] Where did the ARA logs go? In-Reply-To: <20180328142649.GB22364@sm-xps> References: <20180328142649.GB22364@sm-xps> Message-ID: <20180328151313.vzfpm2jfgyabtw36@yuggoth.org> On 2018-03-28 09:26:49 -0500 (-0500), Sean McGinnis wrote: [...] > I believe the ARA logs are only captured on failing jobs. Correct. This was a stop-gap some months ago when we noticed we were overrunning our inode capacity on the logserver. ARA was only one of the various contributors to that increased consumption but due to its original model based on numerous tiny files, limiting it to job failures (where it was most useful) was one of the ways we temporarily curtailed inode utilization. ARA has very recently grown the ability to stuff all that data into a single sqlite file and then handle it browser-side, so I expect we'll be able to switch back to collecting it for all job runs again fairly soon.
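To illustrate the difference (commands and paths approximate, from memory): the old model rendered a full static report at the end of every job, while the new one just publishes the database and renders it on demand in the browser:

  # old: generate a tree of thousands of small static files per run
  ara generate html logs/ara-report/
  # new: copy a single sqlite file; middleware on the logserver renders it
  cp ~/.ara/ansible.sqlite logs/ara-report/ansible.sqlite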
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zhang.lei.fly at gmail.com Wed Mar 28 15:47:40 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Wed, 28 Mar 2018 23:47:40 +0800 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project Message-ID: There are two projects that solve the problem of running OpenStack on Kubernetes: OpenStack-helm and kolla-kubernetes. They both leverage the helm tool for orchestration. There were some different perspectives at the beginning, which meant the two teams could not work together. But recently, the differences have become very small, and there is also no active contributor in the kolla-kubernetes project. So I propose to retire the kolla-kubernetes project. If you are still interested in running OpenStack on Kubernetes, please refer to the openstack-helm project. -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Mar 28 16:07:49 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 28 Mar 2018 09:07:49 -0700 Subject: [openstack-dev] [nova] VMware NSX CI - no longer running? Message-ID: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> Hello everyone, We were reviewing a bug fix for the vmware driver [0] today and we noticed it appears that the VMware NSX CI is no longer running, not even for changes touching only the nova/virt/vmwareapi/ tree. From the third-party CI dashboard, I see some claims of it running but when I open the patches, I don't see any reporting from VMware NSX CI [1]. Can anyone from the vmware subteam comment on whether or not the vmware third-party CI is going to be fixed or if it has been abandoned? Thanks, -melanie [0] https://review.openstack.org/557377 [1] http://ci-watch.tintri.com/project?project=nova&time=7+days From davanum at gmail.com Wed Mar 28 16:14:54 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 28 Mar 2018 12:14:54 -0400 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: Message-ID: +1 to consolidate. Thanks, Dims On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang wrote: > There are two projects that solve the problem of running OpenStack on > Kubernetes: OpenStack-helm and kolla-kubernetes. They both > leverage the helm tool for orchestration. There were some different perspectives > at the beginning, which meant the two teams could not work together. > > But recently, the differences have become very small, and there is also no active > contributor in the kolla-kubernetes project. > > So I propose to retire the kolla-kubernetes project. If you are still > interested in running OpenStack on Kubernetes, please refer to > the openstack-helm project.
> > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims From paul.bourke at oracle.com Wed Mar 28 16:16:55 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Wed, 28 Mar 2018 17:16:55 +0100 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: Message-ID: <8e7dfdcd-8c07-2376-eb13-1279b0a81efa@oracle.com> +1 Thanks Jeffrey for taking the time to investigate. On 28/03/18 16:47, Jeffrey Zhang wrote: > There are two projects to solve the issue that run OpenStack on > Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both > leverage helm tool for orchestration. There is some different perspective > at the beginning, which results in the two teams could not work together. > > But recently, the difference becomes too small. and there is also no active > contributor in the kolla-kubernetes project. > > So I propose to retire kolla-kubernetes project. If you are still > interested in running OpenStack on kubernetes, please refer to > openstack-helm project. > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Wed Mar 28 16:21:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 28 Mar 2018 11:21:00 -0500 Subject: [openstack-dev] [nova] VMware NSX CI - no longer running? In-Reply-To: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> Message-ID: <309777fe-d604-0fd3-5006-2d7f79ca412b@gmail.com> On 3/28/2018 11:07 AM, melanie witt wrote: > We were reviewing a bug fix for the vmware driver [0] today and we > noticed it appears that the VMware NSX CI is no longer running, not even > on only the nova/virt/vmwareapi/ tree. > > From the third-party CI dashboard, I see some claims of it running but > when I open the patches, I don't see any reporting from VMware NSX CI [1]. > > Can anyone from the vmware subteam comment on whether or not the vmware > third-party CI is going to be fixed or if it has been abandoned? > > Thanks, > -melanie > > [0] https://review.openstack.org/557377 > [1] http://ci-watch.tintri.com/project?project=nova&time=7+days As a result, I've posted a change to log a warning on start of the driver indicating its quality cannot be ensured since it doesn't get the same level of testing as the other drivers. https://review.openstack.org/#/c/557398/ This also makes me basically -2 on any vmware driver specs since I don't see a point in adding new features for the driver when the CI is never working, and by "never" I mean for at least the last couple of years. I could go back and find the seemingly quarterly mailing list posts I've had to make like this in the past. 
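For anyone curious, the change amounts to a log message at driver startup, roughly along these lines (an illustrative sketch only; see the review for the exact text):

    # Sketch -- in nova/virt/vmwareapi/driver.py, VMwareVCDriver.__init__();
    # the real wording lives in https://review.openstack.org/#/c/557398/
    LOG.warning('The vmwareapi driver is not tested by the OpenStack '
                'project and thus its quality can not be ensured. '
                'It should be considered experimental.')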
-- Thanks, Matt From andr.kurilin at gmail.com Wed Mar 28 16:21:24 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 28 Mar 2018 19:21:24 +0300 Subject: [openstack-dev] [nova] Rocky community goal: remove the use of mox/mox3 for testing In-Reply-To: <6d0bc1ef-6995-bbb6-50e8-af883e1a9b8c@gmail.com> References: <6d0bc1ef-6995-bbb6-50e8-af883e1a9b8c@gmail.com> Message-ID: Hi Melanie and Stackers! This is a nice goal; it reminds me of my first patch to the OpenStack community. It was a patch to Nova and it was related to removing mox :) PS: https://review.openstack.org/#/c/59694/ PS2: it was abandoned due to several -2 :) 2018-03-27 1:06 GMT+03:00 melanie witt : > Hey everyone, > > This cycle there is a community goal to remove the use of mox/mox3 for > testing [0]. In nova, we're tracking our work at this blueprint: > > https://blueprints.launchpad.net/nova/+spec/mox-removal > > If you propose patches contributing to this goal, please be sure to add > something like "Part of blueprint mox-removal" in the commit message of > your patch so it will be tracked as part of the blueprint for Rocky. > > NOTE: Please avoid converting any tests related to cells v1 or > nova-network as these two legacy features are either in-progress of being > removed or on the road map to being removed within the next two cycles. > Tests to *avoid* converting are located: > > nova/tests/unit/cells/ > nova/tests/unit/compute/test_compute_cells.py > nova/tests/unit/network/test_manager.py > > Please reply with other cells v1 or nova-network test locations to avoid > if I've missed any. > > Thanks, > -melanie > > [0] https://storyboard.openstack.org/#!/story/2001546 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Mar 28 16:36:07 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 28 Mar 2018 11:36:07 -0500 Subject: [openstack-dev] [nova] Rocky community goal: remove the use of mox/mox3 for testing In-Reply-To: References: <6d0bc1ef-6995-bbb6-50e8-af883e1a9b8c@gmail.com> Message-ID: <310a0b49-631d-3063-390a-62a7d9344645@gmail.com> On 3/28/2018 11:21 AM, Andrey Kurilin wrote: > PS: https://review.openstack.org/#/c/59694/ > PS2: it was abandoned due to several -2 :) Look how nice I was as a reviewer 5 years ago... -- Thanks, Matt From melwittt at gmail.com Wed Mar 28 16:37:41 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 28 Mar 2018 09:37:41 -0700 Subject: [openstack-dev] [nova] Rocky community goal: remove the use of mox/mox3 for testing In-Reply-To: References: <6d0bc1ef-6995-bbb6-50e8-af883e1a9b8c@gmail.com> Message-ID: On Wed, 28 Mar 2018 19:21:24 +0300, Andrey Kurilin wrote: > > This is a nice goal; it reminds me of my first patch to the OpenStack > community. It was a patch to Nova and it was related to removing mox :) > > PS: https://review.openstack.org/#/c/59694/ > PS2: it was abandoned due to several -2 :) You were ahead of your time. :) -melanie From cdent+os at anticdent.org Wed Mar 28 16:43:56 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 28 Mar 2018 17:43:56 +0100 (BST) Subject: [openstack-dev] [nova] VMware NSX CI - no longer running?
In-Reply-To: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> Message-ID: On Wed, 28 Mar 2018, melanie witt wrote: > Can anyone from the vmware subteam comment on whether or not the vmware > third-party CI is going to be fixed or if it has been abandoned? I've got no substantive information yet, but for the sake of the thread not looking ignored, I can report that the beacons have been lit within the team that cares for such things and there should be some progress soon. Given that there hasn't been awareness in that group of the flakiness, we'll probably use that as the starting point: enhanced observability. And go from there to reach some measure of better. Long term it would be sweet to zuulv3 on a legit cluster, with more tests being run than just the chunk of tempest that happens now. If nobody else has posted something more helpful by tomorrow UTC, I'll chase. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From hswayne77 at gmail.com Wed Mar 28 16:47:19 2018 From: hswayne77 at gmail.com (=?utf-8?B?5qWK552/6LGq?=) Date: Thu, 29 Mar 2018 00:47:19 +0800 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: <8e7dfdcd-8c07-2376-eb13-1279b0a81efa@oracle.com> References: <8e7dfdcd-8c07-2376-eb13-1279b0a81efa@oracle.com> Message-ID: <93403FC0-028C-4BD9-9B87-33129DBD6EA6@gmail.com> +1 To consolidate them > Paul Bourke 於 2018年3月29日 上午12:16 寫道: > > +1 > > Thanks Jeffrey for taking the time to investigate. > >> On 28/03/18 16:47, Jeffrey Zhang wrote: >> There are two projects to solve the issue that run OpenStack on >> Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both >> leverage helm tool for orchestration. There is some different perspective >> at the beginning, which results in the two teams could not work together. >> But recently, the difference becomes too small. and there is also no active >> contributor in the kolla-kubernetes project. >> So I propose to retire kolla-kubernetes project. If you are still >> interested in running OpenStack on kubernetes, please refer to >> openstack-helm project. >> -- >> Regards, >> Jeffrey Zhang >> Blog: http://xcodest.me >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Wed Mar 28 16:58:01 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 28 Mar 2018 17:58:01 +0100 (BST) Subject: [openstack-dev] Queries about API Extension In-Reply-To: <3cd65f45.d6c8.1626d230ea9.Coremail.chh_cc@163.com> References: <3cd65f45.d6c8.1626d230ea9.Coremail.chh_cc@163.com> Message-ID: On Wed, 28 Mar 2018, 陈汗 wrote: > Hi all, > Here are my questions: > For the projects whose api parts were implemented with Pecan, is there any way(hope it is graceful) to extend these api? > I mean, for example, somehow I have to add several extra attributes in Class Chassis in ironic project. 
Do you guys have any better way instead of directly editing the file of chassis.py? As a general rule you should avoid doing this as it breaks interoperability. If you really need a special extension to an existing API, make a custom API in a custom service that does what you need it to do. By being separate it is clearly identified as not being a part of the standard API, and client code written to that standard API will continue to work. Of course, I'm sure plenty of people in their private clouds make adjustments to existing services and APIs all the time. If you must do that, doing it directly in the code may be one of the best ways to go as it makes it obvious that things have changed. Also, it might be that there are ways to do such a thing in Ironic, in which case I hope someone will follow up with that. I'm speaking from the position of APIs in OpenStack in general. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Wed Mar 28 17:26:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 28 Mar 2018 13:26:49 -0400 Subject: [openstack-dev] [nova][oslo] what to do with problematic mocking in nova unit tests Message-ID: <1522257468-sup-81@lrrr.local> In the course of preparing the next release of oslo.config, Ben noticed that nova's unit tests fail with oslo.config master [1]. The underlying issue is that the tests mock things that oslo.config is now calling as part of determining where options are being set in code. This isn't an API change in oslo.config, and it is all transparent for normal uses of the library. But the mocks replace os.path.exists() and open() for the entire duration of a test function (not just for the isolated application code being tested), and so the library behavior change surfaces as a test error. I'm not really in a position to go through and clean up the use of mocks in those (and other?) tests myself, and I would like to not have to revert the feature work in oslo.config, especially since we did it for the placement API stuff for the nova team. I'm looking for ideas about what to do. Doug [1] http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881 From sundar.nadathur at intel.com Wed Mar 28 17:27:42 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 28 Mar 2018 10:27:42 -0700 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> Message-ID: Hi Eric and all,     I should have clarified that this race condition happens only for the case of devices with multiple functions. There is a prior thread about it. I was trying to get a solution within Cyborg, but that faces this race condition as well. IIUC, this situation is somewhat similar to the issue with vGPU types (thanks to Alex Xu for pointing this out). In the latter case, we could start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4). But, after consuming a unit of vGPU-type-a, ideally the inventory should change to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators, we start with an RP inventory of (region-type-A: 1, function-X: 4). But, after consuming a unit of that function, ideally the inventory should change to: (region-type-A: 0, function-X: 3).
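To make that concrete in placement API terms, the Cyborg agent would have to rewrite the provider's inventory on every allocation, roughly like this (hypothetical custom resource classes; placement does not link resource classes this way today, and a zero total is not accepted, so the region class would have to be dropped from the dict entirely):

    PUT /resource_providers/{rp_uuid}/inventories
    {
        "resource_provider_generation": 42,
        "inventories": {
            "CUSTOM_FPGA_FUNCTION_X": {"total": 3}
        }
    }

Since a PUT replaces the full set of inventories, omitting CUSTOM_FPGA_REGION_TYPE_A here is what zeroes it out.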
I understand that this approach is controversial :) Also, one difference from the vGPU case is that the number and count of vGPU types is static, whereas with FPGAs, one could reprogram it to result in more or fewer functions. That said, we could hopefully keep this analogy in mind for future discussions. We probably will not support multi-function accelerators in Rocky. This discussion is for the longer term. Regards, Sundar On 3/23/2018 12:44 PM, Eric Fried wrote: > Sundar- > > First thought is to simplify by NOT keeping inventory information in > the cyborg db at all. The provider record in the placement service > already knows the device (the provider ID, which you can look up in the > cyborg db) the host (the root_provider_uuid of the provider representing > the device) and the inventory, and (I hope) you'll be augmenting it with > traits indicating what functions it's capable of. That way, you'll > always get allocation candidates with devices that *can* load the > desired function; now you just have to engage your weigher to prioritize > the ones that already have it loaded so you can prefer those. > > Am I missing something? > > efried > > On 03/22/2018 11:27 PM, Nadathur, Sundar wrote: >> Hi all, >>     There seems to be a possibility of a race condition in the >> Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to >> the proposed Cyborg/Nova spec >> >> for details.) >> >> Consider the scenario where the flavor specifies a resource class for a >> device type, and also specifies a function (e.g. encrypt) in the extra >> specs. The Nova scheduler would only track the device type as a >> resource, and Cyborg needs to track the availability of functions. >> Further, to keep it simple, say all the functions exist all the time (no >> reprogramming involved). >> >> To recap, here is the scheduler flow for this case: >> >> * A request spec with a flavor comes to Nova conductor/scheduler. The >> flavor has a device type as a resource class, and a function in the >> extra specs. >> * Placement API returns the list of RPs (compute nodes) which contain >> the requested device types (but not necessarily the function). >> * Cyborg will provide a custom filter which queries Cyborg DB. This >> needs to check which hosts contain the needed function, and filter >> out the rest. >> * The scheduler selects one node from the filtered list, and the >> request goes to the compute node. >> >> For the filter to work, the Cyborg DB needs to maintain a table with >> triples of (host, function type, #free units). The filter checks if a >> given host has one or more free units of the requested function type. >> But, to keep the # free units up to date, Cyborg on the selected compute >> node needs to notify the Cyborg API to decrement the #free units when an >> instance is spawned, and to increment them when resources are released. >> >> Therein lies the catch: this loop from the compute node to controller is >> susceptible to race conditions. For example, if two simultaneous >> requests each ask for function A, and there is only one unit of that >> available, the Cyborg filter will approve both, both may land on the >> same host, and one will fail. This is because Cyborg on the controller >> does not decrement resource usage due to one request before processing >> the next request. >> >> This is similar to this previous Nova scheduling issue >> . >> That was solved by having the scheduler claim a resource in Placement >> for the selected node. 
I don't see an analog for Cyborg, since it would >> not know which node is selected. >> >> Thanks in advance for suggestions and solutions. >> >> Regards, >> Sundar >> >> >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Wed Mar 28 17:53:48 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Wed, 28 Mar 2018 17:53:48 +0000 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: <8e7dfdcd-8c07-2376-eb13-1279b0a81efa@oracle.com> References: <8e7dfdcd-8c07-2376-eb13-1279b0a81efa@oracle.com> Message-ID: <7C64A75C21BB8D43BD75BB18635E4D8965B9B547@MOSTLS1MSGUSRFF.ITServices.sbc.com> The OpenStack-Helm team would eagerly welcome contributions from Kolla-Kubernetes team members! Several of the current OSH team come from a Kolla-Kubernetes background, and the project has benefitted greatly from their experience and domain knowledge. Please reach out to me or say hi in #openstack-helm if you'd like to get looped in. Thanks, Matt -----Original Message----- From: Paul Bourke [mailto:paul.bourke at oracle.com] Sent: Wednesday, March 28, 2018 11:17 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project +1 Thanks Jeffrey for taking the time to investigate. On 28/03/18 16:47, Jeffrey Zhang wrote: > There are two projects to solve the issue that run OpenStack on > Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both > leverage helm tool for orchestration. There is some different perspective > at the beginning, which results in the two teams could not work together. > > But recently, the difference becomes too small. and there is also no active > contributor in the kolla-kubernetes project. > > So I propose to retire kolla-kubernetes project. If you are still > interested in running OpenStack on kubernetes, please refer to > openstack-helm project. 
> > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zulcss at gmail.com Wed Mar 28 17:54:19 2018 From: zulcss at gmail.com (Chuck Short) Date: Wed, 28 Mar 2018 13:54:19 -0400 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: Message-ID: +1 Regards chuck On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang wrote: > There are two projects to solve the issue that run OpenStack on > Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both > leverage helm tool for orchestration. There is some different perspective > at the beginning, which results in the two teams could not work together. > > But recently, the difference becomes too small. and there is also no active > contributor in the kolla-kubernetes project. > > So I propose to retire kolla-kubernetes project. If you are still > interested in running OpenStack on kubernetes, please refer to > openstack-helm project. > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Wed Mar 28 18:06:44 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 28 Mar 2018 11:06:44 -0700 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> Message-ID: Hi Shaohe,   I have responded in the Etherpad. The Cyborg/Nova scheduling spec details the 4 types of user requests. I believe you are looking for more details on what the RC names, traits and flavors will look like. I will add that to the spec itself. Thanks, Sundar On 3/28/2018 2:10 AM, 少合冯 wrote: > I have summarized some scenarios for fpga device requests. > https://etherpad.openstack.org/p/cyborg-fpga-request-scenarios > > Please add more scenarios to find out the exceptions that > placement can not satisfy the filter and weight. > > IMHO, I prefer placement to do filter and weight. If we have to let > cyborg do filter and weight,
Nova scheduler just need call cyborg > once for all host weight though we do the weigh one by one. > > > 2018-03-23 12:27 GMT+08:00 Nadathur, Sundar >: > > Hi all, >     There seems to be a possibility of a race condition in the > Cyborg/Nova flow. Apologies for missing this earlier. (You can > refer to the proposed Cyborg/Nova spec > > for details.) > > Consider the scenario where the flavor specifies a resource class > for a device type, and also specifies a function (e.g. encrypt) in > the extra specs. The Nova scheduler would only track the device > type as a resource, and Cyborg needs to track the availability of > functions. Further, to keep it simple, say all the functions exist > all the time (no reprogramming involved). > > To recap, here is the scheduler flow for this case: > > * A request spec with a flavor comes to Nova > conductor/scheduler. The flavor has a device type as a > resource class, and a function in the extra specs. > * Placement API returns the list of RPs (compute nodes) which > contain the requested device types (but not necessarily the > function). > * Cyborg will provide a custom filter which queries Cyborg DB. > This needs to check which hosts contain the needed function, > and filter out the rest. > * The scheduler selects one node from the filtered list, and the > request goes to the compute node. > > For the filter to work, the Cyborg DB needs to maintain a table > with triples of (host, function type, #free units). The filter > checks if a given host has one or more free units of the requested > function type. But, to keep the # free units up to date, Cyborg on > the selected compute node needs to notify the Cyborg API to > decrement the #free units when an instance is spawned, and to > increment them when resources are released. > > Therein lies the catch: this loop from the compute node to > controller is susceptible to race conditions. For example, if two > simultaneous requests each ask for function A, and there is only > one unit of that available, the Cyborg filter will approve both, > both may land on the same host, and one will fail. This is because > Cyborg on the controller does not decrement resource usage due to > one request before processing the next request. > > This is similar to this previous Nova scheduling issue > . > That was solved by having the scheduler claim a resource in > Placement for the selected node. I don't see an analog for Cyborg, > since it would not know which node is selected. > > Thanks in advance for suggestions and solutions. > > Regards, > Sundar > > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tpb at dyncloud.net Wed Mar 28 18:34:29 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 28 Mar 2018 14:34:29 -0400 Subject: [openstack-dev] [PTLS] Project Updates & Project Onboarding In-Reply-To: References: Message-ID: <20180328183429.fexs6j4w7uga6mxa@barron.net> Would you be so kind as to add Victoria Martinez de la Cruz and Dustin Schoenbrun to the manila project Onboarding session [1] ? They are confirmed for conference attendance. Thanks much! -- Tom Barron [1] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21637/manila-project-onboarding On 21/03/18 22:14 +0000, Kendall Nelson wrote: >Hello! > >Project Updates[1] & Project Onboarding[2] sessions are now live on the >schedule! > >We did as best as we could to keep project onboarding sessions adjacent to >project update slots. Though, given the differences in duration and the >number of each we have per day that got increasingly difficult as the days >went on, hopefully what is there will work for everyone. > >If there are any speakers you need added to your slots, or any conflicts >you need addressed, feel free to email speakersupport at openstack.org and >they should be able to help you out. > >Thanks! > >-Kendall Nelson (diablo_rojo) > >[1] >https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Update >[2] >https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Onboarding >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tpb at dyncloud.net Wed Mar 28 18:41:48 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 28 Mar 2018 14:41:48 -0400 Subject: [openstack-dev] [PTLS] Project Updates & Project Onboarding In-Reply-To: <20180328183429.fexs6j4w7uga6mxa@barron.net> References: <20180328183429.fexs6j4w7uga6mxa@barron.net> Message-ID: <20180328184147.bkk4v6t7klyd7d3d@barron.net> Many apologies for sending this to the openstack-dev list; I thought I had removed the list from my address list but clearly did not. On 28/03/18 14:34 -0400, Tom Barron wrote: >Would you be so kind as to add <... snip ...> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From openstack at fried.cc Wed Mar 28 18:48:26 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 28 Mar 2018 13:48:26 -0500 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> Message-ID: <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc> Sundar- We're running across this issue in several places right now. One thing that's definitely not going to get traction is automatically/implicitly tweaking inventory in one resource class when an allocation is made on a different resource class (whether in the same or different RPs). Slightly less of a nonstarter, but still likely to get significant push-back, is the idea of tweaking traits on the fly. 
For example, your vGPU case might be modeled as:

    PGPU_RP: {
        inventory: {
            CUSTOM_VGPU_TYPE_A: 2,
            CUSTOM_VGPU_TYPE_B: 4,
        }
        traits: [
            CUSTOM_VGPU_TYPE_A_CAPABLE,
            CUSTOM_VGPU_TYPE_B_CAPABLE,
        ]
    }

The request would come in for resources=CUSTOM_VGPU_TYPE_A:1&required=CUSTOM_VGPU_TYPE_A_CAPABLE, resulting in an allocation of CUSTOM_VGPU_TYPE_A:1. Now while you're processing that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP. So it doesn't matter that there's still inventory of CUSTOM_VGPU_TYPE_B:4, because a request including required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP. There's of course a window between when the initial allocation is made and when you tweak the trait list. In that case you'll just have to fail the loser. This would be like any other failure in e.g. the spawn process; it would bubble up, the allocation would be removed; retries might happen or whatever. Like I said, you're likely to get a lot of resistance to this idea as well. (Though TBH, I'm not sure how we can stop you beyond -1'ing your patches; there's nothing about placement that disallows it.) The simple-but-inefficient solution is simply that we'd still be able to make allocations for vGPU type B, but you would have to fail right away when it came down to cyborg to attach the resource. Which is code you pretty much have to write anyway. It's an improvement if cyborg gets to be involved in the post-get-allocation-candidates weighing/filtering step, because you can do that check at that point to help filter out the candidates that would fail. Of course there's still a race condition there, but it's no different than for any other resource. efried On 03/28/2018 12:27 PM, Nadathur, Sundar wrote: > Hi Eric and all, >     I should have clarified that this race condition happens only for > the case of devices with multiple functions. There is a prior thread > about it. I was trying to get a solution within Cyborg, but that faces > this race condition as well. > > IIUC, this situation is somewhat similar to the issue with vGPU types > (thanks to Alex Xu for pointing this out). In the latter case, we could > start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4). But, after > consuming a unit of vGPU-type-a, ideally the inventory should change > to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators, > we start with an RP inventory of (region-type-A: 1, function-X: 4). But, > after consuming a unit of that function, ideally the inventory should > change to: (region-type-A: 0, function-X: 3). > > I understand that this approach is controversial :) Also, one difference > from the vGPU case is that the number and count of vGPU types is static, > whereas with FPGAs, one could reprogram it to result in more or fewer > functions. That said, we could hopefully keep this analogy in mind for > future discussions. > > We probably will not support multi-function accelerators in Rocky. This > discussion is for the longer term. > > Regards, > Sundar > > On 3/23/2018 12:44 PM, Eric Fried wrote: >> Sundar- >> >> First thought is to simplify by NOT keeping inventory information in >> the cyborg db at all. The provider record in the placement service >> already knows the device (the provider ID, which you can look up in the >> cyborg db) the host (the root_provider_uuid of the provider representing >> the device) and the inventory, and (I hope) you'll be augmenting it with >> traits indicating what functions it's capable of. That way, you'll
That way, you'll >> always get allocation candidates with devices that *can* load the >> desired function; now you just have to engage your weigher to prioritize >> the ones that already have it loaded so you can prefer those. >> >> Am I missing something? >> >> efried >> >> On 03/22/2018 11:27 PM, Nadathur, Sundar wrote: >>> Hi all, >>>     There seems to be a possibility of a race condition in the >>> Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to >>> the proposed Cyborg/Nova spec >>> >>> for details.) >>> >>> Consider the scenario where the flavor specifies a resource class for a >>> device type, and also specifies a function (e.g. encrypt) in the extra >>> specs. The Nova scheduler would only track the device type as a >>> resource, and Cyborg needs to track the availability of functions. >>> Further, to keep it simple, say all the functions exist all the time (no >>> reprogramming involved). >>> >>> To recap, here is the scheduler flow for this case: >>> >>> * A request spec with a flavor comes to Nova conductor/scheduler. The >>> flavor has a device type as a resource class, and a function in the >>> extra specs. >>> * Placement API returns the list of RPs (compute nodes) which contain >>> the requested device types (but not necessarily the function). >>> * Cyborg will provide a custom filter which queries Cyborg DB. This >>> needs to check which hosts contain the needed function, and filter >>> out the rest. >>> * The scheduler selects one node from the filtered list, and the >>> request goes to the compute node. >>> >>> For the filter to work, the Cyborg DB needs to maintain a table with >>> triples of (host, function type, #free units). The filter checks if a >>> given host has one or more free units of the requested function type. >>> But, to keep the # free units up to date, Cyborg on the selected compute >>> node needs to notify the Cyborg API to decrement the #free units when an >>> instance is spawned, and to increment them when resources are released. >>> >>> Therein lies the catch: this loop from the compute node to controller is >>> susceptible to race conditions. For example, if two simultaneous >>> requests each ask for function A, and there is only one unit of that >>> available, the Cyborg filter will approve both, both may land on the >>> same host, and one will fail. This is because Cyborg on the controller >>> does not decrement resource usage due to one request before processing >>> the next request. >>> >>> This is similar to this previous Nova scheduling issue >>> . >>> That was solved by having the scheduler claim a resource in Placement >>> for the selected node. I don't see an analog for Cyborg, since it would >>> not know which node is selected. >>> >>> Thanks in advance for suggestions and solutions. 
>>> >>> Regards, >>> Sundar >>> >>> >>> >>> >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Wed Mar 28 19:14:44 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 28 Mar 2018 14:14:44 -0500 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1521715425.17048.8.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> Message-ID: <20180328191443.GA26845@sm-xps> On Thu, Mar 22, 2018 at 10:43:45AM +0000, Stephen Finucane wrote: > On Wed, 2018-03-21 at 09:57 -0500, Sean McGinnis wrote: > > On Wed, Mar 21, 2018 at 10:49:02AM +0000, Stephen Finucane wrote: > > > tl;dr: Make sure you stop using pbr's autodoc feature before converting > > > them to the new PTI for docs. > > > > > > [snip] > > > > > That's unfortunate. What we really need is a migration path from the > 'pbr' way of doing things to something else. I see three possible > avenues at this point in time: > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar > things to 'sphinx-apidoc' but it takes the form of an extension. > From my brief experiments, the output generated from this is > radically different and far less comprehensive than what 'sphinx- > apidoc' generates. However, it supports templating so we could > probably configure this somehow and add our own special directive > somewhere like 'openstackdocstheme' > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back > against upstream Sphinx [1]. This essentially does what the PBR > extension does but moves configuration into 'conf.py'. However, this > is currently held up as I can't adequately explain the differences > between this and 'sphinx.ext.autosummary' (there's definite overlap > but I don't understand 'autosummary' well enough to compare them). > 3. Modify the upstream jobs that detect the pbr integration and have > them run 'sphinx-apidoc' before 'sphinx-build'. This is the least > technically appealing approach as it still leaves us unable to build > stuff locally and adds yet more "magic" to the gate, but it does let > us progress. > It's not mentioned here, but I discovered today that Cinder is using the sphinx.ext.autodoc module. Is there any issue with using this? From mriedemos at gmail.com Wed Mar 28 19:35:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 28 Mar 2018 14:35:50 -0500 Subject: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it? 
In-Reply-To: <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com>
References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com>
Message-ID: <100034a8-57f3-1eea-a792-97ca1328967c@gmail.com>

On 3/27/2018 10:37 AM, Jay Pipes wrote:
>
> If we want to actually fix the issue once and for all, we need to make
> availability zones a real thing that has a permanent identifier (UUID)
> and store that permanent identifier in the instance (not the instance
> metadata).
>
> Or we can continue to paper over major architectural weaknesses like this.

Stepping back a second from the rest of this thread, what if we do the hard fail bug fix thing, which could be backported to stable branches, and then we have the option of completely re-doing this with aggregate UUIDs as the key rather than the aggregate name? Because I think the former could get done in Rocky, but the latter probably not.

--

Thanks,

Matt

From melwittt at gmail.com Wed Mar 28 19:37:43 2018
From: melwittt at gmail.com (melanie witt)
Date: Wed, 28 Mar 2018 12:37:43 -0700
Subject: [openstack-dev] [nova] review runways are now live!
Message-ID: <834cc62a-a17a-6401-7fcb-67112417a0e0@gmail.com>

Hi Stackers,

This is just a standalone announcement that review runways [0] are now live and in active use. Details and instructions are documented on the etherpad. For approved blueprint code authors, please consult the etherpad instructions and add your blueprint to the Queue when your code is ready for review (requirements are documented in the etherpad). For nova-core team members, please make blueprints in runways your priority for review.

As mentioned before, runways are an experimental process and we are open to feedback; we will adjust the process incrementally as we gain experience with it. The process is not meant to be rigid and unchanging during the cycle.

Thanks,
-melanie

[0] https://etherpad.openstack.org/p/nova-runways-rocky

From corvus at inaugust.com Wed Mar 28 20:21:38 2018
From: corvus at inaugust.com (James E. Blair)
Date: Wed, 28 Mar 2018 13:21:38 -0700
Subject: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT
Message-ID: <87woxwymr1.fsf@meyer.lemoncheese.net>

Hi,

I've proposed a change to devstack which slightly alters the LIBS_FROM_GIT behavior. This shouldn't be a significant change for those using legacy devstack jobs (but you may want to be aware of it). It is more significant for new-style devstack jobs.

The change is at https://review.openstack.org/549252

In summary, when this change lands, new-style devstack jobs should no longer need to set LIBS_FROM_GIT explicitly. Existing legacy jobs should be unaffected (but there is a change to the verification process performed by devstack).

Currently devstack expects the contents of LIBS_FROM_GIT to be exclusively a list of python packages which, obviously, should be installed from git and not pypi. It is used for two purposes: determining whether an individual package should be installed from git, and verifying that a package was installed from git.

In the old devstack-gate system, we prepared many of the common git repos, whether they were used or not. So LIBS_FROM_GIT was created to indicate that in some cases devstack should ignore those repos and install from pypi instead. In other words, its original purpose was purely as a method of selecting whether a devstack-gate prepared repo should be used or ignored.
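That selection boils down to a simple membership test. A rough Python rendering follows; the real check is a bash regex in devstack's inc/python, and the pip_install* helpers here are hypothetical stand-ins, not devstack functions:

    # Illustrative only: devstack implements this in bash, roughly as
    #   [[ ,${LIBS_FROM_GIT}, =~ ,${name}, ]]
    def use_library_from_git(name, libs_from_git):
        """True if package `name` should come from a git checkout."""
        wanted = {lib.strip() for lib in libs_from_git.split(',')
                  if lib.strip()}
        return name in wanted

    def pip_install(name):
        print('pip install %s' % name)          # stand-in for a pypi install

    def pip_install_from_git(repo_dir):
        print('pip install -e %s' % repo_dir)   # stand-in for a git install

    def install_lib(name, repo_dir, libs_from_git):
        if use_library_from_git(name, libs_from_git):
            pip_install_from_git(repo_dir)
        else:
            pip_install(name)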
In Zuul v3, we have a good way to indicate whether a job is going to use a repo or not -- add it to "required-projects". Considering that, the LIBS_FROM_GIT variable is redundant. So my patch causes it to be automatically generated based on the contents of required-projects. This means that job authors don't need to list every required repository twice.

However, a naïve implementation of that runs afoul of the second use of LIBS_FROM_GIT -- verifying that python packages are installed from git.

This usage was added later, after a typographical error ("-" vs "_" in a python package name) in a constraints file caused us not to install a package from git. Now devstack verifies that every package in LIBS_FROM_GIT is installed. However, Zuul doesn't know that devstack, tempest, and other packages aren't installed. So adding them automatically to LIBS_FROM_GIT will cause devstack to fail.

My change modifies this verification to only check that packages mentioned in LIBS_FROM_GIT that devstack tried to install were actually installed. I realize that stated as such this sounds tautological; however, this check is still valid -- it would have caught the original error that prompted the check in the first place.

What the revised check will no longer handle is a typo in a legacy job. If someone enters a typo into LIBS_FROM_GIT, it will no longer fail. However, I think the risk is worthwhile -- particularly since it is in service of a system which eliminates the opportunity to introduce such an error in the first place.

To see the result in action, take a look at this change which, in only a few lines, implements what was a significantly more complex undertaking in Zuul v2:

https://review.openstack.org/548331

Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for some reason, you require a new-style devstack job to manually set LIBS_FROM_GIT, that will still work. Simply define the variable as normal, and the module which generates the devstack config will bypass automatic generation if the variable is already set.

-Jim

From e0ne at e0ne.info Wed Mar 28 20:40:49 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 28 Mar 2018 23:40:49 +0300
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To:
References:
Message-ID:

Hi Kaz,

Don't worry, we're on the same page with you. I added both you, Xinni and Keiichi to the xstatic-core group. Thank you for your contributions!

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara wrote:
> Hi Ivan & Horizon folks
>
>
> AFAIK, Horizon team had conclusion that you will add the specific
> members to xstatic-core, correct ?
> Can I ask you to add the following members ?
> # All of three are heat-dashboard core.
>
> Kazunori Shinohara / ksnhr.tech at gmail.com #myself
> Xinni Ge / xinni.ge1990 at gmail.com
> Keiichi Hikita / keiichi.hikita at gmail.com
>
> Please give me a shout, if we are not on same page or any concern.
>
> Regards,
> Kaz
>
>
> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
> > Hi Ivan, Akihiro,
> >
> >
> > Thanks for your kind arrangement.
> > Looking forward to hearing your decision soon.
> >
> > Regards,
> > Kaz
> >
> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
> >> HI Team,
> >>
> >> From my perspective, I'm OK both with #2 and #3 options. I agree that #4
> >> could be too complicated for us. Anyway, we've got this topic on the meeting
> >> agenda [1] so we'll discuss it there too.
I'll share our decision after > the > >> meeting. > >> > >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon > >> > >> > >> > >> Regards, > >> Ivan Kolodyazhny, > >> http://blog.e0ne.info/ > >> > >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki > wrote: > >>> > >>> Hi Kaz and Ivan, > >>> > >>> Yeah, it is worth discussed officially in the horizon team meeting or > the > >>> mailing list thread to get a consensus. > >>> Hopefully you can add this topic to the horizon meeting agenda. > >>> > >>> After sending the previous mail, I noticed anther option. I see there > are > >>> several options now. > >>> (1) Keep xstatic-core and horizon-core same. > >>> (2) Add specific members to xstatic-core > >>> (3) Add specific horizon-plugin core to xstatic-core > >>> (4) Split core membership into per-repo basis (perhaps too > complicated!!) > >>> > >>> My current vote is (2) as xstatic-core needs to understand what is > xstatic > >>> and how it is maintained. > >>> > >>> Thanks, > >>> Akihiro > >>> > >>> > >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : > >>>> > >>>> Hi Akihiro, > >>>> > >>>> > >>>> Thanks for your comment. > >>>> The background of my request to add us to xstatic-core comes from > >>>> Ivan's comment in last PTG's etherpad for heat-dashboard discussion. > >>>> > >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion > >>>> Line135, "we can share ownership if needed - e0ne" > >>>> > >>>> Just in case, could you guys confirm unified opinion on this matter as > >>>> Horizon team ? > >>>> > >>>> Frankly speaking I'm feeling the benefit to make us xstatic-core > >>>> because it's easier & smoother to manage what we are taking for > >>>> heat-dashboard. > >>>> On the other hand, I can understand what Akihiro you are saying, the > >>>> newly added repos belong to Horizon project & being managed by not > >>>> Horizon core is not consistent. > >>>> Also having exception might make unexpected confusion in near future. > >>>> > >>>> Eventually we will follow your opinion, let me hear Horizon team's > >>>> conclusion. > >>>> > >>>> Regards, > >>>> Kaz > >>>> > >>>> > >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : > >>>> > Hi Kaz, > >>>> > > >>>> > These repositories are under horizon project. It looks better to > keep > >>>> > the > >>>> > current core team. > >>>> > It potentially brings some confusion if we treat some horizon plugin > >>>> > team > >>>> > specially. > >>>> > Reviewing xstatic repos would be a small burden, wo I think it would > >>>> > work > >>>> > without problem even if only horizon-core can approve xstatic > reviews. > >>>> > > >>>> > > >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara : > >>>> >> > >>>> >> Hi Ivan, Horizon folks, > >>>> >> > >>>> >> > >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been landed. > >>>> >> > >>>> >> In project-config for them, I've set same acl-config as the > existing > >>>> >> xstatic repos. > >>>> >> It means only "xstatic-core" can manage the newly created repos on > >>>> >> gerrit. > >>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" > like as > >>>> >> what horizon-core is doing ? > >>>> >> > >>>> >> xstatic-core > >>>> >> https://review.openstack.org/#/admin/groups/385,members > >>>> >> > >>>> >> heat-dashboard-core > >>>> >> https://review.openstack.org/#/admin/groups/1844,members > >>>> >> > >>>> >> Of course, we will surely touch only what we made, just would like > to > >>>> >> manage them smoothly by ourselves. 
> >>>> >> In case we need to touch the other ones, will ask Horizon team for > >>>> >> help. > >>>> >> > >>>> >> Thanks in advance. > >>>> >> > >>>> >> Regards, > >>>> >> Kaz > >>>> >> > >>>> >> > >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : > >>>> >> > Hi Horizon Team, > >>>> >> > > >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin > >>>> >> > option, > >>>> >> > and submitted a patch for it. > >>>> >> > Could you please help to review the patch. > >>>> >> > > >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 > >>>> >> > https://review.openstack.org/#/c/552259/ > >>>> >> > > >>>> >> > Thank you very much. > >>>> >> > > >>>> >> > Best Regards, > >>>> >> > Xinni > >>>> >> > > >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny < > e0ne at e0ne.info> > >>>> >> > wrote: > >>>> >> >> > >>>> >> >> Hi Kaz, > >>>> >> >> > >>>> >> >> Thanks for cleaning this up. I put +1 on both of these patches > >>>> >> >> > >>>> >> >> Regards, > >>>> >> >> Ivan Kolodyazhny, > >>>> >> >> http://blog.e0ne.info/ > >>>> >> >> > >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara > >>>> >> >> > >>>> >> >> wrote: > >>>> >> >>> > >>>> >> >>> Hi Ivan & Horizon folks, > >>>> >> >>> > >>>> >> >>> > >>>> >> >>> Now we are submitting a couple of patches to have the new > xstatic > >>>> >> >>> modules. > >>>> >> >>> Let me request you to have review the following patches. > >>>> >> >>> We need Horizon PTL's +1 to move these forward. > >>>> >> >>> > >>>> >> >>> project-config > >>>> >> >>> https://review.openstack.org/#/c/551978/ > >>>> >> >>> > >>>> >> >>> governance > >>>> >> >>> https://review.openstack.org/#/c/551980/ > >>>> >> >>> > >>>> >> >>> Thanks in advance:) > >>>> >> >>> > >>>> >> >>> Regards, > >>>> >> >>> Kaz > >>>> >> >>> > >>>> >> >>> > >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski > >>>> >> >>> : > >>>> >> >>> > Yes, please do that. We can then discuss in the review about > >>>> >> >>> > technical > >>>> >> >>> > details. > >>>> >> >>> > > >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge > >>>> >> >>> > > >>>> >> >>> > wrote: > >>>> >> >>> >> > >>>> >> >>> >> Hi, Akihiro > >>>> >> >>> >> > >>>> >> >>> >> Thanks for the quick reply. > >>>> >> >>> >> > >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should > not > >>>> >> >>> >> be > >>>> >> >>> >> modified. > >>>> >> >>> >> It is much better to enhance horizon plugin settings, > >>>> >> >>> >> and I think maybe there could be one option like > >>>> >> >>> >> ADD_XSTATIC_MODULES. > >>>> >> >>> >> This option adds the plugin's xstatic files in > >>>> >> >>> >> STATICFILES_DIRS. > >>>> >> >>> >> I am considering to add a bug report to describe it at > first, > >>>> >> >>> >> and > >>>> >> >>> >> give > >>>> >> >>> >> a > >>>> >> >>> >> patch later maybe. > >>>> >> >>> >> Is that ok with the Horizon team? > >>>> >> >>> >> > >>>> >> >>> >> Best Regards. > >>>> >> >>> >> Xinni > >>>> >> >>> >> > >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki > >>>> >> >>> >> > >>>> >> >>> >> wrote: > >>>> >> >>> >>> > >>>> >> >>> >>> Hi Xinni, > >>>> >> >>> >>> > >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge < > xinni.ge1990 at gmail.com>: > >>>> >> >>> >>> > Hello Horizon Team, > >>>> >> >>> >>> > > >>>> >> >>> >>> > I would like to hear about your opinions about how to add > >>>> >> >>> >>> > new > >>>> >> >>> >>> > xstatic > >>>> >> >>> >>> > modules to horizon settings. 
> >>>> >> >>> >>> > > >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party files > >>>> >> >>> >>> > issue, > >>>> >> >>> >>> > thanks > >>>> >> >>> >>> > for > >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing them and > >>>> >> >>> >>> > referencing as > >>>> >> >>> >>> > new > >>>> >> >>> >>> > xstatic-* libs. > >>>> >> >>> >>> > >>>> >> >>> >>> Thanks for moving this forward. > >>>> >> >>> >>> > >>>> >> >>> >>> > So we installed the new xstatic files (not uploaded as > >>>> >> >>> >>> > openstack > >>>> >> >>> >>> > official > >>>> >> >>> >>> > repos yet) in our development environment now, but > hesitate > >>>> >> >>> >>> > to > >>>> >> >>> >>> > decide > >>>> >> >>> >>> > how to > >>>> >> >>> >>> > add the new installed xstatic lib path to > STATICFILES_DIRS > >>>> >> >>> >>> > in > >>>> >> >>> >>> > openstack_dashboard.settings so that the static files > could > >>>> >> >>> >>> > be > >>>> >> >>> >>> > automatically > >>>> >> >>> >>> > collected by *collectstatic* process. > >>>> >> >>> >>> > > >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in > >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the relevant > >>>> >> >>> >>> > static > >>>> >> >>> >>> > fils > >>>> >> >>> >>> > are > >>>> >> >>> >>> > added > >>>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon plugin > >>>> >> >>> >>> > dashboard. > >>>> >> >>> >>> > We may want new plugin setting keywords ( something > similar > >>>> >> >>> >>> > to > >>>> >> >>> >>> > ADD_JS_FILES) > >>>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update > >>>> >> >>> >>> > STATICFILES_DIRS). > >>>> >> >>> >>> > >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add xstatic > >>>> >> >>> >>> modules > >>>> >> >>> >>> through horizon plugin settings. I don't think it is a good > >>>> >> >>> >>> idea > >>>> >> >>> >>> to > >>>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon > >>>> >> >>> >>> plugin > >>>> >> >>> >>> usages. It makes difficult to track why and where a xstatic > >>>> >> >>> >>> module > >>>> >> >>> >>> in > >>>> >> >>> >>> BASE_XSTATIC_MODULES is used. > >>>> >> >>> >>> Multiple horizon plugins can add a same entry, so horizon > code > >>>> >> >>> >>> to > >>>> >> >>> >>> handle plugin settings should merge multiple entries to a > >>>> >> >>> >>> single > >>>> >> >>> >>> one > >>>> >> >>> >>> hopefully. > >>>> >> >>> >>> My vote is to enhance the horizon plugin settings. 
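To ground the options discussed in this thread: from a plugin author's side, the proposed setting and the way horizon could fold it into STATICFILES_DIRS might look like the sketch below. The option name follows the ADD_XSTATIC_MODULES proposal above; the exact format is a guess for illustration, not a merged design:

    # In the plugin's "enabled" snippet, alongside ADD_INSTALLED_APPS /
    # ADD_JS_FILES (hypothetical value):
    ADD_XSTATIC_MODULES = ['xstatic.pkg.angular_material']

    # Horizon's settings machinery could then extend STATICFILES_DIRS,
    # roughly like this:
    import importlib

    def xstatic_dirs(module_names):
        """Map xstatic modules to the on-disk dirs Django should collect."""
        dirs = []
        for name in module_names:
            module = importlib.import_module(name)
            # Every xstatic package exposes BASE_DIR, the path to its
            # bundled static data.
            dirs.append(module.BASE_DIR)
        return dirs

    # e.g. STATICFILES_DIRS += xstatic_dirs(ADD_XSTATIC_MODULES)

Merging duplicate entries across plugins, as Akihiro notes, would happen in the same place.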
> >>>> >> >>> >>> > >>>> >> >>> >>> Akihiro > >>>> >> >>> >>> > >>>> >> >>> >>> > > >>>> >> >>> >>> > Looking forward to hearing any suggestions from you guys, > >>>> >> >>> >>> > and > >>>> >> >>> >>> > Best Regards, > >>>> >> >>> >>> > > >>>> >> >>> >>> > Xinni Ge > >>>> >> >>> >>> > > >>>> >> >>> >>> > > >>>> >> >>> >>> > > >>>> >> >>> >>> > > >>>> >> >>> >>> > > >>>> >> >>> >>> > ______________________________ > ____________________________________________ > >>>> >> >>> >>> > OpenStack Development Mailing List (not for usage > questions) > >>>> >> >>> >>> > Unsubscribe: > >>>> >> >>> >>> > > >>>> >> >>> >>> > OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >>>> >> >>> >>> > > >>>> >> >>> >>> > > >>>> >> >>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >>>> >> >>> >>> > > >>>> >> >>> >>> > >>>> >> >>> >>> > >>>> >> >>> >>> > >>>> >> >>> >>> > >>>> >> >>> >>> > >>>> >> >>> >>> ______________________________ > ____________________________________________ > >>>> >> >>> >>> OpenStack Development Mailing List (not for usage > questions) > >>>> >> >>> >>> Unsubscribe: > >>>> >> >>> >>> OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >>>> >> >>> >>> > >>>> >> >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >>>> >> >>> >> > >>>> >> >>> >> > >>>> >> >>> >> > >>>> >> >>> >> > >>>> >> >>> >> -- > >>>> >> >>> >> 葛馨霓 Xinni Ge > >>>> >> >>> >> > >>>> >> >>> >> > >>>> >> >>> >> > >>>> >> >>> >> > >>>> >> >>> >> ______________________________ > ____________________________________________ > >>>> >> >>> >> OpenStack Development Mailing List (not for usage questions) > >>>> >> >>> >> Unsubscribe: > >>>> >> >>> >> OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >>>> >> >>> >> > >>>> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >>>> >> >>> >> > >>>> >> >>> > > >>>> >> >>> > > >>>> >> >>> > > >>>> >> >>> > > >>>> >> >>> > > >>>> >> >>> > ____________________________________________________________ > ______________ > >>>> >> >>> > OpenStack Development Mailing List (not for usage questions) > >>>> >> >>> > Unsubscribe: > >>>> >> >>> > OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >>>> >> >>> > > >>>> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >>>> >> >>> > > >>>> >> >>> > >>>> >> >>> > >>>> >> >>> > >>>> >> >>> > >>>> >> >>> ____________________________________________________________ > ______________ > >>>> >> >>> OpenStack Development Mailing List (not for usage questions) > >>>> >> >>> Unsubscribe: > >>>> >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >>>> >> >> > >>>> >> >> > >>>> >> >> > >>>> >> >> > >>>> >> >> > >>>> >> >> ____________________________________________________________ > ______________ > >>>> >> >> OpenStack Development Mailing List (not for usage questions) > >>>> >> >> Unsubscribe: > >>>> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >>>> >> >> > >>>> >> > > >>>> >> > > >>>> >> > > >>>> >> > -- > >>>> >> > 葛馨霓 Xinni Ge > >>>> >> > > >>>> >> > > >>>> >> > > >>>> >> > ____________________________________________________________ > ______________ > >>>> >> > OpenStack Development Mailing List (not for usage questions) > >>>> >> > Unsubscribe: > >>>> >> > 
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >>>> >> > > >>>> >> > >>>> >> > >>>> >> ____________________________________________________________ > ______________ > >>>> >> OpenStack Development Mailing List (not for usage questions) > >>>> >> Unsubscribe: > >>>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > ____________________________________________________________ > ______________ > >>>> > OpenStack Development Mailing List (not for usage questions) > >>>> > Unsubscribe: > >>>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>>> > > >>>> > >>>> > >>>> ____________________________________________________________ > ______________ > >>>> OpenStack Development Mailing List (not for usage questions) > >>>> Unsubscribe: > >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > >>> > >>> ____________________________________________________________ > ______________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Wed Mar 28 20:58:27 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Wed, 28 Mar 2018 21:58:27 +0100 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <20180328191443.GA26845@sm-xps> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> <20180328191443.GA26845@sm-xps> Message-ID: <1522270707.4003.32.camel@redhat.com> On Wed, 2018-03-28 at 14:14 -0500, Sean McGinnis wrote: > On Thu, Mar 22, 2018 at 10:43:45AM +0000, Stephen Finucane wrote: > > On Wed, 2018-03-21 at 09:57 -0500, Sean McGinnis wrote: > > > On Wed, Mar 21, 2018 at 10:49:02AM +0000, Stephen Finucane wrote: > > > > tl;dr: Make sure you stop using pbr's autodoc feature before converting > > > > them to the new PTI for docs. > > > > > > > > [snip] > > > > > > > > That's unfortunate. What we really need is a migration path from the > > 'pbr' way of doing things to something else. I see three possible > > avenues at this point in time: > > > > 1. Start using 'sphinx.ext.autosummary'. Apparently this can do similar > > things to 'sphinx-apidoc' but it takes the form of an extension. 
> > From my brief experiments, the output generated from this is
> > radically different and far less comprehensive than what 'sphinx-
> > apidoc' generates. However, it supports templating so we could
> > probably configure this somehow and add our own special directive
> > somewhere like 'openstackdocstheme'
> > 2. Push for the 'sphinx.ext.apidoc' extension I proposed some time back
> > against upstream Sphinx [1]. This essentially does what the PBR
> > extension does but moves configuration into 'conf.py'. However, this
> > is currently held up as I can't adequately explain the differences
> > between this and 'sphinx.ext.autosummary' (there's definite overlap
> > but I don't understand 'autosummary' well enough to compare them).
> > 3. Modify the upstream jobs that detect the pbr integration and have
> > them run 'sphinx-apidoc' before 'sphinx-build'. This is the least
> > technically appealing approach as it still leaves us unable to build
> > stuff locally and adds yet more "magic" to the gate, but it does let
> > us progress.
>
> It's not mentioned here, but I discovered today that Cinder is using the
> sphinx.ext.autodoc module. Is there any issue with using this?
>

Nope - sphinx-apidoc and the like use autodoc under the hood. You can see this by checking the output in 'contributor/api' or the like.

Stephen

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jaypipes at gmail.com Wed Mar 28 22:35:54 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 28 Mar 2018 18:35:54 -0400
Subject: [openstack-dev] [nova] Hard fail if you try to rename an AZ with instances in it?
In-Reply-To: <100034a8-57f3-1eea-a792-97ca1328967c@gmail.com>
References: <2c6ff74e-65e9-d7e2-369e-d7c6fd37798a@gmail.com> <4460ff7f-7a1b-86ac-c37e-dbd7a42631ed@gmail.com> <100034a8-57f3-1eea-a792-97ca1328967c@gmail.com>
Message-ID: <594cca34-a710-0c4b-200b-45f892e98581@gmail.com>

On 03/28/2018 03:35 PM, Matt Riedemann wrote:
> On 3/27/2018 10:37 AM, Jay Pipes wrote:
>>
>> If we want to actually fix the issue once and for all, we need to make
>> availability zones a real thing that has a permanent identifier (UUID)
>> and store that permanent identifier in the instance (not the instance
>> metadata).
>>
>> Or we can continue to paper over major architectural weaknesses like this.
>
> Stepping back a second from the rest of this thread, what if we do the
> hard fail bug fix thing, which could be backported to stable branches,
> and then we have the option of completely re-doing this with aggregate
> UUIDs as the key rather than the aggregate name? Because I think the
> former could get done in Rocky, but the latter probably not.

I'm fine with that (and was fine with it before; I was just stating that solving the problem long-term requires different thinking).

Best,
-jay

From doug at doughellmann.com Wed Mar 28 22:53:03 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 28 Mar 2018 18:53:03 -0400
Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects
In-Reply-To: <1521110096-sup-3634@lrrr.local>
References: <1521110096-sup-3634@lrrr.local>
Message-ID: <1522276901-sup-6868@lrrr.local>

We're making good progress. Some of the important parts of the global job changes are in place.
There are still a lot of open patches to add the lower-constraints jobs to repos, however. Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400: [...] > What I Want to Do > ----------------- > > 1. Update the requirements-check test job to change the check for > an exact match to be a check for compatibility with the > upper-constraints.txt value. This change has merged: https://review.openstack.org/#/c/555402/ There are some additional changes to that job still in the queue. In particular, the change in https://review.openstack.org/#/c/557034/3 will start enforcing some rules to ensure the lower-constraints.txt settings stay at the bottom of the requirements files. Because we had some communication issues and did a few steps out of order, when this patch lands projects that have approved bot-proposed requirements updates may find that their requirements and lower-constraints files no longer match, which may lead to job failures. It should be easy enough to fix the problems by making the values in the constraints files match the values in the requirements files (by editing either set of files, depending on what is appropriate). I apologize for any inconvenience this causes. > 2. We should stop syncing dependencies by turning off the > propose-update-requirements job entirely. This is also done: https://review.openstack.org/#/c/555426/ > 3. Remove the minimum specifications from the global requirements > list to make clear that the global list is no longer expressing > minimums. > > This clean-up step has been a bit more controversial among the > requirements team, but I think it is a key piece. As the minimum > versions of dependencies diverge within projects, there will no > longer *be* a real global set of minimum values. Tracking a list of > "highest minimums", would either require rebuilding the list from the > settings in all projects, or requiring two patches to change the > minimum version of a dependency within a project. > > Maintaining a global list of minimums also implies that we > consider it OK to run OpenStack as a whole with that list. This > message conflicts with the message we've been sending about the > upper constraints list since that was established, which is that > we have a known good list of versions and deploying all of > OpenStack with different versions of those dependencies is > untested. We've decided not to do this step, because some of the other requirements team members want to use those lower bound values. Projects are no longer required to be consistent with the lower bounds in that global file, however. > Testing Lower Bounds of Dependencies > ------------------------------------ [...] > > The results of those steps can be combined into a single patch and > proposed to the project. To avoid overwhelming zuul's job configuration > resolver, we need to propose the patches in separate batches of > about 10 repos at a time. This is all mostly scriptable, so I will > write a script and propose the patches (unless someone else wants to do > it all -- we need a single person to keep up with how many patches we're > proposing at one time). > > The point of creating the initial lower-constraints.txt file is not > necessarily to be "accurate" with the constraints immediately, but > to have something to work from. After the patches are proposed, > please either plan to land them or vote -2 indicating that you don't > want a job like that on that repo. If you want to change the > constraints significantly, please do that in a separate patch. 
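The initial lower-constraints.txt Doug describes is essentially the requirements file with each minimum turned into an exact pin. A rough sketch of that transformation, illustrative only and not the actual script used for the proposed patches:

    # Turn "oslo.config>=5.2.0,!=4.5.0" into "oslo.config==5.2.0".
    import re

    def lower_constraints(requirement_lines):
        pins = []
        for line in requirement_lines:
            line = line.split('#')[0].strip()   # drop comments
            if not line:
                continue
            name = re.split(r'[><=!~;\s\[]', line, maxsplit=1)[0]
            match = re.search(r'>=\s*([0-9][^,;\s]*)', line)
            if match:
                pins.append('%s==%s' % (name, match.group(1)))
        return pins

    # e.g. lower_constraints(open('requirements.txt'))
    #      -> ['oslo.config==5.2.0', ...]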
> With ~325 of them, I'm not going to be able to keep up with everyone's
> separate needs and this is all meant to just establish the initial
> version of the job anyway.

I ended up needing fewer patches than expected because many of the projects receiving requirements syncs didn't have unit test jobs (ansible roles, and some other packaging-related things, that are tested other ways).

Approvals have been making good progress. As I say above, if you have minor issues with the patch, either propose a fix on top of it or take it over and fix it directly. Even though there are fewer patches than I expected, I'm still not going to be able to keep up with lots of individual differences or merge conflicts in projects. Help wanted.

> For projects that currently only support python 2 we can modify the
> proposed patches to not set base-python to use python3.
>
> You will have noticed that this will only apply to unit test jobs.
> Projects are free to use the results to add their own functional
> test jobs using the same lower-constraints.txt files, but that's
> up to them to do.

I'm not aware of anyone trying to do this, yet. If you are, please let us know how it's going.

Doug

From sundar.nadathur at intel.com Wed Mar 28 23:03:53 2018
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Wed, 28 Mar 2018 16:03:53 -0700
Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow
In-Reply-To: <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc>
References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc>
Message-ID: <7bb9a029-dccd-e92f-0a4b-cdc528ccc71a@intel.com>

Thanks, Eric. Looks like there are no good solutions even as candidates, but only options with varying levels of unacceptability. It is funny that the option considered the least unacceptable is to let the problem happen and then fail the request (the last one in your list).

Could I ask what the objection is to the scheme that applies multiple traits and removes one as needed, apart from the fact that it has races?

Regards,
Sundar

On 3/28/2018 11:48 AM, Eric Fried wrote:
> Sundar-
>
> We're running across this issue in several places right now. One
> thing that's definitely not going to get traction is
> automatically/implicitly tweaking inventory in one resource class when
> an allocation is made on a different resource class (whether in the same
> or different RPs).
>
> Slightly less of a nonstarter, but still likely to get significant
> push-back, is the idea of tweaking traits on the fly. For example, your
> vGPU case might be modeled as:
>
> PGPU_RP: {
>     inventory: {
>         CUSTOM_VGPU_TYPE_A: 2,
>         CUSTOM_VGPU_TYPE_B: 4,
>     },
>     traits: [
>         CUSTOM_VGPU_TYPE_A_CAPABLE,
>         CUSTOM_VGPU_TYPE_B_CAPABLE,
>     ]
> }
>
> The request would come in for
> resources=CUSTOM_VGPU_TYPE_A:1&required=CUSTOM_VGPU_TYPE_A_CAPABLE, resulting
> in an allocation of CUSTOM_VGPU_TYPE_A:1. Now while you're processing
> that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP.
> So it doesn't matter that there's still inventory of
> CUSTOM_VGPU_TYPE_B:4, because a request including
> required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP.
> There's of course a window between when the initial allocation is made
> and when you tweak the trait list. In that case you'll just have to
> fail the loser. This would be like any other failure in e.g.
the spawn > process; it would bubble up, the allocation would be removed; retries > might happen or whatever. > > Like I said, you're likely to get a lot of resistance to this idea as > well. (Though TBH, I'm not sure how we can stop you beyond -1'ing your > patches; there's nothing about placement that disallows it.) > > The simple-but-inefficient solution is simply that we'd still be able > to make allocations for vGPU type B, but you would have to fail right > away when it came down to cyborg to attach the resource. Which is code > you pretty much have to write anyway. It's an improvement if cyborg > gets to be involved in the post-get-allocation-candidates > weighing/filtering step, because you can do that check at that point to > help filter out the candidates that would fail. Of course there's still > a race condition there, but it's no different than for any other resource. > > efried > > On 03/28/2018 12:27 PM, Nadathur, Sundar wrote: >> Hi Eric and all, >>     I should have clarified that this race condition happens only for >> the case of devices with multiple functions. There is a prior thread >> >> about it. I was trying to get a solution within Cyborg, but that faces >> this race condition as well. >> >> IIUC, this situation is somewhat similar to the issue with vGPU types >> >> (thanks to Alex Xu for pointing this out). In the latter case, we could >> start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, after >> consuming a unit of  vGPU-type-a, ideally the inventory should change >> to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators, >> we start with an RP inventory of (region-type-A: 1, function-X: 4). But, >> after consuming a unit of that function, ideally the inventory should >> change to: (region-type-A: 0, function-X: 3). >> >> I understand that this approach is controversial :) Also, one difference >> from the vGPU case is that the number and count of vGPU types is static, >> whereas with FPGAs, one could reprogram it to result in more or fewer >> functions. That said, we could hopefully keep this analogy in mind for >> future discussions. >> >> We probably will not support multi-function accelerators in Rocky. This >> discussion is for the longer term. >> >> Regards, >> Sundar >> >> On 3/23/2018 12:44 PM, Eric Fried wrote: >>> Sundar- >>> >>> First thought is to simplify by NOT keeping inventory information in >>> the cyborg db at all. The provider record in the placement service >>> already knows the device (the provider ID, which you can look up in the >>> cyborg db) the host (the root_provider_uuid of the provider representing >>> the device) and the inventory, and (I hope) you'll be augmenting it with >>> traits indicating what functions it's capable of. That way, you'll >>> always get allocation candidates with devices that *can* load the >>> desired function; now you just have to engage your weigher to prioritize >>> the ones that already have it loaded so you can prefer those. >>> >>> Am I missing something? >>> >>> efried >>> >>> On 03/22/2018 11:27 PM, Nadathur, Sundar wrote: >>>> Hi all, >>>>     There seems to be a possibility of a race condition in the >>>> Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to >>>> the proposed Cyborg/Nova spec >>>> >>>> for details.) >>>> >>>> Consider the scenario where the flavor specifies a resource class for a >>>> device type, and also specifies a function (e.g. encrypt) in the extra >>>> specs. 
The Nova scheduler would only track the device type as a >>>> resource, and Cyborg needs to track the availability of functions. >>>> Further, to keep it simple, say all the functions exist all the time (no >>>> reprogramming involved). >>>> >>>> To recap, here is the scheduler flow for this case: >>>> >>>> * A request spec with a flavor comes to Nova conductor/scheduler. The >>>> flavor has a device type as a resource class, and a function in the >>>> extra specs. >>>> * Placement API returns the list of RPs (compute nodes) which contain >>>> the requested device types (but not necessarily the function). >>>> * Cyborg will provide a custom filter which queries Cyborg DB. This >>>> needs to check which hosts contain the needed function, and filter >>>> out the rest. >>>> * The scheduler selects one node from the filtered list, and the >>>> request goes to the compute node. >>>> >>>> For the filter to work, the Cyborg DB needs to maintain a table with >>>> triples of (host, function type, #free units). The filter checks if a >>>> given host has one or more free units of the requested function type. >>>> But, to keep the # free units up to date, Cyborg on the selected compute >>>> node needs to notify the Cyborg API to decrement the #free units when an >>>> instance is spawned, and to increment them when resources are released. >>>> >>>> Therein lies the catch: this loop from the compute node to controller is >>>> susceptible to race conditions. For example, if two simultaneous >>>> requests each ask for function A, and there is only one unit of that >>>> available, the Cyborg filter will approve both, both may land on the >>>> same host, and one will fail. This is because Cyborg on the controller >>>> does not decrement resource usage due to one request before processing >>>> the next request. >>>> >>>> This is similar to this previous Nova scheduling issue >>>> . >>>> That was solved by having the scheduler claim a resource in Placement >>>> for the selected node. I don't see an analog for Cyborg, since it would >>>> not know which node is selected. >>>> >>>> Thanks in advance for suggestions and solutions. 
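The check-then-decrement loop described above is a textbook lost update. A compact illustration, with made-up table and column names, using sqlite3-style placeholders; the usual fix is to fold the check and the decrement into one conditional UPDATE so the database arbitrates:

    # Racy version: two requests can both pass the check before either
    # write lands, overcommitting the last free unit.
    def claim_racy(conn, host, function):
        row = conn.execute(
            'SELECT free_units FROM functions WHERE host = ? AND type = ?',
            (host, function)).fetchone()
        if row is None or row[0] < 1:
            return False
        # <-- a concurrent request can run the same SELECT here
        conn.execute(
            'UPDATE functions SET free_units = free_units - 1 '
            'WHERE host = ? AND type = ?', (host, function))
        return True

    # Atomic version: only one of two racers sees rowcount == 1.
    def claim_atomic(conn, host, function):
        cur = conn.execute(
            'UPDATE functions SET free_units = free_units - 1 '
            'WHERE host = ? AND type = ? AND free_units >= 1',
            (host, function))
        return cur.rowcount == 1

This removes the lost update inside one service, though it does not by itself address Sundar's larger point that the scheduler-side filter runs before the compute node reports the decrement.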
>>>> Regards,
>>>> Sundar

From tony at bakeyournoodle.com Wed Mar 28 23:14:22 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 29 Mar 2018 10:14:22 +1100
Subject: [openstack-dev] [stable][release] Remove complex ACL changes around releases
In-Reply-To: <8e7bab5c-0d01-f6cd-a624-c61768459025@ham.ie>
References: <9937acbe-f5b6-f112-1bfd-4147fff42116@openstack.org> <8e7bab5c-0d01-f6cd-a624-c61768459025@ham.ie>
Message-ID: <20180328231422.GI13389@thor.bakeyournoodle.com>

On Wed, Mar 28, 2018 at 03:34:32PM +0100, Graham Hayes wrote:
> It is more complex than just "joining that team" if the project follows
> stable policy. the stable team have to approve the additions, and do
> reject people trying to join them.

This is true, but when we (I) say no, I explain what's required to get $project-stable-maint for the requested people. That typically boils down to "do the reviews that show they grok the stable policy", and we set a short runway (typically 3 months). It is absolutely the same as joining *any* core team.

> I don't want to have a release where
> someone has to self approve / ninja approve patches due to cores *not*
> having the access rights that they previously had.

You can always ping stable-maint-core to avoid that. Looking at recent stable reviews, stable-maint-core [1] and release-managers [2] have been doing a pretty good job there. And as this will happen in July/August there's plenty of time for it to be a non-issue.

Yours Tony.

[1] https://review.openstack.org/#/admin/groups/101,members
[2] https://review.openstack.org/#/admin/groups/1098,members

From doug at doughellmann.com Wed Mar 28 23:37:19 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 28 Mar 2018 19:37:19 -0400
Subject: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT
In-Reply-To: <87woxwymr1.fsf@meyer.lemoncheese.net>
References: <87woxwymr1.fsf@meyer.lemoncheese.net>
Message-ID: <1522280173-sup-6848@lrrr.local>

Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> Hi,
>
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior. This shouldn't be a significant change for
This shouldn't be a significant change for > those using legacy devstack jobs (but you may want to be aware of it). > It is more significant for new-style devstack jobs. > > The change is at https://review.openstack.org/549252 > > In summary, when this change lands, new-style devstack jobs should no > longer need to set LIBS_FROM_GIT explicitly. Existing legacy jobs > should be unaffected (but there is a change to the verification process > performed by devstack). > > > Currently devstack expects the contents of LIBS_FROM_GIT to be > exclusively a list of python packages which, obviously, should be > installed from git and not pypi. It is used for two purposes: > determining whether an individual package should be installed from git, > and verifying that a package was installed from git. > > In the old devstack-gate system, we prepared many of the common git > repos, whether they were used or not. So LIBS_FROM_GIT was created to > indicate that in some cases devstack should ignore those repos and > install from pypi instead. In other words, its original purpose was > purely as a method of selecting whether a devstack-gate prepared repo > should be used or ignored. > > In Zuul v3, we have a good way to indicate whether a job is going to use > a repo or not -- add it to "required-projects". Considering that, the > LIBS_FROM_GIT variable is redundant. So my patch causes it to be > automatically generated based on the contents of required-projects. > This means that job authors don't need to list every required repository > twice. > > However, a naïve implementation of that runs afoul of the second use of > LIBS_FROM_GIT -- verifying that python packages are installed from git. > > This usage was added later, after a typographical error ("-" vs "_" in a > python package name) in a constraints file caused us not to install a > package from git. Now devstack verifies that every package in > LIBS_FROM_GIT is installed. However, Zuul doesn't know that devstack, > tempest, and other packages aren't installed. So adding them > automatically to LIBS_FROM_GIT will cause devstack to fail. > > My change modifies this verification to only check that packages > mentioned in LIBS_FROM_GIT that devstack tried to install were actually > installed. I realize that stated as such this sounds tautological, > however, this check is still valid -- it would have caught the original > error that prompted the check in the first case. > > What the revised check will no longer handle is a typo in a legacy job. > If someone enters a typo into LIBS_FROM_GIT, it will no longer fail. > However, I think the risk is worthwhile -- particularly since it is in > service of a system which eliminates the opportunity to introduce such > an error in the first place. > > To see the result in action, take a look at this change which, in only a > few lines, implements what was a significantly more complex undertaking > in Zuul v2: > > https://review.openstack.org/548331 > > Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for > some reason, you require a new-style devstack job to manually set > LIBS_FROM_GIT, that will still work. Simply define the variable as > normal, and the module which generates the devstack config will bypass > automatic generation if the variable is already set. > > -Jim > How does this apply to uses of devstack outside of zuul, such as in a local development environment? 
Doug

From ekcs.openstack at gmail.com Thu Mar 29 00:19:35 2018
From: ekcs.openstack at gmail.com (Eric K)
Date: Wed, 28 Mar 2018 17:19:35 -0700
Subject: [openstack-dev] [Congress] updated backlog
Message-ID:

Here's an updated backlog following Rocky discussions.
https://etherpad.openstack.org/p/congress-task-priority

Please feel free to comment and suggest additions/deletions and changes in priority.

From ekcs.openstack at gmail.com Thu Mar 29 00:47:40 2018
From: ekcs.openstack at gmail.com (Eric K)
Date: Wed, 28 Mar 2018 17:47:40 -0700
Subject: [openstack-dev] [mistral][tempest][congress] import or retain mistral tempest service client
In-Reply-To:
References:
Message-ID:

Thank you, Dougal and Ghanshyam for the responses! What I can gather is: service client registration > import service client > retaining a copy. So the best thing for Congress to do now is to import the service client.

On 3/17/18, 9:00 PM, "Ghanshyam Mann" wrote:

>Hi All,
>
>Sorry for the late response; I kept this mail unread but forgot to
>respond. Reply inline.
>
>On Fri, Mar 16, 2018 at 8:08 PM, Dougal Matthews wrote:
>>
>>
>> On 13 March 2018 at 18:51, Eric K wrote:
>>>
>>> Hi Mistral folks and others,
>>>
>>> I'm working on Congress tempest tests [1] for integration with
>>> Mistral. In the tests, we use a Mistral service client to call
>>> Mistral APIs and compare results against those obtained by the
>>> Mistral driver for Congress.
>>>
>>> Regarding the service client, Congress can either import directly from
>>> the Mistral tempest plugin [2] or maintain its own copy within the
>>> Congress tempest plugin.
>
>Maintaining an own copy will lead to a lot of issues and a lot of
>duplicate code among many plugins.
>
>>> I'm not sure whether the Mistral team expects the service
>>> client to be internal use only, so I hope to hear folks' thoughts on
>>> which approach is preferred. Thanks very much!
>>
>>
>> I don't have a strong opinion here. I am happy for you to use the Mistral
>> service client, but it will be hard to guarantee stability. It has been
>> stable (since it hasn't changed), but we have a tempest refactor planned
>> (once we move the final tempest tests from mistralclient to
>> mistral-tempest-plugin). So there is a fair chance we will break the API
>> at that point; however, I don't know when it will happen, as nobody is
>> currently working on it.
>
>From the QA team's side, service clients are the main interface which
>can be used across tempest plugins. For example, Congress needs many
>other service clients from other Tempest plugins, like Mistral's.
>Tempest also declares all of its in-tree service clients as a library
>interface and we maintain them with backward compatibility [3]. This
>way we make these service clients usable outside of Tempest as well,
>to avoid duplicate code/interfaces.
>
>For service clients defined in Tempest plugins (like the Mistral service
>clients), we suggest (strongly) the same process, which is to declare
>the plugin's service clients as a stable interface. That gives 2
>advantages:
>1. By this you make sure that you are not allowing changes to the API
>calling interface (service clients), which indirectly means you are not
>allowing changes to the APIs. That makes your tempest plugin testing
>more reliable.
>
>2. Your service clients can be used in other Tempest plugins to avoid
>duplicate code/interfaces. If any other plugin uses your service
>clients, it also tests your project, so it is good to help them by
>providing the required interface as stable.
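The registration mechanism gmann describes looks roughly like the sketch below for a plugin author. The shape follows the Manila example he cites; the Mistral-specific service name, module path, and client class here are assumptions for illustration, not the merged mistral-tempest-plugin code:

    # Sketch of a tempest plugin registering its service client so other
    # plugins can discover it instead of importing the module directly.
    from tempest import config
    from tempest.test_discover import plugins

    class MistralTempestPlugin(plugins.TempestPlugin):
        # load_tests/register_opts/get_opt_lists omitted for brevity;
        # a real plugin must implement them too.
        def get_service_clients(self):
            params = config.service_client_config()
            params.update({
                'name': 'workflow',
                'service_version': 'workflow.v2',
                'module_path':
                    'mistral_tempest_tests.services.v2.mistral_client',
                'client_names': ['MistralClientV2'],
            })
            return [params]

Consuming plugins such as Congress then obtain the registered client from tempest's service clients manager rather than importing the Mistral plugin's module path themselves.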
>
>Initial idea of owning the service clients in their respective plugins
>was to share them among plugins for integrated testing of more than
>one openstack service.
>
>Now on usage of service clients, Tempest provides a better way to do so
>than importing them directly [4]. You can see the example for Manila's
>tempest plugin [5]. This gives the advantage of having your registered
>service clients discovered in other Tempest plugins automatically; they
>do not need to import other plugins' service clients. QA is hoping that
>each tempest plugin will move to the new service client registration
>process.
>
>Overall, we recommend having service clients as a stable interface so
>that other plugins can use them and test your projects in a more
>integrated way.
>
>>
>> I have cc'ed Chandan - hopefully he can provide some input. He has
>> advised me and the Mistral team regarding tempest before.
>>
>>>
>>> Eric
>>>
>>> [1] https://review.openstack.org/#/c/538336/
>>> [2] https://github.com/openstack/mistral-tempest-plugin/blob/master/mistral_tempest_tests/services/v2/mistral_client.py
>
>[3] http://git.openstack.org/cgit/openstack/tempest/tree/tempest/lib/services
>[4] https://docs.openstack.org/tempest/latest/plugin.html#get_service_clients()
>[5] https://review.openstack.org/#/c/334596/34
>
>-gmann
>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From melwittt at gmail.com Thu Mar 29 00:59:16 2018
From: melwittt at gmail.com (melanie witt)
Date: Wed, 28 Mar 2018 17:59:16 -0700
Subject: [openstack-dev] [nova] Rocky spec review day
In-Reply-To: <960aea25-5e90-1423-ad51-c8016de3e967@gmail.com>
References: <960aea25-5e90-1423-ad51-c8016de3e967@gmail.com>
Message-ID:

On Wed, 21 Mar 2018 21:48:40 -0700, Melanie Witt wrote:
> On Tue, 20 Mar 2018 16:47:58 -0700, Melanie Witt wrote:
>> The past several cycles, we've had a spec review day in the cycle where
>> reviewers focus on specs and iterating quickly with spec authors for the
>> day. Spec freeze is April 19 so I wanted to get some input from all of
>> you about what day would work best for a spec review day.
>>
>> I was thinking that 2-3 weeks ahead of spec freeze would be appropriate,
>> so that would be March 27 (next week) or April 3 if we do it on a Tuesday.
>
> Thanks for all who replied on the thread. There was consensus that
> earlier is better, so let's do the spec review day next week: Tuesday
> March 27.

Thank you to all who participated in the spec review day yesterday. We approved 2 specs, merged some spec amendments, and many other specs received feedback from reviewers.
As a reminder, the spec freeze date is at r-2, June 7 this cycle (it has been moved out because of the new review runways effort going on [0]). As we work through the runways queue, we will be looking at approving more specs leading up to r-2, if we make sufficient progress on the queue.

Thanks,
-melanie

[0] https://etherpad.openstack.org/p/nova-runways-rocky

From ksnhr.tech at gmail.com Thu Mar 29 00:59:16 2018
From: ksnhr.tech at gmail.com (Kaz Shinohara)
Date: Thu, 29 Mar 2018 09:59:16 +0900
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To:
References:
Message-ID:

Hi Ivan,

Thank you very much. I've confirmed that all of us have been added to xstatic-core.

As discussed, we will focus on the following repos, which we added for heat-dashboard, and will not touch other xstatic repos as core.

xstatic-angular-material
xstatic-angular-notify
xstatic-angular-uuid
xstatic-angular-vis
xstatic-filesaver
xstatic-js-yaml
xstatic-json2yaml
xstatic-vis

Regards,
Kaz

2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny :
> Hi Kaz,
>
> Don't worry, we're on the same page with you. I added both you, Xinni and
> Keiichi to the xstatic-core group. Thank you for your contributions!
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>
> On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara wrote:
>>
>> Hi Ivan & Horizon folks
>>
>>
>> AFAIK, Horizon team had conclusion that you will add the specific
>> members to xstatic-core, correct ?
>> Can I ask you to add the following members ?
>> # All of three are heat-dashboard core.
>>
>> Kazunori Shinohara / ksnhr.tech at gmail.com #myself
>> Xinni Ge / xinni.ge1990 at gmail.com
>> Keiichi Hikita / keiichi.hikita at gmail.com
>>
>> Please give me a shout, if we are not on same page or any concern.
>>
>> Regards,
>> Kaz
>>
>>
>> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara :
>> > Hi Ivan, Akihiro,
>> >
>> >
>> > Thanks for your kind arrangement.
>> > Looking forward to hearing your decision soon.
>> >
>> > Regards,
>> > Kaz
>> >
>> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny :
>> >> HI Team,
>> >>
>> >> From my perspective, I'm OK both with #2 and #3 options. I agree that
>> >> #4
>> >> could be too complicated for us. Anyway, we've got this topic on the
>> >> meeting
>> >> agenda [1] so we'll discuss it there too. I'll share our decision after
>> >> the
>> >> meeting.
>> >>
>> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon
>> >>
>> >>
>> >>
>> >> Regards,
>> >> Ivan Kolodyazhny,
>> >> http://blog.e0ne.info/
>> >>
>> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki
>> >> wrote:
>> >>>
>> >>> Hi Kaz and Ivan,
>> >>>
>> >>> Yeah, it is worth discussed officially in the horizon team meeting or
>> >>> the
>> >>> mailing list thread to get a consensus.
>> >>> Hopefully you can add this topic to the horizon meeting agenda.
>> >>>
>> >>> After sending the previous mail, I noticed anther option. I see there
>> >>> are
>> >>> several options now.
>> >>> (1) Keep xstatic-core and horizon-core same.
>> >>> (2) Add specific members to xstatic-core
>> >>> (3) Add specific horizon-plugin core to xstatic-core
>> >>> (4) Split core membership into per-repo basis (perhaps too
>> >>> complicated!!)
>> >>>
>> >>> My current vote is (2) as xstatic-core needs to understand what is
>> >>> xstatic
>> >>> and how it is maintained.
>> >>>
>> >>> Thanks,
>> >>> Akihiro
>> >>>
>> >>>
>> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara :
>> >>>>
>> >>>> Hi Akihiro,
>> >>>>
>> >>>>
>> >>>> Thanks for your comment.
>> >>>> The background of my request to add us to xstatic-core comes from
>> >>>> Ivan's comment in the last PTG's etherpad for the heat-dashboard
>> >>>> discussion.
>> >>>>
>> >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion
>> >>>> Line135, "we can share ownership if needed - e0ne"
>> >>>>
>> >>>> Just in case, could you guys confirm a unified opinion on this
>> >>>> matter as the Horizon team?
>> >>>>
>> >>>> Frankly speaking, I feel the benefit of making us xstatic-core
>> >>>> because it's easier & smoother to manage what we are taking on for
>> >>>> heat-dashboard.
>> >>>> On the other hand, I can understand what you are saying, Akihiro:
>> >>>> the newly added repos belong to the Horizon project, and being
>> >>>> managed by non-Horizon cores is not consistent.
>> >>>> Also, having an exception might cause unexpected confusion in the
>> >>>> near future.
>> >>>>
>> >>>> Eventually we will follow your opinion; let me hear the Horizon
>> >>>> team's conclusion.
>> >>>>
>> >>>> Regards,
>> >>>> Kaz
>> >>>>
>> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki :
>> >>>> > Hi Kaz,
>> >>>> >
>> >>>> > These repositories are under the horizon project. It looks better
>> >>>> > to keep the current core team.
>> >>>> > It potentially brings some confusion if we treat some horizon
>> >>>> > plugin team specially.
>> >>>> > Reviewing xstatic repos would be a small burden, so I think it
>> >>>> > would work without problems even if only horizon-core can approve
>> >>>> > xstatic reviews.
>> >>>> >
>> >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara :
>> >>>> >> Hi Ivan, Horizon folks,
>> >>>> >>
>> >>>> >> Now all 8 xstatic-** repos for heat-dashboard have landed.
>> >>>> >>
>> >>>> >> In project-config for them, I've set the same ACL config as the
>> >>>> >> existing xstatic repos.
>> >>>> >> It means only "xstatic-core" can manage the newly created repos
>> >>>> >> on gerrit.
>> >>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core",
>> >>>> >> like what horizon-core is doing?
>> >>>> >>
>> >>>> >> xstatic-core
>> >>>> >> https://review.openstack.org/#/admin/groups/385,members
>> >>>> >>
>> >>>> >> heat-dashboard-core
>> >>>> >> https://review.openstack.org/#/admin/groups/1844,members
>> >>>> >>
>> >>>> >> Of course, we will surely touch only what we made; we just would
>> >>>> >> like to manage them smoothly by ourselves.
>> >>>> >> In case we need to touch the other ones, we will ask the Horizon
>> >>>> >> team for help.
>> >>>> >>
>> >>>> >> Thanks in advance.
>> >>>> >>
>> >>>> >> Regards,
>> >>>> >> Kaz
>> >>>> >>
>> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge :
>> >>>> >> > Hi Horizon Team,
>> >>>> >> >
>> >>>> >> > I reported a bug about the lack of an ``ADD_XSTATIC_MODULES``
>> >>>> >> > plugin option, and submitted a patch for it.
>> >>>> >> > Could you please help review the patch?
>> >>>> >> >
>> >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339
>> >>>> >> > https://review.openstack.org/#/c/552259/
>> >>>> >> >
>> >>>> >> > Thank you very much.
>> >>>> >> >
>> >>>> >> > Best Regards,
>> >>>> >> > Xinni
>> >>>> >> >
>> >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny
>> >>>> >> > wrote:
>> >>>> >> >>
>> >>>> >> >> Hi Kaz,
>> >>>> >> >>
>> >>>> >> >> Thanks for cleaning this up.
>> >>>> >> >> I put +1 on both of these patches.
>> >>>> >> >>
>> >>>> >> >> Regards,
>> >>>> >> >> Ivan Kolodyazhny,
>> >>>> >> >> http://blog.e0ne.info/
>> >>>> >> >>
>> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara
>> >>>> >> >> wrote:
>> >>>> >> >>>
>> >>>> >> >>> Hi Ivan & Horizon folks,
>> >>>> >> >>>
>> >>>> >> >>> Now we are submitting a couple of patches to add the new
>> >>>> >> >>> xstatic modules.
>> >>>> >> >>> Let me ask you to review the following patches.
>> >>>> >> >>> We need the Horizon PTL's +1 to move these forward.
>> >>>> >> >>>
>> >>>> >> >>> project-config
>> >>>> >> >>> https://review.openstack.org/#/c/551978/
>> >>>> >> >>>
>> >>>> >> >>> governance
>> >>>> >> >>> https://review.openstack.org/#/c/551980/
>> >>>> >> >>>
>> >>>> >> >>> Thanks in advance:)
>> >>>> >> >>>
>> >>>> >> >>> Regards,
>> >>>> >> >>> Kaz
>> >>>> >> >>>
>> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski :
>> >>>> >> >>> > Yes, please do that. We can then discuss the technical
>> >>>> >> >>> > details in the review.
>> >>>> >> >>> >
>> >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge wrote:
>> >>>> >> >>> >> Hi, Akihiro
>> >>>> >> >>> >>
>> >>>> >> >>> >> Thanks for the quick reply.
>> >>>> >> >>> >>
>> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES should
>> >>>> >> >>> >> not be modified.
>> >>>> >> >>> >> It is much better to enhance the horizon plugin settings,
>> >>>> >> >>> >> and I think maybe there could be one option like
>> >>>> >> >>> >> ADD_XSTATIC_MODULES.
>> >>>> >> >>> >> This option adds the plugin's xstatic files to
>> >>>> >> >>> >> STATICFILES_DIRS.
>> >>>> >> >>> >> I am considering adding a bug report to describe it first,
>> >>>> >> >>> >> and maybe giving a patch later.
>> >>>> >> >>> >> Is that OK with the Horizon team?
>> >>>> >> >>> >>
>> >>>> >> >>> >> Best Regards,
>> >>>> >> >>> >> Xinni
>> >>>> >> >>> >>
>> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki wrote:
>> >>>> >> >>> >>> Hi Xinni,
>> >>>> >> >>> >>>
>> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge :
>> >>>> >> >>> >>> > Hello Horizon Team,
>> >>>> >> >>> >>> >
>> >>>> >> >>> >>> > I would like to hear your opinions on how to add new
>> >>>> >> >>> >>> > xstatic modules to the horizon settings.
>> >>>> >> >>> >>> >
>> >>>> >> >>> >>> > As for the Heat-dashboard embedded 3rd-party files
>> >>>> >> >>> >>> > issue, thanks for your advice at the Dublin PTG; we are
>> >>>> >> >>> >>> > now removing them and referencing them as new
>> >>>> >> >>> >>> > xstatic-* libs.
>> >>>> >> >>> >>>
>> >>>> >> >>> >>> Thanks for moving this forward.
>> >>>> >> >>> >>> > So we have installed the new xstatic files (not
>> >>>> >> >>> >>> > uploaded to official OpenStack repos yet) in our
>> >>>> >> >>> >>> > development environment now, but hesitate over how to
>> >>>> >> >>> >>> > add the newly installed xstatic lib path to
>> >>>> >> >>> >>> > STATICFILES_DIRS in openstack_dashboard.settings so
>> >>>> >> >>> >>> > that the static files can be automatically collected
>> >>>> >> >>> >>> > by the *collectstatic* process.
>> >>>> >> >>> >>> >
>> >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in
>> >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the relevant
>> >>>> >> >>> >>> > static files are added to STATICFILES_DIRS before it
>> >>>> >> >>> >>> > updates any Horizon plugin dashboard.
>> >>>> >> >>> >>> > We may want new plugin setting keywords (something
>> >>>> >> >>> >>> > similar to ADD_JS_FILES) to update horizon
>> >>>> >> >>> >>> > XSTATIC_MODULES (or directly update STATICFILES_DIRS).
>> >>>> >> >>> >>>
>> >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add xstatic
>> >>>> >> >>> >>> modules through the horizon plugin settings. I don't
>> >>>> >> >>> >>> think it is a good idea to add a new entry in
>> >>>> >> >>> >>> BASE_XSTATIC_MODULES based on horizon plugin usage. It
>> >>>> >> >>> >>> makes it difficult to track why and where an xstatic
>> >>>> >> >>> >>> module in BASE_XSTATIC_MODULES is used.
>> >>>> >> >>> >>> Multiple horizon plugins can add the same entry, so the
>> >>>> >> >>> >>> horizon code that handles plugin settings should
>> >>>> >> >>> >>> hopefully merge multiple entries into a single one.
>> >>>> >> >>> >>> My vote is to enhance the horizon plugin settings.
>> >>>> >> >>> >>>
>> >>>> >> >>> >>> Akihiro
>> >>>> >> >>> >>>
>> >>>> >> >>> >>> > Looking forward to hearing any suggestions from you
>> >>>> >> >>> >>> > guys, and
>> >>>> >> >>> >>> > Best Regards,
>> >>>> >> >>> >>> >
>> >>>> >> >>> >>> > Xinni Ge
>> >>>> >> >>> >>
>> >>>> >> >>> >> --
>> >>>> >> >>> >> 葛馨霓 Xinni Ge
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
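A minimal sketch of the plugin-settings enhancement discussed in the thread above: a Horizon plugin "enabled" file that declares its own xstatic packages. The file name, app name, and the exact shape of ADD_XSTATIC_MODULES are assumptions here; the authoritative format is whatever lands via https://review.openstack.org/#/c/552259/.

    # _9001_myplugin.py -- a hypothetical Horizon plugin "enabled" file.
    # ADD_INSTALLED_APPS follows the existing plugin-settings machinery;
    # ADD_XSTATIC_MODULES is the proposed option, assumed here to list
    # (xstatic package, files) pairs to be added to STATICFILES_DIRS.
    ADD_INSTALLED_APPS = ['myplugin']
    ADD_XSTATIC_MODULES = [
        ('xstatic.pkg.angular_vis', ['vis.js']),
        ('xstatic.pkg.js_yaml', ['js-yaml.js']),
    ]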
From rbowen at redhat.com Thu Mar 29 06:04:27 2018
From: rbowen at redhat.com (Rich Bowen)
Date: Thu, 29 Mar 2018 14:04:27 +0800
Subject: [openstack-dev] Thank you TryStack!!
In-Reply-To: <5AB9797D.1090209@tipit.net>
References: <5AB9797D.1090209@tipit.net>
Message-ID: <0d317f08-c633-1df7-8add-11dcf0e03d14@redhat.com>

A huge thank you to Will and Kambiz who maintained this service in addition
to their real jobs, for all of these years. And a big thank you to the
folks who have helped to retire the service and transition over to Passport.
--Rich

On 03/27/2018 06:51 AM, Jimmy Mcarthur wrote:
> Hi everyone,
>
> We recently made the tough decision, in conjunction with the dedicated
> volunteers that run TryStack, to end the service as of March 29, 2018.
> For those of you that used it, thank you for being part of the TryStack
> community.
>
> The good news is that you can find more resources to try OpenStack at
> http://www.openstack.org/start, including the Passport Program
> , where you can test on any
> participating public cloud. If you are looking to test different tools
> or application stacks with OpenStack clouds, you should check out Open
> Lab .
>
> Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and
> the many other volunteers who have managed this valuable service for the
> last several years! Your contribution to OpenStack was noticed and
> appreciated by many in the community.

--
Rich Bowen: Community Architect
rbowen at redhat.com
@rbowen // @RDOCommunity // @CentOSProject
1 859 351 9166

From delightwook at ssu.ac.kr Thu Mar 29 07:25:52 2018
From: delightwook at ssu.ac.kr (MinWookKim)
Date: Thu, 29 Mar 2018 16:25:52 +0900
Subject: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To: <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr>
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr>
 <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr>
Message-ID: <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr>

Hello Ifat and Vitrage team.

I would like to explain more about the implementation part of the mail I
sent last time. The flow is as follows.

Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component

Last time I mentioned an api-handler, but it would be better to call the
check component directly from Vitrage-api without one. I hope this helps
you understand.

Thank you.

Best Regards,
Minwook.

From: MinWookKim [mailto:delightwook at ssu.ac.kr]
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. :)

This is a proposal that we expect to be useful from a user's perspective.
From a manager's point of view, we need an implementation that minimizes
the overhead incurred by the proposal.

The answers to some of your questions are:

• I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage's role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.

Yes, that's right. We do not need to save the results to Vitrage, because
we just need to check them.

However, it is possible to implement the function directly in
Vitrage-dashboard separately from Vitrage, like the add-action-list panel,
but it seems that is not enough to implement all the functions. If you do
not mind, we will have the following flow.

1. The user requests the check action from the vitrage-dashboard
(add-action-list-panel).
2. The request calls the check component through Vitrage's API handler.
3. The check component executes the command and returns the result.

Because it is my opinion only, please tell us if there is an unnecessary
part. :)

• Do you expect the user to click an entity, select an action to run (e.g.
‘P2P check’), and wait by the open panel for the results? What if the user
switches to another menu before the check is done? What if the user asks
to run an additional check in parallel? What if the user wants to see
again a previous result?
My idea was to select the task, wait for the results in an open panel, and
then see them instantly in the panel. If we switch to another menu before
the check is complete, we will not be able to see the results. Parallel
checking is a real concern. (It can cause excessive overhead.) For earlier
results, it may be okay to keep them temporarily in the open panel until
we exit it; we can then see the previous results through the temporarily
saved list.

• Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?

I think I will implement a separate component to handle such requests.

• It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.

That is expected to be really good. It would be very useful if the entity
graph generated an alarm based on the check result. I think we can talk
about that part in detail later.

My answers are my own opinions and assumptions. If you think my
implementation is wrong or inefficient, please do not hesitate to tell me.

Thanks.

Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I think that from a user’s perspective, these are very good ideas. I have
some questions regarding the UX and the implementation, since I’m trying
to think what could be the best way to execute such actions from Vitrage.

· I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage's role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.

· Do you expect the user to click an entity, select an action to run (e.g.
‘P2P check’), and wait by the open panel for the results? What if the user
switches to another menu before the check is done? What if the user asks
to run an additional check in parallel? What if the user wants to see
again a previous result?

· Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?

· It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.
In the current state, when there is a problem with the VM and Host, or when we want to check the status, we need to access the console individually for each VM and Host. This situation causes unnecessary behavior when the number of VMs and hosts increases. My new suggestion is that if we have a large number of vm and host, we do not need to directly connect to each VM, host console to enter the system command. Instead, we can send a system command to VM and hosts in the cloud through this proposal. It is only checking results. I have written some use-cases for an efficient explanation of the function. >From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any Agent / Client that can cause load on VM, Host. 2. I want to provide a simple UI so that users or administrators can get the desired information to multiple VMs and hosts. 3. I want to be able to grasp the results at a glance. 4. I want to implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook. -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 38362 bytes Desc: not available URL: From rgerganov at vmware.com Thu Mar 29 07:44:52 2018 From: rgerganov at vmware.com (Radoslav Gerganov) Date: Thu, 29 Mar 2018 10:44:52 +0300 Subject: [openstack-dev] [nova] VMware NSX CI - no longer running? In-Reply-To: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> Message-ID: <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com> On 28.03.2018 19:07, melanie witt wrote: > We were reviewing a bug fix for the vmware driver [0] today and we noticed it appears that the VMware NSX CI is no longer running, not even on only the nova/virt/vmwareapi/ tree. > > From the third-party CI dashboard, I see some claims of it running but when I open the patches, I don't see any reporting from VMware NSX CI [1]. > > Can anyone from the vmware subteam comment on whether or not the vmware third-party CI is going to be fixed or if it has been abandoned? > While running the VMware CI continues to be a challenge, I must say this patch fixes a regression introduced by Matt Riedemann's patch: https://review.openstack.org/#/c/549411/ for which the VMware CI clearly indicated there was a problem and nevertheless the core team submitted it. Before blaming the CI for not voting enough, the core team should start taking into account existing CI votes. It'd be nice also to include VMware driver maintainers as reviewers when making changes to the VMware driver. From renat.akhmerov at gmail.com Thu Mar 29 08:00:58 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Thu, 29 Mar 2018 15:00:58 +0700 Subject: [openstack-dev] [requirements] Adding objgraph to global requirements Message-ID: <31ade43d-37f7-4cdd-82ff-50b069491aac@Spark> Hi, Can we consider to add objgraph [1] to OpenStack global requirements? I found this library extremely useful for investigating memory leaks in Python programs but unfortunately I can’t push upstream any code using it. It seems to be pretty mature and supports all needed Python versions. Or maybe there’s some alternative already available in the OpenStack requirements? [1] https://pypi.python.org/pypi/objgraph/3.4.0 Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... 
From tony at bakeyournoodle.com Thu Mar 29 08:31:31 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 29 Mar 2018 19:31:31 +1100
Subject: [openstack-dev] [devstack] stable/queens: How to configure devstack
 to use openstacksdk===0.11.3 and os-service-types===1.1.0
In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E96280FC1@ALA-MBD.corp.ad.wrs.com>
References: <47EFB32CD8770A4D9590812EE28C977E96280FC1@ALA-MBD.corp.ad.wrs.com>
Message-ID: <20180329083131.GM13389@thor.bakeyournoodle.com>

On Fri, Mar 16, 2018 at 02:29:51PM +0000, Kwan, Louie wrote:
> In the stable/queens branch, since openstacksdk===0.11.3 and
> os-service-types===1.1.0 are described in openstack's
> upper-constraints.txt,
>
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297
>
> If I do
>
> > git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
>
> And then stack.sh
>
> We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Okay, that's pretty strange. I can't think of why you'd be getting the
master version of upper-constraints.txt from the queens branch.

[tony at thor requirements]$ tools/grep-all.sh openstacksdk | grep -E '(master|queens)'
origin/master : openstacksdk>=0.11.2 # Apache-2.0
origin/stable/queens : openstacksdk>=0.9.19 # Apache-2.0
origin/master : openstacksdk===0.12.0
origin/stable/queens : openstacksdk===0.11.3
[tony at thor requirements]$ tools/grep-all.sh os-service-types | grep -E '(master|queens)'
origin/master : os-service-types>=1.2.0 # Apache-2.0
origin/stable/queens : os-service-types>=1.1.0 # Apache-2.0
origin/master : os-service-types===1.2.0
origin/stable/queens : os-service-types===1.1.0

A quick eyeball of the code doesn't show anything obvious. Can you provide
the devstack log somewhere?

> Having said that, we need the older version, how to configure devstack to
> use openstacksdk===0.11.3 and os-service-types===1.1.0

We can try to work out why you're getting the wrong versions, but what
error/problem do you see with the version from master? I'd expect some
general "we need version X of FOO but Y is installed" messages.

Yours Tony.

From tony at bakeyournoodle.com Thu Mar 29 08:32:14 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 29 Mar 2018 19:32:14 +1100
Subject: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol
In-Reply-To:
References: <20180314212003.GC25428@thor.bakeyournoodle.com>
 <20180315011132.GF25428@thor.bakeyournoodle.com>
Message-ID: <20180329083213.GN13389@thor.bakeyournoodle.com>

On Thu, Mar 15, 2018 at 10:57:58AM +0000, Jean-Philippe Evrard wrote:
> Looks good to me.

This has been done now. Thanks for being patient :)

Yours Tony.

From tony at bakeyournoodle.com Thu Mar 29 08:36:25 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 29 Mar 2018 19:36:25 +1100
Subject: [openstack-dev] [all][stable] No more stable Phases welcome Extended
 Maintenance
Message-ID: <20180329083625.GO13389@thor.bakeyournoodle.com>

Hi all,

At Sydney we started the process of change on the stable branches.
Recently we merged a TC resolution [1] to alter the EOL process.
The next step is refining the stable policy itself. I've created a review
to do that. I think it covers most of the points from Sydney and Dublin.
Please check it out:

https://review.openstack.org/#/c/552733/

Yours Tony.

[1] https://review.openstack.org/548916

From renat.akhmerov at gmail.com Thu Mar 29 08:33:58 2018
From: renat.akhmerov at gmail.com (Renat Akhmerov)
Date: Thu, 29 Mar 2018 15:33:58 +0700
Subject: [openstack-dev] [requirements] Adding objgraph to global requirements
In-Reply-To: <31ade43d-37f7-4cdd-82ff-50b069491aac@Spark>
References: <31ade43d-37f7-4cdd-82ff-50b069491aac@Spark>
Message-ID:

After some discussion in IRC on this topic, there was an idea to just
write and push upstream the needed tools using objgraph without having it
in requirements.txt at all. We just need to make sure that those tools are
never used during production runs and unit tests (CI will help to verify
that). If needed, objgraph can be manually installed and used when we need
to investigate something.

If such a practice is considered OK and doesn’t violate any OpenStack
guidelines, then I think this would work, at least in my case.

Thanks

Renat Akhmerov
@Nokia

On 29 Mar 2018, 15:00 +0700, Renat Akhmerov , wrote:
> Hi,
>
> Can we consider adding objgraph [1] to the OpenStack global requirements?
> I have found this library extremely useful for investigating memory leaks
> in Python programs, but unfortunately I can’t push upstream any code
> using it. It seems to be pretty mature and supports all needed Python
> versions.
>
> Or maybe there’s some alternative already available in the OpenStack
> requirements?
>
> [1] https://pypi.python.org/pypi/objgraph/3.4.0
>
> Thanks
>
> Renat Akhmerov
> @Nokia

From melwittt at gmail.com Thu Mar 29 10:03:26 2018
From: melwittt at gmail.com (melanie witt)
Date: Thu, 29 Mar 2018 03:03:26 -0700
Subject: [openstack-dev] [nova] VMware NSX CI - no longer running?
In-Reply-To: <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com>
References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com>
 <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com>
Message-ID:

On Thu, 29 Mar 2018 10:44:52 +0300, Radoslav Gerganov wrote:
> On 28.03.2018 19:07, melanie witt wrote:
>> We were reviewing a bug fix for the vmware driver [0] today and we
>> noticed it appears that the VMware NSX CI is no longer running, not even
>> on only the nova/virt/vmwareapi/ tree.
>>
>> From the third-party CI dashboard, I see some claims of it running but
>> when I open the patches, I don't see any reporting from VMware NSX CI
>> [1].
>>
>> Can anyone from the vmware subteam comment on whether or not the vmware
>> third-party CI is going to be fixed or if it has been abandoned?
>
> While running the VMware CI continues to be a challenge, I must say this
> patch fixes a regression introduced by Matt Riedemann's patch:
>
> https://review.openstack.org/#/c/549411/
>
> for which the VMware CI clearly indicated there was a problem, and
> nevertheless the core team submitted it.
> Before blaming the CI for not voting enough, the core team should start
> taking existing CI votes into account.
> It'd also be nice to include VMware driver maintainers as reviewers when
> making changes to the VMware driver.

Thank you for bringing the root cause to our attention and I'm sorry we
made a mistake that broke the driver.
You are right that the VMware CI vote should have been taken into consideration and that the VMware subteam members should have been added as reviewers on the patch. It was not my intention to blame the VMware CI for not voting enough. I just wanted to know what happened to it and whether or not it is being maintained. I would like to see the VMware CI running again and it need only run on changes under the nova/virt/vmwareapi/ tree, to save on your resources. And on our side, I'd like us to add VMware subteam members to VMware driver patch reviews (I believe most of the active team members are listed on the priorities etherpad [0]) and to be sure we consult VMware CI votes when we review. Best, -melanie [0] https://etherpad.openstack.org/p/rocky-nova-priorities-tracking L256 From gkotton at vmware.com Thu Mar 29 10:09:09 2018 From: gkotton at vmware.com (Gary Kotton) Date: Thu, 29 Mar 2018 10:09:09 +0000 Subject: [openstack-dev] [nova] VMware NSX CI - no longer running? In-Reply-To: References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com> Message-ID: <0DDE1027-ECC0-4B45-BF2C-AA62352C7F67@vmware.com> Hi, Here is an example where the CI has run on a recent patch - yesterday - https://review.openstack.org/557256 Thanks Gary On 3/29/18, 1:04 PM, "melanie witt" wrote: On Thu, 29 Mar 2018 10:44:52 +0300, Radoslav Gerganov wrote: > On 28.03.2018 19:07, melanie witt wrote: >> We were reviewing a bug fix for the vmware driver [0] today and we noticed it appears that the VMware NSX CI is no longer running, not even on only the nova/virt/vmwareapi/ tree. >> >> From the third-party CI dashboard, I see some claims of it running but when I open the patches, I don't see any reporting from VMware NSX CI [1]. >> >> Can anyone from the vmware subteam comment on whether or not the vmware third-party CI is going to be fixed or if it has been abandoned? >> > > While running the VMware CI continues to be a challenge, I must say this patch fixes a regression introduced by Matt Riedemann's patch: > > https://review.openstack.org/#/c/549411/ > > for which the VMware CI clearly indicated there was a problem and nevertheless the core team submitted it. > Before blaming the CI for not voting enough, the core team should start taking into account existing CI votes. > It'd be nice also to include VMware driver maintainers as reviewers when making changes to the VMware driver. Thank you for bringing the root cause to our attention and I'm sorry we made a mistake that broke the driver. You are right that the VMware CI vote should have been taken into consideration and that the VMware subteam members should have been added as reviewers on the patch. It was not my intention to blame the VMware CI for not voting enough. I just wanted to know what happened to it and whether or not it is being maintained. I would like to see the VMware CI running again and it need only run on changes under the nova/virt/vmwareapi/ tree, to save on your resources. And on our side, I'd like us to add VMware subteam members to VMware driver patch reviews (I believe most of the active team members are listed on the priorities etherpad [0]) and to be sure we consult VMware CI votes when we review. 
Best,
-melanie

[0] https://etherpad.openstack.org/p/rocky-nova-priorities-tracking L256

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From melwittt at gmail.com Thu Mar 29 10:19:04 2018
From: melwittt at gmail.com (melanie witt)
Date: Thu, 29 Mar 2018 03:19:04 -0700
Subject: [openstack-dev] [nova] VMware NSX CI - no longer running?
In-Reply-To: <0DDE1027-ECC0-4B45-BF2C-AA62352C7F67@vmware.com>
References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com>
 <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com>
 <0DDE1027-ECC0-4B45-BF2C-AA62352C7F67@vmware.com>
Message-ID:

On Thu, 29 Mar 2018 10:09:09 +0000, Gary Kotton wrote:
> Here is an example where the CI has run on a recent patch - yesterday -
> https://review.openstack.org/557256

Thanks. Just curious, how is the CI passing if the driver is currently
broken for detach_volume? I had thought maybe particular tests were
skipped in response to my original email that linked the bug fix patch,
but it looks like that run was from before I sent the original email.

-melanie

From ifat.afek at nokia.com Thu Mar 29 11:06:59 2018
From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava))
Date: Thu, 29 Mar 2018 11:06:59 +0000
Subject: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To: <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr>
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr>
 <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr>
 <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr>
Message-ID:

Hi Minwook,

Why do you think the request should pass through the Vitrage API? Why
can’t vitrage-dashboard call the check component directly?

And another question: what should happen if the user closes the check
window before the checks are over? I assume that the checks will finish,
but the user won’t be able to see the results?

Thanks,
Ifat.

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 29 March 2018 at 10:25
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat and Vitrage team.

I would like to explain more about the implementation part of the mail I
sent last time. The flow is as follows.

Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component

Last time I mentioned an api-handler, but it would be better to call the
check component directly from Vitrage-api without one. I hope this helps
you understand.

Thank you.

Best Regards,
Minwook.

From: MinWookKim [mailto:delightwook at ssu.ac.kr]
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. :)

This is a proposal that we expect to be useful from a user's perspective.
From a manager's point of view, we need an implementation that minimizes
the overhead incurred by the proposal.

The answers to some of your questions are:

• I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage's role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.

Yes, that's right. We do not need to save the results to Vitrage, because
we just need to check them.
However, it is possible to implement the function directly in
Vitrage-dashboard separately from Vitrage, like the add-action-list panel,
but it seems that is not enough to implement all the functions. If you do
not mind, we will have the following flow.

1. The user requests the check action from the vitrage-dashboard
(add-action-list-panel).
2. The request calls the check component through Vitrage's API handler.
3. The check component executes the command and returns the result.

Because it is my opinion only, please tell us if there is an unnecessary
part. :)

• Do you expect the user to click an entity, select an action to run (e.g.
‘P2P check’), and wait by the open panel for the results? What if the user
switches to another menu before the check is done? What if the user asks
to run an additional check in parallel? What if the user wants to see
again a previous result?

My idea was to select the task, wait for the results in an open panel, and
then see them instantly in the panel. If we switch to another menu before
the check is complete, we will not be able to see the results. Parallel
checking is a real concern. (It can cause excessive overhead.) For earlier
results, it may be okay to keep them temporarily in the open panel until
we exit it; we can then see the previous results through the temporarily
saved list.

• Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?

I think I will implement a separate component to handle such requests.

• It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.

That is expected to be really good. It would be very useful if the entity
graph generated an alarm based on the check result. I think we can talk
about that part in detail later.

My answers are my own opinions and assumptions. If you think my
implementation is wrong or inefficient, please do not hesitate to tell me.

Thanks.

Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I think that from a user’s perspective, these are very good ideas. I have
some questions regarding the UX and the implementation, since I’m trying
to think what could be the best way to execute such actions from Vitrage.

* I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage's role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.

* Do you expect the user to click an entity, select an action to run (e.g.
‘P2P check’), and wait by the open panel for the results? What if the user
switches to another menu before the check is done? What if the user asks
to run an additional check in parallel? What if the user wants to see
again a previous result?

* Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?

* It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.
Best Regards,
Ifat

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 27 March 2018 at 14:45
To: "openstack-dev at lists.openstack.org"
Subject: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Vitrage team.

I am currently working on the Vitrage-Dashboard proposal for the ‘Add
action list panel for entity click action’.
(https://review.openstack.org/#/c/531141/)

I would like to make a new proposal based on the action list panel
mentioned above. The new proposal is to provide multidimensional analysis
capabilities across the entities that make up the infrastructure in the
entity graph. Vitrage's entity graph allows us to efficiently monitor
alarms from various monitoring tools.

In the current state, when there is a problem with a VM or host, or when
we want to check their status, we need to access the console of each VM
and host individually. This causes unnecessary work when the number of VMs
and hosts increases.

My new suggestion is that, with a large number of VMs and hosts, we should
not need to connect directly to each VM or host console to enter system
commands. Instead, we can send a system command to the VMs and hosts in
the cloud through this proposal, and only check the results.

I have written some use-cases for an efficient explanation of the function.

From an implementation perspective, the goals of the proposal are:

1. Execute commands without installing any agent/client that could put
load on the VM or host.
2. Provide a simple UI so that users or administrators can get the desired
information from multiple VMs and hosts.
3. Make it possible to grasp the results at a glance.
4. Implement a component that can support many additional scenarios in a
plug-in format.

I would be happy if you could comment on the proposal or ask questions.

Thanks.

Best Regards,
Minwook.

From delightwook at ssu.ac.kr Thu Mar 29 11:51:21 2018
From: delightwook at ssu.ac.kr (MinWookKim)
Date: Thu, 29 Mar 2018 20:51:21 +0900
Subject: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To:
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr>
 <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr>
 <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr>
Message-ID: <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr>

Hello Ifat,

Thanks for your reply. :)

I wrote my opinion on your comments.

Why do you think the request should pass through the Vitrage API? Why
can’t vitrage-dashboard call the check component directly?

Authentication issues: I think the check component is a separate component
based on an API. In my opinion, if the check component has a separate API
address from Vitrage to receive requests from the Vitrage-dashboard, the
Vitrage-dashboard needs to know the API address of the check component.
This can result in a request/response path that is open to anyone, outside
the authentication supported by OpenStack, between the Vitrage-dashboard
and the check component. This is possible not only through the
Vitrage-dashboard, but also with simple commands such as curl. (I think it
is unnecessary to implement a separate authentication system for the check
component.) This problem can occur if someone knows the API address of the
check component, and it can cause the host and VM to execute system
commands.

What should happen if the user closes the check window before the checks
are over?
I assume that the checks will finish, but the user won’t be able to see
the results?

If the window is closed before the check is finished, the user cannot see
the result. To solve this problem, I think temporarily saving a list of
recent results is a solution. By storing temporary lists (for example, up
to 10 entries), the user can see the previous results, and I think it
should also be possible for the user to empty the list.

How does that sound?

Thank you.

Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Thursday, March 29, 2018 8:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Why do you think the request should pass through the Vitrage API? Why
can’t vitrage-dashboard call the check component directly?

And another question: what should happen if the user closes the check
window before the checks are over? I assume that the checks will finish,
but the user won’t be able to see the results?

Thanks,
Ifat.

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 29 March 2018 at 10:25
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat and Vitrage team.

I would like to explain more about the implementation part of the mail I
sent last time. The flow is as follows.

Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component

Last time I mentioned an api-handler, but it would be better to call the
check component directly from Vitrage-api without one. I hope this helps
you understand.

Thank you.

Best Regards,
Minwook.

From: MinWookKim [mailto:delightwook at ssu.ac.kr]
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. :)

This is a proposal that we expect to be useful from a user's perspective.
From a manager's point of view, we need an implementation that minimizes
the overhead incurred by the proposal.

The answers to some of your questions are:

• I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage's role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.

Yes, that's right. We do not need to save the results to Vitrage, because
we just need to check them.

However, it is possible to implement the function directly in
Vitrage-dashboard separately from Vitrage, like the add-action-list panel,
but it seems that is not enough to implement all the functions. If you do
not mind, we will have the following flow.

1. The user requests the check action from the vitrage-dashboard
(add-action-list-panel).
2. The request calls the check component through Vitrage's API handler.
3. The check component executes the command and returns the result.

Because it is my opinion only, please tell us if there is an unnecessary
part. :)

• Do you expect the user to click an entity, select an action to run (e.g.
‘P2P check’), and wait by the open panel for the results? What if the user
switches to another menu before the check is done? What if the user asks
to run an additional check in parallel?
What if the user wants to see again a previous result?

My idea was to select the task, wait for the results in an open panel, and
then see them instantly in the panel. If we switch to another menu before
the check is complete, we will not be able to see the results. Parallel
checking is a real concern. (It can cause excessive overhead.) For earlier
results, it may be okay to keep them temporarily in the open panel until
we exit it; we can then see the previous results through the temporarily
saved list.

• Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?

I think I will implement a separate component to handle such requests.

• It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.

That is expected to be really good. It would be very useful if the entity
graph generated an alarm based on the check result. I think we can talk
about that part in detail later.

My answers are my own opinions and assumptions. If you think my
implementation is wrong or inefficient, please do not hesitate to tell me.

Thanks.

Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I think that from a user’s perspective, these are very good ideas. I have
some questions regarding the UX and the implementation, since I’m trying
to think what could be the best way to execute such actions from Vitrage.

* I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage's role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.

* Do you expect the user to click an entity, select an action to run (e.g.
‘P2P check’), and wait by the open panel for the results? What if the user
switches to another menu before the check is done? What if the user asks
to run an additional check in parallel? What if the user wants to see
again a previous result?

* Any thoughts of what component will implement those checks? Or maybe
these will be just scripts?

* It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.

Best Regards,
Ifat

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 27 March 2018 at 14:45
To: "openstack-dev at lists.openstack.org"
Subject: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Vitrage team.

I am currently working on the Vitrage-Dashboard proposal for the ‘Add
action list panel for entity click action’.
(https://review.openstack.org/#/c/531141/)

I would like to make a new proposal based on the action list panel
mentioned above. The new proposal is to provide multidimensional analysis
capabilities across the entities that make up the infrastructure in the
entity graph. Vitrage's entity graph allows us to efficiently monitor
alarms from various monitoring tools.
In the current state, when there is a problem with a VM or host, or when
we want to check their status, we need to access the console of each VM
and host individually. This causes unnecessary work when the number of VMs
and hosts increases.

My new suggestion is that, with a large number of VMs and hosts, we should
not need to connect directly to each VM or host console to enter system
commands. Instead, we can send a system command to the VMs and hosts in
the cloud through this proposal, and only check the results.

I have written some use-cases for an efficient explanation of the function.

From an implementation perspective, the goals of the proposal are:

1. Execute commands without installing any agent/client that could put
load on the VM or host.
2. Provide a simple UI so that users or administrators can get the desired
information from multiple VMs and hosts.
3. Make it possible to grasp the results at a glance.
4. Implement a component that can support many additional scenarios in a
plug-in format.

I would be happy if you could comment on the proposal or ask questions.

Thanks.

Best Regards,
Minwook.

From soulxu at gmail.com Thu Mar 29 12:38:44 2018
From: soulxu at gmail.com (Alex Xu)
Date: Thu, 29 Mar 2018 20:38:44 +0800
Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow
In-Reply-To: <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc>
References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com>
 <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc>
 <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc>
Message-ID:

Agreed. Whether we tweak the inventory or the traits on the fly, neither
works.

As with vGPU, we can support a pre-programmed mode for a multiple-function
region, where each region supports only one function type. There are two
reasons why Cyborg has a filter:

* to record the usage of functions in a region
* to record which function is programmed

For #1, each region provides multiple functions, and each function can be
assigned to a VM. So we should create a ResourceProvider for the region,
with the function as the resource class. That is similar to an SR-IOV
device: the region (the PF) provides functions (VFs).

For #2, we should use a trait to distinguish the function type.

Then we no longer keep any inventory info in Cyborg, we don't need any
filter in Cyborg either, and there is no race condition anymore.

2018-03-29 2:48 GMT+08:00 Eric Fried :
> Sundar-
>
> We're running across this issue in several places right now. One
> thing that's definitely not going to get traction is
> automatically/implicitly tweaking inventory in one resource class when
> an allocation is made on a different resource class (whether in the same
> or different RPs).
>
> Slightly less of a nonstarter, but still likely to get significant
> push-back, is the idea of tweaking traits on the fly. For example, your
> vGPU case might be modeled as:
>
> PGPU_RP: {
>     inventory: {
>         CUSTOM_VGPU_TYPE_A: 2,
>         CUSTOM_VGPU_TYPE_B: 4,
>     }
>     traits: [
>         CUSTOM_VGPU_TYPE_A_CAPABLE,
>         CUSTOM_VGPU_TYPE_B_CAPABLE,
>     ]
> }
>
> The request would come in for
> resources=CUSTOM_VGPU_TYPE_A:1&required=VGPU_TYPE_A_CAPABLE, resulting
> in an allocation of CUSTOM_VGPU_TYPE_A:1. Now while you're processing
> that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP.
> So it doesn't matter that there's still inventory of > CUSTOM_VGPU_TYPE_B:4, because a request including > required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP. > There's of course a window between when the initial allocation is made > and when you tweak the trait list. In that case you'll just have to > fail the loser. This would be like any other failure in e.g. the spawn > process; it would bubble up, the allocation would be removed; retries > might happen or whatever. > > Like I said, you're likely to get a lot of resistance to this idea > as > well. (Though TBH, I'm not sure how we can stop you beyond -1'ing your > patches; there's nothing about placement that disallows it.) > > The simple-but-inefficient solution is simply that we'd still be > able > to make allocations for vGPU type B, but you would have to fail right > away when it came down to cyborg to attach the resource. Which is code > you pretty much have to write anyway. It's an improvement if cyborg > gets to be involved in the post-get-allocation-candidates > weighing/filtering step, because you can do that check at that point to > help filter out the candidates that would fail. Of course there's still > a race condition there, but it's no different than for any other resource. > > efried > > On 03/28/2018 12:27 PM, Nadathur, Sundar wrote: > > Hi Eric and all, > > I should have clarified that this race condition happens only for > > the case of devices with multiple functions. There is a prior thread > > March/127882.html> > > about it. I was trying to get a solution within Cyborg, but that faces > > this race condition as well. > > > > IIUC, this situation is somewhat similar to the issue with vGPU types > > %23openstack-nova.2018-03-27.log.html#t2018-03-27T13:41:00> > > (thanks to Alex Xu for pointing this out). In the latter case, we could > > start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4). But, after > > consuming a unit of vGPU-type-a, ideally the inventory should change > > to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators, > > we start with an RP inventory of (region-type-A: 1, function-X: 4). But, > > after consuming a unit of that function, ideally the inventory should > > change to: (region-type-A: 0, function-X: 3). > > > > I understand that this approach is controversial :) Also, one difference > > from the vGPU case is that the number and count of vGPU types is static, > > whereas with FPGAs, one could reprogram it to result in more or fewer > > functions. That said, we could hopefully keep this analogy in mind for > > future discussions. > > > > We probably will not support multi-function accelerators in Rocky. This > > discussion is for the longer term. > > > > Regards, > > Sundar > > > > On 3/23/2018 12:44 PM, Eric Fried wrote: > >> Sundar- > >> > >> First thought is to simplify by NOT keeping inventory information > in > >> the cyborg db at all. The provider record in the placement service > >> already knows the device (the provider ID, which you can look up in the > >> cyborg db) the host (the root_provider_uuid of the provider representing > >> the device) and the inventory, and (I hope) you'll be augmenting it with > >> traits indicating what functions it's capable of. That way, you'll > >> always get allocation candidates with devices that *can* load the > >> desired function; now you just have to engage your weigher to prioritize > >> the ones that already have it loaded so you can prefer those. > >> > >> Am I missing something? 
> >> > >> efried > >> > >> On 03/22/2018 11:27 PM, Nadathur, Sundar wrote: > >>> Hi all, > >>> There seems to be a possibility of a race condition in the > >>> Cyborg/Nova flow. Apologies for missing this earlier. (You can refer to > >>> the proposed Cyborg/Nova spec > >>> cyborg-nova-sched.rst> > >>> for details.) > >>> > >>> Consider the scenario where the flavor specifies a resource class for a > >>> device type, and also specifies a function (e.g. encrypt) in the extra > >>> specs. The Nova scheduler would only track the device type as a > >>> resource, and Cyborg needs to track the availability of functions. > >>> Further, to keep it simple, say all the functions exist all the time > (no > >>> reprogramming involved). > >>> > >>> To recap, here is the scheduler flow for this case: > >>> > >>> * A request spec with a flavor comes to Nova conductor/scheduler. The > >>> flavor has a device type as a resource class, and a function in the > >>> extra specs. > >>> * Placement API returns the list of RPs (compute nodes) which contain > >>> the requested device types (but not necessarily the function). > >>> * Cyborg will provide a custom filter which queries Cyborg DB. This > >>> needs to check which hosts contain the needed function, and filter > >>> out the rest. > >>> * The scheduler selects one node from the filtered list, and the > >>> request goes to the compute node. > >>> > >>> For the filter to work, the Cyborg DB needs to maintain a table with > >>> triples of (host, function type, #free units). The filter checks if a > >>> given host has one or more free units of the requested function type. > >>> But, to keep the # free units up to date, Cyborg on the selected > compute > >>> node needs to notify the Cyborg API to decrement the #free units when > an > >>> instance is spawned, and to increment them when resources are released. > >>> > >>> Therein lies the catch: this loop from the compute node to controller > is > >>> susceptible to race conditions. For example, if two simultaneous > >>> requests each ask for function A, and there is only one unit of that > >>> available, the Cyborg filter will approve both, both may land on the > >>> same host, and one will fail. This is because Cyborg on the controller > >>> does not decrement resource usage due to one request before processing > >>> the next request. > >>> > >>> This is similar to this previous Nova scheduling issue > >>> pike/implemented/placement-claims.html>. > >>> That was solved by having the scheduler claim a resource in Placement > >>> for the selected node. I don't see an analog for Cyborg, since it would > >>> not know which node is selected. > >>> > >>> Thanks in advance for suggestions and solutions. 
> >>> > >>> Regards, > >>> Sundar > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> ____________________________________________________________ ______________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >> ____________________________________________________________ ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > ____________________________________________________________ ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lijie at unitedstack.com Thu Mar 29 12:43:11 2018 From: lijie at unitedstack.com (=?utf-8?B?5p2O5p2w?=) Date: Thu, 29 Mar 2018 20:43:11 +0800 Subject: [openstack-dev] [cinder][nova] about re-image the volume Message-ID: Hi, all. This is the spec [0] about rebuilding a volume-backed server. The question raised in the spec is how to handle the root volume. Finally, in the Nova team, we think that the cleanest / best solution to this is to add a volume action API to Cinder for re-imaging the volume. Once that is available in a new Cinder v3 microversion, Nova can use it. The reason I think this should be done in Cinder, with the volume re-imaged there, is that (1) it's cleaner from the Nova side and (2) Cinder is then in control of how that re-image should happen, along with any details it needs to update; e.g., the volume's "volume_image_metadata" information would need to be updated. We really aren't suited to do the volume create/delete/swap orchestration, since that entails issues with the volume type being gone, going over quota, what to do about deleting the old volume, etc. So the Nova team wants Cinder to provide the re-image API. But I see a spec about volume revert by snapshot [1], which is very good for the rebuild operation. In short, I have two ideas: one is to change the volume-revert-by-snapshot spec into a re-image spec, so that it can not only revert the volume by snapshot but also re-image a volume whose image size is greater than 0; the other idea is to add a re-image-only spec, which can only re-image a volume whose image size is greater than 0. What do you think of the two ideas? Any suggestion is welcome. Thank you! Note: the instance snapshot for an image-backed server has an image size greater than 0, but a volume-backed server's image size is equal to 0. Re: [0] https://review.openstack.org/#/c/532407/ [1] https://specs.openstack.org/openstack/cinder-specs/specs/pike/cinder-volume-revert-by-snapshot.html Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sean.mcginnis at gmx.com Thu Mar 29 12:47:40 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Mar 2018 07:47:40 -0500 Subject: Re: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1522270707.4003.32.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> <20180328191443.GA26845@sm-xps> <1522270707.4003.32.camel@redhat.com> Message-ID: <20180329124740.GA45069@smcginnis-mbp.local> > > > > It's not mentioned here, but I discovered today that Cinder is using the > > sphinx.ext.autodoc module. Is there any issue with using this? > > > > Nope - sphinx-apidoc and the likes use autodoc under the hood. You can > see this by checking the output in 'contributor/api' or the likes. > > Stephen > I'm wondering if there is a problem with using this vs. the way being proposed. In other words, do we need to switch over to this new sphinxcontrib module, or is staying with autodoc OK? And if so, why not switch current users of the pbr method over to use sphinx.ext.autodoc rather than introducing something new? From edmondsw at us.ibm.com Thu Mar 29 12:53:38 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Thu, 29 Mar 2018 07:53:38 -0500 Subject: Re: [openstack-dev] [nova] VMware NSX CI - no longer running? In-Reply-To: References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com> Message-ID: melanie witt wrote on 03/29/2018 06:03:26 AM: > I would like to see the VMware CI running again and it need only run on > changes under the nova/virt/vmwareapi/ tree, to save on your resources. > And on our side, I'd like us to add VMware subteam members to VMware > driver patch reviews (I believe most of the active team members are > listed on the priorities etherpad [0]) and to be sure we consult VMware > CI votes when we review. Running only on virt/vmwareapi changes would not catch problems caused by changes elsewhere, such as compute/manager.py or virt/driver.py. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Mar 29 12:55:11 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Mar 2018 07:55:11 -0500 Subject: Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT In-Reply-To: <1522280173-sup-6848@lrrr.local> References: <87woxwymr1.fsf@meyer.lemoncheese.net> <1522280173-sup-6848@lrrr.local> Message-ID: <20180329125511.GB45069@smcginnis-mbp.local> On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote: > Excerpts from corvus's message of 2018-03-28 13:21:38 -0700: > > Hi, > > > > I've proposed a change to devstack which slightly alters the > > LIBS_FROM_GIT behavior. This shouldn't be a significant change for > > those using legacy devstack jobs (but you may want to be aware of it). > > It is more significant for new-style devstack jobs. > > > > -snip- > > > > How does this apply to uses of devstack outside of zuul, such as in a > local development environment? > > Doug > This is my question too. I know in Cinder there are a lot of third party CI systems that do not use zuul. If they are impacted in any way by changes to devstack, we will need to make sure they are all aware of those changes (and have an alternative method for them to get the same functionality).
Sean From doug at doughellmann.com Thu Mar 29 13:20:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 29 Mar 2018 09:20:04 -0400 Subject: [openstack-dev] [requirements] Adding objgraph to global requirements In-Reply-To: References: <31ade43d-37f7-4cdd-82ff-50b069491aac@Spark> Message-ID: <1522329549-sup-5818@lrrr.local> Excerpts from Renat Akhmerov's message of 2018-03-29 15:33:58 +0700: > After some discussion in IRC on this topic there was an idea just to write and push upstream needed tools using objgraph w/o having it in the requirements.txt at all. We just need to make sure that those tools are never used during production runs and unit tests (CI will help to verify that). If needed, objgraph can be manually installed used when we need to investigate something. > > If such practice is considered OK and doesn’t violate any OpenStack guidelines then I think this would work, at least in my case. I don't see any problem with that but I'm also not necessarily opposed to adding it to the global requirements list so we can use it like other dependencies. What sorts of tools are you talking about building? > > Thanks > > Renat Akhmerov > @Nokia > > On 29 Mar 2018, 15:00 +0700, Renat Akhmerov , wrote: > > Hi, > > > > Can we consider to add objgraph [1] to OpenStack global requirements? I found this library extremely useful for investigating memory leaks in Python programs but unfortunately I can’t push upstream any code using it. It seems to be pretty mature and supports all needed Python versions. > > > > Or maybe there’s some alternative already available in the OpenStack requirements? > > > > [1] https://pypi.python.org/pypi/objgraph/3.4.0 > > > > > > Thanks > > > > Renat Akhmerov > > @Nokia From openstack at fried.cc Thu Mar 29 13:24:18 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 29 Mar 2018 08:24:18 -0500 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: <7bb9a029-dccd-e92f-0a4b-cdc528ccc71a@intel.com> References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc> <7bb9a029-dccd-e92f-0a4b-cdc528ccc71a@intel.com> Message-ID: <084c6a9d-7760-9792-76a6-29f020956627@fried.cc> Sundar- To be clear, *all* of the solutions will have race conditions. There's no getting around the fact that we need to account for situations where an allocation is made, but then can't be satisfied by cyborg (or neutron, or nova, or cinder, or whoever). That failure has to bubble up and cause retry or failure of the overarching flow. The objection to "dynamic trait setting" is that traits are intended to indicate characteristics, not states. https://www.google.com/search?q=estar+vs+ser I'll have to let Jay or Dan explain it further. Because TBH, I don't see the harm in mucking with traits/inventories dynamically. The solutions I discussed here are if it's critical that everything be dynamic and ultimately flexible. Alex brings up a different option in another subthread which is more likely how we're going to handle this for our Nova scenarios in Rocky. I'll comment further in that subthread. -efried On 03/28/2018 06:03 PM, Nadathur, Sundar wrote: > Thanks, Eric. Looks like there are no good solutions even as candidates, > but only options with varying levels of unacceptability. It is funny > that that the option that is considered the least unacceptable is to let > the problem happen and then fail the request (last one in your list). 
> > Could I ask what is the objection to the scheme that applies multiple > traits and removes one as needed, apart from the fact that it has races? > > Regards, > Sundar > > On 3/28/2018 11:48 AM, Eric Fried wrote: >> Sundar- >> >>     We're running across this issue in several places right now.   One >> thing that's definitely not going to get traction is >> automatically/implicitly tweaking inventory in one resource class when >> an allocation is made on a different resource class (whether in the same >> or different RPs). >> >>     Slightly less of a nonstarter, but still likely to get significant >> push-back, is the idea of tweaking traits on the fly.  For example, your >> vGPU case might be modeled as: >> >> PGPU_RP: { >>    inventory: { >>        CUSTOM_VGPU_TYPE_A: 2, >>        CUSTOM_VGPU_TYPE_B: 4, >>    } >>    traits: [ >>        CUSTOM_VGPU_TYPE_A_CAPABLE, >>        CUSTOM_VGPU_TYPE_B_CAPABLE, >>    ] >> } >> >>     The request would come in for >> resources=CUSTOM_VGPU_TYPE_A:1&required=VGPU_TYPE_A_CAPABLE, resulting >> in an allocation of CUSTOM_VGPU_TYPE_A:1.  Now while you're processing >> that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP. >> So it doesn't matter that there's still inventory of >> CUSTOM_VGPU_TYPE_B:4, because a request including >> required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP. >> There's of course a window between when the initial allocation is made >> and when you tweak the trait list.  In that case you'll just have to >> fail the loser.  This would be like any other failure in e.g. the spawn >> process; it would bubble up, the allocation would be removed; retries >> might happen or whatever. >> >>     Like I said, you're likely to get a lot of resistance to this idea as >> well.  (Though TBH, I'm not sure how we can stop you beyond -1'ing your >> patches; there's nothing about placement that disallows it.) >> >>     The simple-but-inefficient solution is simply that we'd still be able >> to make allocations for vGPU type B, but you would have to fail right >> away when it came down to cyborg to attach the resource.  Which is code >> you pretty much have to write anyway.  It's an improvement if cyborg >> gets to be involved in the post-get-allocation-candidates >> weighing/filtering step, because you can do that check at that point to >> help filter out the candidates that would fail.  Of course there's still >> a race condition there, but it's no different than for any other >> resource. >> >> efried >> >> On 03/28/2018 12:27 PM, Nadathur, Sundar wrote: >>> Hi Eric and all, >>>      I should have clarified that this race condition happens only for >>> the case of devices with multiple functions. There is a prior thread >>> >>> >>> about it. I was trying to get a solution within Cyborg, but that faces >>> this race condition as well. >>> >>> IIUC, this situation is somewhat similar to the issue with vGPU types >>> >>> >>> (thanks to Alex Xu for pointing this out). In the latter case, we could >>> start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, after >>> consuming a unit of  vGPU-type-a, ideally the inventory should change >>> to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators, >>> we start with an RP inventory of (region-type-A: 1, function-X: 4). But, >>> after consuming a unit of that function, ideally the inventory should >>> change to: (region-type-A: 0, function-X: 3). 
>>> >>> I understand that this approach is controversial :) Also, one difference >>> from the vGPU case is that the number and count of vGPU types is static, >>> whereas with FPGAs, one could reprogram it to result in more or fewer >>> functions. That said, we could hopefully keep this analogy in mind for >>> future discussions. >>> >>> We probably will not support multi-function accelerators in Rocky. This >>> discussion is for the longer term. >>> >>> Regards, >>> Sundar >>> >>> On 3/23/2018 12:44 PM, Eric Fried wrote: >>>> Sundar- >>>> >>>>     First thought is to simplify by NOT keeping inventory >>>> information in >>>> the cyborg db at all.  The provider record in the placement service >>>> already knows the device (the provider ID, which you can look up in the >>>> cyborg db) the host (the root_provider_uuid of the provider >>>> representing >>>> the device) and the inventory, and (I hope) you'll be augmenting it >>>> with >>>> traits indicating what functions it's capable of.  That way, you'll >>>> always get allocation candidates with devices that *can* load the >>>> desired function; now you just have to engage your weigher to >>>> prioritize >>>> the ones that already have it loaded so you can prefer those. >>>> >>>>     Am I missing something? >>>> >>>>         efried >>>> >>>> On 03/22/2018 11:27 PM, Nadathur, Sundar wrote: >>>>> Hi all, >>>>>      There seems to be a possibility of a race condition in the >>>>> Cyborg/Nova flow. Apologies for missing this earlier. (You can >>>>> refer to >>>>> the proposed Cyborg/Nova spec >>>>> >>>>> >>>>> for details.) >>>>> >>>>> Consider the scenario where the flavor specifies a resource class >>>>> for a >>>>> device type, and also specifies a function (e.g. encrypt) in the extra >>>>> specs. The Nova scheduler would only track the device type as a >>>>> resource, and Cyborg needs to track the availability of functions. >>>>> Further, to keep it simple, say all the functions exist all the >>>>> time (no >>>>> reprogramming involved). >>>>> >>>>> To recap, here is the scheduler flow for this case: >>>>> >>>>>    * A request spec with a flavor comes to Nova >>>>> conductor/scheduler. The >>>>>      flavor has a device type as a resource class, and a function >>>>> in the >>>>>      extra specs. >>>>>    * Placement API returns the list of RPs (compute nodes) which >>>>> contain >>>>>      the requested device types (but not necessarily the function). >>>>>    * Cyborg will provide a custom filter which queries Cyborg DB. This >>>>>      needs to check which hosts contain the needed function, and >>>>> filter >>>>>      out the rest. >>>>>    * The scheduler selects one node from the filtered list, and the >>>>>      request goes to the compute node. >>>>> >>>>> For the filter to work, the Cyborg DB needs to maintain a table with >>>>> triples of (host, function type, #free units). The filter checks if a >>>>> given host has one or more free units of the requested function type. >>>>> But, to keep the # free units up to date, Cyborg on the selected >>>>> compute >>>>> node needs to notify the Cyborg API to decrement the #free units >>>>> when an >>>>> instance is spawned, and to increment them when resources are >>>>> released. >>>>> >>>>> Therein lies the catch: this loop from the compute node to >>>>> controller is >>>>> susceptible to race conditions. 
For example, if two simultaneous >>>>> requests each ask for function A, and there is only one unit of that >>>>> available, the Cyborg filter will approve both, both may land on the >>>>> same host, and one will fail. This is because Cyborg on the controller >>>>> does not decrement resource usage due to one request before processing >>>>> the next request. >>>>> >>>>> This is similar to this previous Nova scheduling issue >>>>> . >>>>> >>>>> That was solved by having the scheduler claim a resource in Placement >>>>> for the selected node. I don't see an analog for Cyborg, since it >>>>> would >>>>> not know which node is selected. >>>>> >>>>> Thanks in advance for suggestions and solutions. >>>>> >>>>> Regards, >>>>> Sundar >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu Mar 29 14:24:34 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 29 Mar 2018 09:24:34 -0500 Subject: [openstack-dev] [nova] VMware NSX CI - no longer running? In-Reply-To: References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com> <0DDE1027-ECC0-4B45-BF2C-AA62352C7F67@vmware.com> Message-ID: On 3/29/2018 5:19 AM, melanie witt wrote: > Thanks. Just curious, how is the CI passing if the driver is currently > broken for detach_volume? I had thought maybe particular tests were > skipped in response to my original email that linked the bug fix patch, > but it looks like that run was from before I sent the original email. I had the same question, and it looks like the tests that failed in [1] aren't being run in [2]. [1] https://review.openstack.org/#/c/549411/ [2] https://review.openstack.org/#/c/557256/ -- Thanks, Matt From mriedemos at gmail.com Thu Mar 29 14:26:54 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 29 Mar 2018 09:26:54 -0500 Subject: [openstack-dev] [nova] VMware NSX CI - no longer running? 
In-Reply-To: References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com> Message-ID: <6fd9592a-1ae2-e8b8-12be-c3aa04ab57bb@gmail.com> On 3/29/2018 7:53 AM, William M Edmonds wrote: > Running only on virt/vmwareapi changes would not catch problems caused > by changes elsewhere, such as compute/manager.py or virt/driver.py. Right, I think virt driver 3rd party CI should run on at least some select sub-trees; the major ones that come to mind are: nova/compute/manager.py nova/virt/ nova/virt/block_device.py There are likely others. -- Thanks, Matt From sfinucan at redhat.com Thu Mar 29 14:26:54 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 29 Mar 2018 15:26:54 +0100 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <20180329124740.GA45069@smcginnis-mbp.local> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> <20180328191443.GA26845@sm-xps> <1522270707.4003.32.camel@redhat.com> Message-ID: <1522333614.3554.11.camel@redhat.com> On Thu, 2018-03-29 at 07:47 -0500, Sean McGinnis wrote: > > > > > > It's not mentioned here, but I discovered today that Cinder is using the > > > sphinx.ext.autodoc module. Is there any issue with using this? > > > > > > > Nope - sphinx-apidoc and the likes use autodoc under the hood. You can > > see this by checking the output in 'contributor/api' or the likes. > > > > Stephen > > > > I'm wondering if there is a problem with using this vs. the way being proposed. > > In other words, do we need to switch over to this new sphinxcontrib module, or > is staying with autodoc OK? And if so, why not switch current users of > the pbr method over to use sphinx.ext.autodoc rather than introducing something > new? tl;dr: You don't _have_ to automate this stuff, but it helps. sphinx-apidoc generates stub files containing a whole load of autodoc directives. As noted above, you can check the output of a sphinx-apidoc run and you'll see just this. If I were to guess, Cinder simply checked the output of such a run [*] into Git, meaning they don't need to run it each time. This works but it comes with the downside that your docs can and will get out of sync with the actual code when, for example, you either add or remove some modules or functions. Running sphinx-apidoc on each build, as we've been doing with pbr's autodoc feature, ensures this out-of-sync issue doesn't happen, at the expense of increased doc build times. Stephen [*] They might also have handwritten this stuff, but I highly doubt that (it's rather tedious to write).
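For anyone who wants to try this locally, a small sketch of both approaches. The package name and paths are placeholders, and this assumes the sphinxcontrib module in question is sphinxcontrib-apidoc:

```python
# doc/source/conf.py -- sketch only; package name and paths are placeholders.
# The equivalent one-off CLI run (whose output would then be checked into
# git, and go stale) is:
#     sphinx-apidoc -o doc/source/contributor/api cinder
extensions = [
    'sphinx.ext.autodoc',      # renders the generated stub directives
    'sphinxcontrib.apidoc',    # regenerates the stubs on every build
]
apidoc_module_dir = '../../cinder'       # package to document
apidoc_output_dir = 'contributor/api'    # where the stub files land
apidoc_excluded_paths = ['tests']        # skip test modules
```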
From sean.mcginnis at gmx.com Thu Mar 29 14:28:13 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Mar 2018 09:28:13 -0500 Subject: Re: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: References: Message-ID: <20180329142813.GA25762@sm-xps> > This is the spec [0] about rebuilding a volume-backed server. > The question raised in the spec is how to handle the root volume. > Finally, in the Nova team, we think that the cleanest / best solution to this is to > add a volume action API to Cinder for re-imaging the volume. Once that is > available in a new Cinder v3 microversion, Nova can use it. The reason I > ... > So the Nova team wants Cinder to provide the re-image API. But I see a spec > about volume revert by snapshot [1], which is very good for the rebuild operation. > In short, I have two ideas: one is to change the volume-revert-by-snapshot spec > into a re-image spec, so that it can not only revert the volume by snapshot but also > re-image a volume whose image size is greater than 0; the other idea is to add a > re-image-only spec, which can only re-image a volume whose image size is greater than 0. > I do not think changing the revert to snapshot implementation is appropriate here. There may be some cases where this can get the desired result, but there is no guarantee that there is a snapshot of the volume's base image state to revert to. It also would not make sense to overload this functionality to "revert to snapshot if you can, otherwise do all this other stuff instead." This would need to be a new API (microversioned) to add a reimage call. I wouldn't expect implementation to be too difficult as we already have that functionality for new volumes. We would just need to figure out the most appropriate way to take an already in-use volume, detach it, rewrite the image, then reattach it. Ideally, from my perspective, Nova would take care of the detach/attach portion and Cinder would only need to take care of imaging the volume. Sean From mriedemos at gmail.com Thu Mar 29 14:45:27 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 29 Mar 2018 09:45:27 -0500 Subject: Re: [openstack-dev] [nova] VMware NSX CI - no longer running? In-Reply-To: <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com> References: <963d554e-3700-5bea-a526-8751e65c7041@gmail.com> <6569c4c7-95a8-fd15-afd4-0090280e2bdd@vmware.com> Message-ID: <2cb4c6c0-6786-784e-b7a1-2a94b39a7d9e@gmail.com> On 3/29/2018 2:44 AM, Radoslav Gerganov wrote: > While running the VMware CI continues to be a challenge, I must say this > patch fixes a regression introduced by Matt Riedemann's patch: > > https://review.openstack.org/#/c/549411/ > > for which the VMware CI clearly indicated there was a problem and > nevertheless the core team submitted it. > Before blaming the CI for not voting enough, the core team should start > taking into account existing CI votes. > It'd be nice also to include VMware driver maintainers as reviewers when > making changes to the VMware driver. Yup, clearly my fault on that one, and I deserve the karmic hit. -- Thanks, Matt From corvus at inaugust.com Thu Mar 29 14:49:43 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 29 Mar 2018 07:49:43 -0700 Subject: Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT In-Reply-To: <20180329125511.GB45069@smcginnis-mbp.local> (Sean McGinnis's message of "Thu, 29 Mar 2018 07:55:11 -0500") References: <87woxwymr1.fsf@meyer.lemoncheese.net> <1522280173-sup-6848@lrrr.local> <20180329125511.GB45069@smcginnis-mbp.local> Message-ID: <87woxvuebc.fsf@meyer.lemoncheese.net> Sean McGinnis writes: > On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote: >> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700: >> > Hi, >> > >> > I've proposed a change to devstack which slightly alters the >> > LIBS_FROM_GIT behavior. This shouldn't be a significant change for >> > those using legacy devstack jobs (but you may want to be aware of it). >> > It is more significant for new-style devstack jobs. >> > >> > -snip- >> > >> >> How does this apply to uses of devstack outside of zuul, such as in a >> local development environment?
>> >> Doug >> > > This is my question too. I know in Cinder there are a lot of third party CI > systems that do not use zuul. If they are impacted in any way by changes to > devstack, we will need to make sure they are all aware of those changes (and > have an alternative method for them to get the same functionality). Neither local nor third-party CI use should be affected. There's no change in behavior based on current usage patterns. Only the caveat that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or non-existent package name), it will not automatically be caught. -Jim From jeremyfreudberg at gmail.com Thu Mar 29 15:16:13 2018 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Thu, 29 Mar 2018 11:16:13 -0400 Subject: [openstack-dev] [sahara] Breathing new life into external Sahara CI Message-ID: I am happy to announce that I have finally acquired two machines to be used towards our external CI infrastructure. (Thanks very much to Cisco for their generosity!) Now that we have accomplished the hard part, getting a hardware donation, we can finally move on to the next step of actually deploying the CI services. I call upon the Sahara community as a whole to assist me in this endeavor. We can use the sahara-ci-config repo as a starting point, but there are some tweaks to discuss. Best, Jeremy From esikachov at gmail.com Thu Mar 29 15:23:51 2018 From: esikachov at gmail.com (Evgeny Sikachov) Date: Thu, 29 Mar 2018 19:23:51 +0400 Subject: [openstack-dev] [sahara] Breathing new life into external Sahara CI In-Reply-To: References: Message-ID: <230e0ab5-df4a-4fc9-ad00-857b122f70a2@Spark> Cool! This is the really good news! I am ready to help On Mar 29, 2018, 7:16 PM +0400, Jeremy Freudberg , wrote: > I am happy to announce that I have finally acquired two machines to be > used towards our external CI infrastructure. (Thanks very much to > Cisco for their generosity!) > > Now that we have accomplished the hard part, getting a hardware > donation, we can finally move on to the next step of actually > deploying the CI services. I call upon the Sahara community as a whole > to assist me in this endeavor. We can use the sahara-ci-config repo as > a starting point, but there are some tweaks to discuss. > > Best, > Jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Mar 29 15:54:29 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Mar 2018 10:54:29 -0500 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1522333614.3554.11.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <20180321145716.GA23250@sm-xps> <1521715425.17048.8.camel@redhat.com> <20180328191443.GA26845@sm-xps> <1522270707.4003.32.camel@redhat.com> <20180329124740.GA45069@smcginnis-mbp.local> <1522333614.3554.11.camel@redhat.com> Message-ID: <20180329155429.GA30128@sm-xps> > > tl;dr: You don't _have_ to automate this stuff, but it helps. > > sphinx-apidoc generates stub files containing a whole load of autodoc > directives. As noted above, you can check the output of a sphinx-apidoc > run and you'll see just this. If I were to guess, Cinder simply checked > in the output of such a run [*] into Git, meaning they don't need to > run it each time. This works but it comes with the downside that your > docs can and will get out of sync with the actual code when, for > example, you either add or remove some modules or functions. 
Running > sphinx-apidoc on each build, as we've been doing with pbr's autodoc > feature, ensures this out-of-sync issue doesn't happen, at the expense > of increased doc build times. > > Stephen > > [*] They might also have handwritten this stuff, but I highly doubt > that (it's rather tedious to write). > Ah, perfect. This was the motivation I was looking for. I don't think anyone on the team is aware of this. In this case, switching over to using the new, dynamically generated way has a lot of benefit. Looking deeper, there are some custom things being done to generate this output. So rather than maintaining (or really not maintaining as is the reality here) this custom code, we should streamline this and be consistent by following the new approach. From sean.mcginnis at gmx.com Thu Mar 29 16:00:01 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Mar 2018 11:00:01 -0500 Subject: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT In-Reply-To: <87woxvuebc.fsf@meyer.lemoncheese.net> References: <87woxwymr1.fsf@meyer.lemoncheese.net> <1522280173-sup-6848@lrrr.local> <20180329125511.GB45069@smcginnis-mbp.local> <87woxvuebc.fsf@meyer.lemoncheese.net> Message-ID: <20180329160000.GB30128@sm-xps> > > Neither local nor third-party CI use should be affected. There's no > change in behavior based on current usage patterns. Only the caveat > that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or > non-existent package name), it will not automatically be caught. > > -Jim Perfect, thanks Jim. From sean.mcginnis at gmx.com Thu Mar 29 16:13:02 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Mar 2018 11:13:02 -0500 Subject: [openstack-dev] [release] Release countdown for week R-21, April 2-6 Message-ID: <20180329161302.GA31605@sm-xps> Welcome to our regular release countdown email. Development Focus ----------------- Team focus should be on spec approval and implementation for priority features. General Information ------------------- We would love to have all the liaisons attend the release team meeting every Friday[1]. [1] http://eavesdrop.openstack.org/#Release_Team_Meeting That said, we are skipping this week's meeting due to availability with the Easter weekend. :) But we would also like to have PTLs and/or release liaison's in the #openstack-release channel on milestone and release weeks. Please keep that in mind and try to linger there during these times if possible. From time to time it is necessary to track someone down to answer questions or resolve issues. It would be great to have someone easily pingable. I also want to make sure everyone is aware of the proposed changes to the stable policy in support of the "Extended Maintenance" changes [2]. Please take a look so you are aware of the planned changes, and please chime in if you have any issues, questions, or concerns about the proposal. 
[2] https://review.openstack.org/#/c/552733/ Upcoming Deadlines & Dates -------------------------- Rocky-1 milestone: April 19 (R-19 week) Forum at OpenStack Summit in Vancouver: May 21-24 -- Sean McGinnis (smcginnis) From dpeacock at redhat.com Thu Mar 29 16:33:45 2018 From: dpeacock at redhat.com (David Peacock) Date: Thu, 29 Mar 2018 12:33:45 -0400 Subject: [openstack-dev] [TripleO] Prototyping dedicated roles with unique repositories for Ansible tasks in TripleO Message-ID: Hi everyone, During the recent PTG in Dublin, it was decided that we'd prototype a way forward with Ansible tasks in TripleO that adhere to Ansible best practises, creating dedicated roles with unique git repositories and RPM packaging per role. With a view to moving in this direction, a couple of us on the TripleO team have begun developing tooling to facilitate this. Initially we've worked on a tool [0] to extract Ansible tasks lists from tripleo-heat-templates and move them into new formally structured Ansible roles. An example with the existing keystone docker service [1]: The upgrade_tasks block will become: ``` upgrade_tasks: - import_role: name: tripleo-role-keystone tasks_from: upgrade.yaml ``` The fast_forward_upgrade_tasks block will become: ``` fast_forward_upgrade_tasks: - import_role: name: tripleo-role-keystone tasks_from: fast_forward_upgrade.yaml ``` And this role [2] will be structured: ``` tripleo-role-keystone/ └── tasks ├── fast_forward_upgrade.yaml ├── main.yaml └── upgrade.yaml ``` We'd love to hear any feedback from the community as we move towards this. Thank you, David Peacock [0] https://github.com/davidjpeacock/openstack-role-extract/blob/master/role-extractor-creator.py [1] https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/keystone.yaml [2] https://github.com/davidjpeacock/tripleo-role-keystone -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Thu Mar 29 16:43:15 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 29 Mar 2018 11:43:15 -0500 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc> Message-ID: <7475d530-9800-35bc-711d-3ba91b71a7d1@fried.cc> We discussed this on IRC [1], hangout, and etherpad [2]. Here is the summary, which we mostly seem to agree on: There are two different classes of device we're talking about modeling/managing. (We don't know the real nomenclature, so forgive errors in that regard.) ==> Fully dynamic: You can program one region with one function, and then still program a different region with a different function, etc. ==> Single program: Once you program the card with a function, *all* its virtual slots are *only* capable of that function until the card is reprogrammed. And while any slot is in use, you can't reprogram. This is Sundar's FPGA use case. It is also Sylvain's VGPU use case. The "fully dynamic" case is straightforward (in the sense of being what placement was architected to handle). * Model the PF/region as a resource provider. * The RP has inventory of some generic resource class (e.g. "VGPU", "SRIOV_NET_VF", "FPGA_FUNCTION"). Allocations consume that inventory, plain and simple. * As a region gets programmed dynamically, it's acceptable for the thing doing the programming to set a trait indicating that that function is in play. 
(Sundar, this is the thing I originally said would get resistance; but we've agreed it's okay. No blood was shed :) * Requests *may* use preferred traits to help them land on a card that already has their function flashed on it. (Prerequisite: preferred traits, which can be implemented in placement. Candidates with the most preferred traits get sorted highest.) The "single program" case needs to be handled more like what Alex describes below. TL;DR: We do *not* support dynamic programming, traiting, or inventorying at instance boot time - it all has to be done "up front". * The PFs can be initially modeled as "empty" resource providers. Or maybe not at all. Either way, *they can not be deployed* in this state. * An operator or admin (via a CLI, config file, agent like blazar or cyborg, etc.) preprograms the PF to have the specific desired function/configuration. * This may be cyborg/blazar pre-programming devices to maintain an available set of each function * This may be in response to a user requesting some function, which causes a new image to be laid down on a device so it will be available for scheduling * This may be a human doing it at cloud-build time * This results in the resource provider being (created and) set up with the inventory and traits appropriate to that function. * Now deploys can happen, using required traits representing the desired function. -efried [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-29.log.html#t2018-03-29T12:52:56 [2] https://etherpad.openstack.org/p/placement-dynamic-traiting On 03/29/2018 07:38 AM, Alex Xu wrote: > Agree with that, whatever the tweak inventory or traits, none of them works. > > Same as VGPU, we can support pre-programmed mode for multiple-functions > region, and each region only can support one type function. > > There are two reasons why Cyborg has a filter: > * records the usage of functions in a region > * records which function is programmed. > > For #1, each region provider multiple functions. Each function can be > assigned to a VM. So we should create ResourceProvider for the region. And > the resource class is function. That is similar to the SR-IOV device. > The region(The PF) > provides functions (VFs). > > For #2, We should use trait to distinguish the function type. > > Then we didn't keep any inventory info in the cyborg again, and we > needn't any filter in cyborg also, > and there is no race condition anymore. > > 2018-03-29 2:48 GMT+08:00 Eric Fried >: > > Sundar- > >         We're running across this issue in several places right > now.   One > thing that's definitely not going to get traction is > automatically/implicitly tweaking inventory in one resource class when > an allocation is made on a different resource class (whether in the same > or different RPs). > >         Slightly less of a nonstarter, but still likely to get > significant > push-back, is the idea of tweaking traits on the fly.  For example, your > vGPU case might be modeled as: > > PGPU_RP: { >   inventory: { >       CUSTOM_VGPU_TYPE_A: 2, >       CUSTOM_VGPU_TYPE_B: 4, >   } >   traits: [ >       CUSTOM_VGPU_TYPE_A_CAPABLE, >       CUSTOM_VGPU_TYPE_B_CAPABLE, >   ] > } > >         The request would come in for > resources=CUSTOM_VGPU_TYPE_A:1&required=VGPU_TYPE_A_CAPABLE, resulting > in an allocation of CUSTOM_VGPU_TYPE_A:1.  Now while you're processing > that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the PGPU_RP. 
> So it doesn't matter that there's still inventory of > CUSTOM_VGPU_TYPE_B:4, because a request including > required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP. > There's of course a window between when the initial allocation is made > and when you tweak the trait list.  In that case you'll just have to > fail the loser.  This would be like any other failure in e.g. the spawn > process; it would bubble up, the allocation would be removed; retries > might happen or whatever. > >         Like I said, you're likely to get a lot of resistance to > this idea as > well.  (Though TBH, I'm not sure how we can stop you beyond -1'ing your > patches; there's nothing about placement that disallows it.) > >         The simple-but-inefficient solution is simply that we'd > still be able > to make allocations for vGPU type B, but you would have to fail right > away when it came down to cyborg to attach the resource.  Which is code > you pretty much have to write anyway.  It's an improvement if cyborg > gets to be involved in the post-get-allocation-candidates > weighing/filtering step, because you can do that check at that point to > help filter out the candidates that would fail.  Of course there's still > a race condition there, but it's no different than for any other > resource. > > efried > > On 03/28/2018 12:27 PM, Nadathur, Sundar wrote: > > Hi Eric and all, > >     I should have clarified that this race condition happens only for > > the case of devices with multiple functions. There is a prior thread > > > > > > about it. I was trying to get a solution within Cyborg, but that faces > > this race condition as well. > > > > IIUC, this situation is somewhat similar to the issue with vGPU types > > > > > > (thanks to Alex Xu for pointing this out). In the latter case, we > could > > start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4).  But, > after > > consuming a unit of  vGPU-type-a, ideally the inventory should change > > to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function > accelerators, > > we start with an RP inventory of (region-type-A: 1, function-X: > 4). But, > > after consuming a unit of that function, ideally the inventory should > > change to: (region-type-A: 0, function-X: 3). > > > > I understand that this approach is controversial :) Also, one > difference > > from the vGPU case is that the number and count of vGPU types is > static, > > whereas with FPGAs, one could reprogram it to result in more or fewer > > functions. That said, we could hopefully keep this analogy in mind for > > future discussions. > > > > We probably will not support multi-function accelerators in Rocky. > This > > discussion is for the longer term. > > > > Regards, > > Sundar > > > > On 3/23/2018 12:44 PM, Eric Fried wrote: > >> Sundar- > >> > >>      First thought is to simplify by NOT keeping inventory > information in > >> the cyborg db at all.  The provider record in the placement service > >> already knows the device (the provider ID, which you can look up > in the > >> cyborg db) the host (the root_provider_uuid of the provider > representing > >> the device) and the inventory, and (I hope) you'll be augmenting > it with > >> traits indicating what functions it's capable of.  That way, you'll > >> always get allocation candidates with devices that *can* load the > >> desired function; now you just have to engage your weigher to > prioritize > >> the ones that already have it loaded so you can prefer those. > >> > >>      Am I missing something? 
> >> > >>              efried > >> > >> On 03/22/2018 11:27 PM, Nadathur, Sundar wrote: > >>> Hi all, > >>>     There seems to be a possibility of a race condition in the > >>> Cyborg/Nova flow. Apologies for missing this earlier. (You can > refer to > >>> the proposed Cyborg/Nova spec > >>> > > > >>> for details.) > >>> > >>> Consider the scenario where the flavor specifies a resource > class for a > >>> device type, and also specifies a function (e.g. encrypt) in the > extra > >>> specs. The Nova scheduler would only track the device type as a > >>> resource, and Cyborg needs to track the availability of functions. > >>> Further, to keep it simple, say all the functions exist all the > time (no > >>> reprogramming involved). > >>> > >>> To recap, here is the scheduler flow for this case: > >>> > >>>   * A request spec with a flavor comes to Nova > conductor/scheduler. The > >>>     flavor has a device type as a resource class, and a function > in the > >>>     extra specs. > >>>   * Placement API returns the list of RPs (compute nodes) which > contain > >>>     the requested device types (but not necessarily the function). > >>>   * Cyborg will provide a custom filter which queries Cyborg DB. > This > >>>     needs to check which hosts contain the needed function, and > filter > >>>     out the rest. > >>>   * The scheduler selects one node from the filtered list, and the > >>>     request goes to the compute node. > >>> > >>> For the filter to work, the Cyborg DB needs to maintain a table with > >>> triples of (host, function type, #free units). The filter checks > if a > >>> given host has one or more free units of the requested function > type. > >>> But, to keep the # free units up to date, Cyborg on the selected > compute > >>> node needs to notify the Cyborg API to decrement the #free units > when an > >>> instance is spawned, and to increment them when resources are > released. > >>> > >>> Therein lies the catch: this loop from the compute node to > controller is > >>> susceptible to race conditions. For example, if two simultaneous > >>> requests each ask for function A, and there is only one unit of that > >>> available, the Cyborg filter will approve both, both may land on the > >>> same host, and one will fail. This is because Cyborg on the > controller > >>> does not decrement resource usage due to one request before > processing > >>> the next request. > >>> > >>> This is similar to this previous Nova scheduling issue > >>> > >. > >>> That was solved by having the scheduler claim a resource in > Placement > >>> for the selected node. I don't see an analog for Cyborg, since > it would > >>> not know which node is selected. > >>> > >>> Thanks in advance for suggestions and solutions. 
> >>> > >>> Regards, > >>> Sundar > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > >>> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Thu Mar 29 17:01:39 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 29 Mar 2018 13:01:39 -0400 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: <7bb9a029-dccd-e92f-0a4b-cdc528ccc71a@intel.com> References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc> <7bb9a029-dccd-e92f-0a4b-cdc528ccc71a@intel.com> Message-ID: <90e1b624-3e8b-a0e9-25ea-f47b5a6d2ffe@gmail.com> On 03/28/2018 07:03 PM, Nadathur, Sundar wrote: > Thanks, Eric. Looks like there are no good solutions even as candidates, > but only options with varying levels of unacceptability. It is funny > that that the option that is considered the least unacceptable is to let > the problem happen and then fail the request (last one in your list). > > Could I ask what is the objection to the scheme that applies multiple > traits and removes one as needed, apart from the fact that it has races? The fundamental objection that I've had to various discussions that involve abusing traits in this fashion is that you are essentially trying to "consume" traits. But traits are *not consumable things*. Only resource classes are consumable things. If you want to track the inventory of a certain thing -- and consume those things during scheduling -- then you need to use resource classes for that thing. The inventory management system in placement already has race protections in it. This means that you won't be able to over-allocate a particular consumable accelerated function if there isn't inventory capacity for that particular function on an FPGA. Likewise, you would not be able to *remove* inventory for a particular function on an FPGA if some instance is consuming that particular function. 
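A minimal sketch of that inventory-based approach against the placement REST API. The URL, token, and UUID are placeholders; the routes and the generation-based 409 conflict behavior are the race protection referred to above:

```python
# Sketch: when a function is flashed to an empty region, add inventory of
# a custom resource class representing that function on the provider.
import requests

PLACEMENT = 'http://placement.example.com/placement'  # placeholder
HEADERS = {'X-Auth-Token': '<token>',  # placeholder
           'OpenStack-API-Version': 'placement 1.15'}

def add_function_inventory(rp_uuid, resource_class, total):
    # Custom resource classes (CUSTOM_*) must exist before they are used.
    requests.put(PLACEMENT + '/resource_classes/' + resource_class,
                 headers=HEADERS)
    rp = requests.get(PLACEMENT + '/resource_providers/' + rp_uuid,
                      headers=HEADERS).json()
    resp = requests.put(
        PLACEMENT + '/resource_providers/%s/inventories/%s'
        % (rp_uuid, resource_class),
        json={'resource_provider_generation': rp['generation'],
              'total': total},
        headers=HEADERS)
    # 409 means the provider was updated (or consumed from) concurrently;
    # the caller re-fetches and retries rather than clobbering state.
    return resp.status_code

add_function_inventory('<fpga-rp-uuid>', 'CUSTOM_FPGA_FUNCTION_X', 4)
```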
This protection does *not* exist if you are tracking particular functions with traits; the reason is that an instance doesn't *consume* a trait. There's no such thing as "I started an instance with accelerated function X and therefore I am consuming trait Y on this FPGA."

So, bottom line for me: make sure we're using resource classes for consumable items and traits for representing non-consumable capabilities **of the resource provider**.

That means that for the (re)-programming scenarios you need to dynamically adjust the inventory of a particular FPGA resource provider. You will need to *add* an inventory item of a custom resource class representing the specific function you are flashing *to an empty region*. You *may* want to *delete* an inventory item of a custom resource class representing the specific function *when an instance that was using that specific function is terminated*.

When the instance is terminated, Nova will *automatically* delete allocations of that custom resource class associated with the instance if you use a custom resource class to represent the particular accelerated function. No such automatic removal of allocations is done if you use traits to represent particular accelerated functions (again, because traits aren't consumable things).

Best,
-jay

From mriedemos at gmail.com  Thu Mar 29 17:19:21 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 29 Mar 2018 12:19:21 -0500
Subject: [openstack-dev] [cinder][nova] about re-image the volume
In-Reply-To: <20180329142813.GA25762@sm-xps>
References: <20180329142813.GA25762@sm-xps>
Message-ID: <79dbfa81-c62e-4db8-3799-abb41c5d57e2@gmail.com>

On 3/29/2018 9:28 AM, Sean McGinnis wrote:
> I do not think changing the revert to snapshot implementation is appropriate
> here. There may be some cases where this can get the desired result, but there
> is no guarantee that there is a snapshot on the volume's base image state to
> revert to. It also would not make sense to overload this functionality to
> "revert to snapshot if you can, otherwise do all this other stuff instead."

Agree.

> This would need to be a new API (microversioned) to add a reimage call. I
> wouldn't expect implementation to be too difficult as we already have that
> functionality for new volumes. We would just need to figure out the most
> appropriate way to take an already in-use volume, detach it, rewrite the image,
> then reattach it.

Agree.

> Ideally, from my perspective, Nova would take care of the detach/attach portion
> and Cinder would only need to take care of imaging the volume.

Agree. :) And yeah, I pointed this out in the nova spec for volume-backed rebuild also. I think nova can basically handle this like it does for shelve today, and we'd do something like this:

1. disconnect the volume from the host
2. create a new empty volume attachment for the volume and instance - this is needed so the volume stays 'reserved' while we re-image it
3. delete the old volume attachment
4. call the new cinder re-image API
5. once the volume is available (TODO: how would we know?)
6. re-attach the volume by updating the attachment with the host connector, connect on the host, and complete the attachment (marks the volume as in-use again)

--

Thanks,

Matt

From mriedemos at gmail.com  Thu Mar 29 17:21:53 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 29 Mar 2018 12:21:53 -0500
Subject: [openstack-dev] [all][stable] No more stable Phases welcome Extended Maintenance
In-Reply-To: <20180329083625.GO13389@thor.bakeyournoodle.com>
References: <20180329083625.GO13389@thor.bakeyournoodle.com>
Message-ID: <30a61fef-48e5-2d4b-ab08-4dd805b0ab71@gmail.com>

On 3/29/2018 3:36 AM, Tony Breeds wrote:
> Hi all,
>     At Sydney we started the process of change on the stable branches.
> Recently we merged a TC resolution[1] to alter the EOL process.  The
> next step is refinining the stable policy itself.
>
> I've created a review to do that.  I think it covers most of the points
> from Sydney and Dublin.
>
> Please check it out:
>      https://review.openstack.org/#/c/552733/
>
> Yours Tony.
>
> [1]https://review.openstack.org/548916

+ops

--

Thanks,

Matt

From richwellum at gmail.com  Thu Mar 29 17:26:36 2018
From: richwellum at gmail.com (Richard Wellum)
Date: Thu, 29 Mar 2018 17:26:36 +0000
Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project
In-Reply-To: 
References: 
Message-ID: 

Hi,

So as a current Kolla-Kubernetes Core, I have a slightly different opinion than most; I'll try to verbalize it coherently.

Let's talk about what Kolla is:

Kolla is a project that builds OpenStack docker images, stores them on dockerhub, and provides tools to build your own images from your own source. Both the images and the tools it provides are widely used, very popular and extremely stable; TripleO, openstack-helm and kolla-ansible, to name a few, are all deployment methods that use Kolla.

Kolla has two sub-projects that both revolve around deployment methods: kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and used by many in the industry. Part of Kolla's quality is its rock-solid dependability in many scenarios. As Kubernetes took over most of the COE world, it was only correct that the Kolla team created this sub-project; if swarm suddenly became very popular then we should create a kolla-swarm sub-project.

So if we abandon kolla-kubernetes ('sunset' seems much more romantic admittedly) - we are abandoning the core Kolla team's efforts in this space. No matter how good openstack-helm is (and I've deployed it, know a lot of the cores and it's truly excellent and well driven), what happens down the line if openstack-helm decides to move on from Kolla - say focussing on Loci images or a new flavor that comes along? Then Kolla, the core project, will no longer have any validation of its docker images/containers running on Kubernetes. That to me is the big risk here.

The key issue in my opinion is that the core Kolla team has focussed on kolla-ansible almost exclusively, and has not migrated to using kolla-kubernetes as well. As the code base has stagnated, the gates get into trouble, and new features and configurations added to kolla-ansible are not translated to kolla-kubernetes.

So I think the real question is not whether we should 'sunset' kolla-kubernetes the sub-project, but should we drop Kolla support on Kubernetes? Relying on a different team to do so is probably not the answer; although it's the one championed in this thread.

In my opinion we should set some realistic goals before we sunset:
1. Pick a feature set for a Rocky v1.0 release, and commit to trying to get there. We have a long list of items; maybe pare this down to something reasonable.
2. Agreement within the Kolla core team to learn kolla-kubernetes and start to put a percentage of time into this sub-project.
3. Identify the people who are genuinely interested in working with it within the Kolla team.

Without '2' I think sunsetting is the way forward, but the risks should be fully understood and hopefully I've made a case for what those are above.

Thanks,

||Rich

On Wed, Mar 28, 2018 at 1:54 PM Chuck Short wrote:
> +1
>
> Regards
> chuck
> On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang wrote:
>
>> There are two projects to solve the issue that run OpenStack on
>> Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both
>> leverage helm tool for orchestration. There is some different perspective
>> at the beginning, which results in the two teams could not work together.
>>
>> But recently, the difference becomes too small. and there is also no active
>> contributor in the kolla-kubernetes project.
>>
>> So I propose to retire kolla-kubernetes project. If you are still
>> interested in running OpenStack on kubernetes, please refer to
>> openstack-helm project.
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmsimard at redhat.com  Thu Mar 29 17:32:22 2018
From: dmsimard at redhat.com (David Moreau Simard)
Date: Thu, 29 Mar 2018 13:32:22 -0400
Subject: [openstack-dev] [TripleO] Prototyping dedicated roles with unique repositories for Ansible tasks in TripleO
In-Reply-To: 
References: 
Message-ID: 

Nice!

I don't have a strong opinion about this but what I might recommend would be to chat with the openshift-ansible [1] and the kolla-ansible [2] folks. I'm happy to do the introductions if necessary!

Their models, requirements or context might be different than ours but at the end of the day, it's a set of Ansible roles and playbooks to install something. It would be a good idea just to informally chat about the reasons why their things are set up the way they are, what the pros and cons are, or their challenges.

I'm not saying we should structure our things like theirs. What I'm trying to say is that they've surely learned a lot over the years these projects have existed and it's surely worthwhile to chat with them so we don't repeat some of the same mistakes. Generally, just draw from their experience, learn from their conclusions and take that into account before committing to any particular model we'd like to have in TripleO?
[1]: https://github.com/openshift/openshift-ansible
[2]: https://github.com/openstack/kolla-ansible

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Thu, Mar 29, 2018, 12:34 PM David Peacock wrote:
> Hi everyone,
>
> During the recent PTG in Dublin, it was decided that we'd prototype a way
> forward with Ansible tasks in TripleO that adhere to Ansible best
> practises, creating dedicated roles with unique git repositories and RPM
> packaging per role.
>
> With a view to moving in this direction, a couple of us on the TripleO
> team have begun developing tooling to facilitate this. Initially we've
> worked on a tool [0] to extract Ansible tasks lists from
> tripleo-heat-templates and move them into new formally structured Ansible
> roles.
>
> An example with the existing keystone docker service [1]:
>
> The upgrade_tasks block will become:
>
> ```
> upgrade_tasks:
>   - import_role:
>       name: tripleo-role-keystone
>       tasks_from: upgrade.yaml
> ```
>
> The fast_forward_upgrade_tasks block will become:
>
> ```
> fast_forward_upgrade_tasks:
>   - import_role:
>       name: tripleo-role-keystone
>       tasks_from: fast_forward_upgrade.yaml
> ```
>
> And this role [2] will be structured:
>
> ```
> tripleo-role-keystone/
> └── tasks
>     ├── fast_forward_upgrade.yaml
>     ├── main.yaml
>     └── upgrade.yaml
> ```
>
> We'd love to hear any feedback from the community as we move towards this.
>
> Thank you,
> David Peacock
>
> [0] https://github.com/davidjpeacock/openstack-role-extract/blob/master/role-extractor-creator.py
> [1] https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/keystone.yaml
> [2] https://github.com/davidjpeacock/tripleo-role-keystone
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cdent+os at anticdent.org  Thu Mar 29 17:39:05 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Thu, 29 Mar 2018 18:39:05 +0100 (BST)
Subject: [openstack-dev] [all][api] POST /api-sig/news
Message-ID: 

Greetings OpenStack community,

Chaotic but fun API-SIG meeting today. elmiko has done some review of the long-in-progress microversion history doc [7] and reports that it is worth finishing and publishing as a historical document explaining why microversions exist. Having greater context on the why of microversions should help them feel like less of an imposition.

We then discussed what sort of things, if any, the API-SIG should try to talk about during the Forum in Vancouver. edleafe will seek out some SDK-related people to see if we can share some time together. If you're reading this and you have some ideas, please respond and tell us.

Then the long lost mordred returned from the land of zuul to discuss some ambiguities in how services, code and configuration use the concept of version. I have to admit that my eyes glazed over (with tears) for a moment through here, but the outcome was that the best thing to do is not surprise the users or make them do extra work. This will be encoded in a followup to the merging guidance on SDKs.

elmiko had another report on research he was doing: He thinks he may have figured out a way to deal with microversions in OpenAPI documents.
I don't want to oversell this yet, but if it works this could be combined with the pending experiments to make structured data out of existing api-ref documents [8] to auto-generate OpenAPI schema from documentation.

Then we looked at some pending guidelines (see below). One that stood out as potentially controversial is the use of service name or service type in the errors document [9].

As always, if you're interested in helping out, in addition to coming to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* Add guideline on exposing microversions in SDKs
  https://review.openstack.org/#/c/532814

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week.

# Guidelines Currently Under Review [3]

* Break up the HTTP guideline into smaller documents
  https://review.openstack.org/#/c/554234/

* Add guidance on needing cache-control headers
  https://review.openstack.org/550468

* Update the errors guidance to use service-type for code
  https://review.openstack.org/#/c/554921/

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2].

Thanks for reading and see you next week!
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://review.openstack.org/444892 [8] https://review.openstack.org/#/c/528801/ [9] https://review.openstack.org/#/c/554921/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From openstack at fried.cc Thu Mar 29 17:57:08 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 29 Mar 2018 12:57:08 -0500 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: <90e1b624-3e8b-a0e9-25ea-f47b5a6d2ffe@gmail.com> References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc> <7bb9a029-dccd-e92f-0a4b-cdc528ccc71a@intel.com> <90e1b624-3e8b-a0e9-25ea-f47b5a6d2ffe@gmail.com> Message-ID: > That means that for the (re)-programming scenarios you need to > dynamically adjust the inventory of a particular FPGA resource provider. Oh, see, this is something I had *thought* was a non-starter. This makes the "single program" case way easier to deal with, and allows it to be handled on the fly: * Model your region as a provider with separate resource classes for each function it supports. The inventory totals for each would be the total number of virtual slots (or whatever they're called) of that type that are possible when the device is flashed with that function. * An allocation is made for one unit of class X. This percolates down to cyborg to do the flashing/attaching. At this time, cyborg *deletes* the inventories for all the other resource classes. * In a race with different resource classes, whoever gets to cyborg first, wins. The second one will see that the device is already flashed with X, and fail. The failure will bubble up, causing the allocation to be released. * Requests for multiple different resource classes at once will have to filter out allocation candidates that put both on the same device. Not completely sure how this happens. Otherwise they would have to fail at cyborg, resulting in the same bubble/deallocate as above. -efried From dms at danplanet.com Thu Mar 29 18:04:26 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 29 Mar 2018 11:04:26 -0700 Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow In-Reply-To: <7475d530-9800-35bc-711d-3ba91b71a7d1@fried.cc> (Eric Fried's message of "Thu, 29 Mar 2018 11:43:15 -0500") References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc> <7475d530-9800-35bc-711d-3ba91b71a7d1@fried.cc> Message-ID: > ==> Fully dynamic: You can program one region with one function, and > then still program a different region with a different function, etc. Note that this is also the case if you don't have virtualized multi-slot devices. Like, if you had one that only has one region. Consuming it consumes the one and only inventory. 
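As a rough illustration of the "cyborg deletes the other inventories" handling Eric sketches above -- hypothetical code, not cyborg's actual implementation, with placement standing in for some placement API client:

```
# Hypothetical sketch of flash-on-first-allocation: once function X is
# flashed onto a region, drop the sibling per-function resource class
# inventories so only X remains schedulable on that provider.
def flash_and_trim(placement, rp_uuid, flashed_rc, all_function_rcs):
    for rc in all_function_rcs:
        if rc == flashed_rc:
            continue
        resp = placement.delete(
            "/resource_providers/%s/inventories/%s" % (rp_uuid, rc))
        if resp.status_code == 409:
            # An allocation already exists against rc: this is the race
            # Eric mentions, where a competing request got there first.
            # Fail so our allocation is released and the boot retried.
            raise RuntimeError("region already claimed for %s" % rc)
```

Whether the delete happens before or after the actual flash matters for the failure mode, but either way placement returning 409 for in-use inventory is what makes the outcome deterministic.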
> ==> Single program: Once you program the card with a function, *all* its
> virtual slots are *only* capable of that function until the card is
> reprogrammed.  And while any slot is in use, you can't reprogram.  This
> is Sundar's FPGA use case.  It is also Sylvain's VGPU use case.
>
> The "fully dynamic" case is straightforward (in the sense of being what
> placement was architected to handle).
> * Model the PF/region as a resource provider.
> * The RP has inventory of some generic resource class (e.g. "VGPU",
> "SRIOV_NET_VF", "FPGA_FUNCTION").  Allocations consume that inventory,
> plain and simple.
> * As a region gets programmed dynamically, it's acceptable for the thing
> doing the programming to set a trait indicating that that function is in
> play.  (Sundar, this is the thing I originally said would get
> resistance; but we've agreed it's okay.  No blood was shed :)
> * Requests *may* use preferred traits to help them land on a card that
> already has their function flashed on it.  (Prerequisite: preferred
> traits, which can be implemented in placement.  Candidates with the most
> preferred traits get sorted highest.)

Yup.

> The "single program" case needs to be handled more like what Alex
> describes below.  TL;DR: We do *not* support dynamic programming,
> traiting, or inventorying at instance boot time - it all has to be done
> "up front".
> * The PFs can be initially modeled as "empty" resource providers.  Or
> maybe not at all.  Either way, *they can not be deployed* in this state.
> * An operator or admin (via a CLI, config file, agent like blazar or
> cyborg, etc.) preprograms the PF to have the specific desired
> function/configuration.
>   * This may be cyborg/blazar pre-programming devices to maintain an
> available set of each function
>   * This may be in response to a user requesting some function, which
> causes a new image to be laid down on a device so it will be available
> for scheduling
>   * This may be a human doing it at cloud-build time
> * This results in the resource provider being (created and) set up with
> the inventory and traits appropriate to that function.
> * Now deploys can happen, using required traits representing the desired
> function.

...and it could be in response to something noticing that a recent nova boot failed to find any candidates with a particular function, which provisions that thing so it can be retried. This is kind of the "spot instances" approach -- that same workflow would work here as well, although I expect most people would fit into the above cases.

--Dan

From ed at leafe.com  Thu Mar 29 18:11:58 2018
From: ed at leafe.com (Ed Leafe)
Date: Thu, 29 Mar 2018 13:11:58 -0500
Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow
In-Reply-To: 
References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc> <7bb9a029-dccd-e92f-0a4b-cdc528ccc71a@intel.com> <90e1b624-3e8b-a0e9-25ea-f47b5a6d2ffe@gmail.com>
Message-ID: <9CF54A6A-F482-49C7-9BAD-9D242F5DADA5@leafe.com>

On Mar 29, 2018, at 12:57 PM, Eric Fried wrote:
>
>> That means that for the (re)-programming scenarios you need to
>> dynamically adjust the inventory of a particular FPGA resource provider.
>
> Oh, see, this is something I had *thought* was a non-starter.

I need to work on my communication skills. This is what I’ve been saying all along.
-- Ed Leafe From gema at ggomez.me Thu Mar 29 18:48:41 2018 From: gema at ggomez.me (Gema Gomez) Date: Thu, 29 Mar 2018 18:48:41 +0000 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: Message-ID: <0102016273172b60-c118a282-d9e1-4899-8791-e69fcbb3f8d0-000000@eu-west-1.amazonses.com> On 29/03/18 18:26, Richard Wellum wrote: > Hi, > > So as a current Kolla-Kubernetes Core - I have a slightly different > opinion than most, I'll try to verbalize it coherently. > > Lets talk about what Kolla is: > > Kolla is a project that builds OpenStack docker images, stores them on > dockerhub, and provides tools to build your own images from your own > source. Both the images and the tools it provides, are widely used, very > popular and extremely stable; TripleO, openstack-helm and kolla-ansible > to name a few are all deployment methods that use Kolla. > > Kolla has two sub-projects, that both revolve around deployment methods; > kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and > used by many in the industry. Part of Kolla's quality is it's rock-solid > dependability in many scenarios. As Kubernetes took over most of the COE > world, it's only correct that the Kolla team created this sub-project; > if swarm became suddenly very popular then we should create a > kolla-swarm sub-project. > > So if we abandon kolla-kubernetes ('sunset' seems much more romantic > admittedly) - we are abandoning the core Kolla team's efforts in this > space. No matter how good openstack-helm is (and I've deployed it, know > a lot of the cores and it's truly excellent and well driven), what > happens down the line if openstack-helm decide to move on from Kolla - > say focussing on Loci images or a new flavor that comes along? Then > Kolla the core project, will no longer have any validation of it's > docker images/containers running on Kubernetes. That to me is the big > risk here. > > The key issue in my opinion is that the core Kolla team has focussed on > kolla-ansible almost exclusively, and have not migrated to using > kolla-kubernetes as well. As the code base has stagnated, the gates get > intro trouble, and new features and configurations added to > kolla-ansible are not translated to kolla-kubernetes. > > So I think the real question is not whether we should 'sunset' > kolla-kubernetes the sub-project, but should we drop Kolla support on > Kubernetes? Relying on a different team to do so is probably not the > answer; although it's the one championed in this thread. > > In my opinion we should set some realistic goals before we sunset: > > 1. Pick a feature set for a Rocky v1.0 release, and commit to trying to > get there. We have a long list of items, maybe pair this down to > something reasonable. Are you volunteering to drive this effort forward? I'd be happy to help define MVP for Rocky. > 2. Agreement within Kolla core team to learn kolla-kubernetes and start > to put a percentage of time into this sub-project. Whilst this would be ideal, we cannot really force people that have no interest in this sub-project to contribute. > 3. Identify the people who are genuinely interested in working with it > within the Kolla team. +1 if we find enough contributors to make the reasonable list of items happen during Rocky. > Without '2' I think sunsetting is the way forward, but the risks should > be fully understood and hopefully I've made a case for what those are above. How many contributors are necessary to make MVP? 
Cheers,
Gema

>
> Thanks,
>
> ||Rich
>
>
> On Wed, Mar 28, 2018 at 1:54 PM Chuck Short wrote:
>
> +1
>
> Regards
> chuck
> On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang wrote:
>
> There are two projects to solve the issue that run OpenStack on
> Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both
> leverage helm tool for orchestration. There is some different
> perspective
> at the beginning, which results in the two teams could not work
> together.
>
> But recently, the difference becomes too small. and there is
> also no active
> contributor in the kolla-kubernetes project.
>
> So I propose to retire kolla-kubernetes project. If you are still
> interested in running OpenStack on kubernetes, please refer to
> openstack-helm project.
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From pabelanger at redhat.com  Thu Mar 29 19:10:25 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Thu, 29 Mar 2018 15:10:25 -0400
Subject: [openstack-dev] All Hail our Newest Release Name - OpenStack Stein
Message-ID: <20180329191025.GC1172@localhost.localdomain>

Hi everybody!

As the subject reads, the "S" release of OpenStack is officially "Stein". As has been the case with previous elections, this wasn't the first choice; that was "Solar".

Solar was judged to have legal risk, so as per our name selection process, we moved to the next name on the list.

Thanks to everybody who participated, and we look forward to making OpenStack Stein a great release.

Paul

From emilien at redhat.com  Thu Mar 29 21:32:57 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Thu, 29 Mar 2018 14:32:57 -0700
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap
Message-ID: 

Greetings folks,

During the last PTG we spent time discussing some ideas around an All-In-One installer, using 100% of the TripleO bits to deploy a single node OpenStack, very similar to what we have today with the containerized undercloud and what we also have with other tools like Packstack or Devstack.

https://etherpad.openstack.org/p/tripleo-rocky-all-in-one

One of the problems that we're trying to solve here is to give a simple tool for developers so they can both easily and quickly deploy an OpenStack for their needs.

"As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and without complexity, reproducing the exact same tooling as TripleO is using."

"As a Neutron developer, I need to develop a feature in Neutron and test it with TripleO in my local env."

"As a TripleO dev, I need to implement a new service and test its deployment in my local env."
"As a developer, I need to reproduce a bug in TripleO CI that blocks the production chain, quickly and simply." Probably more use cases, but to me that's what came into my mind now. Dan kicked-off a doc patch a month ago: https://review.openstack.org/#/c/547038/ And I just went ahead and proposed a blueprint: https://blueprints.launchpad.net/tripleo/+spec/all-in-one So hopefully we can start prototyping something during Rocky. Before talking about the actual implementation, I would like to gather feedback from people interested by the use-cases. If you recognize yourself in these use-cases and you're not using TripleO today to test your things because it's too complex to deploy, we want to hear from you. I want to see feedback (positive or negative) about this idea. We need to gather ideas, use cases, needs, before we go design a prototype in Rocky. Thanks everyone who'll be involved, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Mar 29 22:07:22 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 29 Mar 2018 15:07:22 -0700 Subject: [openstack-dev] [ironic] baremetal firmware lifecycle management Message-ID: One of the topics that came up at during the Ironic sessions at the Rocky PTG was firmware management. During this discussion, we quickly reached the consensus that we lacked the ability to discuss and reach a forward direction without: * An understanding of capabilities and available vendor mechanisms that can be used to consistently determine and assert desired firmware to a baremetal node. Ideally, we could find a commonality of two or more vendor mechanisms that can be abstracted cleanly into high level actions. Ideally this would boil down to something a simple as "list_firmware()" and "set_firmware()". Additionally there are surely some caveats we need to understand, such as if the firmware update must be done in a particular state, and if a particular prior condition or next action is required for the particular update. * An understanding of several use cases where a deployed node may need to have specific firmware applied. We are presently aware of two cases. The first being specific firmware is needed to match an approved operational profile. The second being a desire to perform ad-hoc changes or have new versions of firmware asserted while a node has already been deployed. Naturally any insight that can be shared will help the community to best model the interaction so we can determine next steps and ultimately implementation details. -Julia From dmsimard at redhat.com Thu Mar 29 22:14:06 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Thu, 29 Mar 2018 18:14:06 -0400 Subject: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports Message-ID: Hi, By default, all jobs currently benefit from the generation of a static ARA report located in the "ara" directory at the root of the log directory. Due to scalability concerns, these reports were only generated when a job failed and were not available on successful runs. I'm happy to announce that you can expect ARA reports to be available for every job from now on -- including the successful ones ! You'll notice a subtle but important change: the report directory will henceforth be named "ara-report" instead of "ara". Instead of generating and saving a HTML report, we'll now only save the ARA database in the "ara-report" directory. 
This is a special directory from the perspective of the logs.openstack.org server, and ARA databases located in such directories will be loaded dynamically by a WSGI middleware.

You don't need to do anything to benefit from this change -- it will be pushed to all jobs that inherit from the base job by default.

However, if you happen to be using a "nested" installation of ARA and Ansible (i.e., OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this means that you can also leverage this feature. In order to do that, you'll want to create an "ara-report" directory and copy your ARA database inside before your logs are collected and uploaded.

To help you visualize:
/ara-report                      <-- This is the default Zuul report
/logs/ara                        <-- This wouldn't be loaded dynamically
/logs/ara-report                 <-- This would be loaded dynamically
/logs/some/directory/ara-report  <-- This would be loaded dynamically

For more details on this feature of ARA, you can refer to the documentation [1].

Let me know if you have any questions!

[1]: https://ara.readthedocs.io/en/latest/advanced.html

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

From pabelanger at redhat.com  Thu Mar 29 23:12:35 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Thu, 29 Mar 2018 19:12:35 -0400
Subject: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports
In-Reply-To: 
References: 
Message-ID: <20180329231235.GA15222@localhost.localdomain>

On Thu, Mar 29, 2018 at 06:14:06PM -0400, David Moreau Simard wrote:
> Hi,
>
> By default, all jobs currently benefit from the generation of a static
> ARA report located in the "ara" directory at the root of the log
> directory. Due to scalability concerns, these reports were only
> generated when a job failed and were not available on successful runs.
>
> I'm happy to announce that you can expect ARA reports to be available
> for every job from now on -- including the successful ones!
>
> You'll notice a subtle but important change: the report directory will
> henceforth be named "ara-report" instead of "ara".
>
> Instead of generating and saving an HTML report, we'll now only save
> the ARA database in the "ara-report" directory.
> This is a special directory from the perspective of the
> logs.openstack.org server, and ARA databases located in such
> directories will be loaded dynamically by a WSGI middleware.
>
> You don't need to do anything to benefit from this change -- it will
> be pushed to all jobs that inherit from the base job by default.
>
> However, if you happen to be using a "nested" installation of ARA and
> Ansible (i.e., OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this
> means that you can also leverage this feature.
> In order to do that, you'll want to create an "ara-report" directory
> and copy your ARA database inside before your logs are collected and
> uploaded.
>
I believe this is an important task we should also push on for the projects you listed above. The main reason to do this is to simplify job uploads and filesystem demands (thanks clarkb).

Let's see if we can update these projects in the coming week or two!

Great work.

> To help you visualize:
> /ara-report <-- This is the default Zuul report
> /logs/ara <-- This wouldn't be loaded dynamically
> /logs/ara-report <-- This would be loaded dynamically
> /logs/some/directory/ara-report <-- This would be loaded dynamically
>
> For more details on this feature of ARA, you can refer to the documentation [1].
>
> Let me know if you have any questions!
> [1]: https://ara.readthedocs.io/en/latest/advanced.html
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cboylan at sapwetik.org  Thu Mar 29 23:18:21 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 29 Mar 2018 16:18:21 -0700
Subject: [openstack-dev] [openstack-infra] Where did the ARA logs go?
In-Reply-To: <20180328151313.vzfpm2jfgyabtw36@yuggoth.org>
References: <20180328142649.GB22364@sm-xps> <20180328151313.vzfpm2jfgyabtw36@yuggoth.org>
Message-ID: <1522365501.3772250.1320863440.2FEEB55D@webmail.messagingengine.com>

On Wed, Mar 28, 2018, at 8:13 AM, Jeremy Stanley wrote:
> On 2018-03-28 09:26:49 -0500 (-0500), Sean McGinnis wrote:
> [...]
> > I believe the ARA logs are only captured on failing jobs.
>
> Correct. This was a stop-gap some months ago when we noticed we were
> overrunning our inode capacity on the logserver. ARA was was only
> one of the various contributors to that increased consumption but
> due to its original model based on numerous tiny files, limiting it
> to job failures (where it was most useful) was one of the ways we
> temporarily curtailed inode utilization. ARA has very recently grown
> the ability to stuff all that data into a single sqlite file and
> then handle it browser-side, so I expect we'll be able to switch
> back to collecting it for all job runs again fairly soon.

The switch has been flipped and you should start to see ara reports on all job logs again. Thank you dmsimard for making this happen. More details at http://lists.openstack.org/pipermail/openstack-dev/2018-March/128902.html

Clark

From cboylan at sapwetik.org  Thu Mar 29 23:27:10 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Thu, 29 Mar 2018 16:27:10 -0700
Subject: [openstack-dev] [tap-as-a-service] publish on pypi
In-Reply-To: 
References: 
Message-ID: <1522366030.3775323.1320870552.6204320D@webmail.messagingengine.com>

On Wed, Mar 28, 2018, at 7:59 AM, Takashi Yamamoto wrote:
> hi,
>
> i'm thinking about publishing the latest release of tap-as-a-service on pypi.
> background: https://review.openstack.org/#/c/555788/
> iirc, the naming (tap-as-a-service vs neutron-taas) was one of concerns
> when we talked about this topic last time. (long time ago. my memory is dim.)
> do you have any ideas or suggestions?
> probably i'll just use "tap-as-a-service" unless anyone has strong opinions.
> because:
> - it's the name we use the most frequently
> - we are not neutron (yet?)

http://git.openstack.org/cgit/openstack/tap-as-a-service/tree/setup.cfg#n2 shows that tap-as-a-service is the existing package name, so it is probably a good one to go with, as anyone that already has it installed from source should have pip do the right thing when talking to pypi.

Clark

From zhipengh512 at gmail.com  Thu Mar 29 23:39:57 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Fri, 30 Mar 2018 07:39:57 +0800
Subject: [openstack-dev] All Hail our Newest Release Name - OpenStack Stein
In-Reply-To: <20180329191025.GC1172@localhost.localdomain>
References: <20180329191025.GC1172@localhost.localdomain>
Message-ID: 

In hindsight, it would have been much fun if the R release had been named Ramm :P

On Fri, Mar 30, 2018 at 3:10 AM, Paul Belanger wrote:
> Hi everybody!
> > As the subject reads, the "S" release of OpenStack is officially "Stein". > As > been with previous elections this wasn't the first choice, that was > "Solar". > > Solar was judged to have legal risk, so as per our name selection process, > we > moved to the next name on the list. > > Thanks to everybody who participated, and look forward to making OpenStack > Stein > a great release. > > Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Mar 29 23:50:10 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 29 Mar 2018 18:50:10 -0500 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: <79dbfa81-c62e-4db8-3799-abb41c5d57e2@gmail.com> References: <20180329142813.GA25762@sm-xps> <79dbfa81-c62e-4db8-3799-abb41c5d57e2@gmail.com> Message-ID: <20180329235009.GB5654@sm-xps> > > > > >Ideally, from my perspective, Nova would take care of the detach/attach portion > >and Cinder would only need to take care of imaging the volume. > > Agree. :) And yeah, I pointed this out in the nova spec for volume-backed > rebuild also. I think nova can basically handle this like it does for shelve > today, and we'd do something like this: > > 1. disconnect the volume from the host > 2. create a new empty volume attachment for the volume and instance - this > is needed so the volume stays 'reserved' while we re-image it > 3. delete the old volume attachment > 4. call the new cinder re-image API > 5. once the volume is available (TODO: how would we know?) May we can add a "Reimaging" state to the volume? Then Nova could poll for it to go from that back to Available? Since Nova is driving things, I would be hesitant to expect and assume that Cinder is appropriately configured to call back in to Nova. Or a notification? Or...? > 6. re-attach the volume by updating the attachment with the host connector, > connect on the host, and complete the attachment (marks the volume as in-use > again) > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From singh.surya64mnnit at gmail.com Fri Mar 30 00:14:32 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Fri, 30 Mar 2018 09:14:32 +0900 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: Message-ID: Dear All, Thanks Rich for putting thoughts on continuation with kolla-k8s. 
On Fri, Mar 30, 2018 at 2:26 AM, Richard Wellum wrote:
>
> Hi,
>
> So as a current Kolla-Kubernetes Core, I have a slightly different opinion than most; I'll try to verbalize it coherently.
>
> Let's talk about what Kolla is:
>
> Kolla is a project that builds OpenStack docker images, stores them on dockerhub, and provides tools to build your own images from your own source. Both the images and the tools it provides are widely used, very popular and extremely stable; TripleO, openstack-helm and kolla-ansible, to name a few, are all deployment methods that use Kolla.
>
> Kolla has two sub-projects that both revolve around deployment methods: kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and used by many in the industry. Part of Kolla's quality is its rock-solid dependability in many scenarios. As Kubernetes took over most of the COE world, it was only correct that the Kolla team created this sub-project; if swarm suddenly became very popular then we should create a kolla-swarm sub-project.
>
> So if we abandon kolla-kubernetes ('sunset' seems much more romantic admittedly) - we are abandoning the core Kolla team's efforts in this space. No matter how good openstack-helm is (and I've deployed it, know a lot of the cores and it's truly excellent and well driven), what happens down the line if openstack-helm decides to move on from Kolla - say focussing on Loci images or a new flavor that comes along? Then Kolla, the core project, will no longer have any validation of its docker images/containers running on Kubernetes. That to me is the big risk here.
>
> The key issue in my opinion is that the core Kolla team has focussed on kolla-ansible almost exclusively, and has not migrated to using kolla-kubernetes as well. As the code base has stagnated, the gates get into trouble, and new features and configurations added to kolla-ansible are not translated to kolla-kubernetes.
>
> So I think the real question is not whether we should 'sunset' kolla-kubernetes the sub-project, but should we drop Kolla support on Kubernetes? Relying on a different team to do so is probably not the answer; although it's the one championed in this thread.

+1

>
> In my opinion we should set some realistic goals before we sunset:
>
> 1. Pick a feature set for a Rocky v1.0 release, and commit to trying to get there. We have a long list of items; maybe pare this down to something reasonable.

I agree that we should have a feature set for the Rocky v1.0 release, and AFAIK the community already has that.

> 2. Agreement within the Kolla core team to learn kolla-kubernetes and start to put a percentage of time into this sub-project.
> 3. Identify the people who are genuinely interested in working with it within the Kolla team.

Though currently I am not the MVP in kolla-k8s, I would love to help with some concrete item for v1.0. IMHO, before that we need a leader, and then we can identify volunteers. And if we need more thought on that, there is:
https://review.openstack.org/#/c/552531

>
> Without '2' I think sunsetting is the way forward, but the risks should be fully understood and hopefully I've made a case for what those are above.
>
> Thanks,
>
> ||Rich
>
>
> On Wed, Mar 28, 2018 at 1:54 PM Chuck Short wrote:
>>
>> +1
>>
>> Regards
>> chuck
>> On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang wrote:
>>>
>>> There are two projects to solve the issue that run OpenStack on
>>> Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both
>>> leverage helm tool for orchestration.
There is some different perspective >>> at the beginning, which results in the two teams could not work together. >>> >>> But recently, the difference becomes too small. and there is also no active >>> contributor in the kolla-kubernetes project. >>> >>> So I propose to retire kolla-kubernetes project. If you are still >>> interested in running OpenStack on kubernetes, please refer to >>> openstack-helm project. >>> >>> -- >>> Regards, >>> Jeffrey Zhang >>> Blog: http://xcodest.me >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > --- Thanks Surya From zhang.lei.fly at gmail.com Fri Mar 30 01:05:23 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Fri, 30 Mar 2018 09:05:23 +0800 Subject: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports In-Reply-To: <20180329231235.GA15222@localhost.localdomain> References: <20180329231235.GA15222@localhost.localdomain> Message-ID: cool. kolla will try to implement it. On Fri, Mar 30, 2018 at 7:12 AM, Paul Belanger wrote: > On Thu, Mar 29, 2018 at 06:14:06PM -0400, David Moreau Simard wrote: > > Hi, > > > > By default, all jobs currently benefit from the generation of a static > > ARA report located in the "ara" directory at the root of the log > > directory. > > Due to scalability concerns, these reports were only generated when a > > job failed and were not available on successful runs. > > > > I'm happy to announce that you can expect ARA reports to be available > > for every job from now on -- including the successful ones ! > > > > You'll notice a subtle but important change: the report directory will > > henceforth be named "ara-report" instead of "ara". > > > > Instead of generating and saving a HTML report, we'll now only save > > the ARA database in the "ara-report" directory. > > This is a special directory from the perspective of the > > logs.openstack.org server and ARA databases located in such > > directories will be loaded dynamically by a WSGI middleware. > > > > You don't need to do anything to benefit from this change -- it will > > be pushed to all jobs that inherit from the base job by default. > > > > However, if you happen to be using a "nested" installation of ARA and > > Ansible (i.e, OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this > > means that you can also leverage this feature. > > In order to do that, you'll want to create an "ara-report" directory > > and copy your ARA database inside before your logs are collected and > > uploaded. > > > I believe this is an important task we should also push on for the > projects you > listed above. The main reason to do this is simplify job uploads and > filesystemd > demands (thanks clarkb). 
> > Lets see if we can update these projects in the coming week or two! > > Great work. > > > To help you visualize: > > /ara-report <-- This is the default Zuul report > > /logs/ara <-- This wouldn't be loaded dynamically > > /logs/ara-report <-- This would be loaded dynamically > > /logs/some/directory/ara-report <-- This would be loaded > dynamically > > > > For more details on this feature of ARA, you can refer to the > documentation [1]. > > > > Let me know if you have any questions ! > > > > [1]: https://ara.readthedocs.io/en/latest/advanced.html > > > > David Moreau Simard > > Senior Software Engineer | OpenStack RDO > > > > dmsimard = [irc, github, twitter] > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From richwellum at gmail.com Fri Mar 30 01:32:44 2018 From: richwellum at gmail.com (Richard Wellum) Date: Fri, 30 Mar 2018 01:32:44 +0000 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: <0102016273172b60-c118a282-d9e1-4899-8791-e69fcbb3f8d0-000000@eu-west-1.amazonses.com> References: <0102016273172b60-c118a282-d9e1-4899-8791-e69fcbb3f8d0-000000@eu-west-1.amazonses.com> Message-ID: Hi Gema, On Thu, Mar 29, 2018 at 2:48 PM Gema Gomez wrote: > > > On 29/03/18 18:26, Richard Wellum wrote: > > Hi, > > > > So as a current Kolla-Kubernetes Core - I have a slightly different > > opinion than most, I'll try to verbalize it coherently. > > > > Lets talk about what Kolla is: > > > > Kolla is a project that builds OpenStack docker images, stores them on > > dockerhub, and provides tools to build your own images from your own > > source. Both the images and the tools it provides, are widely used, very > > popular and extremely stable; TripleO, openstack-helm and kolla-ansible > > to name a few are all deployment methods that use Kolla. > > > > Kolla has two sub-projects, that both revolve around deployment methods; > > kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and > > used by many in the industry. Part of Kolla's quality is it's rock-solid > > dependability in many scenarios. As Kubernetes took over most of the COE > > world, it's only correct that the Kolla team created this sub-project; > > if swarm became suddenly very popular then we should create a > > kolla-swarm sub-project. > > > > So if we abandon kolla-kubernetes ('sunset' seems much more romantic > > admittedly) - we are abandoning the core Kolla team's efforts in this > > space. No matter how good openstack-helm is (and I've deployed it, know > > a lot of the cores and it's truly excellent and well driven), what > > happens down the line if openstack-helm decide to move on from Kolla - > > say focussing on Loci images or a new flavor that comes along? 
Then > > Kolla the core project, will no longer have any validation of it's > > docker images/containers running on Kubernetes. That to me is the big > > risk here. > > > > The key issue in my opinion is that the core Kolla team has focussed on > > kolla-ansible almost exclusively, and have not migrated to using > > kolla-kubernetes as well. As the code base has stagnated, the gates get > > intro trouble, and new features and configurations added to > > kolla-ansible are not translated to kolla-kubernetes. > > > > So I think the real question is not whether we should 'sunset' > > kolla-kubernetes the sub-project, but should we drop Kolla support on > > Kubernetes? Relying on a different team to do so is probably not the > > answer; although it's the one championed in this thread. > > > > In my opinion we should set some realistic goals before we sunset: > > > > 1. Pick a feature set for a Rocky v1.0 release, and commit to trying to > > get there. We have a long list of items, maybe pair this down to > > something reasonable. > > Are you volunteering to drive this effort forward? I'd be happy to help > define MVP for Rocky. > Yes I am. > > > 2. Agreement within Kolla core team to learn kolla-kubernetes and start > > to put a percentage of time into this sub-project. > > Whilst this would be ideal, we cannot really force people that have no > interest in this sub-project to contribute. > That's not what I was inferring, more trying to get a shift in attitude within the Kolla team. For example, if I am working on kolla-kubernetes, and make a change that breaks kolla-ansible - the Kolla community would expect me to fix it of course? Same if I added a feature, and didn't apply and test it with kolla-ansible then the same no? So I'm just saying that the same should apply in the other direction; which it is currently not. As a community we should support both deployment methods. Or if we don't then we are all agreeing that Kolla will not have support on Kubernetes from the core Kolla community. > > > 3. Identify the people who are genuinely interested in working with it > > within the Kolla team. > > +1 if we find enough contributors to make the reasonable list of items > happen during Rocky. > > > Without '2' I think sunsetting is the way forward, but the risks should > > be fully understood and hopefully I've made a case for what those are > above. > > How many contributors are necessary to make MVP? > We were doing fairly well with 4-5 contributors imo. Thanks, ||Rich > > Cheers, > Gema > > > > > Thanks, > > > > ||Rich > > > > > > On Wed, Mar 28, 2018 at 1:54 PM Chuck Short > > wrote: > > > > +1 > > > > Regards > > chuck > > On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang > > > wrote: > > > > There are two projects to solve the issue that run OpenStack on > > Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both > > leverage helm tool for orchestration. There is some different > > perspective > > at the beginning, which results in the two teams could not work > > together. > > > > But recently, the difference becomes too small. and there is > > also no active > > contributor in the kolla-kubernetes project. > > > > So I propose to retire kolla-kubernetes project. If you are still > > interested in running OpenStack on kubernetes, please refer to > > openstack-helm project. 
> > > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richwellum at gmail.com Fri Mar 30 01:33:29 2018 From: richwellum at gmail.com (Richard Wellum) Date: Fri, 30 Mar 2018 01:33:29 +0000 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: Message-ID: On Thu, Mar 29, 2018 at 8:14 PM Surya Singh wrote: > Dear All, > > Thanks Rich for putting thoughts on continuation with kolla-k8s. > > > On Fri, Mar 30, 2018 at 2:26 AM, Richard Wellum > wrote: > > > > Hi, > > > > So as a current Kolla-Kubernetes Core - I have a slightly different > opinion than most, I'll try to verbalize it coherently. > > > > Lets talk about what Kolla is: > > > > Kolla is a project that builds OpenStack docker images, stores them on > dockerhub, and provides tools to build your own images from your own > source. Both the images and the tools it provides, are widely used, very > popular and extremely stable; TripleO, openstack-helm and kolla-ansible to > name a few are all deployment methods that use Kolla. > > > > Kolla has two sub-projects, that both revolve around deployment methods; > kolla-ansible and kolla-kubernetes. Kolla-ansible is proven, stable and > used by many in the industry. Part of Kolla's quality is it's rock-solid > dependability in many scenarios. As Kubernetes took over most of the COE > world, it's only correct that the Kolla team created this sub-project; if > swarm became suddenly very popular then we should create a kolla-swarm > sub-project. > > > > So if we abandon kolla-kubernetes ('sunset' seems much more romantic > admittedly) - we are abandoning the core Kolla team's efforts in this > space. No matter how good openstack-helm is (and I've deployed it, know a > lot of the cores and it's truly excellent and well driven), what happens > down the line if openstack-helm decide to move on from Kolla - say > focussing on Loci images or a new flavor that comes along? 
Then Kolla the > core project, will no longer have any validation of it's docker > images/containers running on Kubernetes. That to me is the big risk here. > > > > The key issue in my opinion is that the core Kolla team has focussed on > kolla-ansible almost exclusively, and have not migrated to using > kolla-kubernetes as well. As the code base has stagnated, the gates get > intro trouble, and new features and configurations added to kolla-ansible > are not translated to kolla-kubernetes. > > > > So I think the real question is not whether we should 'sunset' > kolla-kubernetes the sub-project, but should we drop Kolla support on > Kubernetes? Relying on a different team to do so is probably not the > answer; although it's the one championed in this thread. > > +1 > > > > > In my opinion we should set some realistic goals before we sunset: > > > > 1. Pick a feature set for a Rocky v1.0 release, and commit to trying to > get there. We have a long list of items, maybe pair this down to something > reasonable. > > I am agree that we should have feature set for Rocky v1.0 release and > AFAIK community already have that. > > > 2. Agreement within Kolla core team to learn kolla-kubernetes and start > to put a percentage of time into this sub-project. > > 3. Identify the people who are genuinely interested in working with it > within the Kolla team. > > Though currently I am not the MVP in kolla-k8s but i would love to > help with some concrete item for v1.0, IMHO before that we need a > leader then identify volunteers. > And for that if we need more thought on this > https://review.openstack.org/#/c/552531 > > I missed this and will review. Thanks, ||Rich > > > > Without '2' I think sunsetting is the way forward, but the risks should > be fully understood and hopefully I've made a case for what those are above. > > > > Thanks, > > > > ||Rich > > > > > > On Wed, Mar 28, 2018 at 1:54 PM Chuck Short wrote: > >> > >> +1 > >> > >> Regards > >> chuck > >> On Wed, Mar 28, 2018 at 11:47 AM, Jeffrey Zhang < > zhang.lei.fly at gmail.com> wrote: > >>> > >>> There are two projects to solve the issue that run OpenStack on > >>> Kubernetes, OpenStack-helm, and kolla-kubernetes. Them both > >>> leverage helm tool for orchestration. There is some different > perspective > >>> at the beginning, which results in the two teams could not work > together. > >>> > >>> But recently, the difference becomes too small. and there is also no > active > >>> contributor in the kolla-kubernetes project. > >>> > >>> So I propose to retire kolla-kubernetes project. If you are still > >>> interested in running OpenStack on kubernetes, please refer to > >>> openstack-helm project. 
> >>> > >>> -- > >>> Regards, > >>> Jeffrey Zhang > >>> Blog: http://xcodest.me > >>>
> >>> __________________________________________________________________________
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> --- > Thanks > Surya
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From mriedemos at gmail.com Fri Mar 30 01:36:22 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 29 Mar 2018 20:36:22 -0500 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: <20180329235009.GB5654@sm-xps> References: <20180329142813.GA25762@sm-xps> <79dbfa81-c62e-4db8-3799-abb41c5d57e2@gmail.com> <20180329235009.GB5654@sm-xps> Message-ID: <299fbfd4-6895-7b92-8f63-953fc7fbcb30@gmail.com>

On 3/29/2018 6:50 PM, Sean McGinnis wrote: > May we can add a "Reimaging" state to the volume? Then Nova could poll for it > to go from that back to Available?

That would be fine with me, and maybe similar to how 'extending' and 'retyping' work for an attached volume?

Nova wouldn't wait for the volume to go to 'available', we don't want it to go to 'available', we'd just wait for it to go back to 'reserved'. During a rebuild the instance still needs to keep the volume logically attached to it so another instance can't grab it.

-- Thanks, Matt

From zhang.yanxian at zte.com.cn Fri Mar 30 01:49:19 2018 From: zhang.yanxian at zte.com.cn (zhang.yanxian at zte.com.cn) Date: Fri, 30 Mar 2018 09:49:19 +0800 (CST) Subject: [openstack-dev] [neutron] About the metric for the routes Message-ID: <201803300949197939068@zte.com.cn>

Hi all,

A routing metric is a quantitative value used to evaluate the path cost. But neutron can't specify different metrics for routes with the same destination address, which is useful for realizing FRR (Fast Reroute) in Telecom and NFV scenarios. So we are going to introduce a new metric value for the routes.

Any suggestion is welcome; the link is here: https://bugs.launchpad.net/neutron/+bug/1759790

Thanks in advance for suggestions.

Best Regards, yanxian zhang

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From soulxu at gmail.com Fri Mar 30 02:44:40 2018 From: soulxu at gmail.com (Alex Xu) Date: Fri, 30 Mar 2018 10:44:40 +0800 Subject: [openstack-dev] [nova] The createBackup API Message-ID:

There is a spec proposal to fix a bug in the createBackup API with a microversion (https://review.openstack.org/#/c/511825/).

When the rotation parameter is '0', the createBackup API just takes a snapshot and then deletes all the snapshots. That is meaningless behavior.

But there is one thing we hope to get wider input on. Since we said before that all the nova APIs should be primitive, an API shouldn't be another wrap of another API.

The createBackup API sounds like just using the createImage API to create a snapshot, uploading the snapshot to glance with an index number in the image name, and rotating the images after each snapshot. So it should be something that can be done by a client script doing the same thing with the createImage API.

We have two options here: #1. Fix the bug with a microversion. We aren't sure anyone really uses '0' in real life, so it isn't clear a microversion fix is worth it. #2. Deprecate the backup API with a microversion and leave the bug alone. Document how the user can do the same thing in a client script.

Looking for your comments.

Thanks Alex

-------------- next part -------------- An HTML attachment was scrubbed... URL:
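To make option #2 concrete, a minimal sketch of the kind of client-side rotation script Alex describes might look like the following. This is illustrative only: it assumes already-authenticated python-novaclient ('nova') and python-glanceclient ('glance') handles, and the backup naming convention is invented here.

    import time

    BACKUP_PREFIX = 'backup-'  # invented naming convention
    ROTATION = 3               # keep at most this many backup images

    def backup_and_rotate(nova, glance, server_id):
        # Take a new snapshot via the createImage API, encoding an
        # index (a timestamp here) in the image name.
        name = '%s%s-%d' % (BACKUP_PREFIX, server_id, int(time.time()))
        nova.servers.create_image(server_id, name)

        # Collect this server's previous backups by name prefix and
        # sort them oldest-first (the timestamp suffix sorts naturally).
        backups = sorted(
            (img for img in glance.images.list()
             if img.name.startswith(BACKUP_PREFIX + server_id)),
            key=lambda img: img.name)

        # Rotate: delete everything older than the newest ROTATION images.
        for img in backups[:-ROTATION]:
            glance.images.delete(img.id)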
From gmann at ghanshyammann.com Fri Mar 30 06:46:39 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 30 Mar 2018 15:46:39 +0900 Subject: [openstack-dev] [nova] The createBackup API In-Reply-To: References: Message-ID:

On Fri, Mar 30, 2018 at 11:44 AM, Alex Xu wrote:
> There is a spec proposal to fix a bug in the createBackup API with a
> microversion (https://review.openstack.org/#/c/511825/).
>
> When the rotation parameter is '0', the createBackup API just takes a
> snapshot and then deletes all the snapshots. That is meaningless behavior.
>
> But there is one thing we hope to get wider input on. Since we said before
> that all the nova APIs should be primitive, an API shouldn't be another
> wrap of another API.
>
> The createBackup API sounds like just using the createImage API to create a
> snapshot, uploading the snapshot to glance with an index number in the
> image name, and rotating the images after each snapshot. So it should be
> something that can be done by a client script doing the same thing with
> the createImage API.
>
> We have two options here:
> #1. Fix the bug with a microversion. We aren't sure anyone really uses '0'
> in real life, so it isn't clear a microversion fix is worth it.
> #2. Deprecate the backup API with a microversion and leave the bug alone.
> Document how the user can do the same thing in a client script.

Thanks, Alex, for the point about deprecating it. I tend to agree with option #2, as this is really a proxy and wrapper API. Also, it is not just about this bug fix; it is more about how many backup features nova is going to support via this API. In the future, there might be requests for incremental backup support or multi-server backups in this API, and I do not think we are going to say yes to such requests.

I remember implementing a script around the createImage API for a backup use case in the NEC cloud, though that was very long back. In my recent PoC on OpenStack backup with Trilio Data (an OpenStack backup solution), Trilio also does not use these APIs as far as I remember, but I am confirming that with the Trilio team. There is not much tooling available for OpenStack backup on the market, and many people are likely using script-based solutions on top of the createBackup or createImage API.

Another point is that this API is not a complete backup solution on its own; it still needs tooling or scripting around it for other basic backup features like incremental backups, backing up all VMs together, and so on.
So deprecating this API and asking users to move to the createImage API in their existing tooling/scripting should not be a hard thing to ask of them.

-gmann

> > Looking for your comments.
> > Thanks
> Alex
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From renat.akhmerov at gmail.com Fri Mar 30 08:08:42 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 30 Mar 2018 15:08:42 +0700 Subject: [openstack-dev] [requirements] Adding objgraph to global requirements In-Reply-To: <1522329549-sup-5818@lrrr.local> References: <31ade43d-37f7-4cdd-82ff-50b069491aac@Spark> <1522329549-sup-5818@lrrr.local> Message-ID: <117c1a80-f258-4ea5-9ad3-3db69d28f3bd@Spark>

On 29 Mar 2018, 20:20 +0700, Doug Hellmann , wrote:

> What sorts of tools are you talking about building?

Well, some more high level functions to diagnose memory usage, especially aids to help find memory leaks. Or not even necessary memory leaks, we may also want/need to reduce the memory footprint. Although objgraph is very helpful on its own it is still pretty low level and, for example, if we want to inject some diagnostic message somewhere in the code (e.g. “top 5 object types whose total size increased memory consumption since the last call”) we’d have to directly invoke objgraph and remember what exact parameters to use to achieve that. I think it’d be more convenient to have some more high-level functions that already encapsulate all necessary logic related to using objgraph and achieve what’s needed. Also, I wouldn’t want to spread out uses of objgraph all of the code, I would better have one module that uses it. Just to reduce coupling with a 3rd party lib.

Sorry if that sounds too abstract now, I’ll try to come with concrete examples to demonstrate what I mean.

Renat

-------------- next part -------------- An HTML attachment was scrubbed... URL:
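For reference, the kind of thin wrapper module Renat describes could look roughly like this. A sketch only, built on objgraph's documented helpers; the module and function names are invented, and note that objgraph reports object counts rather than per-type byte sizes (sizes would need extra work on top):

    # memory_debug.py - single point of coupling with the 3rd-party lib
    import objgraph

    def show_top_types(limit=5):
        # Print counts of the most common live object types.
        objgraph.show_most_common_types(limit=limit)

    def show_growth_since_last_call(limit=5):
        # Print the types whose instance count grew since the
        # previous call to this function.
        objgraph.show_growth(limit=limit)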
From thierry at openstack.org Fri Mar 30 08:37:15 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 30 Mar 2018 10:37:15 +0200 Subject: [openstack-dev] [tc] Technical Committee Status update, March 30th Message-ID: <173d5b4a-57de-c03d-3b63-766c79d537bb@openstack.org>

Hi!

This is the weekly summary of Technical Committee initiatives. You can find the full list of currently-considered changes at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923

== Recently-approved changes ==

* Resolution for minimal SIG governance [1]
* Update new projects reference WRT IRC meetings [2]
* Renamed repo: python-openstacksdk -> openstacksdk
* New repos: python-cyborgclient, kolla-cli, ansible-role-python_venv_build, ansible-role-k8s-rabbitmq

[1] https://review.openstack.org/#/c/554254/ [2] https://review.openstack.org/#/c/552728/

The main item this week is the approval of a minimal governance model for SIGs. Any escalated conflict inside a SIG or between SIGs should now be arbitrated by a "SIGs admin group" (formed of one TC member and one UC member, with the Foundation executive director breaking ties in case of need). A similar resolution was adopted by the UC: https://governance.openstack.org/tc/resolutions/2018-03-19-sig-governance.html

== Voting in progress ==

A proposal is up to set the expectation early on that official projects will have to drop direct tagging (or branching) rights in their Gerrit ACLs once they are made official, as those actions will be handled by the Release Management team through the openstack/releases repository. Please check: https://review.openstack.org/557737

== Under discussion ==

A new tag is proposed, to track which deliverables implemented a lower dependency bounds check voting test job. There is some discussion on whether a tag is the best way to track that. Chime in on: https://review.openstack.org/557501

Jeffrey Zhang's proposal about splitting the Kolla-kubernetes team out of the Kolla/Kolla-ansible team has evolved into a proposal to merge kolla-kubernetes efforts with OpenStack-Helm efforts. The updated proposal is discussed on a mailing-list thread in addition to the review: https://review.openstack.org/#/c/552531/ http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html

Discussion is also still ongoing on the Adjutant project team addition. Concerns about the scope of Adjutant, as well as fears that it would hurt interoperability between OpenStack deployments, were raised. A deeper analysis and discussion needs to happen before the TC can make a final call on this one. You can jump in the discussion here: https://review.openstack.org/#/c/553643/

== TC member actions/focus/discussions for the coming week(s) ==

For the coming week I expect debate to continue around the three proposals under discussion. We'll be finalizing our list of proposed topics for the Forum in Vancouver, ahead of the April 15 submission deadline. You can add ideas to: https://etherpad.openstack.org/p/YVR-forum-TC-sessions

== Office hours ==

To be more inclusive of all timezones and more mindful of people for whom English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

Feel free to add your own office hour conversation starter at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Cheers,

-- Thierry Carrez (ttx)

From thierry at openstack.org Fri Mar 30 09:16:32 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 30 Mar 2018 11:16:32 +0200 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: Message-ID: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org>

Richard Wellum wrote: > So as a current Kolla-Kubernetes Core - I have a slightly different > opinion than most, I'll try to verbalize it coherently.

Thanks Rich for posting this -- I was really feeling like we missed part of the story.

> Lets talk about what Kolla is: > > Kolla is a project that builds OpenStack docker images, stores them on > dockerhub, and provides tools to build your own images from your own > source. Both the images and the tools it provides, are widely used, very > popular and extremely stable; TripleO, openstack-helm and kolla-ansible > to name a few are all deployment methods that use Kolla. > > Kolla has two sub-projects, that both revolve around deployment methods; > kolla-ansible and kolla-kubernetes.
Kolla-ansible is proven, stable and > used by many in the industry. Part of Kolla's quality is it's rock-solid > dependability in many scenarios. As Kubernetes took over most of the COE > world, it's only correct that the Kolla team created this sub-project; > if swarm became suddenly very popular then we should create a > kolla-swarm sub-project. > > So if we abandon kolla-kubernetes ('sunset' seems much more romantic > admittedly) - we are abandoning the core Kolla team's efforts in this > space. No matter how good openstack-helm is (and I've deployed it, know > a lot of the cores and it's truly excellent and well driven), what > happens down the line if openstack-helm decide to move on from Kolla - > say focussing on Loci images or a new flavor that comes along? Then > Kolla the core project, will no longer have any validation of it's > docker images/containers running on Kubernetes. That to me is the big > risk here. There is a 3rd option, which I've been advocating for a while. The tension here lies in the fact that the Kolla team is both a low-level provider (Kolla the docker image provider) and a higher-level deployment provider (kolla-ansible, kolla-k8s). The low-level images are used outside of the team (TripleO, openstack-helm), in the team (kolla-ansible) and in the team but by a different group (kolla-k8s). The proposals on the table only partly resolve that tension. Keeping kolla-k8s in kolla, spinning it out or merging it with OSH still make kolla both a low-level and a high-level provider. As long as kolla-ansible shares naming and is produced by the exact same team producing Kolla itself, anything else than kolla-ansible will stay a second-class consumer, breeding validation fears like the one described above. For the structure to match the technical goals, I wonder if "Kolla" should not focus on low-level image production, with the various higher-level Kolla consumers being set up as separate projects on an equal footing. I understand that Kolla and Kolla-Ansible are currently mostly the same group of people, but nothing in OpenStack prevents anyone from being part of two teams. Setting up discrete groups would actually encourage people interested in Kolla but not so much in Kolla-Ansible to join the Kolla team. It would encourage the Kolla team to treat all consumers equally and test their images on those various consumers equally. So my 3rd option would be to split the current Kolla team into three teams with different names, matching the three deliverables that this team currently has. Then if kolla-k8s needs to be sunset or merged with OSH, so be it. -- Thierry Carrez (ttx) From colleen at gazlene.net Fri Mar 30 11:10:05 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 30 Mar 2018 13:10:05 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 26 March 2018 Message-ID: <1522408205.786990.1321244272.35B45BF5@webmail.messagingengine.com> # Keystone Team Update - Week of 26 March 2018 ## News ### JSON Web Tokens Lance found an interesting article denouncing JWT[1][2] which, in an ironic twist, also advocated fernet as an alternative. We're still plowing forward on the JWT spec[3], but we need to be very precise in our design and mindful not just of the RFCs but of our chosen library's implementation details. The spec is being expanded to more precisely define the payload (and some advantages the new payload format will give us[4]), and how and whether to encrypt or just sign is still an open question[5]. 
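To make the sign-versus-encrypt question concrete, this is roughly what the signing side looks like with the PyJWT library. A sketch only: whether keystone ends up with this library, this algorithm, or this claim set is exactly what the spec is still working out, and the key and claims below are invented:

    import jwt  # PyJWT

    key = 'symmetric-secret'                         # illustrative key
    payload = {'sub': 'user-id', 'exp': 1525000000}  # illustrative claims

    # A signed (JWS) token: integrity-protected, but readable by anyone
    # holding it -- signing alone provides no confidentiality, which is
    # why encrypt-vs-just-sign remains an open question.
    token = jwt.encode(payload, key, algorithm='HS256')

    # Verification must pin the accepted algorithms; trusting whatever
    # the token header claims (e.g. 'none') is the classic JWT pitfall
    # the article above calls out.
    claims = jwt.decode(token, key, algorithms=['HS256'])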
[1] https://paragonie.com/blog/2017/03/jwt-json-web-tokens-is-bad-standard-that-everyone-should-avoid [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-28.log.html#t2018-03-28T17:53:06 [3] https://review.openstack.org/541903 [4] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-28.log.html#t2018-03-28T15:04:01 [5] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-29.log.html#t2018-03-29T16:16:06

### PostgreSQL support

We have an open bug for a problem in one of the SQL migrations on PostgreSQL[6] which brought to mind a TC resolution about the current status of PostgreSQL in OpenStack[7]. We do test migrations on PostgreSQL, but not at scale and not in a rolling upgrade scenario. No one has proposed to drop support for PostgreSQL since it more or less works most of the time, but we do need to document within keystone that it is not a first class citizen and resolving some of these weirder bugs is only best effort[8].

[6] https://bugs.launchpad.net/keystone/+bug/1755906 [7] https://governance.openstack.org/tc/reference/help-most-needed.html [8] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-27.log.html#t2018-03-27T18:12:15

### Help wanted lists

Like other projects, keystone gets a lot of drive-by patches for typo fixes, URL updates, and lately PTI updates. In the last meeting, I suggested that perhaps we could steer these types of contributions toward something that would be more beneficial to keystone specifically. Low-investment tasks like resolving deprecation warnings, for example, would provide a bigger value to us than typo fixes. I started a list of the types of things we could direct these contributors toward[9]; please feel free to add to it. I'll add it to our contributor guide.

In discussing this "help wanted list", we also circled back to the possibility of requesting to add keystone to the TC's "help most needed" list[10]. This would not be about focusing drive-by patches constructively, but on gaining long-term maintainers who can help us with some of keystone's fundamental issues and feature backlog. We haven't yet been moved to action on this.

[9] https://etherpad.openstack.org/p/keystone-help-wanted-list [10] https://governance.openstack.org/tc/reference/help-most-needed.html

## Open Specs

Search query: https://goo.gl/hdD9Kw

We merged our first spec for Rocky, which was for MFA improvements[11]. We also converged on some terminology decisions for the application credential improvement spec[12] and expect to merge it soon.

[11] https://review.openstack.org/553670 [12] https://review.openstack.org/396331

## Recently Merged Changes

Search query: https://goo.gl/FLwpEf

We merged 18 changes in the last week, including some significant bug fixes.

## Changes that need Attention

Search query: https://goo.gl/tW5PiH

There are 38 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Among these are a couple of changes to python-keystoneclient[13][14] to add the ability to return a request ID to the caller, which have been making steady progress for a while and are now in good shape.

[13] https://review.openstack.org/329913 [14] https://review.openstack.org/267456

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

We're about three weeks out from spec proposal freeze.
If you have a feature you would like to work on in keystone, please propose it now. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From doug at doughellmann.com Fri Mar 30 11:35:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 30 Mar 2018 07:35:17 -0400 Subject: [openstack-dev] [nova][oslo] what to do with problematic mocking in nova unit tests In-Reply-To: <1522257468-sup-81@lrrr.local> References: <1522257468-sup-81@lrrr.local> Message-ID: <4C87D8A7-9A50-4141-A667-6F7B1425B6E3@doughellmann.com> Anyone? > On Mar 28, 2018, at 1:26 PM, Doug Hellmann wrote: > > In the course of preparing the next release of oslo.config, Ben noticed > that nova's unit tests fail with oslo.config master [1]. > > The underlying issue is that the tests mock things that oslo.config > is now calling as part of determining where options are being set > in code. This isn't an API change in oslo.config, and it is all > transparent for normal uses of the library. But the mocks replace > os.path.exists() and open() for the entire duration of a test > function (not just for the isolated application code being tested), > and so the library behavior change surfaces as a test error. > > I'm not really in a position to go through and clean up the use of > mocks in those (and other?) tests myself, and I would like to not > have to revert the feature work in oslo.config, especially since > we did it for the placement API stuff for the nova team. > > I'm looking for ideas about what to do. > > Doug > > [1] http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pgrist at redhat.com Fri Mar 30 12:44:33 2018 From: pgrist at redhat.com (Paul Grist) Date: Fri, 30 Mar 2018 08:44:33 -0400 Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi wrote: > Greeting folks, > > During the last PTG we spent time discussing some ideas around an > All-In-One installer, using 100% of the TripleO bits to deploy a single > node OpenStack very similar with what we have today with the containerized > undercloud and what we also have with other tools like Packstack or > Devstack. > > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one > > One of the problems that we're trying to solve here is to give a simple > tool for developers so they can both easily and quickly deploy an OpenStack > for their needs. > > Big +1 on the concept, thanks to all those putting effort into this. > "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly > and without complexity, reproducing the same exact same tooling as TripleO > is using." > "As a Neutron developer, I need to develop a feature in Neutron and test > it with TripleO in my local env." > "As a TripleO dev, I need to implement a new service and test its > deployment in my local env." > "As a developer, I need to reproduce a bug in TripleO CI that blocks the > production chain, quickly and simply." > > Would this also be an opportunity for CI to enable lighter weight sanity and preliminary tests? 
"As a project, I want to implement a TripleO CI gate to detect regressions early, but have resource and test execution time constraints" > Probably more use cases, but to me that's what came into my mind now. > > Dan kicked-off a doc patch a month ago: https://review.openstack.org/# > /c/547038/ > And I just went ahead and proposed a blueprint: > https://blueprints.launchpad.net/tripleo/+spec/all-in-one > So hopefully we can start prototyping something during Rocky. > > Before talking about the actual implementation, I would like to gather > feedback from people interested by the use-cases. If you recognize yourself > in these use-cases and you're not using TripleO today to test your things > because it's too complex to deploy, we want to hear from you. > I want to see feedback (positive or negative) about this idea. We need to > gather ideas, use cases, needs, before we go design a prototype in Rocky. > > Thanks everyone who'll be involved, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Fri Mar 30 14:26:43 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 30 Mar 2018 16:26:43 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Solar" release Message-ID: <20180330142643.ff3czxy35khmjakx@eukaryote> The last version bump was in "Pike" release (commit: b980df0, 11-Feb-2017), and we didn't do any bump during "Queens". So it's time to increment the versions (which will also makes us get rid of some backward compatibility cruft), and pick future versions of libvirt and QEMU. As it stands, during the "Pike" release the advertized NEXT_MIN versions were set to: libvirt 1.3.1 and QEMU 2.5.0 -- but they weren't actually bumped for the "Queens" release. So they will now be applied for the "Rocky" release. (Hmm, but note that libvirt 1.3.1 was released more than 2 years ago[1].) While at it, we should also discuss about what will be the NEXT_MIN libvirt and QEMU versions for the "Solar" release. To that end, I've spent going through different distributions and updated the DistroSupportMatrix Wiki[2]. Taking the DistroSupportMatrix into picture, for the sake of discussion, how about the following NEXT_MIN versions for "Solar" release: (a) libvirt: 3.2.0 (released on 23-Feb-2017) This satisfies most distributions, but will affect Debian "Stretch", as they only have 3.0.0 in the stable branch -- I've checked their repositories[3][4]. Although the latest update for the stable release "Stretch (9.4)" was released only on 10-March-2018, I don't think they increment libvirt and QEMU versions in stable. Is there another way for "Stretch (9.4)" users to get the relevant versions from elsewhere? (b) QEMU: 2.9.0 (released on 20-Apr-2017) This too satisfies most distributions but will affect Oracle Linux -- which seem to ship QEMU 1.5.3 (released in August 2013) with their "7", from the Wiki. And will also affect Debian "Stretch" -- as it only has 2.8.0 Can folks chime in here? 
From sean.mcginnis at gmx.com Fri Mar 30 14:49:17 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 30 Mar 2018 09:49:17 -0500 Subject: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release In-Reply-To: <20180330142643.ff3czxy35khmjakx@eukaryote> References: <20180330142643.ff3czxy35khmjakx@eukaryote> Message-ID: <20180330144917.GA7872@sm-xps>

> While at it, we should also discuss what will be the NEXT_MIN
> libvirt and QEMU versions for the "Solar" release. To that end, I've
> spent time going through different distributions and updated the
> DistroSupportMatrix Wiki[2].
>
> Taking the DistroSupportMatrix into the picture, for the sake of discussion,
> how about the following NEXT_MIN versions for the "Solar" release:
>
Correction - for the "Stein" release. :)

From cdent+os at anticdent.org Fri Mar 30 14:50:23 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 30 Mar 2018 15:50:23 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-13 Message-ID:

This is an "expand" style update. Meaning I'm actively searching for code and specs that didn't show up on last week's update (which was a "contract").

# Most Important

What's pretty obvious from this expand session is we probably have way too much work in progress for the number of people available to review and write code.

There's been some extended discussion on how to manage various bits of the interaction of Cyborg, Nova and Placement. It went kind of all over the place. Eric tried to summarize some of the discussion at:

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128888.html

That discussion (that is, the links from the summary) is probably worth reviewing if you are evaluating doing "clever" things with placement.

(There's also been a _lot_ of discussion around NUMA handling and interactions between nova, neutron and placement. I hope someone will summarize that at some point.)

Earlier in the week there was a spec review sprint and while some things merged there are still plenty of outstanding specs.

This week also saw the start of runways (which has an impact on when specs are going to be evaluated):

https://etherpad.openstack.org/p/nova-runways-rocky

Note that something _not_ being on a runway doesn't mean you shouldn't review it, rather that if you're trying to decide what to review, the runways help to indicate priorities and stuff that is ready now. There is always plenty of other stuff that is ready.

Update provider tree and nested allocation candidates remain critical basic functionality on which much else is based.

# What's Changed

A spec modification was merged: to allow multiple 'member_of' parameters in resource provider and allocation candidate GET requests. This supports the in-progress request filter work.

'member_of' support (without the multi-handling above) for allocation_candidates merged.

Functional db tests that are on the placement side of the nova<->placement interaction have been moved under the placement hierarchy.

Placement now uses microversion-parse 0.2.1, which extracted much of the microversion middleware and related code from placement to that library. So there's less code in placement now, and some code that other folk can use if they like.
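As a syntax reminder, the requests in question look like the following. These are illustrative lines only, with placeholder aggregate UUIDs: a single 'member_of=in:...' list means membership in any of the listed aggregates, while repeated 'member_of' parameters (the newly specced multi-handling) each add a further membership constraint:

    GET /resource_providers?member_of=in:<agg1_uuid>,<agg2_uuid>
    GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512&member_of=<agg1_uuid>&member_of=<agg2_uuid>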
# Questions * If a trait traits in placement and nobody sees it, is it state or status? # Bugs * Placement related bugs not yet in progress: https://goo.gl/TgiPXb 16, +1 on last week * In progress placement bugs: https://goo.gl/vzGGDQ 12, -1 on last week # Specs (There are too many of these) * https://review.openstack.org/#/c/549067/ VMware: place instances on resource pool (using update_provider_tree) * https://review.openstack.org/#/c/418393/ Provide error codes for placement API * https://review.openstack.org/#/c/545057/ mirror nova host aggregates to placement API * https://review.openstack.org/#/c/552924/ Proposes NUMA topology with RPs * https://review.openstack.org/#/c/544683/ Account for host agg allocation ratio in placement * https://review.openstack.org/#/c/552927/ Spec for isolating configuration of placement database * https://review.openstack.org/#/c/552105/ Support default allocation ratios * https://review.openstack.org/#/c/438640/ Spec on preemptible servers * https://review.openstack.org/#/c/556873/ Handle nested providers for allocation candidates * https://review.openstack.org/#/c/556971/ Add Generation to Consumers * https://review.openstack.org/#/c/554305/ Mention (no) granular support for image traits * https://review.openstack.org/#/c/557912/ Update the vGPU spec * https://review.openstack.org/#/c/557065/ Proposes Multiple GPU types * https://review.openstack.org/#/c/555081/ Standardize CPU resource tracking * https://review.openstack.org/#/c/552722/ NUMA-aware live migration * https://review.openstack.org/#/c/502306/ Network bandwidth resource provider * https://review.openstack.org/#/c/509042/ Propose counting quota usage from placement * https://review.openstack.org/#/c/557580/ Fix endpoint URI /allocation_requests (reality fix) # Main Themes ## Update Provider Tree The ability of virt drivers to represent what resource providers they know about--whether that be numa, or clustered resources--is supported by the update_provider_tree method. This is still trucking along, is still critical: https://review.openstack.org/#/q/topic:bp/update-provider-tree ## Nested providers in allocation candidates Representing nested provides in the response to GET /allocation_candidates is required to actually make use of all the topology that update provider tree will report. https://review.openstack.org/#/q/topic:bp/nested-resource-providers https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates ## Request Filters A generic mechanism to allow the scheduler to futher refine the query made to /allocation_candidates to account for things like aggregates. https://review.openstack.org/#/q/topic:bp/placement-req-filter This drove the need for multiple member_of query params mentioned above. ## Mirror nova host aggregates to placement This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates. https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates It's part of what will make the req filters above useful. ## Forbidden Traits A way of expressing "I'd like resources that do _not_ have trait X". This is ready for review: https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits ## Consumer Generations There's a spec for this now: https://review.openstack.org/#/q/topic:bp/add-consumer-generation # Extraction No new code on the extraction front this week. 
Extraction related things continue to be associated with this topic: https://review.openstack.org/#/q/topic:bp/placement-extract The spec for optional database handling got some review during the spec sprint, but not from spec cores, so it still needs some attention: https://review.openstack.org/#/c/552927/ Jay has declared that he's going to start work on the os-resources-classes library. # Other Since this is an expand week, I've tried to add whatever else I can find to this list. Stuff that's already had at least a week on the list is at the front of the list. It would be nice if those things got some attention before the newer stuff (even if to say "no, kill this"). There are 21 entries here +10 on last week. * https://review.openstack.org/#/c/546660/ Purge comp_node and res_prvdr records during deletion of cells/hosts * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky A huge pile of improvements to osc-placement * https://review.openstack.org/#/c/546713/ Add compute capabilities traits (to os-traits) * https://review.openstack.org/#/c/524425/ General policy sample file for placement * https://review.openstack.org/#/c/546177/ Provide framework for setting placement error codes * https://review.openstack.org/#/c/527791/ Get resource provider by uuid or name (osc-placement) * https://review.openstack.org/#/c/533195/ Fix comments in get_all_with_shared() * https://review.openstack.org/#/q/topic:bug/1732731 Fixes related to shared providers * https://review.openstack.org/#/c/477478/ placement: Make API history doc more consistent * https://review.openstack.org/#/c/557355/ Add to contributor docs about handler testing * https://review.openstack.org/#/c/556631/ doc: Upgrade placement first * https://review.openstack.org/#/c/556669/ Handle agg generation conflict in report client * https://review.openstack.org/#/c/556628/ Slugification utilities for placement names * https://review.openstack.org/#/c/557086/ Remove usage of [placement]os_region_name * https://review.openstack.org/#/c/556633/ Get rid of 406 paths in report client * https://review.openstack.org/#/c/537614/ Add unit test for non-placement resize * https://review.openstack.org/#/c/554357/ Address issues raised in adding member_of to GET /a-c * https://review.openstack.org/#/c/533396/ Fix allocation_candidates not to ignore shared RPs * https://review.openstack.org/#/q/topic:bug/1724613 Sharing-related bug fixes * https://review.openstack.org/#/q/topic:bug/1732731 More sharing related bug fixes * https://review.openstack.org/#/c/493865/ cover migration cases with functional tests # End Too much. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From Richard.Pioso at dell.com Fri Mar 30 15:11:25 2018 From: Richard.Pioso at dell.com (Richard.Pioso at dell.com) Date: Fri, 30 Mar 2018 15:11:25 +0000 Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: <0bc196b81da6400e93b1a666e4bbf90e@ausx13mpc123.AMER.DELL.COM> > -----Original Message----- > From: Emilien Macchi [mailto:emilien at redhat.com] > Sent: Thursday, March 29, 2018 5:33 PM > To: OpenStack Development Mailing List dev at lists.openstack.org> > Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: > recap & roadmap > > Greeting folks, > > During the last PTG we spent time discussing some ideas around an All-In- > One installer, using 100% of the TripleO bits to deploy a single node > OpenStack very similar with what we have today with the containerized > undercloud and what we also have with other tools like Packstack or > Devstack. > > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one > > One of the problems that we're trying to solve here is to give a simple tool > for developers so they can both easily and quickly deploy an OpenStack for > their needs. > This would be awesome. Thank you so much! A bonus would be if the last known good bits, identified by continuous integration testing, could be requested. > "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly > and without complexity, reproducing the same exact same tooling as TripleO > is using." > "As a Neutron developer, I need to develop a feature in Neutron and test it > with TripleO in my local env." > "As a TripleO dev, I need to implement a new service and test its deployment > in my local env." > "As a developer, I need to reproduce a bug in TripleO CI that blocks the > production chain, quickly and simply." "As an Ironic developer, I need to modify a device driver and test it with TripleO in my local env." "As a technical writer, I need to deploy OpenStack in a VM on my laptop, so I could gain experience with OpenStack and TripleO without the need for a lab setup." > > Probably more use cases, but to me that's what came into my mind now. > > Dan kicked-off a doc patch a month ago: > https://review.openstack.org/#/c/547038/ > And I just went ahead and proposed a blueprint: > https://blueprints.launchpad.net/tripleo/+spec/all-in-one > So hopefully we can start prototyping something during Rocky. > > Before talking about the actual implementation, I would like to gather > feedback from people interested by the use-cases. If you recognize yourself > in these use-cases and you're not using TripleO today to test your things > because it's too complex to deploy, we want to hear from you. > I want to see feedback (positive or negative) about this idea. We need to > gather ideas, use cases, needs, before we go design a prototype in Rocky. 
> > Thanks everyone who'll be involved, > -- > Emilien Macchi

From whayutin at redhat.com Fri Mar 30 15:53:37 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 30 Mar 2018 15:53:37 +0000 Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID:

On Fri, 30 Mar 2018 at 08:45 Paul Grist wrote:
> On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi > wrote:
> >> Greeting folks,
>> >> During the last PTG we spent time discussing some ideas around an
>> All-In-One installer, using 100% of the TripleO bits to deploy a single
>> node OpenStack very similar with what we have today with the containerized
>> undercloud and what we also have with other tools like Packstack or
>> Devstack.
>> >> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>> >> One of the problems that we're trying to solve here is to give a simple
>> tool for developers so they can both easily and quickly deploy an OpenStack
>> for their needs.
>> >> Big +1 on the concept, thanks to all those putting effort into this.
> > >> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
>> and without complexity, reproducing the same exact same tooling as TripleO
>> is using."
>> "As a Neutron developer, I need to develop a feature in Neutron and test
>> it with TripleO in my local env."
>> "As a TripleO dev, I need to implement a new service and test its
>> deployment in my local env."
>> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
>> production chain, quickly and simply."
>> >> > Would this also be an opportunity for CI to enable lighter weight sanity
> and preliminary tests?
> "As a project, I want to implement a TripleO CI gate to detect regressions
> early, but have resource and test execution time constraints"
> >

Paul, You are 100% correct sir. That is the opportunity and intention we have here. Moving forward I see a single node installer that is comparable to devstack/packstack as a requirement for the project as we continue to improve the deployment to enable other projects to test/ci with TripleO in their check jobs.

Thanks for responding and your support!

> > >> Probably more use cases, but to me that's what came into my mind now.
>> >> Dan kicked-off a doc patch a month ago:
>> https://review.openstack.org/#/c/547038/
>> And I just went ahead and proposed a blueprint:
>> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
>> So hopefully we can start prototyping something during Rocky.
>> >> Before talking about the actual implementation, I would like to gather
>> feedback from people interested by the use-cases. If you recognize yourself
>> in these use-cases and you're not using TripleO today to test your things
>> because it's too complex to deploy, we want to hear from you.
>> I want to see feedback (positive or negative) about this idea. We need to
>> gather ideas, use cases, needs, before we go design a prototype in Rocky.
>> >> Thanks everyone who'll be involved, >> -- >> Emilien Macchi
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Tim.Bell at cern.ch Fri Mar 30 16:14:29 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Fri, 30 Mar 2018 16:14:29 +0000 Subject: [openstack-dev] [ironic] baremetal firmware lifecycle management In-Reply-To: References: Message-ID:

We've experienced different firmware update approaches... this is a wish list rather than a requirement, since in the end it can all be scripted if needed. Currently, these are manpower intensive and require a lot of co-ordination, since the upgrade operation has to be performed by the hardware support team but the end user defines the intervention window.

a. Some BMC updates can be applied out of band, over the network with appropriate BMC rights. It would be very nice if Ironic could orchestrate these updates since they can be painful to organise. One aspect of this would be for Ironic to orchestrate the updates and keep track of success/failure along with the current version of the BMC firmware (maybe as a property?). A typical example of this is when a security flaw is found in a particular hardware model's BMC and we want to update to the latest version given an image provided by the vendor.

b. A set of machines have been delivered but an incorrect BIOS setting is found. We want to reflash the BIOSes with the latest BIOS code/settings. This would generally be an operation requiring a reboot. We would ask our users to follow a procedure at their convenience to do so (within a window) and then we would force the change. An inventory of the current versions would help to identify those who do not do the update and remind them.

c. A disk firmware issue is found. Similar to b) but there is also the possibility for partial completion, where some disks correctly update but others do not.

Overall, it would be great if we can find a way to allow self-service hardware management where the end users can choose the right point to follow the firmware update process within a window and then we can force the upgrade if they do not do so.

Tim

-----Original Message----- From: Julia Kreger Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Friday, 30 March 2018 at 00:09 To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [ironic] baremetal firmware lifecycle management

One of the topics that came up during the Ironic sessions at the Rocky PTG was firmware management. During this discussion, we quickly reached the consensus that we lacked the ability to discuss and reach a forward direction without:

* An understanding of capabilities and available vendor mechanisms that can be used to consistently determine and assert desired firmware to a baremetal node. Ideally, we could find a commonality of two or more vendor mechanisms that can be abstracted cleanly into high level actions. Ideally this would boil down to something as simple as "list_firmware()" and "set_firmware()". Additionally there are surely some caveats we need to understand, such as if the firmware update must be done in a particular state, and if a particular prior condition or next action is required for the particular update.

* An understanding of several use cases where a deployed node may need to have specific firmware applied. We are presently aware of two cases. The first is that specific firmware is needed to match an approved operational profile. The second is a desire to perform ad-hoc changes or have new versions of firmware asserted while a node has already been deployed.

Naturally any insight that can be shared will help the community to best model the interaction so we can determine next steps and ultimately implementation details.

-Julia

__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
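To give the thread something concrete to poke at, a minimal sketch of the interface shape named above might look like this. Purely illustrative Python, not actual ironic code; the component and field names are invented:

    import abc

    class FirmwareInterface(abc.ABC):
        # Hypothetical per-driver interface for firmware lifecycle.

        @abc.abstractmethod
        def list_firmware(self, task):
            # Return the node's current firmware inventory, e.g.
            # [{'component': 'bmc', 'version': '2.40'},
            #  {'component': 'bios', 'version': '1.7.1'}]
            pass

        @abc.abstractmethod
        def set_firmware(self, task, settings):
            # Assert desired firmware, e.g.
            # settings=[{'component': 'bmc', 'image': 'http://...'}].
            # Implementations would carry the caveats discussed above:
            # required node state, prior conditions, and any follow-up
            # action such as a reboot.
            pass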
From amy at demarco.com Fri Mar 30 16:24:44 2018 From: amy at demarco.com (Amy Marrich) Date: Fri, 30 Mar 2018 11:24:44 -0500 Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID:

And just an aside, the All-in-ones are great tools for new operators to be able to get in and learn how to use OpenStack, even if the underlying configuration isn't a multi-node installation.

Amy (spotz)

On Fri, Mar 30, 2018 at 10:53 AM, Wesley Hayutin wrote:
> > > On Fri, 30 Mar 2018 at 08:45 Paul Grist wrote:
> >> On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi >> wrote:
>> >>> Greeting folks,
>>> >>> During the last PTG we spent time discussing some ideas around an
>>> All-In-One installer, using 100% of the TripleO bits to deploy a single
>>> node OpenStack very similar with what we have today with the containerized
>>> undercloud and what we also have with other tools like Packstack or
>>> Devstack.
>>> >>> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>>> >>> One of the problems that we're trying to solve here is to give a simple
>>> tool for developers so they can both easily and quickly deploy an OpenStack
>>> for their needs.
>>> >>> Big +1 on the concept, thanks to all those putting effort into this.
>> >> >>> "As a developer, I need to deploy OpenStack in a VM on my laptop,
>>> quickly and without complexity, reproducing the same exact same tooling as
>>> TripleO is using."
>>> "As a Neutron developer, I need to develop a feature in Neutron and test
>>> it with TripleO in my local env."
>>> "As a TripleO dev, I need to implement a new service and test its
>>> deployment in my local env."
>>> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
>>> production chain, quickly and simply."
>>> >>> >> Would this also be an opportunity for CI to enable lighter weight sanity
>> and preliminary tests?
>> "As a project, I want to implement a TripleO CI gate to detect
>> regressions early, but have resource and test execution time constraints"
>> >> > Paul,
> You are 100% correct sir. That is the opportunity and intention we have
> here. Moving forward I see a single node installer that is comparable to
> devstack/packstack as a requirement for the project as we continue to
> improve the deployment to enable other projects to test/ci with TripleO in
> their check jobs.
> > Thanks for responding and your support! > > >> >> >>> Probably more use cases, but to me that's what came into my mind now. >>> >>> Dan kicked-off a doc patch a month ago: https://review.openstack.org/# >>> /c/547038/ >>> And I just went ahead and proposed a blueprint: >>> https://blueprints.launchpad.net/tripleo/+spec/all-in-one >>> So hopefully we can start prototyping something during Rocky. >>> >>> Before talking about the actual implementation, I would like to gather >>> feedback from people interested by the use-cases. If you recognize yourself >>> in these use-cases and you're not using TripleO today to test your things >>> because it's too complex to deploy, we want to hear from you. >>> I want to see feedback (positive or negative) about this idea. We need >>> to gather ideas, use cases, needs, before we go design a prototype in Rocky. >>> >>> Thanks everyone who'll be involved, >>> -- >>> Emilien Macchi >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Fri Mar 30 16:36:26 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Fri, 30 Mar 2018 11:36:26 -0500 Subject: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release In-Reply-To: <20180330144917.GA7872@sm-xps> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180330144917.GA7872@sm-xps> Message-ID: ;) On Fri, Mar 30, 2018 at 9:49 AM, Sean McGinnis wrote: > > While at it, we should also discuss about what will be the NEXT_MIN > > libvirt and QEMU versions for the "Solar" release. To that end, I've > > spent going through different distributions and updated the > > DistroSupportMatrix Wiki[2]. > > > > Taking the DistroSupportMatrix into picture, for the sake of discussion, > > how about the following NEXT_MIN versions for "Solar" release: > > > Correction - for the "Stein" release. :) > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Fri Mar 30 17:52:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 30 Mar 2018 13:52:29 -0400 Subject: [openstack-dev] [requirements] Adding objgraph to global requirements In-Reply-To: <117c1a80-f258-4ea5-9ad3-3db69d28f3bd@Spark> References: <31ade43d-37f7-4cdd-82ff-50b069491aac@Spark> <1522329549-sup-5818@lrrr.local> <117c1a80-f258-4ea5-9ad3-3db69d28f3bd@Spark> Message-ID: <368D7C0B-67EB-469A-AA18-6599B7FB60C0@doughellmann.com> On Mar 30, 2018, at 4:08 AM, Renat Akhmerov > wrote: > On 29 Mar 2018, 20:20 +0700, Doug Hellmann >, wrote: > >> What sorts of tools are you talking about building? > > Well, some more high level functions to diagnose memory usage, especially aids to help find memory leaks. Or not even necessary memory leaks, we may also want/need to reduce the memory footprint. Although objgraph is very helpful on its own it is still pretty low level and, for example, if we want to inject some diagnostic message somewhere in the code (e.g. “top 5 object types whose total size increased memory consumption since the last call”) we’d have to directly invoke objgraph and remember what exact parameters to use to achieve that. I think it’d be more convenient to have some more high-level functions that already encapsulate all necessary logic related to using objgraph and achieve what’s needed. Also, I wouldn’t want to spread out uses of objgraph all of the code, I would better have one module that uses it. Just to reduce coupling with a 3rd party lib. > > Sorry if that sounds too abstract now, I’ll try to come with concrete examples to demonstrate what I mean. > > Renat > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev It sounds like you have ideas for things that could either live in objgraph directly or a new library that wraps objgraph. Either way I think it’s a good idea to build something concrete so we can see the impact of integrating it. You might want to have a look at oslo.reports as an integration point. Doug -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Sat Mar 31 00:01:54 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 30 Mar 2018 19:01:54 -0500 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: <299fbfd4-6895-7b92-8f63-953fc7fbcb30@gmail.com> References: <20180329142813.GA25762@sm-xps> <79dbfa81-c62e-4db8-3799-abb41c5d57e2@gmail.com> <20180329235009.GB5654@sm-xps> <299fbfd4-6895-7b92-8f63-953fc7fbcb30@gmail.com> Message-ID: <1cbfc37b-ac47-5b6f-5986-6a31950b9e41@gmail.com> On 3/29/2018 8:36 PM, Matt Riedemann wrote: > On 3/29/2018 6:50 PM, Sean McGinnis wrote: >> May we can add a "Reimaging" state to the volume? Then Nova could >> poll for it >> to go from that back to Available? > > That would be fine with me, and maybe similar to how 'extending' and > 'retyping' work for an attached volume? > > Nova wouldn't wait for the volume to go to 'available', we don't want > it to go to 'available', we'd just wait for it to go back to > 'reserved'. During a rebuild the instance still needs to keep the > volume logically attached to it so another instance can't grab it. > This all sounds reasonable to me. Thanks for hashing it out guys! 
Jay From openstack at fried.cc Sat Mar 31 00:34:12 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 30 Mar 2018 19:34:12 -0500 Subject: [openstack-dev] [placement] Anchor/Relay Providers Message-ID: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> Folks who care about placement (but especially Jay and Tetsuro)- I was reviewing [1] and was at first very unsatisfied that we were not returning the anchor providers in the results. But as I started digging into what it would take to fix it, I realized it's going to be nontrivial. I wanted to dump my thoughts before the weekend. It should be legal to have a configuration like: # CN1 (VCPU, MEMORY_MB) # / \ # /agg1 \agg2 # / \ # SS1 SS2 # (DISK_GB) (IPV4_ADDRESS) And make a request for DISK_GB,IPV4_ADDRESS; And have it return a candidate including SS1 and SS2. The CN1 resource provider acts as an "anchor" or "relay": a provider that doesn't provide any of the requested resource, but connects to one or more sharing providers that do so. This scenario doesn't work today (see bug [2]). Tetsuro has a partial fix [1]. However, whereas that fix will return you an allocation_request containing SS1 and SS2, neither the allocation_request nor the provider_summary mentions CN1. That's bad. Consider use cases like Nova's, where we have to land that allocation_request on a host: we have no good way of figuring out who that host is. Starting from the API, the response payload should look like: { "allocation_requests": [ {"allocations": { # This is missing ==> CN1_UUID: {"resources": {}}, # <== SS1_UUID: {"resources": {"DISK_GB": 1024}}, SS2_UUID: {"resources": {"IPV4_ADDRESS": 1}} }} ], "provider_summaries": { # This is missing ==> CN1_UUID: {"resources": { "VCPU": {"used": 123, "capacity": 456} }}, # <== SS1_UUID: {"resources": { "DISK_GB": {"used": 2048, "capacity": 1048576} }}, SS2_UUID: {"resources": { "IPV4_ADDRESS": {"used": 4, "capacity": 32} }} }, } Here's why it's not working currently: => CN1_UUID isn't in `summaries` [3] => because _build_provider_summaries [4] doesn't return it => because it's not in usages because _get_usages_by_provider_and_rc [5] only finds providers providing resource in that RC => and since CN1 isn't providing resource in any requested RC, it ain't included. But we have the anchor provider's (internal) ID; it's the ns_rp_id we're iterating on in this loop [6]. So let's just use that to get the summary and add it to the mix, right? Things that make that difficult: => We have no convenient helper that builds a summary object without specifying a resource class (which is a separate problem, because it means resources we didn't request don't show up in the provider summaries either - they should). => We internally build these gizmos inside out - an AllocationRequest contains a list of AllocationRequestResource, which contains a provider UUID, resource class, and amount. The latter two are required - but would be n/a for our anchor RP. I played around with this and came up with something that gets us most of the way there [7]. It's quick and dirty: there are functional holes (like returning "N/A" as a resource class; and traits are missing) and places where things could be made more efficient. But it's a start. 
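To make the scenario concrete, here is roughly how the failing query could be issued against a live placement endpoint. This is a sketch only: the endpoint, token and microversion are assumptions, and it presumes CN1/SS1/SS2 already exist with the inventories, aggregates (agg1, agg2) and MISC_SHARES_VIA_AGGREGATE traits described above:

    import requests

    PLACEMENT = 'http://localhost/placement'            # assumed endpoint
    HDRS = {'X-Auth-Token': 'admin',                    # assumed token
            'OpenStack-API-Version': 'placement 1.21'}  # assumed version

    resp = requests.get(PLACEMENT + '/allocation_candidates',
                        params={'resources': 'DISK_GB:1024,IPV4_ADDRESS:1'},
                        headers=HDRS)
    body = resp.json()
    for ar in body['allocation_requests']:
        # Today only SS1_UUID and SS2_UUID show up here; the anchor
        # CN1_UUID is absent from both structures below.
        print(ar['allocations'].keys())
    print(body['provider_summaries'].keys())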
-efried [1] https://review.openstack.org/#/c/533437/ [2] https://bugs.launchpad.net/nova/+bug/1732731 [3] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3308 [4] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3062 [5] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 2658 [6] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3303 [7] https://review.openstack.org/#/c/558014/ From stdake at cisco.com Sat Mar 31 03:13:01 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Sat, 31 Mar 2018 03:13:01 +0000 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> Message-ID: On March 30, 2018 at 2:17:09 AM, Thierry Carrez (thierry at openstack.org) wrote: There is a 3rd option, which I've been advocating for a while. The tension here lies in the fact that the Kolla team is both a low-level provider (Kolla the docker image provider) and a higher-level deployment provider (kolla-ansible, kolla-k8s). The low-level images are used outside of the team (TripleO, openstack-helm), in the team (kolla-ansible) and in the team but by a different group (kolla-k8s). The proposals on the table only partly resolve that tension. Keeping kolla-k8s in kolla, spinning it out or merging it with OSH still make kolla both a low-level and a high-level provider. As long as kolla-ansible shares naming and is produced by the exact same team producing Kolla itself, anything else than kolla-ansible will stay a second-class consumer, breeding validation fears like the one described above. For the structure to match the technical goals, I wonder if "Kolla" should not focus on low-level image production, with the various higher-level Kolla consumers being set up as separate projects on an equal footing. I understand that Kolla and Kolla-Ansible are currently mostly the same group of people, but nothing in OpenStack prevents anyone from being part of two teams. Setting up discrete groups would actually encourage people interested in Kolla but not so much in Kolla-Ansible to join the Kolla team. It would encourage the Kolla team to treat all consumers equally and test their images on those various consumers equally. So my 3rd option would be to split the current Kolla team into three teams with different names, matching the three deliverables that this team currently has. Then if kolla-k8s needs to be sunset or merged with OSH, so be it. -- Thierry Carrez (ttx) Just got back from Ready Player One. Good Movie! Thanks Thierry for offering up your advocated position. When contributors joined the Kolla project, we had a clear mission of providing containers and deployment tools. Our ultimate objective was to make deployment *EASY* and solve from my perspective as PTL at the time what was OpenStack's number one pain point. What you're asking is for people to divide their time between two separate projects and take responsibility for making containers functional for other projects. 
Currently the core team has been generous with our time reviewing, and +2/+A'ing, the work of people that want to use other deployment tools with Kolla containers, as referenced by TripleO, OSH, as well as a variety of other proprietary deployment tools. Some of these contributors are also core reviewers themselves. I don't expect this work would slow down under the current governance model of Kolla as we provide a very clear API to use when interfacing with Kolla. Hence we do not *make it hard* to contribute to the containers deliverable. The same cannot be said of your proposed governance model. What your proposal asks is for contributors that came to scratch an itch to scratch a bunch of other itches that may not be of interest to them, attend twice as many meetings, attend twice as many PTG sessions or midcycles, and grow some amount of expertise in understanding the various problems of other deployment projects. Ultimately I have a big concern this would drive contributors away, rather than solve a perceived second-order problem with our current governance model. Regards, -steve [1] is for more reading about the structure of Kolla. [1] https://www.ansible.com/blog/openstack-kolla __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Sat Mar 31 13:17:52 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Sat, 31 Mar 2018 15:17:52 +0200 Subject: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Solar" release In-Reply-To: <20180330144917.GA7872@sm-xps> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180330144917.GA7872@sm-xps> Message-ID: <20180331131752.rtdm3c3dw7iyyqyn@eukaryote> On Fri, Mar 30, 2018 at 09:49:17AM -0500, Sean McGinnis wrote: > > While at it, we should also discuss what will be the NEXT_MIN > > libvirt and QEMU versions for the "Solar" release. To that end, I've > > spent going through different distributions and updated the > > DistroSupportMatrix Wiki[2]. > > > > Taking the DistroSupportMatrix into picture, for the sake of discussion, > > how about the following NEXT_MIN versions for "Solar" release: > > > Correction - for the "Stein" release.
:) > > Darn, I should've triple-checked before I assumed it is to be "Solar". > If "Stein" is confirmed, I'll re-send this email with the correct > release name for clarity. It actually is: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128899.html -- All Hail our Newest Release Name - OpenStack Stein (That email went into 'openstack-operators' maildir for me; my filtering fault.) I won't start another thread; will just leave this existing thread intact, as people will read it as: "whatever name the 'S' release ends up with" (as 'fungi' put it on IRC). [...] -- /kashyap From fungi at yuggoth.org Sat Mar 31 13:44:28 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 31 Mar 2018 13:44:28 +0000 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> Message-ID: <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> On 2018-03-31 03:13:01 +0000 (+0000), Steven Dake (stdake) wrote: [...] > When contributors joined the Kolla project, we had a clear mission > of providing containers and deployment tools. Our ultimate > objective was to make deployment *EASY* and solve from my > perspective as PTL at the time what was OpenStack's number one > pain point. [...] So, if I understand what you're suggesting, Kolla is a deployment project. It uses Ansible and builds container images, but those are merely implementation details. Other projects have found the container images useful outside of Kolla and so the Kolla team has attempted to be helpful in supporting their direct use in unrelated deployment tools but has no desire to decouple the deployment tooling and image building components any further than necessary. Given this, it sounds like the current Kolla mission statement of "provide production-ready containers and deployment tools for operating OpenStack clouds" could use some adjustment to drop the production-ready containers aspect for further clarity. Do you agree? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kchamart at redhat.com Sat Mar 31 14:09:29 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Sat, 31 Mar 2018 16:09:29 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: <20180330142643.ff3czxy35khmjakx@eukaryote> References: <20180330142643.ff3czxy35khmjakx@eukaryote> Message-ID: <20180331140929.r5kj3qyrefvsovwf@eukaryote> [Meta comment: corrected the email subject: "Solar" --> "Stein"] On Fri, Mar 30, 2018 at 04:26:43PM +0200, Kashyap Chamarthy wrote: > The last version bump was in "Pike" release (commit: b980df0, > 11-Feb-2017), and we didn't do any bump during "Queens". So it's time > to increment the versions (which will also make us get rid of some > backward compatibility cruft), and pick future versions of libvirt and > QEMU. > > As it stands, during the "Pike" release the advertized NEXT_MIN versions > were set to: libvirt 1.3.1 and QEMU 2.5.0 -- but they weren't actually > bumped for the "Queens" release. So they will now be applied for the > "Rocky" release. (Hmm, but note that libvirt 1.3.1 was released more > than 2 years ago[1].) > > While at it, we should also discuss what will be the NEXT_MIN > libvirt and QEMU versions for the "Solar" release.
To that end, I've > spent going through different distributions and updated the > DistroSupportMatrix Wiki[2]. > > Taking the DistroSupportMatrix into picture, for the sake of discussion, > how about the following NEXT_MIN versions for "Solar" release: > > (a) libvirt: 3.2.0 (released on 23-Feb-2017) > > This satisfies most distributions, but will affect Debian "Stretch", > as they only have 3.0.0 in the stable branch -- I've checked their > repositories[3][4]. Although the latest update for the stable > release "Stretch (9.4)" was released only on 10-March-2018, I don't > think they increment libvirt and QEMU versions in stable. Is > there another way for "Stretch (9.4)" users to get the relevant > versions from elsewhere? > > (b) QEMU: 2.9.0 (released on 20-Apr-2017) > > This too satisfies most distributions but will affect Oracle Linux > -- which seem to ship QEMU 1.5.3 (released in August 2013) with > their "7", from the Wiki. And will also affect Debian "Stretch" -- > as it only has 2.8.0 > > Can folks chime in here? > > [1] https://www.redhat.com/archives/libvirt-announce/2016-January/msg00002.html > [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix > [3] https://packages.qa.debian.org/libv/libvirt.html > [4] https://packages.qa.debian.org/libv/libvirt.html > > -- > /kashyap -- /kashyap From fungi at yuggoth.org Sat Mar 31 15:00:27 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 31 Mar 2018 15:00:27 +0000 Subject: [openstack-dev] [infra][qa] Pip 10 is on the way Message-ID: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> According to a notice[1] posted to the pypa-announce and distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0 is expected to be released in two weeks (over the April 14/15 weekend). We know it's at least going to start breaking[2] DevStack and we need to come up with a plan for addressing that, but we don't know how much more widespread the problem might end up being so encourage everyone to try it out now where they can. [1] https://mail.python.org/pipermail/distutils-sig/2018-March/032104.html [2] https://github.com/pypa/pip/issues/4805 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Sat Mar 31 17:09:26 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sat, 31 Mar 2018 12:09:26 -0500 Subject: [openstack-dev] [infra][qa] Pip 10 is on the way In-Reply-To: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> Message-ID: <20180331170925.GA66621@smcginnis-mbp.local> On Sat, Mar 31, 2018 at 03:00:27PM +0000, Jeremy Stanley wrote: > According to a notice[1] posted to the pypa-announce and > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0 > is expected to be released in two weeks (over the April 14/15 > weekend). We know it's at least going to start breaking[2] DevStack > and we need to come up with a plan for addressing that, but we don't > know how much more widespread the problem might end up being so > encourage everyone to try it out now where they can. > > [1] https://mail.python.org/pipermail/distutils-sig/2018-March/032104.html > [2] https://github.com/pypa/pip/issues/4805 > -- > Jeremy Stanley One upcoming change is the inability of having "import pip" in code. That change snuck into 9.0.2 (and was worked around giving incorrect users a little more time with 9.0.3). 
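The breaking pattern looks roughly like this (a sketch; pip 10 moves its internals under pip._internal, so the old top-level entry point goes away):

    # Breaks under pip 10 -- pip.main() no longer exists:
    import pip
    pip.main(['install', 'somepackage'])

    # Safer: invoke pip as a subprocess instead of importing it:
    import subprocess
    import sys
    subprocess.check_call(
        [sys.executable, '-m', 'pip', 'install', 'somepackage'])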
I think we only found an issue with this in a library in use by neutron, but please be aware that any programmatic use of pip as a library will need to be fixed. From stdake at cisco.com Sat Mar 31 18:06:07 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Sat, 31 Mar 2018 18:06:07 +0000 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> Message-ID: On March 31, 2018 at 6:45:03 AM, Jeremy Stanley (fungi at yuggoth.org) wrote: [...] Given this, it sounds like the current Kolla mission statement of "provide production-ready containers and deployment tools for operating OpenStack clouds" could use some adjustment to drop the production-ready containers aspect for further clarity. Do you agree? [...] I appreciate your personal interest in attempting to clarify the Kolla mission statement. The change in the Kolla mission statement you propose is unnecessary. Regards -steve Jeremy Stanley __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Sat Mar 31 18:39:51 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Sat, 31 Mar 2018 18:39:51 +0000 Subject: [openstack-dev] [Vitrage] New proposal for analysis. In-Reply-To: <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> Message-ID: <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> Hi Minwook, I understand your concern about the security issue. But how would that be different if the API call is passed through Vitrage API? The authentication from vitrage-dashboard to vitrage API will work, but then Vitrage will call an external API and you’ll have the same security issue, right? I don’t understand what is the difference between calling the external component from vitrage-dashboard and calling it from vitrage. Best regards, Ifat. From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 29 March 2018 at 14:51 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) I wrote my opinion on your comment. Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? Authentication issues: I think the check component is a separate component based on the API. In my opinion, if the check component has a separate api address from the vitrage to receive requests from the Vitrage-dashboard, the Vitrage-dashboard needs to know the api address for the check component. This can result in a request / response situation open to anyone, regardless of the authentication supported by openstack between the Vitrage-dashboard and the request / response procedure of check component. This is possible not only through the Vitrage-dashboard, but also with simple commands such as curl. 
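For example, roughly (the endpoint name and port here are hypothetical):

    import requests

    # Anyone who can reach the check component's address could trigger a
    # command on a host or VM, with no Keystone token at all:
    requests.post('http://check-component:9999/v1/checks',
                  json={'target': 'compute-01',
                        'command': 'ping -c 4 10.0.0.5'})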
(I think it is unnecessary to implement a separate authentication system for the check component.) This problem may occur if someone knows the api address for the check component, which can cause the host and VM to execute system commands. what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? If the window is closed before the check is finished, the user can not check the result. To solve this problem, I think that temporarily saving a list of recent results is also a solution. By storing temporary lists (for example, up to 10), the user can see the previous results and think that it is also possible to empty the list by the user. how is it? Thank you. Best Regrads, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Thursday, March 29, 2018 8:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? Thanks, Ifat. From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 29 March 2018 at 10:25 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat and Vitrage team. I would like to explain more about the implementation part of the mail I sent last time. The flow is as follows. Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component The last time I mentioned it as api-handler, it would be better to call the check component directly from Vitarge-api without having to use it. I hope this helps you understand. Thank you Best Regards, Minwook. From: MinWookKim [mailto:delightwook at ssu.ac.kr] Sent: Wednesday, March 28, 2018 11:21 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) This proposal is a proposal that we expect to be useful from a user perspective. From a manager's point of view, we need an implementation that minimizes the overhead incurred by the proposal. The answers to some of your questions are: • I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. Yes, that's right. We do not need to save it to Vitrage because we just need to check the results. However, it is possible to implement the function directly in Vitrage-dashboard separately from Vitrage like add-action-list panel, but it seems that it is not enough to implement all the functions. If you do not mind, we will have the following flow. 1. The user requests the check action from the vitrage-dashboard (add-action-list-panel). 2. Call the check component through the vitrage's API handler. 3. The check component executes the command and returns the result. Because it is my opinion only, please tell us if there is an unnecessary part. :) • Do you expect the user to click an entity, select an action to run (e.g. 
‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? My idea was to select the task, wait for the results in an open panel, and then instantly see it in the panel. If we switch to another menu before the scan is complete, we will not be able to see the results. Parallel checking is a matter of fact. (This can cause excessive overhead.) For earlier results, it may be okay to temporarily save the open panel until we exit the panel. We can see the previous results through the temporary saved results. • Any thoughts of what component will implement those checks? Or maybe these will be just scripts? I think I implement a separate component to request it. • It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. It is expected to be really good. It would be very useful if an Entity-Graph generates an alarm based on the check result. I think that part will be able to talk in detail later. My answer is my opinions and assumptions. If you think my implementation is wrong, or an inefficient implementation, please do not hesitate to tell me. Thanks. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, March 28, 2018 2:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I think that from a user’s perspective, these are very good ideas. I have some questions regarding the UX and the implementation, since I’m trying to think what could be the best way to execute such actions from Vitrage. · I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. · Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? · Any thoughts of what component will implement those checks? Or maybe these will be just scripts? · It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 27 March 2018 at 14:45 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the ‘Add action list panel for entity click action’. (https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities in several entities that make up the infrastructure in the entity graph. 
Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with the VM and Host, or when we want to check the status, we need to access the console individually for each VM and Host. This situation causes unnecessary behavior when the number of VMs and hosts increases. My new suggestion is that if we have a large number of vm and host, we do not need to directly connect to each VM, host console to enter the system command. Instead, we can send a system command to VM and hosts in the cloud through this proposal. It is only checking results. I have written some use-cases for an efficient explanation of the function. From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any Agent / Client that can cause load on VM, Host. 2. I want to provide a simple UI so that users or administrators can get the desired information to multiple VMs and hosts. 3. I want to be able to grasp the results at a glance. 4. I want to implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook. -------------- next part -------------- An HTML attachment was scrubbed... URL: From inc007 at gmail.com Sat Mar 31 19:07:54 2018 From: inc007 at gmail.com (=?UTF-8?B?TWljaGHFgiBKYXN0cnrEmWJza2k=?=) Date: Sat, 31 Mar 2018 12:07:54 -0700 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> Message-ID: So my take on the issue. I think splitting Kolla and Kolla-Ansible to completely new project (including name change and all) might look good from purity perspective (they're effectively separate), but would cause chaos and damage to production deployments people use. While code will be the same, do we scrub "kolla" name from kolla-ansible code? Do we change config paths? Configs lands in /etc/kolla so I guess new project shouldn't do that? Not to mention that operators are used to this nomenclature and build tools around it (for example Kayobe) and there is no telling how many production deployments would get hurt. At the same time I don't think there is much to gain from split like that, so that's not really practical. We can do this for Kolla-kubernetes as it hasn't released 1.0 so there won't (or shouldn't) be production environments based on it. We already have separate core teams for Kolla and Kolla-Ansible. From my experience organizing PTG and other events for both (or rather all 3 deliverables) together makes sense and makes scheduling of attendance much easier. On 31 March 2018 at 11:06, Steven Dake (stdake) wrote: > On March 31, 2018 at 6:45:03 AM, Jeremy Stanley (fungi at yuggoth.org) wrote: > > [...] > Given this, it sounds like the current Kolla mission statement of > "provide production-ready containers and deployment tools for > operating OpenStack clouds" could use some adjustment to drop the > production-ready containers aspect for further clarity. Do you > agree? > [...] > > I appreciate your personal interest in attempting to clarify the Kolla > mission statement. > > The change in the Kolla mission statement you propose is unnecessary. 
> > Regards > > -steve > > > > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Sat Mar 31 19:34:53 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 31 Mar 2018 19:34:53 +0000 Subject: [openstack-dev] [kolla][kolla-kubernete][tc][openstack-helm]propose retire kolla-kubernetes project In-Reply-To: References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> Message-ID: <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org> On 2018-03-31 18:06:07 +0000 (+0000), Steven Dake (stdake) wrote: > I appreciate your personal interest in attempting to clarify the > Kolla mission statement. > > The change in the Kolla mission statement you propose is > unnecessary. [...] I should probably have been more clear. The Kolla mission statement right now says that the Kolla team produces two things: containers and deployment tools. This may make it challenging for the team to avoid tightly coupling their deployment tooling and images, creating a stratification of first-class (those created by the Kolla team) and second-class (those created by anyone else) support for deployment tools using those images. Is the intent to provide "a container-oriented deployment solution and the container images it uses" (kolla-ansible as first-class supported deployment engine for these images) or "container images for use by arbitrary deployment solutions, along with an example deployment solution for use with them" (kolla-ansible on equal footing with competing systems that make use of the same images)? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at fried.cc Sat Mar 31 20:22:09 2018 From: openstack at fried.cc (Eric Fried) Date: Sat, 31 Mar 2018 15:22:09 -0500 Subject: [openstack-dev] [placement] Anchor/Relay Providers In-Reply-To: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> References: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> Message-ID: <513e628d-4405-2f70-2900-da60327988f5@fried.cc> /me responds to self Good progress has been made here. Tetsuro solved the piece where provider summaries were only showing resources that had been requested - with [8] they show usage information for *all* their resources. In order to make use of both [1] and [8], I had to shuffle them into the same series - I put [8] first - and then balance my (heretofore) WIP [7] on the top. So we now have a lovely 5-part series starting at [9]. Regarding the (heretofore) WIP [7], I cleaned it up and made it ready. QUESTION: Do we need a microversions for [8] and/or [1] and/or [7]? Each changes the response payload content of GET /allocation_candidates, so yes; but that content was arguably broken before, so no. Please comment on the patches accordingly. 
-efried > [1] https://review.openstack.org/#/c/533437/ > [2] https://bugs.launchpad.net/nova/+bug/1732731 > [3] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3308 > [4] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3062 > [5] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 2658 > [6] https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3303 > [7] https://review.openstack.org/#/c/558014/ [8] https://review.openstack.org/#/c/558045/ [9] https://review.openstack.org/#/c/558044/ On 03/30/2018 07:34 PM, Eric Fried wrote: > Folks who care about placement (but especially Jay and Tetsuro)- > > I was reviewing [1] and was at first very unsatisfied that we were not > returning the anchor providers in the results. But as I started digging > into what it would take to fix it, I realized it's going to be > nontrivial. I wanted to dump my thoughts before the weekend. > > > It should be legal to have a configuration like: > > # CN1 (VCPU, MEMORY_MB) > # / \ > # /agg1 \agg2 > # / \ > # SS1 SS2 > # (DISK_GB) (IPV4_ADDRESS) > > And make a request for DISK_GB,IPV4_ADDRESS; > And have it return a candidate including SS1 and SS2. > > The CN1 resource provider acts as an "anchor" or "relay": a provider > that doesn't provide any of the requested resource, but connects to one > or more sharing providers that do so. > > This scenario doesn't work today (see bug [2]). Tetsuro has a partial > fix [1]. > > However, whereas that fix will return you an allocation_request > containing SS1 and SS2, neither the allocation_request nor the > provider_summary mentions CN1. > > That's bad. Consider use cases like Nova's, where we have to land that > allocation_request on a host: we have no good way of figuring out who > that host is. > > > Starting from the API, the response payload should look like: > > { > "allocation_requests": [ > {"allocations": { > # This is missing ==> > CN1_UUID: {"resources": {}}, > # <== > SS1_UUID: {"resources": {"DISK_GB": 1024}}, > SS2_UUID: {"resources": {"IPV4_ADDRESS": 1}} > }} > ], > "provider_summaries": { > # This is missing ==> > CN1_UUID: {"resources": { > "VCPU": {"used": 123, "capacity": 456} > }}, > # <== > SS1_UUID: {"resources": { > "DISK_GB": {"used": 2048, "capacity": 1048576} > }}, > SS2_UUID: {"resources": { > "IPV4_ADDRESS": {"used": 4, "capacity": 32} > }} > }, > } > > Here's why it's not working currently: > > => CN1_UUID isn't in `summaries` [3] > => because _build_provider_summaries [4] doesn't return it > => because it's not in usages because _get_usages_by_provider_and_rc [5] > only finds providers providing resource in that RC > => and since CN1 isn't providing resource in any requested RC, it ain't > included. > > But we have the anchor provider's (internal) ID; it's the ns_rp_id we're > iterating on in this loop [6]. So let's just use that to get the > summary and add it to the mix, right? Things that make that difficult: > > => We have no convenient helper that builds a summary object without > specifying a resource class (which is a separate problem, because it > means resources we didn't request don't show up in the provider > summaries either - they should). > => We internally build these gizmos inside out - an AllocationRequest > contains a list of AllocationRequestResource, which contains a provider > UUID, resource class, and amount. 
The latter two are required - but > would be n/a for our anchor RP. > > I played around with this and came up with something that gets us most > of the way there [7]. It's quick and dirty: there are functional holes > (like returning "N/A" as a resource class; and traits are missing) and > places where things could be made more efficient. But it's a start. > > -efried > > [1] https://review.openstack.org/#/c/533437/ > [2] https://bugs.launchpad.net/nova/+bug/1732731 > [3] > https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3308 > [4] > https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3062 > [5] > https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 2658 > [6] > https://review.openstack.org/#/c/533437/6/nova/api/openstack/placement/objects/resource_provider.py at 3303 > [7] https://review.openstack.org/#/c/558014/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at fried.cc Sat Mar 31 21:12:22 2018 From: openstack at fried.cc (Eric Fried) Date: Sat, 31 Mar 2018 16:12:22 -0500 Subject: [openstack-dev] [nova][oslo] what to do with problematic mocking in nova unit tests In-Reply-To: <4C87D8A7-9A50-4141-A667-6F7B1425B6E3@doughellmann.com> References: <1522257468-sup-81@lrrr.local> <4C87D8A7-9A50-4141-A667-6F7B1425B6E3@doughellmann.com> Message-ID: Hi Doug, I made this [2] for you. I tested it locally with oslo.config master, and whereas I started off with a slightly different set of errors than you show at [1], they were in the same suites. Since I didn't want to tox the world locally, I went ahead and added a Depends-On from [3]. Let's see how it plays out. >> [1] http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881 [2] https://review.openstack.org/#/c/558084/ [3] https://review.openstack.org/#/c/557012/ -efried On 03/30/2018 06:35 AM, Doug Hellmann wrote: > Anyone? > >> On Mar 28, 2018, at 1:26 PM, Doug Hellmann wrote: >> >> In the course of preparing the next release of oslo.config, Ben noticed >> that nova's unit tests fail with oslo.config master [1]. >> >> The underlying issue is that the tests mock things that oslo.config >> is now calling as part of determining where options are being set >> in code. This isn't an API change in oslo.config, and it is all >> transparent for normal uses of the library. But the mocks replace >> os.path.exists() and open() for the entire duration of a test >> function (not just for the isolated application code being tested), >> and so the library behavior change surfaces as a test error. >> >> I'm not really in a position to go through and clean up the use of >> mocks in those (and other?) tests myself, and I would like to not >> have to revert the feature work in oslo.config, especially since >> we did it for the placement API stuff for the nova team. >> >> I'm looking for ideas about what to do. 
>> >> Doug >> >> [1] http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From stdake at cisco.com Sat Mar 31 22:07:03 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Sat, 31 Mar 2018 22:07:03 +0000 Subject: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project In-Reply-To: <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org> References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org> Message-ID: On March 31, 2018 at 12:35:31 PM, Jeremy Stanley (fungi at yuggoth.org) wrote: On 2018-03-31 18:06:07 +0000 (+0000), Steven Dake (stdake) wrote: > I appreciate your personal interest in attempting to clarify the > Kolla mission statement. > > The change in the Kolla mission statement you propose is > unnecessary. [...] I should probably have been more clear. The Kolla mission statement right now says that the Kolla team produces two things: containers and deployment tools. This may make it challenging for the team to avoid tightly coupling their deployment tooling and images, creating a stratification of first-class (those created by the Kolla team) and second-class (those created by anyone else) support for deployment tools using those images. The problems raised in this thread (tension - tight coupling - second class citizens - stratification) was predicted early on - prior to Kolla 1.0. That prediction led to the creation of a technical solution - the Kolla API. This API permits anyone to reuse the containers as they see fit if they conform their implementation to the API. The API is not specifically tied to the Ansible deployment technology. Instead the API is tied to the varying requirements that various deployment teams have had in the past around generalized requirements for making container lifecycle management a reality while running OpenStack services and their dependencies inside containers. Is the intent to provide "a container-oriented deployment solution and the container images it uses" (kolla-ansible as first-class supported deployment engine for these images) or "container images for use by arbitrary deployment solutions, along with an example deployment solution for use with them" (kolla-ansible on equal footing with competing systems that make use of the same images)? My viewpoint is as all deployments projects are already on an equal footing when using Kolla containers. I would invite the TripleO team who did integration with the Kolla API to provide their thoughts. I haven't kept up with OSH development, but perhaps that team could provide their viewpoint as well. 
Cheers -steve -- Jeremy Stanley __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Mar 31 23:16:43 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 31 Mar 2018 23:16:43 +0000 Subject: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project In-Reply-To: References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org> Message-ID: <20180331231642.liyindpxke5t4qm5@yuggoth.org> On 2018-03-31 22:07:03 +0000 (+0000), Steven Dake (stdake) wrote: [...] > The problems raised in this thread (tension - tight coupling - > second class citizens - stratification) was predicted early on - > prior to Kolla 1.0. That prediction led to the creation of a > technical solution - the Kolla API. This API permits anyone to > reuse the containers as they see fit if they conform their > implementation to the API. The API is not specifically tied to > the Ansible deployment technology. Instead the API is tied to the > varying requirements that various deployment teams have had in the > past around generalized requirements for making container > lifecycle management a reality while running OpenStack services > and their dependencies inside containers. [...] Thanks! That's where my fuzzy thought process was leading. Existence of a stable API guarantee rather than treating the API as "whatever kolla-ansible does" significantly increases the chances of other projects being able to rely on kolla's images in the long term. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From prometheanfire at gentoo.org Sat Mar 31 23:24:01 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Sat, 31 Mar 2018 18:24:01 -0500 Subject: [openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto Message-ID: <20180331232401.hp5j4iommgw7tj3j@gentoo.org> Here's the current status. I'd like to ask the projects what's keeping them from removing pycrypto in facor of a maintained library. Open reviews barbican: - (merge conflict) https://review.openstack.org/#/c/458196 - (merge conflict) https://review.openstack.org/#/c/544873 nova-powervm: no open reviews - in test-requirements, but not actually used? - made https://review.openstack.org/558091 for it pyghmi: - (merge conflict) https://review.openstack.org/#/c/331828 - (merge conflict) https://review.openstack.org/#/c/545465 - (doesn't change the import) https://review.openstack.org/#/c/545182 solum: no open reviews - looks like only a couple of functions need changing trove: no open reviews - mostly uses the random feature -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: