From soulxu at gmail.com Fri Jun 1 01:27:39 2018 From: soulxu at gmail.com (Alex Xu) Date: Fri, 1 Jun 2018 09:27:39 +0800 Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits In-Reply-To: <3fc4ed48-125f-7479-7ea7-a370e7450df3@fried.cc> References: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> <37700cc2-a79c-30ea-d986-e18584cc0464@fried.cc> <3fc4ed48-125f-7479-7ea7-a370e7450df3@fried.cc> Message-ID: I can help on it. 2018-05-31 21:49 GMT+08:00 Eric Fried : > Yup. I'm sure reviewers will bikeshed the names, but the review is the > appropriate place for that to happen. > > A couple of test changes will also be required. You can have a look at > [1] as an example to follow. > > -efried > > [1] https://review.openstack.org/#/c/511180/ > > On 05/31/2018 01:02 AM, Nadathur, Sundar wrote: > > On 5/30/2018 1:18 PM, Eric Fried wrote: > >> This all sounds fully reasonable to me. One thing, though... > >> > >>>> * There is a resource class per device category e.g. > >>>> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. > >> Let's propose standard resource classes for these ASAP. > >> > >> https://github.com/openstack/nova/blob/d741f624c81baf89fc8b6b94a2bc20 > eb5355a818/nova/rc_fields.py > >> > >> > >> -efried > > Makes sense, Eric. The obvious names would be ACCELERATOR_GPU and > > ACCELERATOR_FPGA. Do we just submit a patch to rc_fields.py? > > > > Thanks, > > Sundar > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Fri Jun 1 01:37:44 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Fri, 1 Jun 2018 09:37:44 +0800 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: +1 On Fri, Jun 1, 2018 at 1:41 AM Mark Giles wrote: > > +1 > > On May 31, 2018 at 1:06:43 PM, Borne Mace (borne.mace at oracle.com) wrote: > > Greetings all, > > I would like to propose the addition of Steve Noyes to the kolla-cli > core reviewer team. Consider this nomination as my personal +1. > > Steve has a long history with the kolla-cli and should be considered its > co-creator as probably half or more of the existing code was due to his > efforts. He has now been working diligently since it was pushed > upstream to improve the stability and testability of the cli and has the > second most commits on the project. > > The kolla core team consists of 19 people, and the kolla-cli team of 2, > for a total of 21. Steve therefore requires a minimum of 11 votes (so > just 10 more after my +1), with no veto -2 votes within a 7 day voting > window to end on June 6th. Voting will be closed immediately on a veto > or in the case of a unanimous vote. > > As I'm not sure how active all of the 19 kolla cores are, your attention > and timely vote is much appreciated. > > Thanks! 
> > -- Borne > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Regards, Jeffrey Zhang Blog: http://xcodest.me From singh.surya64mnnit at gmail.com Fri Jun 1 02:06:20 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Fri, 1 Jun 2018 07:36:20 +0530 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: +1 Thanks for work, got the good feedback and interest of kolla-cli in *kolla-rocky-ops-and-user-feedback* session in Vancouver. On Thu, May 31, 2018 at 10:32 PM, Borne Mace wrote: > Greetings all, > > I would like to propose the addition of Steve Noyes to the kolla-cli core > reviewer team. Consider this nomination as my personal +1. > > Steve has a long history with the kolla-cli and should be considered its > co-creator as probably half or more of the existing code was due to his > efforts. He has now been working diligently since it was pushed upstream > to improve the stability and testability of the cli and has the second most > commits on the project. > > The kolla core team consists of 19 people, and the kolla-cli team of 2, > for a total of 21. Steve therefore requires a minimum of 11 votes (so just > 10 more after my +1), with no veto -2 votes within a 7 day voting window to > end on June 6th. Voting will be closed immediately on a veto or in the > case of a unanimous vote. > > As I'm not sure how active all of the 19 kolla cores are, your attention > and timely vote is much appreciated. > > Thanks! > > -- Borne > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Jun 1 03:58:33 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 31 May 2018 20:58:33 -0700 Subject: [openstack-dev] [tripleo] Containerized Undercloud by default Message-ID: Hi, During Rocky cycle we would like to switch tripleoclient to deploy containeirzed undercloud by default but before to get there, we want to switch all CI jobs to it, like it was done when enabling config-download by default. Right now we have 3 jobs which test the containerized undercloud: - tripleo-ci-centos-7-undercloud-containers: deploy a containerized undercloud and run Tempest - tripleo-ci-centos-7-containerized-undercloud-upgrades: deploy a non-containerized undercloud on Queens and upgrade to a containerized undercloud on Rocky - gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master: deploy a containerized undercloud and an overcloud with HA architecture and IPv4 network (with introspection, SSL, etc). 
What's next (target is Rocky M3): - tripleo-ci-centos-7-containers-multinode - currently blocked by https://bugs.launchpad.net/tripleo/+bug/1774297 - all multinode scenarios - current blocked by 1774297 as well but also https://review.openstack.org/#/c/571566/ - gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035-master - currently blocked by https://bugs.launchpad.net/tripleo/+bug/1774557 (with a potential fix: https://review.openstack.org/571620) - all other jobs that run on master, except tripleo-ci-centos-7-undercloud-oooq that we'll probably keep during Rocky and remove in Stein if we successfully switch the default. Once we've reached that point, we'll change tripleoclient's default, and hopefully all of this before m3 :-) Any question or feedback is highly welcome, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Jun 1 04:13:41 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 31 May 2018 21:13:41 -0700 Subject: [openstack-dev] [tripleo] Containerized Undercloud by default In-Reply-To: References: Message-ID: I forgot to mention Steve's effort to update the containers when deploying the undercloud, this is a critical piece if we want to continue to test changes in projects like tripleo-common that are embedded in Mistral containers for example. The patch that will enable it is https://review.openstack.org/#/c/571631/ and thanks to this work we'll make unify the container registry for both the undercloud and overcloud, using the same [updated] containers. It is important to have this feature enabled in our CI to maintain the parity with how we tested TripleO when undercloud wasn't containeirized, so this is something we want to achieve before switching all the TripleO CI jobs. On Thu, May 31, 2018 at 8:58 PM, Emilien Macchi wrote: > Hi, > > During Rocky cycle we would like to switch tripleoclient to deploy > containeirzed undercloud by default but before to get there, we want to > switch all CI jobs to it, like it was done when enabling config-download by > default. > Right now we have 3 jobs which test the containerized undercloud: > > - tripleo-ci-centos-7-undercloud-containers: deploy a containerized > undercloud and run Tempest > - tripleo-ci-centos-7-containerized-undercloud-upgrades: deploy a > non-containerized undercloud on Queens and upgrade to a containerized > undercloud on Rocky > - gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master: deploy a > containerized undercloud and an overcloud with HA architecture and IPv4 > network (with introspection, SSL, etc). > > What's next (target is Rocky M3): > - tripleo-ci-centos-7-containers-multinode - currently blocked by > https://bugs.launchpad.net/tripleo/+bug/1774297 > - all multinode scenarios - current blocked by 1774297 as well but also > https://review.openstack.org/#/c/571566/ > - gate-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset035-master - > currently blocked by https://bugs.launchpad.net/tripleo/+bug/1774557 > (with a potential fix: https://review.openstack.org/571620) > - all other jobs that run on master, except tripleo-ci-centos-7-undercloud-oooq > that we'll probably keep during Rocky and remove in Stein if we > successfully switch the default. 
> > Once we've reached that point, we'll change tripleoclient's default, and > hopefully all of this before m3 :-) > > Any question or feedback is highly welcome, > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Fri Jun 1 05:40:28 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 31 May 2018 23:40:28 -0600 Subject: [openstack-dev] [nova][glance] Deprecation of nova.image.download.modules extension point In-Reply-To: <6992a8851a8349eeb194664c267a1a63@garmin.com> References: <6992a8851a8349eeb194664c267a1a63@garmin.com> Message-ID: <5B10DC4C.9040009@windriver.com> On 05/31/2018 04:14 PM, Moore, Curt wrote: > The challenge is that transferring the Glance image transfer is _glacially slow_ > when using the Glance HTTP API (~30 min for a 50GB Windows image (It’s Windows, > it’s huge with all of the necessary tools installed)). If libvirt can instead > perform an RBD export on the image using the image download functionality, it is > able to download the same image in ~30 sec. This seems oddly slow. I just downloaded a 1.6 GB image from glance in slightly under 10 seconds. That would map to about 5 minutes for a 50GB image. > We could look at attaching an additional ephemeral disk to the instance and have > cloudbase-init use it as the pagefile but it appears that if libvirt is using > rbd for its images_type, _all_ disks must then come from Ceph, there is no way > at present to allow the VM image to run from Ceph and have an ephemeral disk > mapped in from node-local storage. Even still, this would have the effect of > "wasting" Ceph IOPS for the VM disk itself which could be better used for other > purposes. > > Based on what I have explained about our use case, is there a better/different > way to accomplish the same goal without using the deprecated image download > functionality? If not, can we work to "un-deprecate" the download extension > point? Should I work to get the code for this RBD download into the upstream > repository? Have you considered using compute nodes configured for local storage but then use boot-from-volume with cinder and glance both using ceph? I *think* there's an optimization there such that the volume creation is fast. Assuming the volume creation is indeed fast, in this scenario you could then have a local ephemeral/swap disk for your pagefile. You'd still have your VM root disks on ceph though. Chris From ghanshyammann at gmail.com Fri Jun 1 06:12:36 2018 From: ghanshyammann at gmail.com (Ghanshyam Mann) Date: Fri, 1 Jun 2018 15:12:36 +0900 Subject: [openstack-dev] Questions about token scopes In-Reply-To: <7468c1ee-03ea-dbfc-ad79-552a2708f410@gmail.com> References: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> <7468c1ee-03ea-dbfc-ad79-552a2708f410@gmail.com> Message-ID: On Thu, May 31, 2018 at 11:24 PM, Lance Bragstad wrote: > > > On 05/31/2018 12:09 AM, Ghanshyam Mann wrote: >> On Wed, May 30, 2018 at 11:53 PM, Lance Bragstad wrote: >>> >>> On 05/30/2018 08:47 AM, Matt Riedemann wrote: >>>> I know the keystone team has been doing a lot of work on scoped tokens >>>> and Lance has been trying to roll that out to other projects (like nova). >>>> >>>> In Rocky the nova team is adding granular policy rules to the >>>> placement API [1] which is a good opportunity to set scope on those >>>> rules as well. 
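To make that concrete: with oslo.policy, setting scope on a rule just means
passing scope_types when the rule is defined. Rough sketch only -- the rule
name and check string below are made up for illustration, not taken from the
actual placement series:

    from oslo_policy import policy

    rule = policy.DocumentedRuleDefault(
        name='placement:usages',                        # illustrative only
        check_str='role:reader and system_scope:all',   # illustrative only
        description='List total resource usages.',
        operations=[{'path': '/usages', 'method': 'GET'}],
        # With [oslo_policy] enforce_scope = False the library only logs a
        # warning on a scope mismatch; with True it actually rejects.
        scope_types=['system'],
    )

The same pattern would apply to the nova rules mentioned below.
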
>>>> >>>> For now, we've just said everything is system scope since resources in >>>> placement, for the most part, are managed by "the system". But we do >>>> have some resources in placement which have project/user information >>>> in them, so could theoretically also be scoped to a project, like GET >>>> /usages [2]. >> Just adding that this is same for nova policy also. As you might know >> spec[1] try to make nova policy more granular but on hold because of >> default roles things. We will do policy rule split with more better >> defaults values like read-only for GET APIs. >> >> Along with that, like you mentioned about scope setting for placement >> policy rules, we need to do same for nova policy also. That can be >> done later or together with nova policy granular. spec. >> >> [1] https://review.openstack.org/#/c/547850/ >> >>>> While going through this, I've been hammering Lance with questions but >>>> I had some more this morning and wanted to send them to the list to >>>> help spread the load and share the knowledge on working with scoped >>>> tokens in the other projects. >>> ++ good idea >>> >>>> So here goes with the random questions: >>>> >>>> * devstack has the admin project/user - does that by default get >>>> system scope tokens? I see the scope is part of the token create >>>> request [3] but it's optional, so is there a default value if not >>>> specified? >>> No, not necessarily. The keystone-manage bootstrap command is what >>> bootstraps new deployments with the admin user, an admin role, a project >>> to work in, etc. It also grants the newly created admin user the admin >>> role on a project and the system. This functionality was added in Queens >>> [0]. This should be backwards compatible and allow the admin user to get >>> tokens scoped to whatever they had authorization on previously. The only >>> thing they should notice is that they have another role assignment on >>> something called the "system". That being said, they can start >>> requesting system-scoped tokens from keystone. We have a document that >>> tries to explain the differences in scopes and what they mean [1]. >> Another related question is, does scope setting will impact existing >> operator? I mean when policy rule start setting scope, that might >> break the existing operator as their current token (say project >> scoped) might not be able to authorize the policy modified with >> setting the system scope. >> >> In that case, how we are going to avoid the upgrade break. One way can >> be to soft enforcement scope things for a cycle with warning and then >> start enforcing that after one cycle (like we do for any policy rule >> change)? but not sure at this point. > > Good question. This was the primary driver behind adding a new > configuration option to the oslo.policy library called `enforce_scope` > [0]. This let's operators turn off scope checking while they do a few > things. > > They'll need to audit their users and give administrators of the > deployment access to the system via a system role assignment (as opposed > to the 'admin' role on some random project). They also need to ensure > those people understand the concept of system scope. They might also > send emails or notifications explaining the incoming changes and why > they're being done, et cetera. Ideally, this should buy operators time > to clean things up by reassessing their policy situation with the new > defaults and scope types before enforcing those constraints. 
If > `enforce_scope` is False, then a warning is logged during the > enforcement check saying something along the lines of "someone used a > token scoped to X to do something in Y". > > [0] > https://docs.openstack.org/oslo.policy/latest/configuration/index.html#oslo_policy.enforce_scope > Thanks Lance, that is what i was looking for and it is default to False which keep the things safe without behavior change. -gmann >> >>> [0] https://review.openstack.org/#/c/530410/ >>> [1] https://docs.openstack.org/keystone/latest/admin/identity-tokens.html >>> >>>> * Why don't the token create and show APIs return the scope? >>> Good question. In a way, they do. If you look at a response when you >>> authenticate for a token or validate a token, you should see an object >>> contained within the token reference for the purpose of scope. For >>> example, a project-scoped token will have a project object in the >>> response [2]. A domain-scoped token will have a domain object in the >>> response [3]. The same is true for system scoped tokens [4]. Unscoped >>> tokens do not have any of these objects present and do not contain a >>> service catalog [5]. While scope isn't explicitly denoted by an >>> attribute, it can be derived from the attributes of the token response. >>> >>> [2] http://paste.openstack.org/raw/722349/ >>> [3] http://paste.openstack.org/raw/722351/ >>> [4] http://paste.openstack.org/raw/722348/ >>> [5] http://paste.openstack.org/raw/722350/ >>> >>> >>>> * It looks like python-openstackclient doesn't allow specifying a >>>> scope when issuing a token, is that going to be added? >>> Yes, I have a patch up for it [6]. I wanted to get this in during >>> Queens, but it missed the boat. I believe this and a new release of >>> oslo.context are the only bits left in order for services to have >>> everything they need to easily consume system-scoped tokens. >>> Keystonemiddleware should know how to handle system-scoped tokens in >>> front of each service [7]. The oslo.context library should be smart >>> enough to handle system scope set by keystonemiddleware if context is >>> built from environment variables [8]. Both keystoneauth [9] and >>> python-keystoneclient [10] should have what they need to generate >>> system-scoped tokens. >>> >>> That should be enough to allow the service to pass a request environment >>> to oslo.context and use the context object to reason about the scope of >>> the request. As opposed to trying to understand different token scope >>> responses from keystone. We attempted to abstract that away in to the >>> context object. >>> >>> [6] https://review.openstack.org/#/c/524416/ >>> [7] https://review.openstack.org/#/c/564072/ >>> [8] https://review.openstack.org/#/c/530509/ >>> [9] https://review.openstack.org/#/c/529665/ >>> [10] https://review.openstack.org/#/c/524415/ >>> >>>> The reason I'm asking about OSC stuff is because we have the >>>> osc-placement plugin [4] which allows users with the admin role to >>>> work with resources in placement, which could be useful for things >>>> like fixing up incorrect or leaked allocations, i.e. fixing the >>>> fallout of a bug in nova. I'm wondering if we define all of the >>>> placement API rules as system scope and we're enforcing scope, will >>>> admins, as we know them today, continue to be able to use those APIs? >>>> Or will deployments just need to grow a system-scope admin >>>> project/user and per-project admin users, and then use the former for >>>> working with placement via the OSC plugin? 
>>> Uhm, if I understand your question, it depends on how you define the >>> scope types for those APIs. If you set them to system-scope, then an >>> operator will need to use a system-scoped token in order to access those >>> APIs iff the placement configuration file contains placement.conf >>> [oslo.policy] enforce_scope = True. Otherwise, setting that option to >>> false will log a warning to operators saying that someone is accessing a >>> system-scoped API with a project-scoped token (e.g. education needs to >>> happen). >>> >>>> [1] >>>> https://review.openstack.org/#/q/topic:bp/granular-placement-policy+(status:open+OR+status:merged) >>>> [2] https://developer.openstack.org/api-ref/placement/#list-usages >>>> [3] >>>> https://developer.openstack.org/api-ref/identity/v3/index.html#password-authentication-with-scoped-authorization >>>> [4] https://docs.openstack.org/osc-placement/latest/index.html >>>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From m.andre at redhat.com Fri Jun 1 06:39:58 2018 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Fri, 1 Jun 2018 08:39:58 +0200 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: If Steve wrote half of kolla-cli then it's a no brainer to me. +1! On Thu, May 31, 2018 at 7:02 PM, Borne Mace wrote: > Greetings all, > > I would like to propose the addition of Steve Noyes to the kolla-cli core > reviewer team. Consider this nomination as my personal +1. > > Steve has a long history with the kolla-cli and should be considered its > co-creator as probably half or more of the existing code was due to his > efforts. He has now been working diligently since it was pushed upstream to > improve the stability and testability of the cli and has the second most > commits on the project. > > The kolla core team consists of 19 people, and the kolla-cli team of 2, for > a total of 21. Steve therefore requires a minimum of 11 votes (so just 10 > more after my +1), with no veto -2 votes within a 7 day voting window to end > on June 6th. Voting will be closed immediately on a veto or in the case of > a unanimous vote. > > As I'm not sure how active all of the 19 kolla cores are, your attention and > timely vote is much appreciated. > > Thanks! 
> > -- Borne > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Fri Jun 1 06:57:35 2018 From: gmann at ghanshyammann.com (Ghanshyam) Date: Fri, 01 Jun 2018 15:57:35 +0900 Subject: [openstack-dev] [Summit][qa] Vancouver Summit 2018 QA Recap Message-ID: <163ba232588.b987dac725067.3382696975507625953@ghanshyammann.com> Hi All, We had another good Summit in Vancouver and got good amount of feedback for QA which really important and helpful. I am summarizing the QA discussions during Summit. QA feedback sessions: ================= Etherpad: https://etherpad.openstack.org/p/YVR18-forum-qa-ops-user-feedback We had good number of people this time and so does more feedback. Key points, improvement and features requested in QA: - AT&T Cloud QA is by AQuA API which is tooling around upstream tools like Tempest, Patrole, OpenStack Health etc. - Tempest, Patrole are widely used tool in Cloud testing. Patrole is being used with 10 Roles in parallel testing on containers. - There are few more support needed from Tempest which AT&T (Doug Schveninger) would like to see in upstream. Few of them are: - Better support for LDAP - Service available detection for plugins - Configure volume_type for Cinder multiple storage types tests - more tooling in Tempest like - tempest.conf generator, iproject_generator.py, advance cleanup/Leak detector, assembling tempest plugin in a docker container etc - Tempest gabbi support ACTION ITEM: gmann to follow up on each requested features and start discussion in separate thread/IRC. Tagging all the Tempest plugins along with Tempest tag ========================================= Currently, we tag Tempest on release, intermediately or EOL so that people can use that tag against particular openstack code base/release. Tempest plugins are not being tagged as such. So there are difficulty in using plugins with particular Tempest tag in compatible way. We discussed to tag all tempest plugins together everytime Tempest new tag is pushed. While writing this mail, I got to know that dmellado already doing the new tag for kuryr tempest plugin which is what we need. ACTION ITEM: gmann to start the ML thread to get the broader agreement from each plugins and then define the process and responsible team to tag all plugins and Tempest together. Patrole ====== This is one of the important project now which is being requested/talked by many people/operator. This was one the item in keystone Default Roles forum session[1] also to start gating patrole on keystone. Below is initial plan I discussed with Felipe: - Start gating patrole in keystone with non-voting/experimental job. This one - https://review.openstack.org/#/c/464678/ . Rocky. - multi-policy support - Rocky - Make stable release of Patrole. S cycle may be. This include various things about framework stability, plugin support etc - Start proposing the Patrole gating on other projects like nova, cinder etc - T Cycle or early if possible. ACTION ITEM: Felipe to work on above plan and gmann will be helping him on that. QA onboarding sessions: =================== Etherpad: https://etherpad.openstack.org/p/YVR18-forum-qa-onboarding-vancouver Around 6-7 people joined which gradually increasing since previous summits :). 
We started with asking people about their engagement in QA or what they are looking forward from QA. Doug Schveninger(AT&T) talked about his team members who can helps on QA things and the new features/tooling he would like to see in Tempest, Patrole etc. They might not be permanent but it is good to have more people in contribution. QA team will help to get them on-boarded in all perspective. Thanks Doug for your support. Other item fro this sessions was to have a centralized place (etherpad, document) for all the current feature or working items where we are looking for volunteer like CLI unit tests, schema validation etc. Where we document the enough background and helping material which will help new contributors to start working on those items. ACTION ITEM: - gmann to find the better place to document the working item with enough background for new contributors. - Doug to start his team member to get involve in QA. Extended Maintenance Stable Branch ============================= During discussion of Extended Maintenance sessions[2], we discussed about testing support of EM branch in QA and we all agreed on below points: - QA will keep doing the same number of stable branches support as it is doing now. Means support till "Maintained" phase branches. EM branch will not be in scope of guaranteed support of QA. - As Tempest is branchless, it should work for EM phase branches also but if anything new changes break EM branch testing then we stopped testing master Tempest on EM branches. Matt has already pushed the patch to document the above agreement [3]. Thanks for doing good documentation always :), Eris === Spec- https://review.openstack.org/#/c/443504/ It came up in feedback sessions also and people really want to see some progress on this. We have spec under review for that and need more volunteer to drive this forward. I will also check with SamP on this. Other than that there was not much discussion/progress on this in summit. ACTION ITEM: gmann to push the spec review in QA team and more follow up about progress. [1] https://etherpad.openstack.org/p/YVR-rocky-default-roles [2] https://etherpad.openstack.org/p/YVR-extended-maintenance [3] https://review.openstack.org/#/c/570620/ -gmann From inc007 at gmail.com Fri Jun 1 06:57:50 2018 From: inc007 at gmail.com (=?UTF-8?B?TWljaGHFgiBKYXN0cnrEmWJza2k=?=) Date: Thu, 31 May 2018 23:57:50 -0700 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: +1 from me:) On Thu, May 31, 2018, 11:40 PM Martin André wrote: > If Steve wrote half of kolla-cli then it's a no brainer to me. +1! > > On Thu, May 31, 2018 at 7:02 PM, Borne Mace wrote: > > Greetings all, > > > > I would like to propose the addition of Steve Noyes to the kolla-cli core > > reviewer team. Consider this nomination as my personal +1. > > > > Steve has a long history with the kolla-cli and should be considered its > > co-creator as probably half or more of the existing code was due to his > > efforts. He has now been working diligently since it was pushed > upstream to > > improve the stability and testability of the cli and has the second most > > commits on the project. > > > > The kolla core team consists of 19 people, and the kolla-cli team of 2, > for > > a total of 21. Steve therefore requires a minimum of 11 votes (so just > 10 > > more after my +1), with no veto -2 votes within a 7 day voting window to > end > > on June 6th. 
Voting will be closed immediately on a veto or in the case > of > > a unanimous vote. > > > > As I'm not sure how active all of the 19 kolla cores are, your attention > and > > timely vote is much appreciated. > > > > Thanks! > > > > -- Borne > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chkumar246 at gmail.com Fri Jun 1 07:51:08 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Fri, 1 Jun 2018 13:21:08 +0530 Subject: [openstack-dev] [Summit][qa] Vancouver Summit 2018 QA Recap In-Reply-To: <163ba232588.b987dac725067.3382696975507625953@ghanshyammann.com> References: <163ba232588.b987dac725067.3382696975507625953@ghanshyammann.com> Message-ID: Hello Ghanshyam, Thanks for putting this all together. Great summary :-) On Fri, Jun 1, 2018 at 12:27 PM, Ghanshyam wrote: > Hi All, > > We had another good Summit in Vancouver and got good amount of feedback for QA which really important and helpful. > I am summarizing the QA discussions during Summit. > > QA feedback sessions: > ================= > Etherpad: https://etherpad.openstack.org/p/YVR18-forum-qa-ops-user-feedback > We had good number of people this time and so does more feedback. > > Key points, improvement and features requested in QA: > - AT&T Cloud QA is by AQuA API which is tooling around upstream tools like Tempest, Patrole, OpenStack Health etc. > - Tempest, Patrole are widely used tool in Cloud testing. Patrole is being used with 10 Roles in parallel testing on containers. > - There are few more support needed from Tempest which AT&T (Doug Schveninger) would like to see in upstream. Few of them are: > - Better support for LDAP > - Service available detection for plugins > - Configure volume_type for Cinder multiple storage types tests > - more tooling in Tempest like - tempest.conf generator, For generating tempest.conf, we have python-tempestconf , It might help. > iproject_generator.py, advance cleanup/Leak detector, > assembling tempest plugin in a docker container etc By the beginning of Rocky cycle, we have added all tempest plugins in Kolla tempest container and it is currently consumed in TripleO CI. https://hub.docker.com/r/kolla/centos-source-tempest/tags/ It might help. > - Tempest gabbi support > > ACTION ITEM: gmann to follow up on each requested features and start discussion in separate thread/IRC. > > Tagging all the Tempest plugins along with Tempest tag > ========================================= > Currently, we tag Tempest on release, intermediately or EOL so that people can use that tag against particular openstack code base/release. Tempest plugins are not being tagged as such. So there are difficulty in using plugins with particular Tempest tag in compatible way. We discussed to tag all tempest plugins together everytime Tempest new tag is pushed. While writing this mail, I got to know that dmellado already doing the new tag for kuryr tempest plugin which is what we need. 
> > ACTION ITEM: gmann to start the ML thread to get the broader agreement from each plugins and then define the process and responsible team to tag all plugins and Tempest together. > > Patrole > ====== > This is one of the important project now which is being requested/talked by many people/operator. This was one the item in keystone Default Roles forum session[1] also to start gating patrole on keystone. Below is initial plan I discussed with Felipe: > - Start gating patrole in keystone with non-voting/experimental job. This one - https://review.openstack.org/#/c/464678/ . Rocky. > - multi-policy support - Rocky > - Make stable release of Patrole. S cycle may be. This include various things about framework stability, plugin support etc > - Start proposing the Patrole gating on other projects like nova, cinder etc - T Cycle or early if possible. > > ACTION ITEM: Felipe to work on above plan and gmann will be helping him on that. > > QA onboarding sessions: > =================== > Etherpad: https://etherpad.openstack.org/p/YVR18-forum-qa-onboarding-vancouver > > Around 6-7 people joined which gradually increasing since previous summits :). We started with asking people about their engagement in QA or what they are looking forward from QA. > Doug Schveninger(AT&T) talked about his team members who can helps on QA things and the new features/tooling he would like to see in Tempest, Patrole etc. They might not be permanent but it is good to have more people in contribution. QA team will help to get them on-boarded in all perspective. Thanks Doug for your support. > > Other item fro this sessions was to have a centralized place (etherpad, document) for all the current feature or working items where we are looking for volunteer like CLI unit tests, schema validation etc. Where we document the enough background and helping material which will help new contributors to start working on those items. > > ACTION ITEM: > - gmann to find the better place to document the working item with enough background for new contributors. > - Doug to start his team member to get involve in QA. > > Extended Maintenance Stable Branch > ============================= > During discussion of Extended Maintenance sessions[2], we discussed about testing support of EM branch in QA and we all agreed on below points: > - QA will keep doing the same number of stable branches support as it is doing now. Means support till "Maintained" phase branches. EM branch will not be in scope of guaranteed support of QA. > - As Tempest is branchless, it should work for EM phase branches also but if anything new changes break EM branch testing then we stopped testing master Tempest on EM branches. > Matt has already pushed the patch to document the above agreement [3]. Thanks for doing good documentation always :), > > Eris > === > Spec- https://review.openstack.org/#/c/443504/ > It came up in feedback sessions also and people really want to see some progress on this. We have spec under review for that and need more volunteer to drive this forward. I will also check with SamP on this. Other than that there was not much discussion/progress on this in summit. > > ACTION ITEM: gmann to push the spec review in QA team and more follow up about progress. 
> > > [1] https://etherpad.openstack.org/p/YVR-rocky-default-roles > [2] https://etherpad.openstack.org/p/YVR-extended-maintenance > [3] https://review.openstack.org/#/c/570620/ Thanks, Chandan Kumar From dabarren at gmail.com Fri Jun 1 07:55:26 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Fri, 1 Jun 2018 09:55:26 +0200 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: +1 2018-06-01 8:57 GMT+02:00 Michał Jastrzębski : > +1 from me:) > > On Thu, May 31, 2018, 11:40 PM Martin André wrote: > >> If Steve wrote half of kolla-cli then it's a no brainer to me. +1! >> >> On Thu, May 31, 2018 at 7:02 PM, Borne Mace >> wrote: >> > Greetings all, >> > >> > I would like to propose the addition of Steve Noyes to the kolla-cli >> core >> > reviewer team. Consider this nomination as my personal +1. >> > >> > Steve has a long history with the kolla-cli and should be considered its >> > co-creator as probably half or more of the existing code was due to his >> > efforts. He has now been working diligently since it was pushed >> upstream to >> > improve the stability and testability of the cli and has the second most >> > commits on the project. >> > >> > The kolla core team consists of 19 people, and the kolla-cli team of 2, >> for >> > a total of 21. Steve therefore requires a minimum of 11 votes (so just >> 10 >> > more after my +1), with no veto -2 votes within a 7 day voting window >> to end >> > on June 6th. Voting will be closed immediately on a veto or in the >> case of >> > a unanimous vote. >> > >> > As I'm not sure how active all of the 19 kolla cores are, your >> attention and >> > timely vote is much appreciated. >> > >> > Thanks! >> > >> > -- Borne >> > >> > >> > ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Jun 1 07:59:12 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 1 Jun 2018 08:59:12 +0100 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: +1 On 1 June 2018 at 08:55, Eduardo Gonzalez wrote: > +1 > > 2018-06-01 8:57 GMT+02:00 Michał Jastrzębski : > >> +1 from me:) >> >> On Thu, May 31, 2018, 11:40 PM Martin André wrote: >> >>> If Steve wrote half of kolla-cli then it's a no brainer to me. +1! 
>>> >>> On Thu, May 31, 2018 at 7:02 PM, Borne Mace >>> wrote: >>> > Greetings all, >>> > >>> > I would like to propose the addition of Steve Noyes to the kolla-cli >>> core >>> > reviewer team. Consider this nomination as my personal +1. >>> > >>> > Steve has a long history with the kolla-cli and should be considered >>> its >>> > co-creator as probably half or more of the existing code was due to his >>> > efforts. He has now been working diligently since it was pushed >>> upstream to >>> > improve the stability and testability of the cli and has the second >>> most >>> > commits on the project. >>> > >>> > The kolla core team consists of 19 people, and the kolla-cli team of >>> 2, for >>> > a total of 21. Steve therefore requires a minimum of 11 votes (so >>> just 10 >>> > more after my +1), with no veto -2 votes within a 7 day voting window >>> to end >>> > on June 6th. Voting will be closed immediately on a veto or in the >>> case of >>> > a unanimous vote. >>> > >>> > As I'm not sure how active all of the 19 kolla cores are, your >>> attention and >>> > timely vote is much appreciated. >>> > >>> > Thanks! >>> > >>> > -- Borne >>> > >>> > >>> > ____________________________________________________________ >>> ______________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Fri Jun 1 08:40:52 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 1 Jun 2018 10:40:52 +0200 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> Message-ID: <20180601084052.GA15905@paraplu> On Tue, May 22, 2018 at 05:41:18PM -0400, Brian Haley wrote: > On 05/22/2018 04:57 PM, Jay Pipes wrote: [...] > > Please don't take this the wrong way, Dean, but you aren't seriously > > suggesting that anyone outside of Windriver/Intel would ever contribute > > to these repos are you? > > > > What motivation would anyone outside of Windriver/Intel -- who must make > > money on this effort otherwise I have no idea why they are doing it -- > > have to commit any code at all to StarlingX? Yes, same question as Jay here. What this product-turned-project (i.e. "Downstream First") is implicitly asking for is the review time of the upstream community, which is already at a premium -- for a fork. 
> I read this the other way - the goal is to get all the forked code from > StarlingX into upstream repos. That seems backwards from how this should > have been done (i.e. upstream first), and I don't see how a project would > prioritize that over other work. > > > I'm truly wondering why was this even open-sourced to begin with? I'm as > > big a supporter of open source as anyone, but I'm really struggling to > > comprehend the business, technical, or marketing decisions behind this > > action. Please help me understand. What am I missing? > > I'm just as confused. Equally stupefied here. > > My personal opinion is that I don't think that any products, derivatives > > or distributions should be hosted on openstack.org infrastructure. Yes, it should be unmistakably clear that contributions to "upstream Nova", for example, means the 'primary' (this qualifier itself is redundant) upstream Nova. No slippery slope such as: "OpenStack-hosted Nova, but not exactly _that_ OpenStack Nova". -- /kashyap From berendt at betacloud-solutions.de Fri Jun 1 09:42:11 2018 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Fri, 1 Jun 2018 11:42:11 +0200 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: <8C288514-DF3D-449B-8E5B-8B990D2143A7@betacloud-solutions.de> +1 > On 31. May 2018, at 19:02, Borne Mace wrote: > > Greetings all, > > I would like to propose the addition of Steve Noyes to the kolla-cli core reviewer team. Consider this nomination as my personal +1. > > Steve has a long history with the kolla-cli and should be considered its co-creator as probably half or more of the existing code was due to his efforts. He has now been working diligently since it was pushed upstream to improve the stability and testability of the cli and has the second most commits on the project. > > The kolla core team consists of 19 people, and the kolla-cli team of 2, for a total of 21. Steve therefore requires a minimum of 11 votes (so just 10 more after my +1), with no veto -2 votes within a 7 day voting window to end on June 6th. Voting will be closed immediately on a veto or in the case of a unanimous vote. > > As I'm not sure how active all of the 19 kolla cores are, your attention and timely vote is much appreciated. > > Thanks! 
> > -- Borne > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From clint at fewbar.com Fri Jun 1 09:51:02 2018 From: clint at fewbar.com (Clint Byrum) Date: Fri, 01 Jun 2018 02:51:02 -0700 Subject: [openstack-dev] [tc][all] CD tangent - was: A culture change (nitpicking) In-Reply-To: <4b09edbd-62f3-2c99-78ac-9b2721191c7d@gmx.com> References: <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com> <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov> <20180531010957.GA1354@zeong> <3a59cb5f-599d-4a89-40ec-e2610ef1d821@openstack.org> <4b09edbd-62f3-2c99-78ac-9b2721191c7d@gmx.com> Message-ID: <152784666299.19120.14821215526251328331@ubuntu> Quoting Sean McGinnis (2018-05-31 09:54:46) > On 05/31/2018 03:50 AM, Thierry Carrez wrote: > > Right... There might be a reasonable middle ground between "every > > commit on master must be backward-compatible" and "rip out all > > testing" that allows us to routinely revert broken feature commits (as > > long as they don't cross a release boundary). > > > > To be fair, I'm pretty sure that's already the case: we did revert > > feature commits on master in the past, therefore breaking backward > > compatibility if someone started to use that feature right away. It's > > the issue with implicit rules: everyone interprets them the way they > > want... So I think that could use some explicit clarification. > > > > [ This tangent should probably gets its own thread to not disrupt the > > no-nitpicking discussion ] > > > Just one last one on this, then I'm hoping this tangent ends. > > I think what Thierry said is exactly what Dims and I were saying. I'm > not sure how that turned into > the idea of supporting committing broken code. The point (at least mine) > was just that we should > not have the mindset that HEAD~4 committed something that we realize was > not right, so we > should not have the mindset that "someone might have deployed that > broken behavior so we > need to make sure we don't break them." HEAD should always be > deployable, just not treated like > an official release that needs to be maintained. > We are what we test. We don't test upgrading from one commit to the next. We test upgrading from the previous stable release. And as such, that's what has to keep working. So no, a revert shouldn't ever be subject to "oh no somebody may have deployed this and you don't revert the db change". That's definitely a downstream consideration and those who CD things have ways of detecting and dealing with this on their end. That said, it would be nice if developers consider this corner case, and try not to make it a huge mess to unwind. 
From paul.bourke at oracle.com Fri Jun 1 10:08:58 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Fri, 1 Jun 2018 11:08:58 +0100 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: +1 On 31/05/18 18:02, Borne Mace wrote: > Greetings all, > > I would like to propose the addition of Steve Noyes to the kolla-cli > core reviewer team.  Consider this nomination as my personal +1. > > Steve has a long history with the kolla-cli and should be considered its > co-creator as probably half or more of the existing code was due to his > efforts.  He has now been working diligently since it was pushed > upstream to improve the stability and testability of the cli and has the > second most commits on the project. > > The kolla core team consists of 19 people, and the kolla-cli team of 2, > for a total of 21.  Steve therefore requires a minimum of 11 votes (so > just 10 more after my +1), with no veto -2 votes within a 7 day voting > window to end on June 6th.  Voting will be closed immediately on a veto > or in the case of a unanimous vote. > > As I'm not sure how active all of the 19 kolla cores are, your attention > and timely vote is much appreciated. > > Thanks! > > -- Borne > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lajos.katona at ericsson.com Fri Jun 1 10:27:07 2018 From: lajos.katona at ericsson.com (Lajos Katona) Date: Fri, 1 Jun 2018 12:27:07 +0200 Subject: [openstack-dev] [heat][neutron] Extraroute support Message-ID: <9a7cbe15-e678-24e6-9e77-86fdc1429dc6@ericsson.com> Hi, Could somebody help me out with Neutron's Extraroute support in Hot templates. The support status of the Extraroute is support.UNSUPPORTED in heat, and only create and delete are the supported operations. see: https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/extraroute.py#LC35 As I see the unsupported tag was added when the feature was moved from the contrib folder to in-tree (https://review.openstack.org/186608) Perhaps you can help me out why only create and delete are supported and update not. Thanks in advance for  the help. Regards Lajos From tpb at dyncloud.net Fri Jun 1 11:47:59 2018 From: tpb at dyncloud.net (Tom Barron) Date: Fri, 1 Jun 2018 07:47:59 -0400 Subject: [openstack-dev] [manila] Core team updates Message-ID: <20180601114759.t7kx56nvj6monols@barron.net> Hi all, Clinton Knight and Valeriy Ponomaryov have been focusing on projects outside Manila for some time so I'm removing them from the core team. Valeriy and Clinton made great contributions to Manila over the years both as reviewers and as contributors. We are fortunate to have been able to work with them and they are certainly welcome back to the core team in the future if they return to active reviewing. Clinton & Valeriy, thank you for your contributions! 
-- Tom From ramishra at redhat.com Fri Jun 1 11:55:52 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Fri, 1 Jun 2018 17:25:52 +0530 Subject: [openstack-dev] [heat][neutron] Extraroute support In-Reply-To: <9a7cbe15-e678-24e6-9e77-86fdc1429dc6@ericsson.com> References: <9a7cbe15-e678-24e6-9e77-86fdc1429dc6@ericsson.com> Message-ID: On Fri, Jun 1, 2018 at 3:57 PM, Lajos Katona wrote: > Hi, > > Could somebody help me out with Neutron's Extraroute support in Hot > templates. > The support status of the Extraroute is support.UNSUPPORTED in heat, and > only create and delete are the supported operations. > see: https://github.com/openstack/heat/blob/master/heat/engine/re > sources/openstack/neutron/extraroute.py#LC35 > > As I see the unsupported tag was added when the feature was moved from the > contrib folder to in-tree (https://review.openstack.org/186608) > Perhaps you can help me out why only create and delete are supported and > update not. > > I think most of the resources when moved from contrib to in-tree are marked as unsupported. Adding routes to an existing router by multiple stacks can be racy and is probably the reason use of this resource is not encouraged and hence it's not supported. You can see the discussion in the original patch that proposed this resource https://review.openstack.org/#/c/41044/ Not sure if things have changed on neutron side for us to revisit the concerns. Also it does not have any update_allowed properties, hence no handle_update(). It would be replaced if you change any property. Hope it helps. > Thanks in advance for the help. > > Regards > Lajos > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Fri Jun 1 12:16:02 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 1 Jun 2018 14:16:02 +0200 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: Message-ID: <20180601121602.GB15905@paraplu> On Tue, May 22, 2018 at 01:54:59PM -0500, Dean Troyer wrote: > StarlingX (aka STX) was announced this week at the summit, there is a > PR to create project repos in Gerrit at [0]. STX is basically Wind >From a cursory look at the libvirt fork, there are some questionable choices. E.g. the config code (libvirt/src/qemu/qemu.conf) is modified such that QEMU is launched as 'root'. That means a bug in QEMU == instant host compromise. All Linux distributions (that matter) configure libvirt to launch QEMU as a regular user ('qemu'). E.g. from Fedora's libvirt RPM spec file: libvirt.spec:%define qemu_user qemu libvirt.spec: --with-qemu-user=%{qemu_user} \ * * * There are multiple other such issues in the forked libvirt code. [...] 
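For reference, the stock setting that the fork overrides looks roughly like
this in /etc/libvirt/qemu.conf on those distributions (excerpt for
illustration, not the exact file contents):

    # /etc/libvirt/qemu.conf
    # Run QEMU processes as an unprivileged user/group, not root, so that
    # an escape from a guest does not immediately get root on the host.
    user = "qemu"
    group = "qemu"
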
-- /kashyap From j.harbott at x-ion.de Fri Jun 1 13:01:15 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Fri, 1 Jun 2018 13:01:15 +0000 Subject: [openstack-dev] Questions about token scopes In-Reply-To: <40b4e723-6915-7b01-04a3-7b96f39032ae@gmail.com> References: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> <40b4e723-6915-7b01-04a3-7b96f39032ae@gmail.com> Message-ID: 2018-05-30 20:37 GMT+00:00 Matt Riedemann : > On 5/30/2018 9:53 AM, Lance Bragstad wrote: >> >> While scope isn't explicitly denoted by an >> attribute, it can be derived from the attributes of the token response. >> > > Yeah, this was confusing to me, which is why I reported it as a bug in the > API reference documentation: > > https://bugs.launchpad.net/keystone/+bug/1774229 > >>> * It looks like python-openstackclient doesn't allow specifying a >>> scope when issuing a token, is that going to be added? >> >> Yes, I have a patch up for it [6]. I wanted to get this in during >> Queens, but it missed the boat. I believe this and a new release of >> oslo.context are the only bits left in order for services to have >> everything they need to easily consume system-scoped tokens. >> Keystonemiddleware should know how to handle system-scoped tokens in >> front of each service [7]. The oslo.context library should be smart >> enough to handle system scope set by keystonemiddleware if context is >> built from environment variables [8]. Both keystoneauth [9] and >> python-keystoneclient [10] should have what they need to generate >> system-scoped tokens. >> >> That should be enough to allow the service to pass a request environment >> to oslo.context and use the context object to reason about the scope of >> the request. As opposed to trying to understand different token scope >> responses from keystone. We attempted to abstract that away in to the >> context object. >> >> [6]https://review.openstack.org/#/c/524416/ >> [7]https://review.openstack.org/#/c/564072/ >> [8]https://review.openstack.org/#/c/530509/ >> [9]https://review.openstack.org/#/c/529665/ >> [10]https://review.openstack.org/#/c/524415/ > > > I think your reply in IRC was more what I was looking for: > > lbragstad mriedem: if you install > https://review.openstack.org/#/c/524416/5 locally with devstack and setup a > clouds.yaml, ``openstack token issue --os-cloud devstack-system-admin`` > should work 15:39 > lbragstad http://paste.openstack.org/raw/722357/ 15:39 > > So users with the system role will need to create a token using that role to > get the system-scoped token, as far as I understand. There is no --scope > option on the 'openstack token issue' CLI. IIUC there is no option to the "token issue" command because that command creates a token just like any other OSC command would do from the global authentication parameters specified, either on the command line, in the environment or via a clouds.yaml file. The "token issue" command simply outputs the token that is then received instead of using it as authentication for the "real" action taken by other commands. So the option to request a system scope would seem to be "--os-system-scope all" or the corresponding env var OS_SYSTEM_SCOPE. And if you do that, the resulting system-scoped token will directly be used when you issue a command like "openstack server list". One thing to watch out for, however, is that that option seems to be silently ignored if the credentials also specify either a project or a domain. Maybe generating a warning or even an error in that situation would be a cleaner solution. 
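As a concrete illustration, a clouds.yaml entry that requests system scope
would look something like this (endpoint and credentials are placeholders,
and this assumes a keystoneauth new enough to understand system_scope):

    clouds:
      devstack-system-admin:
        auth_type: password
        identity_api_version: 3
        auth:
          auth_url: http://203.0.113.10/identity/v3
          username: admin
          user_domain_name: Default
          password: secretadmin
          # note: no project_name/project_domain_name here, otherwise the
          # system_scope request is silently ignored as described above
          system_scope: all

after which something like "openstack --os-cloud devstack-system-admin token
issue" should hand back a system-scoped token.
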
From josephine.seifert at secustack.com Fri Jun 1 13:13:50 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Fri, 1 Jun 2018 15:13:50 +0200 Subject: [openstack-dev] [osc][python-openstackclient] osc-included image signing Message-ID: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> Hi, our team has implemented a prototype for an osc-included image signing. We would like to propose a spec or something like this, but haven't found where to start at. So here is a brief concept of what we want to contribute: https://etherpad.openstack.org/p/osc-included_image_signing Please advise us which steps to take next! Regards, Josephine -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Jun 1 13:41:21 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 1 Jun 2018 08:41:21 -0500 Subject: [openstack-dev] Questions about token scopes In-Reply-To: References: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> <40b4e723-6915-7b01-04a3-7b96f39032ae@gmail.com> Message-ID: <456ac132-221a-d6c2-ff32-508f9306bcd1@gmail.com> It looks like I had a patch up to improve some developer documentation that is relevant to this discussion [0]. [0] https://review.openstack.org/#/c/554727/ On 06/01/2018 08:01 AM, Jens Harbott wrote: > 2018-05-30 20:37 GMT+00:00 Matt Riedemann : >> On 5/30/2018 9:53 AM, Lance Bragstad wrote: >>> While scope isn't explicitly denoted by an >>> attribute, it can be derived from the attributes of the token response. >>> >> Yeah, this was confusing to me, which is why I reported it as a bug in the >> API reference documentation: >> >> https://bugs.launchpad.net/keystone/+bug/1774229 >> >>>> * It looks like python-openstackclient doesn't allow specifying a >>>> scope when issuing a token, is that going to be added? >>> Yes, I have a patch up for it [6]. I wanted to get this in during >>> Queens, but it missed the boat. I believe this and a new release of >>> oslo.context are the only bits left in order for services to have >>> everything they need to easily consume system-scoped tokens. >>> Keystonemiddleware should know how to handle system-scoped tokens in >>> front of each service [7]. The oslo.context library should be smart >>> enough to handle system scope set by keystonemiddleware if context is >>> built from environment variables [8]. Both keystoneauth [9] and >>> python-keystoneclient [10] should have what they need to generate >>> system-scoped tokens. >>> >>> That should be enough to allow the service to pass a request environment >>> to oslo.context and use the context object to reason about the scope of >>> the request. As opposed to trying to understand different token scope >>> responses from keystone. We attempted to abstract that away in to the >>> context object. >>> >>> [6]https://review.openstack.org/#/c/524416/ >>> [7]https://review.openstack.org/#/c/564072/ >>> [8]https://review.openstack.org/#/c/530509/ >>> [9]https://review.openstack.org/#/c/529665/ >>> [10]https://review.openstack.org/#/c/524415/ >> >> I think your reply in IRC was more what I was looking for: >> >> lbragstad mriedem: if you install >> https://review.openstack.org/#/c/524416/5 locally with devstack and setup a >> clouds.yaml, ``openstack token issue --os-cloud devstack-system-admin`` >> should work 15:39 >> lbragstad http://paste.openstack.org/raw/722357/ 15:39 >> >> So users with the system role will need to create a token using that role to >> get the system-scoped token, as far as I understand. 
There is no --scope >> option on the 'openstack token issue' CLI. > IIUC there is no option to the "token issue" command because that > command creates a token just like any other OSC command would do from > the global authentication parameters specified, either on the command > line, in the environment or via a clouds.yaml file. The "token issue" > command simply outputs the token that is then received instead of > using it as authentication for the "real" action taken by other > commands. > > So the option to request a system scope would seem to be > "--os-system-scope all" or the corresponding env var OS_SYSTEM_SCOPE. > And if you do that, the resulting system-scoped token will directly be > used when you issue a command like "openstack server list". > > One thing to watch out for, however, is that that option seems to be > silently ignored if the credentials also specify either a project or a > domain. Maybe generating a warning or even an error in that situation > would be a cleaner solution. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From curt.moore at garmin.com Fri Jun 1 13:50:16 2018 From: curt.moore at garmin.com (Moore, Curt) Date: Fri, 1 Jun 2018 13:50:16 +0000 Subject: [openstack-dev] [nova][glance] Deprecation of nova.image.download.modules extension point References: <6992a8851a8349eeb194664c267a1a63@garmin.com> <5B10DC4C.9040009@windriver.com> Message-ID: On 6/1/2018 12:44 AM, Chris Friesen wrote: > On 05/31/2018 04:14 PM, Curt Moore wrote: >> The challenge is that transferring the Glance image transfer is >> _glacially slow_ when using the Glance HTTP API (~30 min for a 50GB >> Windows image (It’s Windows, it’s huge with all of the necessary >> tools installed)). If libvirt can instead perform an RBD export on >> the image using the image download functionality, it is able to >> download the same image in ~30 sec. > This seems oddly slow. I just downloaded a 1.6 GB image from glance in > slightly under 10 seconds. That would map to about 5 minutes for a > 50GB image. Agreed. There's nothing really special about the Glance API setup, we have multiple load balanced instances behind HAProxy. However, in our use case, we are very sensitive to node spin-up time so anything we can do to reduce this time is desired. If a VM lands on a compute node where the image isn't yet locally cached, paying an additional 5 min penalty is undesired. >> We could look at attaching an additional ephemeral disk to the >> instance and have cloudbase-init use it as the pagefile but it >> appears that if libvirt is using rbd for its images_type, _all_ disks >> must then come from Ceph, there is no way at present to allow the VM >> image to run from Ceph and have an ephemeral disk mapped in from >> node-local storage. Even still, this would have the effect of >> "wasting" Ceph IOPS for the VM disk itself which could be better used >> for other purposes. Based on what I have explained about our use >> case, is there a better/different way to accomplish the same goal >> without using the deprecated image download functionality? 
If not, >> can we work to "un-deprecate" the download extension point? Should I >> work to get the code for this RBD download into the upstream repository? > Have you considered using compute nodes configured for local storage > but then use boot-from-volume with cinder and glance both using ceph? > I *think* there's an optimization there such that the volume creation > is fast. Assuming the volume creation is indeed fast, in this scenario > you could then have a local ephemeral/swap disk for your pagefile. > You'd still have your VM root disks on ceph though. Understood. Booting directly from a Cinder volume would work, but as you mention, we'd still have the VM root disks in Ceph, using the expensive Ceph SSD IOPS for no good reason. I'm trying to get the best of both worlds by keeping the Glance images in Ceph and also keeping all VM I/O local to the compute node. -Curt ________________________________ CONFIDENTIALITY NOTICE: This email and any attachments are for the sole use of the intended recipient(s) and contain information that may be Garmin confidential and/or Garmin legally privileged. If you have received this email in error, please notify the sender by reply email and delete the message. Any disclosure, copying, distribution or use of this communication (including attachments) by someone other than the intended recipient is prohibited. Thank you. From sfinucan at redhat.com Fri Jun 1 13:58:02 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 01 Jun 2018 14:58:02 +0100 Subject: [openstack-dev] Updated PTI for documentation Message-ID: <8c0e7590b5501eb7f3c83ea7e75f0e79b007de85.camel@redhat.com> There have been a couple of threads about an updated "PTI" for documentation bouncing around the mailing list of late. * http://lists.openstack.org/pipermail/openstack-dev/2018-March/128817 .html * http://lists.openstack.org/pipermail/openstack-dev/2018-March/128594 .html I've been told the reasoning behind this change and what is required has not been made clear so here goes my attempt at explaining it. In short, there are two problems we're trying to work around with this change. * The legacy 'build_sphinx' setuptools command provided by pbr has been found to be lacking. It's buggy as hell, frequently breaks with Sphinx version bumps, and is generally a PITA to maintain. We (the oslo team) want to remove this feature to ease our maintenance burden. * The recent move to zuul v3 has changed how documentation is built in the gate. Previously, zuul called the 'docs' target in tox (e.g. 'tox -e docs'), which would run whatever the project team had defined for that target. With zuul v3, zuul no longer calls this. Instead, it call either 'python setup.py build_sphinx' or 'sphinx- build' (more on this below). This means everything you wish to do as part of the documentation build must now be done via Sphinx extensions. Both the oslo and infra teams have a strong incentive to drop support for the 'build_sphinx' command (albeit for different reasons) but doing so isn't simply a case of calling 'sphinx-build' instead. In order to migrate, some steps are required: 1. pbr's 'build_sphinx' setuptools command provides some additional functionality on top of 'sphinx-build'. This must be replaced by Sphinx extensions. 2. Calls to 'python setup.py build_sphinx' must be replaced by additional calls to 'sphinx-build' 3. Documentation requirements must be moved to 'doc/requirements.txt' to avoid having to install every requirement of a project simply to build documentation. 
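(To make the end state a bit more concrete before walking through the steps, a
doc/source/conf.py for a project that has already made the switch might look
roughly like the sketch below. The repo name and module paths are made-up
placeholders, and the authoritative option lists live in the openstackdocstheme
and sphinxcontrib-apidoc documentation.)

    # doc/source/conf.py (illustrative sketch only)
    extensions = [
        'openstackdocstheme',     # theme plus project name/version handling
        'sphinxcontrib.apidoc',   # replaces the separate sphinx-apidoc call
    ]

    html_theme = 'openstackdocs'
    repository_name = 'openstack/example-project'   # placeholder, adjust

    # sphinxcontrib-apidoc: generate the API docs during 'sphinx-build'
    apidoc_module_dir = '../../example_project'     # placeholder path
    apidoc_output_dir = 'reference/api'
    apidoc_excluded_paths = ['tests']
    apidoc_separate_modules = True

The matching doc/requirements.txt would then list just sphinx,
openstackdocstheme and sphinxcontrib-apidoc, so the docs job installs only
what it needs.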
The first of these steps has already been achieved: 'openstackdocstheme' recently
gained support for automatically configuring the project name and version in
generated documentation [1], which replaced that aspect of the 'build_sphinx'
command. Similarly, the 'sphinxcontrib-apidoc' Sphinx extension [2] was
produced in order to provide a way to automatically generate API documentation
as part of 'sphinx-build' rather than by having to make a secondary call to
'sphinx-apidoc' (something the gate, which, once again, runs nothing but
'sphinx-build' or 'python setup.py build_sphinx', would not do).

The second step is the troublesome bit and has been the reason for most of the
individual patches to various projects. The steps necessary to make this change
have been documented multiple times on the list but they're listed here once
again for posterity:

 * If necessary, enable 'sphinxcontrib.apidoc' as described at [3].

 * Make sure you're using 'openstackdocstheme', assuming your project is an
   official OpenStack one.

 * Remove the 'build_sphinx' section from 'setup.cfg' (this is described at
   [3] but applies whether you need that or not).

 * Update your doc/releasenotes/api-guide targets in 'tox.ini' so you're using
   the same commands as the gate.

The third change should be self-explanatory and infra have reasons for
requesting it. It's generally easiest to do this as part of the above.

Hopefully this clears things up for people. If anyone has any questions, feel
free to reach out to me on IRC (stephenfin) and I'll be happy to help.

Cheers,
Stephen

PS: For those who are curious, the decision on whether to run the 'python
setup.py build_sphinx' command or 'sphinx-build' in the gate is based on the
presence of a 'build_sphinx' section in 'setup.cfg'. If present, the former is
run. If not, we use 'sphinx-build'. This is why it's necessary to remove that
section from 'setup.cfg'.

[1] https://docs.openstack.org/openstackdocstheme/latest/#using-the-theme
[2] https://pypi.org/project/sphinxcontrib-apidoc/
[3] https://pypi.org/project/sphinxcontrib-apidoc/#migration-from-pbr

From zbitter at redhat.com Fri Jun 1 14:10:31 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Fri, 1 Jun 2018 10:10:31 -0400
Subject: [openstack-dev] [tc] Organizational diversity tag
In-Reply-To: 
References: 
Message-ID: 

On 26/05/18 17:46, Mohammed Naser wrote:
> Hi everyone!
>
> During the TC retrospective at the OpenStack summit last week, the
> topic of the organizational diversity tag is becoming irrelevant was
> brought up by Thierry (ttx)[1]. It seems that for projects that are
> not very active, they can easily lose this tag with a few changes by
> perhaps the infrastructure team for CI related fixes.
>
> As an action item, Thierry and I have paired up in order to look into
> a way to resolve this issue. There have been ideas to switch this to
> a report that is published at the end of the cycle rather than
> continuously. Julia (TheJulia) suggested that we change or track
> different types of diversity.
>
> Before we start diving into solutions, I wanted to bring this topic up
> to the mailing list and ask for any suggestions. In digging the
> codebase behind this[2], I've found that there are some knobs that we
> can also tweak if need-be, or perhaps we can adjust those numbers
> depending on the number of commits.

Crazy idea: what if we dropped the idea of measuring the diversity and
allowed teams to decide when they applied the tag to themselves like we
do for other tags. (No wait! Come back!)
Some teams enforce a requirement that the 2 core +2s come from reviewers with different affiliations. We would say that any project that enforces that rule would get the diversity tag. Then it's actually attached to something concrete, and teams could decide for themselves when to drop it (because they would start having difficulty merging stuff otherwise). I'm not entirely sold on this, but it's an idea I had that I wanted to throw out there :) cheers, Zane. From fungi at yuggoth.org Fri Jun 1 14:18:06 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 Jun 2018 14:18:06 +0000 Subject: [openstack-dev] Updated PTI for documentation In-Reply-To: <8c0e7590b5501eb7f3c83ea7e75f0e79b007de85.camel@redhat.com> References: <8c0e7590b5501eb7f3c83ea7e75f0e79b007de85.camel@redhat.com> Message-ID: <20180601141806.gtapxhjwi54wski5@yuggoth.org> On 2018-06-01 14:58:02 +0100 (+0100), Stephen Finucane wrote: [...] > * The recent move to zuul v3 has changed how documentation is built in > the gate. Previously, zuul called the 'docs' target in tox (e.g. > 'tox -e docs'), which would run whatever the project team had > defined for that target. Nope, it never did that. It previously called `tox -evenv -- python setup.py build_sphinx` and those "docs" envs in tox were only ever for developer convenience, not used at all in any standard CI jobs. > With zuul v3, zuul no longer calls this. > Instead, it call either 'python setup.py build_sphinx' or 'sphinx- > build' (more on this below). This means everything you wish to do as > part of the documentation build must now be done via Sphinx > extensions. [...] You've got your cause and effect a bit backwards. The new docs jobs (which weren't really related to the move to Zuul v3 but happened around the same timeframe) were in service of the change to the PTI, not the other way around. The commit message for the change[*] which introduced the documentation section in the PTI has a fair bit to say about reasons, but the gist of it is that we wanted to switch to a workflow which 1. didn't assume you were a Python-oriented project, and 2. was more in line with how most projects outside OpenStack make use of Sphinx. [*] https://review.openstack.org/508694 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at fried.cc Fri Jun 1 15:11:43 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 1 Jun 2018 10:11:43 -0500 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> <4a867428-1203-63b7-9b74-86fda468047c@fried.cc> Message-ID: Sylvain- On 05/31/2018 02:41 PM, Sylvain Bauza wrote: > > > On Thu, May 31, 2018 at 8:26 PM, Eric Fried > wrote: > > > 1. Make everything perform the pivot on compute node start (which can be > >    re-used by a CLI tool for the offline case) > > 2. Make everything default to non-nested inventory at first, and provide > >    a way to migrate a compute node and its instances one at a time (in > >    place) to roll through. > > I agree that it sure would be nice to do ^ rather than requiring the > "slide puzzle" thing. 
> > But how would this be accomplished, in light of the current "separation > of responsibilities" drawn at the virt driver interface, whereby the > virt driver isn't supposed to talk to placement directly, or know > anything about allocations?  Here's a first pass: > > > > What we usually do is to implement either at the compute service level > or at the virt driver level some init_host() method that will reconcile > what you want. > For example, we could just imagine a non-virt specific method (and I > like that because it's non-virt specific) - ie. called by compute's > init_host() that would lookup the compute root RP inventories, see > whether one ore more inventories tied to specific resource classes have > to be moved from the root RP and be attached to a child RP. > The only subtility that would require a virt-specific update would be > the name of the child RP (as both Xen and libvirt plan to use the child > RP name as the vGPU type identifier) but that's an implementation detail > that a possible virt driver update by the resource tracker would > reconcile that. The question was rhetorical; my suggestion (below) was an attempt at designing exactly what you've described. Let me know if I can explain/clarify it further. I'm looking for feedback as to whether it's a viable approach. > The virt driver, via the return value from update_provider_tree, tells > the resource tracker that "inventory of resource class A on provider B > have moved to provider C" for all applicable AxBxC.  E.g. > > [ { 'from_resource_provider': , >     'moved_resources': [VGPU: 4], >     'to_resource_provider': >   }, >   { 'from_resource_provider': , >     'moved_resources': [VGPU: 4], >     'to_resource_provider': >   }, >   { 'from_resource_provider': , >     'moved_resources': [ >         SRIOV_NET_VF: 2, >         NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000, >         NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000, >     ], >     'to_resource_provider': >   } > ] > > As today, the resource tracker takes the updated provider tree and > invokes [1] the report client method update_from_provider_tree [2] to > flush the changes to placement.  But now update_from_provider_tree also > accepts the return value from update_provider_tree and, for each "move": > > - Creates provider C (as described in the provider_tree) if it doesn't > already exist. > - Creates/updates provider C's inventory as described in the > provider_tree (without yet updating provider B's inventory).  This ought > to create the inventory of resource class A on provider C. > - Discovers allocations of rc A on rp B and POSTs to move them to rp C*. > - Updates provider B's inventory. > > (*There's a hole here: if we're splitting a glommed-together inventory > across multiple new child providers, as the VGPUs in the example, we > don't know which allocations to put where.  The virt driver should know > which instances own which specific inventory units, and would be able to > report that info within the data structure.  That's getting kinda close > to the virt driver mucking with allocations, but maybe it fits well > enough into this model to be acceptable?) > > Note that the return value from update_provider_tree is optional, and > only used when the virt driver is indicating a "move" of this ilk.  If > it's None/[] then the RT/update_from_provider_tree flow is the same as > it is today. > > If we can do it this way, we don't need a migration tool.  In fact, we > don't even need to restrict provider tree "reshaping" to release > boundaries. 
 As long as the virt driver understands its own data model > migrations and reports them properly via update_provider_tree, it can > shuffle its tree around whenever it wants. > > Thoughts? > > -efried > > [1] > https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/compute/resource_tracker.py#L890 > > [2] > https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/scheduler/client/report.py#L1341 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dougal at redhat.com Fri Jun 1 15:34:54 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 1 Jun 2018 16:34:54 +0100 Subject: [openstack-dev] [mistral] Mistral Monthly June 2018 Message-ID: Hey Mistralites! Welcome to the second edition of Mistral Monthly. # Summit Brad Crochet done a great job giving the Mistral project update talk. Check it out: https://www.youtube.com/watch?v=y9qieruccO4 Also checkout the Congress update, they discuss their recent support for Mistral. https://www.youtube.com/watch?v=5YYcysVyLCo # Releases Fairly quiet this month. Just a few bugfix releases. One still in flight. - Pike - Mistral 5.2.4 will be released soon: https://review.openstack.org/# /c/568881/ - Queens - Mistral 6.0.3 https://docs.openstack.org/releasenotes/mistral/queens. html Rocky Milestone 2 will be released next week. So there will be more release news next time. # Notable changes and additions - We now have Zun and Qinling OpenStack actions in master. - Two significant performance improvements were made relating to workflow environments and the deletion of objects. - Mistral is now using stestr in all repos. For more details, see: https://review.openstack.org/#/c/519751/ # Milestones, Reviews, Bugs and Blueprints - We have 105 open bugs (down from 109 last month). - Zero are untriaged - One is "critical" (but that is likely a lie as it has been critical and ignored for some time) - Rocky-2 still now has 58 bugs assigned to it (it was 44 last month!). Only 13 are fixed released. Most of these will move to Rocky-3 next week. - 4 blueprints are targeted at Rocky 2 (I have already bumped 4 that were inactive). Two are implemented. The other two will likely slip to Rocky-3 - 29 commits were merged. - There were 176 reviews in total, 126 of these from the core team. That's all for this time. See you next month! Dougal P.S. The format of this newsletter is still somewhat fluid. Feedback would be very welcome. What do you find interesting or useful? What is missing? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at benton.pub Fri Jun 1 16:04:09 2018 From: kevin at benton.pub (Kevin Benton) Date: Fri, 1 Jun 2018 09:04:09 -0700 Subject: [openstack-dev] [heat][neutron] Extraroute support In-Reply-To: References: <9a7cbe15-e678-24e6-9e77-86fdc1429dc6@ericsson.com> Message-ID: The neutron API now supports compare and swap updates with an If-Match header so the race condition can be avoided. 
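For illustration, the client side of that compare-and-swap pattern looks
roughly like this (plain python-requests, with placeholder endpoint, token and
IDs; double-check the exact header syntax against the networking api-ref):

    import requests

    base = 'http://controller:9696/v2.0'    # placeholder neutron endpoint
    headers = {'X-Auth-Token': 'TOKEN'}     # placeholder token
    router_id = 'ROUTER_UUID'               # placeholder router id

    # Read the router and remember which revision the change is based on.
    router = requests.get('%s/routers/%s' % (base, router_id),
                          headers=headers).json()['router']
    revision = router['revision_number']
    routes = router['routes'] + [{'destination': '10.0.0.0/24',
                                  'nexthop': '192.0.2.1'}]

    # Only apply the update if nobody changed the router in the meantime.
    put_headers = dict(headers)
    put_headers['If-Match'] = 'revision_number=%d' % revision
    resp = requests.put('%s/routers/%s' % (base, router_id),
                        headers=put_headers,
                        json={'router': {'routes': routes}})
    # A 412 Precondition Failed means the revision moved underneath us;
    # re-read and retry instead of clobbering someone else's routes.

so a client such as Heat can re-read and retry on a conflict instead of
overwriting concurrent updates.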
https://bugs.launchpad.net/neutron/+bug/1703234 On Fri, Jun 1, 2018, 04:57 Rabi Mishra wrote: > > On Fri, Jun 1, 2018 at 3:57 PM, Lajos Katona > wrote: > >> Hi, >> >> Could somebody help me out with Neutron's Extraroute support in Hot >> templates. >> The support status of the Extraroute is support.UNSUPPORTED in heat, and >> only create and delete are the supported operations. >> see: >> https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/extraroute.py#LC35 >> >> > As I see the unsupported tag was added when the feature was moved from the >> contrib folder to in-tree (https://review.openstack.org/186608) >> Perhaps you can help me out why only create and delete are supported and >> update not. >> >> > I think most of the resources when moved from contrib to in-tree are > marked as unsupported. Adding routes to an existing router by multiple > stacks can be racy and is probably the reason use of this resource is not > encouraged and hence it's not supported. You can see the discussion in the > original patch that proposed this resource > https://review.openstack.org/#/c/41044/ > > Not sure if things have changed on neutron side for us to revisit the > concerns. > > Also it does not have any update_allowed properties, hence no > handle_update(). It would be replaced if you change any property. > > Hope it helps. > > > >> Thanks in advance for the help. >> >> Regards >> Lajos >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Regards, > Rabi Mishra > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harlowja at fastmail.com Fri Jun 1 16:12:18 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Fri, 01 Jun 2018 09:12:18 -0700 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <20180601121602.GB15905@paraplu> References: <20180601121602.GB15905@paraplu> Message-ID: <5B117062.6020505@fastmail.com> Slightly off topic but, Have you by any chance looked at what kata has forked for qemu: https://github.com/kata-containers/qemu/tree/qemu-lite-2.11.0 I'd be interested in an audit of that code for similar reasons to this libvirt fork (hard to know from my view point if there are new issues in that code like the ones you are finding in libvirt). Kashyap Chamarthy wrote: > On Tue, May 22, 2018 at 01:54:59PM -0500, Dean Troyer wrote: >> StarlingX (aka STX) was announced this week at the summit, there is a >> PR to create project repos in Gerrit at [0]. STX is basically Wind > > From a cursory look at the libvirt fork, there are some questionable > choices. E.g. the config code (libvirt/src/qemu/qemu.conf) is modified > such that QEMU is launched as 'root'. That means a bug in QEMU == > instant host compromise. > > All Linux distributions (that matter) configure libvirt to launch QEMU > as a regular user ('qemu'). E.g. 
from Fedora's libvirt RPM spec file: > > libvirt.spec:%define qemu_user qemu > libvirt.spec: --with-qemu-user=%{qemu_user} \ > > * * * > > There are multiple other such issues in the forked libvirt code. > > [...] > From doug at doughellmann.com Fri Jun 1 16:18:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 01 Jun 2018 12:18:40 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: Message-ID: <1527869418-sup-3208@lrrr.local> Excerpts from Zane Bitter's message of 2018-06-01 10:10:31 -0400: > On 26/05/18 17:46, Mohammed Naser wrote: > > Hi everyone! > > > > During the TC retrospective at the OpenStack summit last week, the > > topic of the organizational diversity tag is becoming irrelevant was > > brought up by Thierry (ttx)[1]. It seems that for projects that are > > not very active, they can easily lose this tag with a few changes by > > perhaps the infrastructure team for CI related fixes. > > > > As an action item, Thierry and I have paired up in order to look into > > a way to resolve this issue. There have been ideas to switch this to > > a report that is published at the end of the cycle rather than > > continuously. Julia (TheJulia) suggested that we change or track > > different types of diversity. > > > > Before we start diving into solutions, I wanted to bring this topic up > > to the mailing list and ask for any suggestions. In digging the > > codebase behind this[2], I've found that there are some knobs that we > > can also tweak if need-be, or perhaps we can adjust those numbers > > depending on the number of commits. > > Crazy idea: what if we dropped the idea of measuring the diversity and > allowed teams to decide when they applied the tag to themselves like we > do for other tags. (No wait! Come back!) > > Some teams enforce a requirement that the 2 core +2s come from reviewers > with different affiliations. We would say that any project that enforces > that rule would get the diversity tag. Then it's actually attached to > something concrete, and teams could decide for themselves when to drop > it (because they would start having difficulty merging stuff otherwise). > > I'm not entirely sold on this, but it's an idea I had that I wanted to > throw out there :) > > cheers, > Zane. > The point of having the tags is to help consumers of the projects understand their health in some capacity. In this case we were trying to use measures of actual activity within the project to help spot projects that are really only maintained by one company, with the assumption that such projects are less healthy than others being maintained by contributors with more diverse backing. Does basing the tag definition on whether approvals need to come from people with diverse affiliation provide enough project health information that it would let us use it to replace the current tag? How many teams enforce the rule you describe? Is that rule a sign of a healthy team dynamic, that we would want to spread to the whole community? 
Doug From dms at danplanet.com Fri Jun 1 16:22:05 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 01 Jun 2018 09:22:05 -0700 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: (Jay Pipes's message of "Thu, 31 May 2018 15:35:57 -0400") References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> Message-ID: > So, you're saying the normal process is to try upgrading the Linux > kernel and associated low-level libs, wait the requisite amount of > time that takes (can be a long time) and just hope that everything > comes back OK? That doesn't sound like any upgrade I've ever seen. I'm saying I think it's a process practiced by some to install the new kernel and libs and then reboot to activate, yeah. > No, sorry if I wasn't clear. They can live-migrate the instances off > of the to-be-upgraded compute host. They would only need to > cold-migrate instances that use the aforementioned non-movable > resources. I don't think it's reasonable to force people to have to move every instance in their cloud (live or otherwise) in order to upgrade. That means that people who currently do their upgrades in-place in one step, now have to do their upgrade in N steps, for N compute nodes. That doesn't seem reasonable to me. > If we are going to go through the hassle of writing a bunch of > transformation code in order to keep operator action as low as > possible, I would prefer to consolidate all of this code into the > nova-manage (or nova-status) tool and put some sort of > attribute/marker on each compute node record to indicate whether a > "heal" operation has occurred for that compute node. We need to know details of each compute node in order to do that. We could make the tool external and something they run per-compute node, but that still makes it N steps, even if the N steps are lighter weight. > Someone (maybe Gibi?) on this thread had mentioned having the virt > driver (in update_provider_tree) do the whole set reserved = total > thing when first attempting to create the child providers. That would > work to prevent the scheduler from attempting to place workloads on > those child providers, but we would still need some marker on the > compute node to indicate to the nova-manage heal_nested_providers (or > whatever) command that the compute node has had its provider tree > validated/healed, right? So that means you restart your cloud and it's basically locked up until you perform the N steps to unlock N nodes? That also seems like it's not going to make us very popular on the playground :) I need to go read Eric's tome on how to handle the communication of things from virt to compute so that this translation can be done. I'm not saying I have the answer, I'm just saying that making this the problem of the operators doesn't seem like a solution to me, and that we should figure out how we're going to do this before we go down the rabbit hole. 
--Dan From sean.mcginnis at gmx.com Fri Jun 1 16:31:36 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 1 Jun 2018 11:31:36 -0500 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project Message-ID: <20180601163136.GA29961@sm-xps> Hello DragonFlow team, As part of reviewing release activities it was noticed that there was never a final Queens release for DragonFlow and there was never a stable/queens branch created. It appears there is still activity with this project [1], so I am wondering if we could get an update on the status of the DragonFlow. DragonFlow is under the "independent" release model, so it does not need to have regular cycle milestone releases [2], but we just want to make sure the project should continue under OpenStack governance and that we are not just missing communication on release needs. Thanks! Sean [1] https://github.com/openstack/dragonflow/compare/stable/pike...master [2] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/_independent/dragonflow.yaml From davanum at gmail.com Fri Jun 1 16:32:47 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Fri, 1 Jun 2018 12:32:47 -0400 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <5B117062.6020505@fastmail.com> References: <20180601121602.GB15905@paraplu> <5B117062.6020505@fastmail.com> Message-ID: Josh, The Kata team is talking to QEMU maintainers about how best to move forward. Specially around stripping down things that's not needed for their use case. They are not adding code from what i got to know (just removing stuff). -- Dims On Fri, Jun 1, 2018 at 12:12 PM, Joshua Harlow wrote: > Slightly off topic but, > > Have you by any chance looked at what kata has forked for qemu: > > https://github.com/kata-containers/qemu/tree/qemu-lite-2.11.0 > > I'd be interested in an audit of that code for similar reasons to this > libvirt fork (hard to know from my view point if there are new issues in > that code like the ones you are finding in libvirt). > > Kashyap Chamarthy wrote: >> >> On Tue, May 22, 2018 at 01:54:59PM -0500, Dean Troyer wrote: >>> >>> StarlingX (aka STX) was announced this week at the summit, there is a >>> PR to create project repos in Gerrit at [0]. STX is basically Wind >> >> >> From a cursory look at the libvirt fork, there are some questionable >> choices. E.g. the config code (libvirt/src/qemu/qemu.conf) is modified >> such that QEMU is launched as 'root'. That means a bug in QEMU == >> instant host compromise. >> >> All Linux distributions (that matter) configure libvirt to launch QEMU >> as a regular user ('qemu'). E.g. from Fedora's libvirt RPM spec file: >> >> libvirt.spec:%define qemu_user qemu >> libvirt.spec: --with-qemu-user=%{qemu_user} \ >> >> * * * >> >> There are multiple other such issues in the forked libvirt code. >> >> [...] 
>> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From miguel at mlavalle.com Fri Jun 1 16:38:53 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Fri, 1 Jun 2018 11:38:53 -0500 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project In-Reply-To: <20180601163136.GA29961@sm-xps> References: <20180601163136.GA29961@sm-xps> Message-ID: There was an project update presentation in Vancouver: https://www.openstack.org/videos/vancouver-2018/dragonflow-project-update-2 On Fri, Jun 1, 2018 at 11:31 AM, Sean McGinnis wrote: > Hello DragonFlow team, > > As part of reviewing release activities it was noticed that there was > never a > final Queens release for DragonFlow and there was never a stable/queens > branch > created. > > It appears there is still activity with this project [1], so I am > wondering if > we could get an update on the status of the DragonFlow. > > DragonFlow is under the "independent" release model, so it does not need to > have regular cycle milestone releases [2], but we just want to make sure > the > project should continue under OpenStack governance and that we are not just > missing communication on release needs. > > Thanks! > Sean > > [1] https://github.com/openstack/dragonflow/compare/stable/pike...master > [2] http://git.openstack.org/cgit/openstack/releases/tree/ > deliverables/_independent/dragonflow.yaml > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Jun 1 16:45:04 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 1 Jun 2018 11:45:04 -0500 Subject: [openstack-dev] [rally][tc] Tagging rights Message-ID: <20180601164504.GB29961@sm-xps> Hi Andrey, Sorry for the delay getting back to this. I had meant to wait for the responses from the other projects included in the original thread, but never made it back to follow up. Officially governed projects are required to use the releases repo for driving the automated release process. This ensure peer-reviewed releases and consistency through the release. So to be a governed project, we really do need to switch you over to this process. Some other notes inline below. Thanks, Sean > Hi Sean! > > Thanks for raising this question. > > As for Rally team, we are using self-tagging approach for several reasons: > > - Release notes > > Check the difference between > https://github.com/openstack/nova/releases/tag/17.0.2 and > https://github.com/openstack/rally-openstack/releases/tag/1.0.0. > The first one includes just autogenerated metadata. The second one > user-friendly notes (they are not ideal, but we are working on making them > better). > I do not find a way to add custom release notes via openstack/releases > project. Nearly all projects have standardized on reno for release notes. This is the preferred method for this and where general consumers of OpenStack deliverables are now used to looking for these details. 
I would strongly recommend doing that instead. > > - Time > > Self-tagging the repo allows me to schedule/reschedule the release in > whatever timeframe I decide without pinging anyone and waiting for folks to > return from summit/PTG. > I do not want to offend anyone, but we all know that such events take > much time for preparation, holding and resting after it. > > Since there are no official OpenStack projects built on top of Rally, > launching any of "integration" jobs while making Rally release is a wasting > of time and money(resources). > Also, such jobs can block to make a release. I remember sometimes it can > take weeks to pass all gates with tons of rechecks > > https://github.com/openstack/releases#release-approval == "Freezes and no > late releases". It is an opensource and I want to make releases on weekends > if there is any > reason for doing this (critical fix or the last blocking feature is > merged or whatever). We do generally avoid releasing on Friday's or weekends, but now that our requirements management has some checks, and especially for projects that are not dependencies for other projects, we can certainly do releases on these days as long as we are told of the urgency of getting them out there. The release team does not want to be a bottleneck for getting other work done. From doug at doughellmann.com Fri Jun 1 17:06:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 01 Jun 2018 13:06:32 -0400 Subject: [openstack-dev] [rally][tc] Tagging rights In-Reply-To: <20180601164504.GB29961@sm-xps> References: <20180601164504.GB29961@sm-xps> Message-ID: <1527872575-sup-1870@lrrr.local> Excerpts from Sean McGinnis's message of 2018-06-01 11:45:04 -0500: > Hi Andrey, > > Sorry for the delay getting back to this. I had meant to wait for the responses > from the other projects included in the original thread, but never made it back > to follow up. > > Officially governed projects are required to use the releases repo for driving > the automated release process. This ensure peer-reviewed releases and > consistency through the release. So to be a governed project, we really do need > to switch you over to this process. I'm curious about the relationship between rally and "xRally" (https://github.com/xrally). The repo there says the core of rally is going to be moved to github soon, can you elaborate on that? Is there a plan to remove Rally from OpenStack? Doug > > Some other notes inline below. > > Thanks, > Sean > > > Hi Sean! > > > > Thanks for raising this question. > > > > As for Rally team, we are using self-tagging approach for several reasons: > > > > - Release notes > > > > Check the difference between > > https://github.com/openstack/nova/releases/tag/17.0.2 and > > https://github.com/openstack/rally-openstack/releases/tag/1.0.0. > > The first one includes just autogenerated metadata. The second one > > user-friendly notes (they are not ideal, but we are working on making them > > better). > > I do not find a way to add custom release notes via openstack/releases > > project. > > Nearly all projects have standardized on reno for release notes. This is the > preferred method for this and where general consumers of OpenStack deliverables > are now used to looking for these details. I would strongly recommend doing > that instead. > > > > > - Time > > > > Self-tagging the repo allows me to schedule/reschedule the release in > > whatever timeframe I decide without pinging anyone and waiting for folks to > > return from summit/PTG. 
> > I do not want to offend anyone, but we all know that such events take > > much time for preparation, holding and resting after it. > > > > Since there are no official OpenStack projects built on top of Rally, > > launching any of "integration" jobs while making Rally release is a wasting > > of time and money(resources). > > Also, such jobs can block to make a release. I remember sometimes it can > > take weeks to pass all gates with tons of rechecks > > > > https://github.com/openstack/releases#release-approval == "Freezes and no > > late releases". It is an opensource and I want to make releases on weekends > > if there is any > > reason for doing this (critical fix or the last blocking feature is > > merged or whatever). > > We do generally avoid releasing on Friday's or weekends, but now that our > requirements management has some checks, and especially for projects that are > not dependencies for other projects, we can certainly do releases on these days > as long as we are told of the urgency of getting them out there. The release > team does not want to be a bottleneck for getting other work done. > From jaypipes at gmail.com Fri Jun 1 17:22:18 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 1 Jun 2018 13:22:18 -0400 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> Message-ID: <17b5546f-951e-5c59-a289-4783484695fc@gmail.com> Dan, you are leaving out the parts of my response where I am agreeing with you and saying that your "Option #2" is probably the things we should go with. -jay On 06/01/2018 12:22 PM, Dan Smith wrote: >> So, you're saying the normal process is to try upgrading the Linux >> kernel and associated low-level libs, wait the requisite amount of >> time that takes (can be a long time) and just hope that everything >> comes back OK? That doesn't sound like any upgrade I've ever seen. > > I'm saying I think it's a process practiced by some to install the new > kernel and libs and then reboot to activate, yeah. > >> No, sorry if I wasn't clear. They can live-migrate the instances off >> of the to-be-upgraded compute host. They would only need to >> cold-migrate instances that use the aforementioned non-movable >> resources. > > I don't think it's reasonable to force people to have to move every > instance in their cloud (live or otherwise) in order to upgrade. That > means that people who currently do their upgrades in-place in one step, > now have to do their upgrade in N steps, for N compute nodes. That > doesn't seem reasonable to me. > >> If we are going to go through the hassle of writing a bunch of >> transformation code in order to keep operator action as low as >> possible, I would prefer to consolidate all of this code into the >> nova-manage (or nova-status) tool and put some sort of >> attribute/marker on each compute node record to indicate whether a >> "heal" operation has occurred for that compute node. > > We need to know details of each compute node in order to do that. We > could make the tool external and something they run per-compute node, > but that still makes it N steps, even if the N steps are lighter > weight. > >> Someone (maybe Gibi?) 
on this thread had mentioned having the virt >> driver (in update_provider_tree) do the whole set reserved = total >> thing when first attempting to create the child providers. That would >> work to prevent the scheduler from attempting to place workloads on >> those child providers, but we would still need some marker on the >> compute node to indicate to the nova-manage heal_nested_providers (or >> whatever) command that the compute node has had its provider tree >> validated/healed, right? > > So that means you restart your cloud and it's basically locked up until > you perform the N steps to unlock N nodes? That also seems like it's not > going to make us very popular on the playground :) > > I need to go read Eric's tome on how to handle the communication of > things from virt to compute so that this translation can be done. I'm > not saying I have the answer, I'm just saying that making this the > problem of the operators doesn't seem like a solution to me, and that we > should figure out how we're going to do this before we go down the > rabbit hole. > > --Dan > From doug at doughellmann.com Fri Jun 1 17:29:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 01 Jun 2018 13:29:41 -0400 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project In-Reply-To: References: <20180601163136.GA29961@sm-xps> Message-ID: <1527873028-sup-1636@lrrr.local> That presentation says "Users should do their own tagging/release management" (6:31). I don't think that's really an approach we want to be encouraging project teams to take. I would suggest placing Dragonflow in maintenance mode, but if the team doesn't have the resources to participate in the normal community processes, maybe it should be moved out of the official project list instead? Do we have any sort of indication of how many deployments rely on Dragonflow? Does the neutron team have capacity to bring Dragonflow back in to their list of managed repos and help them with releases and other common process tasks? Excerpts from Miguel Lavalle's message of 2018-06-01 11:38:53 -0500: > There was an project update presentation in Vancouver: > https://www.openstack.org/videos/vancouver-2018/dragonflow-project-update-2 > > On Fri, Jun 1, 2018 at 11:31 AM, Sean McGinnis > wrote: > > > Hello DragonFlow team, > > > > As part of reviewing release activities it was noticed that there was > > never a > > final Queens release for DragonFlow and there was never a stable/queens > > branch > > created. > > > > It appears there is still activity with this project [1], so I am > > wondering if > > we could get an update on the status of the DragonFlow. > > > > DragonFlow is under the "independent" release model, so it does not need to > > have regular cycle milestone releases [2], but we just want to make sure > > the > > project should continue under OpenStack governance and that we are not just > > missing communication on release needs. > > > > Thanks! 
> > Sean > > > > [1] https://github.com/openstack/dragonflow/compare/stable/pike...master > > [2] http://git.openstack.org/cgit/openstack/releases/tree/ > > deliverables/_independent/dragonflow.yaml > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From sean.mcginnis at gmx.com Fri Jun 1 17:54:23 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 1 Jun 2018 12:54:23 -0500 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project In-Reply-To: <1527873028-sup-1636@lrrr.local> References: <20180601163136.GA29961@sm-xps> <1527873028-sup-1636@lrrr.local> Message-ID: <20180601175423.GA1364@sm-xps> On Fri, Jun 01, 2018 at 01:29:41PM -0400, Doug Hellmann wrote: > That presentation says "Users should do their own tagging/release > management" (6:31). I don't think that's really an approach we want > to be encouraging project teams to take. > I hadn't had a chance to watch the presentation yet. It also states right aroung there that there is only one dev on the project. That really concerns me. And in very strong agreement - we definitely do not want to be encouraging project consumers to be the ones tagging and doing their own releases. We would certainly welcome anyone interested to get involved in the project and be added as an official release liaison so they can request official releases though. > I would suggest placing Dragonflow in maintenance mode, but if the > team doesn't have the resources to participate in the normal community > processes, maybe it should be moved out of the official project > list instead? > > Do we have any sort of indication of how many deployments rely on > Dragonflow? Does the neutron team have capacity to bring Dragonflow > back in to their list of managed repos and help them with releases > and other common process tasks? > > Excerpts from Miguel Lavalle's message of 2018-06-01 11:38:53 -0500: > > There was an project update presentation in Vancouver: > > https://www.openstack.org/videos/vancouver-2018/dragonflow-project-update-2 > > From jaypipes at gmail.com Fri Jun 1 18:12:23 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 1 Jun 2018 14:12:23 -0400 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <4a867428-1203-63b7-9b74-86fda468047c@fried.cc> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> <4a867428-1203-63b7-9b74-86fda468047c@fried.cc> Message-ID: <46c5cb94-61ba-4f3b-fa13-0456463fb485@gmail.com> On 05/31/2018 02:26 PM, Eric Fried wrote: >> 1. Make everything perform the pivot on compute node start (which can be >> re-used by a CLI tool for the offline case) >> 2. Make everything default to non-nested inventory at first, and provide >> a way to migrate a compute node and its instances one at a time (in >> place) to roll through. > > I agree that it sure would be nice to do ^ rather than requiring the > "slide puzzle" thing. 
> > But how would this be accomplished, in light of the current "separation > of responsibilities" drawn at the virt driver interface, whereby the > virt driver isn't supposed to talk to placement directly, or know > anything about allocations? FWIW, I don't have a problem with the virt driver "knowing about allocations". What I have a problem with is the virt driver *claiming resources for an instance*. That's what the whole placement claims resources things was all about, and I'm not interested in stepping back to the days of long racy claim operations by having the compute nodes be responsible for claiming resources. That said, once the consumer generation microversion lands [1], it should be possible to *safely* modify an allocation set for a consumer (instance) and move allocation records for an instance from one provider to another. [1] https://review.openstack.org/#/c/565604/ > Here's a first pass: > > The virt driver, via the return value from update_provider_tree, tells > the resource tracker that "inventory of resource class A on provider B > have moved to provider C" for all applicable AxBxC. E.g. > > [ { 'from_resource_provider': , > 'moved_resources': [VGPU: 4], > 'to_resource_provider': > }, > { 'from_resource_provider': , > 'moved_resources': [VGPU: 4], > 'to_resource_provider': > }, > { 'from_resource_provider': , > 'moved_resources': [ > SRIOV_NET_VF: 2, > NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000, > NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000, > ], > 'to_resource_provider': > } > ] > > As today, the resource tracker takes the updated provider tree and > invokes [1] the report client method update_from_provider_tree [2] to > flush the changes to placement. But now update_from_provider_tree also > accepts the return value from update_provider_tree and, for each "move": > > - Creates provider C (as described in the provider_tree) if it doesn't > already exist. > - Creates/updates provider C's inventory as described in the > provider_tree (without yet updating provider B's inventory). This ought > to create the inventory of resource class A on provider C. Unfortunately, right here you'll introduce a race condition. As soon as this operation completes, the scheduler will have the ability to throw new instances on provider C and consume the inventory from it that you intend to give to the existing instance that is consuming from provider B. > - Discovers allocations of rc A on rp B and POSTs to move them to rp C*. For each consumer of resources on rp B, right? > - Updates provider B's inventory. Again, this is problematic because the scheduler will have already begun to place new instances on B's inventory, which could very well result in incorrect resource accounting on the node. We basically need to have one giant new REST API call that accepts the list of "move instructions" and performs all of the instructions in a single transaction. :( > (*There's a hole here: if we're splitting a glommed-together inventory > across multiple new child providers, as the VGPUs in the example, we > don't know which allocations to put where. The virt driver should know > which instances own which specific inventory units, and would be able to > report that info within the data structure. That's getting kinda close > to the virt driver mucking with allocations, but maybe it fits well > enough into this model to be acceptable?) Well, it's not really the virt driver *itself* mucking with the allocations. 
It's more that the virt driver is telling something *else* the move instructions that it feels are needed... > Note that the return value from update_provider_tree is optional, and > only used when the virt driver is indicating a "move" of this ilk. If > it's None/[] then the RT/update_from_provider_tree flow is the same as > it is today. > > If we can do it this way, we don't need a migration tool. In fact, we > don't even need to restrict provider tree "reshaping" to release > boundaries. As long as the virt driver understands its own data model > migrations and reports them properly via update_provider_tree, it can > shuffle its tree around whenever it wants. Due to the many race conditions we would have in trying to fudge inventory amounts (the reserved/total thing) and allocation movement for >1 consumer at a time, I'm pretty sure the only safe thing to do is have a single new HTTP endpoint that would take this list of move operations and perform them atomically (on the placement server side of course). Here's a strawman for how that HTTP endpoint might look like: https://etherpad.openstack.org/p/placement-migrate-operations feel free to markup and destroy. Best, -jay > Thoughts? > > -efried > > [1] > https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/compute/resource_tracker.py#L890 > [2] > https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/scheduler/client/report.py#L1341 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dms at danplanet.com Fri Jun 1 19:02:10 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 01 Jun 2018 12:02:10 -0700 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <17b5546f-951e-5c59-a289-4783484695fc@gmail.com> (Jay Pipes's message of "Fri, 1 Jun 2018 13:22:18 -0400") References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> <17b5546f-951e-5c59-a289-4783484695fc@gmail.com> Message-ID: > Dan, you are leaving out the parts of my response where I am agreeing > with you and saying that your "Option #2" is probably the things we > should go with. No, what you said was: >> I would vote for Option #2 if it comes down to it. Implying (to me at least) that you still weren't in favor of either, but would choose that as the least offensive option :) I didn't quote it because I didn't have any response. I just wanted to address the other assertions about what is and isn't a common upgrade scenario, which I think is the important data we need to consider when making a decision here. I didn't mean to imply or hide anything with my message trimming, so sorry if it came across as such. 
--Dan From jaypipes at gmail.com Fri Jun 1 19:12:22 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 1 Jun 2018 15:12:22 -0400 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> <17b5546f-951e-5c59-a289-4783484695fc@gmail.com> Message-ID: <1839ce07-a60f-bd74-3496-b4db6e30993e@gmail.com> On 06/01/2018 03:02 PM, Dan Smith wrote: >> Dan, you are leaving out the parts of my response where I am agreeing >> with you and saying that your "Option #2" is probably the things we >> should go with. > > No, what you said was: > >>> I would vote for Option #2 if it comes down to it. > > Implying (to me at least) that you still weren't in favor of either, but > would choose that as the least offensive option :) > > I didn't quote it because I didn't have any response. I just wanted to > address the other assertions about what is and isn't a common upgrade > scenario, which I think is the important data we need to consider when > making a decision here. Understood. I've now accepted fact that we will need to do something to transform the data model without requiring operators to move workloads. > I didn't mean to imply or hide anything with my message trimming, so > sorry if it came across as such. No worries. Best, -jay From zbitter at redhat.com Fri Jun 1 19:19:46 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 1 Jun 2018 15:19:46 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1527869418-sup-3208@lrrr.local> References: <1527869418-sup-3208@lrrr.local> Message-ID: On 01/06/18 12:18, Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-06-01 10:10:31 -0400: >> Crazy idea: what if we dropped the idea of measuring the diversity and >> allowed teams to decide when they applied the tag to themselves like we >> do for other tags. (No wait! Come back!) >> >> Some teams enforce a requirement that the 2 core +2s come from reviewers >> with different affiliations. We would say that any project that enforces >> that rule would get the diversity tag. Then it's actually attached to >> something concrete, and teams could decide for themselves when to drop >> it (because they would start having difficulty merging stuff otherwise). >> >> I'm not entirely sold on this, but it's an idea I had that I wanted to >> throw out there :) >> >> cheers, >> Zane. >> > > The point of having the tags is to help consumers of the projects > understand their health in some capacity. In this case we were > trying to use measures of actual activity within the project to > help spot projects that are really only maintained by one company, > with the assumption that such projects are less healthy than others > being maintained by contributors with more diverse backing. (Clarification for readers: there are actually 3 levels; getting the diverse-affiliations tag has a higher bar than dropping the single-vendor tag.) > Does basing the tag definition on whether approvals need to come > from people with diverse affiliation provide enough project health > information that it would let us use it to replace the current tag? Yes. Project teams will soon drop this rule if it's the only way to get patches in. A single-vendor project by definition cannot adopt this rule and continue to... 
exist as a project, really. It would tell potential users that if one organisation drops out it there is at least somebody left to review patches, and also guarantee that the project's direction is not down to the whim of one organisation. > How many teams enforce the rule you describe? I don't know. I do know that in Heat we never enforced it - at first because it was a single-vendor project, and then later because it was so diverse (and not subject to any particular cross-company animosity) that nobody particularly saw the need to change, and now that many of those vendors have pulled out of OpenStack because it would be an obstacle to getting patches approved again. I was kind of under the impression that all of the projects used this rule prior to Heat and Ceilometer being incubated. That may be incorrect. At least Nova and the projects that have a lot of vendor drivers (and are thus susceptible to suspicions of bias) - i.e. Cinder & Neutron mainly - may still follow this rule? I haven't yet found a mention of it in any of the contributor guides though, so possibly it was dropped OpenStack-wide and I never noticed. > Is that rule a sign of a healthy team dynamic, that we would want > to spread to the whole community? Yeah, this part I am pretty unsure about too. For some projects it probably is. For others it may just be an unnecessary obstacle, although I don't think it'd actually be *un*healthy for any project, assuming a big enough and diverse enough team (which should be a goal for the whole community). For most projects with small core teams it would obviously be a showstopper, but the idea would be for them to continue to opt out. cheers, Zane. From dms at danplanet.com Fri Jun 1 20:18:42 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 01 Jun 2018 13:18:42 -0700 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <46c5cb94-61ba-4f3b-fa13-0456463fb485@gmail.com> (Jay Pipes's message of "Fri, 1 Jun 2018 14:12:23 -0400") References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> <4a867428-1203-63b7-9b74-86fda468047c@fried.cc> <46c5cb94-61ba-4f3b-fa13-0456463fb485@gmail.com> Message-ID: > FWIW, I don't have a problem with the virt driver "knowing about > allocations". What I have a problem with is the virt driver *claiming > resources for an instance*. +1000. > That's what the whole placement claims resources things was all about, > and I'm not interested in stepping back to the days of long racy claim > operations by having the compute nodes be responsible for claiming > resources. > > That said, once the consumer generation microversion lands [1], it > should be possible to *safely* modify an allocation set for a consumer > (instance) and move allocation records for an instance from one > provider to another. Agreed. I'm hesitant to have the compute nodes arguing with the scheduler even to patch things up, given the mess we just cleaned up. The thing that I think makes this okay is that one compute node cleaning/pivoting allocations for instances isn't going to be fighting anything else whilst doing it. Migrations and new instance builds where the source/destination or scheduler/compute aren't clear who owns the allocation is a problem. 
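To make that concrete, here is a rough sketch of what a generation-guarded
pivot could look like from the compute side once that microversion is
available. The helper names (placement_get/placement_put) and the payload
shape are illustrative only, not the final API:

    # Illustrative only: move an instance's allocations from old_rp to
    # new_rp, guarded by the consumer generation so that a concurrent
    # writer gets a conflict (409) instead of silently clobbering the
    # allocation.
    def pivot_allocations(placement_get, placement_put, consumer_uuid,
                          old_rp, new_rp, resource_classes):
        current = placement_get('/allocations/%s' % consumer_uuid)
        allocs = {rp: dict(entry['resources'])
                  for rp, entry in current['allocations'].items()}
        source = allocs.pop(old_rp, {})
        moved = {}
        for rc in resource_classes:
            amount = source.pop(rc, None)
            if amount:
                moved[rc] = amount
        if moved:
            allocs.setdefault(new_rp, {}).update(moved)
        if source:
            # Anything not being moved stays behind on the old provider.
            allocs[old_rp] = source
        payload = {
            'allocations': {rp: {'resources': res}
                            for rp, res in allocs.items()},
            'project_id': current['project_id'],
            'user_id': current['user_id'],
            # Echo back the generation we read; placement rejects the write
            # if someone else touched this consumer's allocations meanwhile.
            'consumer_generation': current['consumer_generation'],
        }
        return placement_put('/allocations/%s' % consumer_uuid, payload)

If the write conflicts, the compute just re-reads and retries, which should
be cheap precisely because nothing else ought to be touching that consumer
while its node is being pivoted.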
That said, we need to make sure we can handle the case where an instance is in resize_confirm state across a boundary where we go from non-NRP to NRP. It *should* be okay for the compute to handle this by updating the instance's allocation held by the migration instead of the instance itself, if the compute determines that it is the source. --Dan From colleen at gazlene.net Fri Jun 1 21:14:59 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 01 Jun 2018 23:14:59 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 28 May 2018 Message-ID: <1527887699.4066938.1393521288.15AE9E48@webmail.messagingengine.com> # Keystone Team Update - Week of 28 May 2018 ## News ### Summit Recap We had a productive summit last week. Lance has posted a recap[1]. [1] https://www.lbragstad.com/blog/openstack-summit-vancouver-recap ### Quota Models There was a productive discussion at the forum on hierarchical quotas (which I missed), but which resulted in some new thoughts about safely tracking quota which Adam captured[2]. We then discussed some performance implications for unlimited-depth project trees[3]. The spec for a strict two-level model still needs reviews[4]. [2] http://adam.younglogic.com/2018/05/tracking-quota/#more-5542 [3] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-29-16.02.log.html#l-9 [4] https://review.openstack.org/540803 ## Open Specs Search query: https://bit.ly/2G8Ai5q Last week we merged the Default Roles spec[5] after discussing it at the Summit. We still need to review and merge the update the hierarchical unified limits spec[6] which has been updated following discussions at the summit. [5] https://review.openstack.org/566377 [6] https://review.openstack.org/540803 ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 5 changes this week. One of those was to partially remove the deprecated TokenAuth middleware[7], which has implications for upgrades. [7] https://review.openstack.org/508412 ## Changes that need Attention Changes with no negative feedback: https://bit.ly/2wv7QLK Changes with only human negative feedback: https://bit.ly/2LeW1vC There are 42 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. This data is provided to highlight patches that are currently waiting for any feedback. There are 81 total changes that are ready for review. ## Bugs These week we opened 6 new bugs and closed 4. One of the bugs opened and fixed was for our docs builds which had broken since the latest docs PTI updates[8]. I also opened a bug regarding the usage of groups with application credentials[9], which has implications for federated users using application credentials. [8] https://bugs.launchpad.net/keystone/+bug/1774508 [9] https://bugs.launchpad.net/keystone/+bug/1773967 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html Next week is specification freeze (I think unified limits is the only remaining specification that needs attention). Our next deadline after that is feature proposal freeze on June 22nd. 
## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From cdent+os at anticdent.org Sat Jun 2 00:28:01 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 1 Jun 2018 17:28:01 -0700 (PDT) Subject: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad In-Reply-To: References: Message-ID: On Wed, 9 May 2018, Chris Dent wrote: > I've started an etherpad for the forum session in Vancouver devoted > to discussing the possibility of tracking and allocation resources > in Cinder using the Placement service. This is not a done deal. > Instead the session is to discuss if it could work and how to make > it happen if it seems like a good idea. > > The etherpad is at > > https://etherpad.openstack.org/p/YVR-cinder-placement The session went well. Some of the members of the cinder team who might have had more questions had not been able to be at summit so we were unable to get their input. We clarified some of the things that cinder wants to be able to accomplish (run multiple schedulers in active-active and avoid race conditions) and the fact that this is what placement is built for. We also made it clear that placement itself can be highly available (and scalable) because of its nature as a dead-simple web app over a database. The next steps are for the cinder team to talk amongst themselves and socialize the capabilities of placement (with the help of placement people) and see if it will be suitable. It is unlikely there will be much visible progress in this area before Stein. See the etherpad for a bit more detail. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From zhipengh512 at gmail.com Sat Jun 2 00:43:56 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sat, 2 Jun 2018 08:43:56 +0800 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: <1527869418-sup-3208@lrrr.local> Message-ID: I agree with Zane's proposal here, it is a good rule to have 2 core reviewer from different companies to provide +2 for a patch. However it should not be very strict given that project in early stage usually have to rely on devs from one or two companies. But it should be recommended that project apply for the diversity-tag should at least expressed that they have adopted this rule. On Sat, Jun 2, 2018 at 3:19 AM, Zane Bitter wrote: > On 01/06/18 12:18, Doug Hellmann wrote: > >> Excerpts from Zane Bitter's message of 2018-06-01 10:10:31 -0400: >> >>> Crazy idea: what if we dropped the idea of measuring the diversity and >>> allowed teams to decide when they applied the tag to themselves like we >>> do for other tags. (No wait! Come back!) >>> >>> Some teams enforce a requirement that the 2 core +2s come from reviewers >>> with different affiliations. We would say that any project that enforces >>> that rule would get the diversity tag. Then it's actually attached to >>> something concrete, and teams could decide for themselves when to drop >>> it (because they would start having difficulty merging stuff otherwise). >>> >>> I'm not entirely sold on this, but it's an idea I had that I wanted to >>> throw out there :) >>> >>> cheers, >>> Zane. >>> >>> >> The point of having the tags is to help consumers of the projects >> understand their health in some capacity. 
In this case we were >> trying to use measures of actual activity within the project to >> help spot projects that are really only maintained by one company, >> with the assumption that such projects are less healthy than others >> being maintained by contributors with more diverse backing. >> > > (Clarification for readers: there are actually 3 levels; getting the > diverse-affiliations tag has a higher bar than dropping the single-vendor > tag.) > > Does basing the tag definition on whether approvals need to come >> from people with diverse affiliation provide enough project health >> information that it would let us use it to replace the current tag? >> > > Yes. Project teams will soon drop this rule if it's the only way to get > patches in. A single-vendor project by definition cannot adopt this rule > and continue to... exist as a project, really. > > It would tell potential users that if one organisation drops out it there > is at least somebody left to review patches, and also guarantee that the > project's direction is not down to the whim of one organisation. > > How many teams enforce the rule you describe? >> > > I don't know. > > I do know that in Heat we never enforced it - at first because it was a > single-vendor project, and then later because it was so diverse (and not > subject to any particular cross-company animosity) that nobody particularly > saw the need to change, and now that many of those vendors have pulled out > of OpenStack because it would be an obstacle to getting patches approved > again. > > I was kind of under the impression that all of the projects used this rule > prior to Heat and Ceilometer being incubated. That may be incorrect. At > least Nova and the projects that have a lot of vendor drivers (and are thus > susceptible to suspicions of bias) - i.e. Cinder & Neutron mainly - may > still follow this rule? I haven't yet found a mention of it in any of the > contributor guides though, so possibly it was dropped OpenStack-wide and I > never noticed. > > Is that rule a sign of a healthy team dynamic, that we would want >> to spread to the whole community? >> > > Yeah, this part I am pretty unsure about too. For some projects it > probably is. For others it may just be an unnecessary obstacle, although I > don't think it'd actually be *un*healthy for any project, assuming a big > enough and diverse enough team (which should be a goal for the whole > community). > > For most projects with small core teams it would obviously be a > showstopper, but the idea would be for them to continue to opt out. > > cheers, > Zane. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Sat Jun 2 00:49:44 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sat, 2 Jun 2018 08:49:44 +0800 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <20180531205517.xcqg7cfikswxqntn@yuggoth.org> References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> <20180531002300.5uff6i6mmot4lq72@yuggoth.org> <20180531205517.xcqg7cfikswxqntn@yuggoth.org> Message-ID: For me nitpicking during review is really not a good experience, however i do think we should tolerate at least one round of nitpicking. On another aspect, the nitpicking review culture also in some way encourage, and provide legitimacy in some way, to the padding activities. People are feeling ok about "fixing dictionary" as we joked. On Fri, Jun 1, 2018 at 4:55 AM, Jeremy Stanley wrote: > On 2018-05-31 16:49:13 -0400 (-0400), John Dennis wrote: > > On 05/30/2018 08:23 PM, Jeremy Stanley wrote: > > > I think this is orthogonal to the thread. The idea is that we should > > > avoid nettling contributors over minor imperfections in their > > > submissions (grammatical, spelling or typographical errors in code > > > comments and documentation, mild inefficiencies in implementations, > > > et cetera). Clearly we shouldn't merge broken features, changes > > > which fail tests/linters, and so on. For me the rule of thumb is, > > > "will the software be better or worse if this is merged?" It's not > > > about perfection or imperfection, it's about incremental > > > improvement. If a proposed change is an improvement, that's enough. > > > If it's not perfect... well, that's just opportunity for more > > > improvement later. > > > > I appreciate the sentiment concerning accepting any improvement yet on > the > > other hand waiting for improvements to the patch to occur later is > folly, it > > won't happen. > > > > Those of us familiar with working with large bodies of code from multiple > > authors spanning an extended time period will tell you it's very > confusing > > when it's obvious most of the code follows certain conventions but there > are > > odd exceptions (often without comments). This inevitably leads to > investing > > a lot of time trying to understand why the exception exists because > "clearly > > it's there for a reason and I'm just missing the rationale" At that point > > the reason for the inconsistency is lost. > > > > At the end of the day it is more important to keep the code base clean > and > > consistent for those that follow than it is to coddle in the near term. > > Sure, I suppose it comes down to your definition of "improvement." I > don't consider a change proposing incomplete or unmaintainable code > to be an improvement. On the other hand I think it's fine to approve > changes which are "good enough" even if there's room for > improvement, so long as they're "good enough" that you're fine with > them possibly never being improved on due to shifts in priorities. > I'm certainly not suggesting that it's a good idea to merge > technical debt with the expectation that someone will find time to > solve it later (any more than it's okay to merge obvious bugs in > hopes someone will come along and fix them for you). 
> -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From joshua.hesketh at gmail.com Sat Jun 2 05:58:59 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Sat, 2 Jun 2018 15:58:59 +1000 Subject: [openstack-dev] Winterscale: a proposal regarding the project infrastructure In-Reply-To: <87r2lrsenh.fsf@meyer.lemoncheese.net> References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> <87r2lrsenh.fsf@meyer.lemoncheese.net> Message-ID: On Fri, Jun 1, 2018 at 7:23 AM, James E. Blair wrote: > Joshua Hesketh writes: > > > So the "winterscale infrastructure council"'s purview is quite limited in > > scope to just govern the services provided? > > > > If so, would you foresee a need to maintain some kind of "Infrastructure > > council" as it exists at the moment to be the technical design body? > > For the foreseeable future, I think the "winterscale infrastructure > team" can probably handle that. If it starts to sprawl again, we can > make a new body. > > > Specifically, wouldn't we still want somewhere for the "winterscale > > infrastructure team" to be represented and would that expand to any > > infrastructure-related core teams? > > Can you elaborate on this? I'm not following. > I think your first response answers this a little bit. That is, the "winterscale infrastructure team" serves the purpose of technical design (that is currently done by the "Infrastructure Council", so we've got some change in terminology that will be initially confusing). Currently though the "Infrastructure Council" includes "All members of any infrastructure project core team" which would include people from say git-review core. My question was how do we still include infrastructure-related core members (such as git-review-core) in the new world order? Hope that makes more sense. Cheers, Josh > > -Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Sat Jun 2 17:23:24 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 02 Jun 2018 13:23:24 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: <1527869418-sup-3208@lrrr.local> Message-ID: <1527960022-sup-7990@lrrr.local> Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: > On 01/06/18 12:18, Doug Hellmann wrote: [snip] > > Is that rule a sign of a healthy team dynamic, that we would want > > to spread to the whole community? > > Yeah, this part I am pretty unsure about too. For some projects it > probably is. 
For others it may just be an unnecessary obstacle, although > I don't think it'd actually be *un*healthy for any project, assuming a > big enough and diverse enough team (which should be a goal for the whole > community). It feels like we would be saying that we don't trust 2 core reviewers from the same company to put the project's goals or priorities over their employer's. And that doesn't feel like an assumption I would want us to encourage through a tag meant to show the health of the project. Maybe I'm reading too much into it? Or it is more of a problem than I have experienced? Doug From hongbin034 at gmail.com Sat Jun 2 17:57:28 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 2 Jun 2018 13:57:28 -0400 Subject: [openstack-dev] [Zun][Octavia] OpenStack Zun as Octavia driver Message-ID: Hi all, We are planning a feature about Zun integration in Octavia [1]. At highest level, the idea is to have Zun to provide docker containers for Octavia to host their software-based load balancing backend. An immediate benefit is to speed up the Octavia's gate so that they can run more test cases for a better coverage. Zun will be beneficial for collecting potential valuable feedback through this practical use case. If anyone interest in this work, please leave a note at the storyboard [1]. I believe the assignee will receive a good support from both Zun and Octavia team and the work will be greatly rewarded. [1] https://storyboard.openstack.org/#!/story/2002117 Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Sat Jun 2 18:14:10 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sat, 2 Jun 2018 13:14:10 -0500 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1527960022-sup-7990@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> Message-ID: <972082d1-d3ba-e7c2-0904-e2ce1c4caaa9@gmx.com> On 06/02/2018 12:23 PM, Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: >> On 01/06/18 12:18, Doug Hellmann wrote: > [snip] > >>> Is that rule a sign of a healthy team dynamic, that we would want >>> to spread to the whole community? >> Yeah, this part I am pretty unsure about too. For some projects it >> probably is. For others it may just be an unnecessary obstacle, although >> I don't think it'd actually be *un*healthy for any project, assuming a >> big enough and diverse enough team (which should be a goal for the whole >> community). > It feels like we would be saying that we don't trust 2 core reviewers > from the same company to put the project's goals or priorities over > their employer's. And that doesn't feel like an assumption I would > want us to encourage through a tag meant to show the health of the > project. I have to agree. In general, I have tried to at least give the opportunity for other cores from other companies to review patches before approving, but there have been times where I have approved patches in Cinder where the only other +2 was someone from the same company. I don't see anything wrong with this in most cases. As an exceptional example, I'm actually happy to see two +2's from Red Hat cores on Ceph related patches. I think it's a good thing to encourage a mix, but I have never considered it a hard and fast rule. > > Maybe I'm reading too much into it? Or it is more of a problem than > I have experienced? 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Sat Jun 2 18:51:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 2 Jun 2018 18:51:47 +0000 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1527960022-sup-7990@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> Message-ID: <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote: [...] > It feels like we would be saying that we don't trust 2 core reviewers > from the same company to put the project's goals or priorities over > their employer's. And that doesn't feel like an assumption I would > want us to encourage through a tag meant to show the health of the > project. [...] That's one way of putting it. On the other hand, if we ostensibly have that sort of guideline (say, two core reviewers shouldn't be the only ones to review a change submitted by someone else from their same organization if the team is large and diverse enough to support such a pattern) then it gives our reviewers a better argument to push back on their management _if_ they're being strongly urged to review/approve certain patches. At least then they can say, "this really isn't going to fly because we have to get a reviewer from another organization to agree it's in the best interests of the project" rather than "fire me if you want but I'm not approving that change, no matter how much your product launch is going to be delayed." While I'd like to think a lot of us have the ability to push back on those sorts of adverse influences directly, I have a feeling not everyone can comfortably do so. On the other hand, it might also just be easy enough to give one of your fellow reviewers in another org a heads up that maybe they should take a look at that patch over there and provide some quick feedback... -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amrith.kumar at gmail.com Sat Jun 2 19:06:27 2018 From: amrith.kumar at gmail.com (amrith.kumar at gmail.com) Date: Sat, 2 Jun 2018 15:06:27 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <20180529214325.2scxi6od6o7o6ss4@yuggoth.org> References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> <1527614177-sup-1244@lrrr.local> <20180529214325.2scxi6od6o7o6ss4@yuggoth.org> Message-ID: <021001d3faa4$cf6ddaa0$6e498fe0$@gmail.com> Every project on the one-way-trip to inactivity starts with what some people will wishfully call a 'transient period' of reduced activity. Once the transient nature is no longer the case (either it becomes active or the transient becomes permanent) the normal process of eviction can begin. As the guy who came up with the maintenance-mode tag, so as to apply it to Trove, I believe that both the diversity tag and the maintenance mode tag have a good reason to exist, and should both be retained independent of each other. The logic always was, and should remain, that diversity is a measure of wide multi-organizational support for a project; not measured in the total volume of commits but the fraction of commits. 
There was much discussion about the knobs in the diversity tag measurement when Flavio made the changes some years back. I'm sorry I didn't attend the session in Vancouver but I'll try and tune in to a TC office hours session and maybe get a rundown of what precipitated this decision to move away from the diversity tag. -amrith > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, May 29, 2018 5:43 PM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [tc] Organizational diversity tag > > On 2018-05-29 13:17:50 -0400 (-0400), Doug Hellmann wrote: > [...] > > We have the status:maintenance-mode tag[3] today. How would a new > > "low-activity" tag be differentiated from the existing one? > [...] > > status:maintenance-mode is (as it says on the tin) a subjective indicator that > a team has entered a transient period of reduced activity. By contrast, a low- > activity tag (maybe it should be something more innocuous like low-churn?) > would be an objective indicator that attempts to make contributor diversity > assertions are doomed to fail the statistical significance test. We could > consider overloading status:maintenance-mode for this purpose, but some > teams perhaps simply don't have large amounts of code change ever and > that's just a normal effect of how they operate. > -- > Jeremy Stanley From amrith.kumar at gmail.com Sat Jun 2 19:08:06 2018 From: amrith.kumar at gmail.com (amrith.kumar at gmail.com) Date: Sat, 2 Jun 2018 15:08:06 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: Message-ID: <022001d3faa5$0a58dca0$1f0a95e0$@gmail.com> > -----Original Message----- > From: Zane Bitter > Sent: Friday, June 1, 2018 10:11 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc] Organizational diversity tag > > On 26/05/18 17:46, Mohammed Naser wrote: > > Hi everyone! > > > > During the TC retrospective at the OpenStack summit last week, the > > topic of the organizational diversity tag is becoming irrelevant was > > brought up by Thierry (ttx)[1]. It seems that for projects that are > > not very active, they can easily lose this tag with a few changes by > > perhaps the infrastructure team for CI related fixes. > > > > As an action item, Thierry and I have paired up in order to look into > > a way to resolve this issue. There have been ideas to switch this to > > a report that is published at the end of the cycle rather than > > continuously. Julia (TheJulia) suggested that we change or track > > different types of diversity. > > > > Before we start diving into solutions, I wanted to bring this topic up > > to the mailing list and ask for any suggestions. In digging the > > codebase behind this[2], I've found that there are some knobs that we > > can also tweak if need-be, or perhaps we can adjust those numbers > > depending on the number of commits. > > Crazy idea: what if we dropped the idea of measuring the diversity and > allowed teams to decide when they applied the tag to themselves like we do > for other tags. (No wait! Come back!) > > Some teams enforce a requirement that the 2 core +2s come from reviewers > with different affiliations. We would say that any project that enforces that > rule would get the diversity tag. Then it's actually attached to something > concrete, and teams could decide for themselves when to drop it (because > they would start having difficulty merging stuff otherwise). 
> [Amrith Kumar] Isn't that what the current formula would flag as being a diverse project 😊 > I'm not entirely sold on this, but it's an idea I had that I wanted to throw out > there :) > > cheers, > Zane. > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Sat Jun 2 19:08:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 02 Jun 2018 15:08:28 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> Message-ID: <1527966421-sup-6019@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +0000: > On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote: > [...] > > It feels like we would be saying that we don't trust 2 core reviewers > > from the same company to put the project's goals or priorities over > > their employer's. And that doesn't feel like an assumption I would > > want us to encourage through a tag meant to show the health of the > > project. > [...] > > That's one way of putting it. On the other hand, if we ostensibly > have that sort of guideline (say, two core reviewers shouldn't be > the only ones to review a change submitted by someone else from > their same organization if the team is large and diverse enough to > support such a pattern) then it gives our reviewers a better > argument to push back on their management _if_ they're being > strongly urged to review/approve certain patches. At least then they > can say, "this really isn't going to fly because we have to get a > reviewer from another organization to agree it's in the best > interests of the project" rather than "fire me if you want but I'm > not approving that change, no matter how much your product launch is > going to be delayed." Do we have that problem? I honestly don't know how much pressure other folks are feeling. My impression is that we've mostly become good at finding the necessary compromises, but my experience doesn't cover all of our teams. > > While I'd like to think a lot of us have the ability to push back on > those sorts of adverse influences directly, I have a feeling not > everyone can comfortably do so. On the other hand, it might also > just be easy enough to give one of your fellow reviewers in another > org a heads up that maybe they should take a look at that patch over > there and provide some quick feedback... From doug at doughellmann.com Sat Jun 2 20:25:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Sat, 02 Jun 2018 16:25:40 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <021001d3faa4$cf6ddaa0$6e498fe0$@gmail.com> References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> <1527614177-sup-1244@lrrr.local> <20180529214325.2scxi6od6o7o6ss4@yuggoth.org> <021001d3faa4$cf6ddaa0$6e498fe0$@gmail.com> Message-ID: <1527970997-sup-2369@lrrr.local> Excerpts from amrith.kumar's message of 2018-06-02 15:06:27 -0400: > Every project on the one-way-trip to inactivity starts with what some people > will wishfully call a 'transient period' of reduced activity. 
Once the > transient nature is no longer the case (either it becomes active or the > transient becomes permanent) the normal process of eviction can begin. As > the guy who came up with the maintenance-mode tag, so as to apply it to > Trove, I believe that both the diversity tag and the maintenance mode tag > have a good reason to exist, and should both be retained independent of each > other. > > The logic always was, and should remain, that diversity is a measure of wide > multi-organizational support for a project; not measured in the total volume > of commits but the fraction of commits. There was much discussion about the > knobs in the diversity tag measurement when Flavio made the changes some > years back. I'm sorry I didn't attend the session in Vancouver but I'll try > and tune in to a TC office hours session and maybe get a rundown of what > precipitated this decision to move away from the diversity tag. We're talking about how to improve reporting on diversity, not stop doing it. Doug From amrith.kumar at gmail.com Sat Jun 2 20:50:08 2018 From: amrith.kumar at gmail.com (amrith.kumar at gmail.com) Date: Sat, 2 Jun 2018 16:50:08 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1527970997-sup-2369@lrrr.local> References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> <1527614177-sup-1244@lrrr.local> <20180529214325.2scxi6od6o7o6ss4@yuggoth.org> <021001d3faa4$cf6ddaa0$6e498fe0$@gmail.com> <1527970997-sup-2369@lrrr.local> Message-ID: <034301d3fab3$4bc83ba0$e358b2e0$@gmail.com> > -----Original Message----- > From: Doug Hellmann > Sent: Saturday, June 2, 2018 4:26 PM > To: openstack-dev > Subject: Re: [openstack-dev] [tc] Organizational diversity tag > > Excerpts from amrith.kumar's message of 2018-06-02 15:06:27 -0400: > > Every project on the one-way-trip to inactivity starts with what some > > people will wishfully call a 'transient period' of reduced activity. > > Once the transient nature is no longer the case (either it becomes > > active or the transient becomes permanent) the normal process of > > eviction can begin. As the guy who came up with the maintenance-mode > > tag, so as to apply it to Trove, I believe that both the diversity tag > > and the maintenance mode tag have a good reason to exist, and should > > both be retained independent of each other. > > > > The logic always was, and should remain, that diversity is a measure > > of wide multi-organizational support for a project; not measured in > > the total volume of commits but the fraction of commits. There was > > much discussion about the knobs in the diversity tag measurement when > > Flavio made the changes some years back. I'm sorry I didn't attend the > > session in Vancouver but I'll try and tune in to a TC office hours > > session and maybe get a rundown of what precipitated this decision to > move away from the diversity tag. > > We're talking about how to improve reporting on diversity, not stop doing it. Why not just automate the thing that we have right now and have something kick a review automatically if the diversity in a team changes (per current formula)? 
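For illustration, the check itself is tiny; something like the sketch below,
where the threshold, the inputs and the field names are all made up for the
example and are not the actual values or tooling in the governance repo:

    # Illustrative sketch: flag a team when a single organization's share
    # of commits or reviews over the sampling window crosses a threshold.
    from collections import Counter

    def diversity_flags(commits_by_org, reviews_by_org, threshold=0.5):
        flags = []
        for kind, data in (('commits', commits_by_org),
                           ('reviews', reviews_by_org)):
            counts = Counter(data)
            total = sum(counts.values())
            if not total:
                continue
            top_org, top_count = counts.most_common(1)[0]
            share = top_count / float(total)
            if share > threshold:
                flags.append((kind, top_org, share))
        return flags

A periodic job could run that per team and automatically propose the
governance review whenever the result changes from the previous run.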
> > Doug > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Sat Jun 2 23:13:23 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Sat, 2 Jun 2018 18:13:23 -0500 Subject: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ... Message-ID: All, This note is to make everyone aware that I have submitted patches for a number of drivers that have not run 3rd party CI in 60 or more days.  The following is a list of the drivers, how long since their CI last ran and links to the patches which mark them as unsupported drivers: * DataCore CI – 99 Days - https://review.openstack.org/571533 * Dell EMC CorpHD CI – 121 Days - https://review.openstack.org/571555 * HGST Solutions CI – 306 Days - https://review.openstack.org/571560 * IBM GPFS CI – 212 Days - https://review.openstack.org/571590 * Itri Disco CI – 110 Days - https://review.openstack.org/571592 * Nimble Storage CI – 78 Days - https://review.openstack.org/571599 * StorPool – Unknown - https://review.openstack.org/571935 * Vedams –HPMSA – 442 Days - https://review.openstack.org/571940 * Brocade OpenStack – CI – 261 Days - https://review.openstack.org/571943 All of these drivers will be marked unsupported for the Rocky release and will be removed in the Stein release if the 3rd Party CI is not returned to a working state. If your driver is on the list and you have questions please respond to this thread and we can discuss what needs to be done to return support for your driver. Thank you for your attention to this matter! Jay (jungleboy) -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Sun Jun 3 01:50:41 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Sun, 3 Jun 2018 10:50:41 +0900 Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0 In-Reply-To: References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> <1526061568-sup-5500@lrrr.local> <1526301210-sup-5803@lrrr.local> Message-ID: Updates on Django 2.0 support. * 18 of 29 affected repositories now support Django 2.0 * 4 repositories have pending patches. * 3 repositories below need help from individual project teams as I don't have actual running environments of them. * heat-dashboard https://review.openstack.org/#/c/567591/ * murano-dashboard https://review.openstack.org/#/c/571950/ * watcher-dashboard * 4 repositories below needs more help as there seems no python3 support or projects looks inactive. monasca-ui, cloudkitty-dashboard, karbor-dashboard, group-based-policy-ui global-requirements and upper-constraints changes are also proposed. Considering good progress in general, I believe we can land requirements changes soon. https://review.openstack.org/#/q/topic:django-version+(status:open+OR+status:merged) Detail progress is found at https://etherpad.openstack.org/p/django20-support Thanks, Akihiro 2018年5月15日(火) 4:21 Ivan Kolodyazhny : > Hi all, > > From the Horizon's perspective, it would be good to support Django 1.11 as > long as we can since it's an LTS release [2]. > Django 2.0 support is also extremely important because of it's the first > step in a python3-only environment and step forward on supporting > next Django 2.2 LTS release which will be released next April. 
> > We have to be careful to not break existing plugins and deployments by > introducing new Django version requirement. > We need to work more closely with plugins teams to getting everything > ready for Django 2.0+ before we change our requirements.txt. > I don't want to introduce any breaking changes for current plugins so we > need to to be sure that each plugin supports Django 2.0. It means > plugins have to have voting Django 2.0 jobs on their gates at least. I'll > do my best on this effort and will work with plugins teams to do as > much as we can in Rocky timeframe. > > [2] https://www.djangoproject.com/download/ > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Mon, May 14, 2018 at 4:30 PM, Akihiro Motoki wrote: > >> >> >> 2018年5月14日(月) 21:42 Doug Hellmann : >> >>> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900: >>> > 2018年5月12日(土) 3:04 Doug Hellmann : >>> > >>> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900: >>> > > > Hi zigo and horizon plugin maintainers, >>> > > > >>> > > > Horizon itself already supports Django 2.0 and horizon unit test >>> covers >>> > > > Django 2.0 with Python 3.5. >>> > > > >>> > > > A question to all is whether we change the upper bound of Django >>> from >>> > > <2.0 >>> > > > to <2.1. >>> > > > My proposal is to bump the upper bound of Django to <2.1 in >>> Rocky-2. >>> > > > (Note that Django 1.11 will continue to be used for python 2.7 >>> > > environment.) >>> > > >>> > > Do we need to cap it at all? We've been trying to express our >>> > > dependencies without caps and rely on the constraints list to >>> > > test using a common version because this offers the most flexibility >>> as >>> > > we move to newer versions over time. >>> > > >>> > >>> > The main reason we cap django version so far is that django minor >>> version >>> > releases >>> > contain some backward incompatible changes and also drop deprecated >>> > features. >>> > A new django minor version release like 1.11 usually breaks horizon and >>> > plugins >>> > as horizon developers are not always checking django deprecations. >>> >>> OK. Having the cap in place makes it more complicated to test >>> upgrading, and then upgrade. Because we no longer synchronize >>> requirements, changing openstack/requirements does not trigger the >>> bot to propose the same change to all of the projects using the >>> dependency. Someone will have to do that by hand in the future, as we >>> are doing with eventlet right now >>> (https://review.openstack.org/#/q/topic:uncap-eventlet). >>> >>> Without the cap, we can test the upgrade by proposing a constraint >>> update and running the horizon (and/or plugin) unit tests. When those >>> tests pass, we can then step forward all at once by approving the >>> constraint change. >>> >> >> Thanks for the detail context. >> >> Honestly I am not sure which is better to cap or uncap the django version. >> We can try uncapping now and see what happens in the community. >> >> cross-horizon-(py27|py35) jobs of openstack/requirements checks >> if horizon works with a new version. it works for horizon, but perhaps it >> potentially >> break horizon plugins as it takes time to catch up with such changes. >> On the other hand, a version bump in upper-constraints.txt would be >> a good trigger for horizon plugin maintainers to sync all requirements. >> >> In addition, requirements are not synchronized automatically, >> so it seems not feasible to propose requirements changes per django >> version change. 
>> >> >>> >>> > >>> > I have a question on uncapping the django version. >>> > How can users/operators know which versions are supported? >>> > Do they need to check upper-constraints.txt? >>> >>> We do tell downstream consumers that the upper-constraints.txt file is >>> the set of things we test with, and that any other combination of >>> packages would need to be tested on their systems separately. >>> >>> > >>> > > > There are several points we should consider: >>> > > > - If we change it in global-requirements.txt, it means Django 2.0 >>> will be >>> > > > used for python3.5 environment. >>> > > > - Not a small number of horizon plugins still do not support >>> Django 2.0, >>> > > so >>> > > > bumping the upper bound to <2.1 will break their py35 tests. >>> > > > - From my experience of Django 2.0 support in some plugins, the >>> required >>> > > > changes are relatively simple like [1]. >>> > > > >>> > > > I created an etherpad page to track Django 2.0 support in horizon >>> > > plugins. >>> > > > https://etherpad.openstack.org/p/django20-support >>> > > > >>> > > > I proposed Django 2.0 support patches to several projects which I >>> think >>> > > are >>> > > > major. >>> > > > # Do not blame me if I don't cover your project :) >>> > > > >>> > > > Thought? >>> > > >>> > > It seems like a good goal for the horizon-plugin author community >>> > > to bring those projects up to date by supporting a current version >>> > > of Django (and any other dependencies), especially as we discuss >>> > > the impending switch over to python-3-first and then python-3-only. >>> > > >>> > >>> > Yes, python 3 support is an important topic. >>> > We also need to switch the default python version in mod_wsgi in >>> DevStack >>> > environment sooner or later. >>> >>> Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor >>> the variable that tells devstack to use Python 3? >>> >> >> Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi >> (libapache2-mod-wsgi >> and libapache2-mod-wsgi-py3) and as a quick look the only difference is a >> module >> specified in LoadModule apache directive. >> I haven't tested it yet, but it seems worth explored. >> >> Akihiro >> >> >>> > >>> > > If this is an area where teams need help, updating that etherpad >>> > > with notes and requests for assistance will help us split up the >>> > > work. >>> > > >>> > >>> > Each team can help testing in Django 2.0 and/or python 3 support. >>> > We need to enable corresponding server projects in development >>> environments, >>> > but it is not easy to setup all projects by horizon team. Individual >>> > projects must be >>> > more familiar with their own projects. >>> > I sent several patches, but I actually tested them by unit tests. >>> > >>> > Thanks, >>> > Akihiro >>> > >>> > > >>> > > Doug >>> > > >>> > > > >>> > > > Thanks, >>> > > > Akihiro >>> > > > >>> > > > [1] https://review.openstack.org/#/c/566476/ >>> > > > >>> > > > 2018年5月8日(火) 17:45 Thomas Goirand : >>> > > > >>> > > > > Hi, >>> > > > > >>> > > > > It has been decided that, in Debian, we'll switch to Django 2.0 >>> after >>> > > > > Buster will be released. Buster is to be frozen next February. >>> This >>> > > > > means that we have roughly one more year before Django 1.x goes >>> away. >>> > > > > >>> > > > > Hopefully, Horizon will be ready for it, right? 
>>> > > > > >>> > > > > Hoping this helps, >>> > > > > Cheers, >>> > > > > >>> > > > > Thomas Goirand (zigo) >>> > > > > >>> > > > > >>> > > >>> __________________________________________________________________________ >>> > > > > OpenStack Development Mailing List (not for usage questions) >>> > > > > Unsubscribe: >>> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > > > > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > > > > >>> > > >>> > > >>> __________________________________________________________________________ >>> > > OpenStack Development Mailing List (not for usage questions) >>> > > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > > >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zulcss at gmail.com Sun Jun 3 01:56:19 2018 From: zulcss at gmail.com (Chuck Short) Date: Sat, 2 Jun 2018 21:56:19 -0400 Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0 In-Reply-To: References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> <1526061568-sup-5500@lrrr.local> <1526301210-sup-5803@lrrr.local> Message-ID: Hi On Sat, Jun 2, 2018 at 9:50 PM, Akihiro Motoki wrote: > Updates on Django 2.0 support. > > * 18 of 29 affected repositories now support Django 2.0 > * 4 repositories have pending patches. > * 3 repositories below need help from individual project teams as I don't > have actual running environments of them. > * heat-dashboard https://review.openstack.org/#/c/567591/ > * murano-dashboard https://review.openstack.org/#/c/571950/ > * watcher-dashboard > * 4 repositories below needs more help as there seems no python3 support > or projects looks inactive. > monasca-ui, cloudkitty-dashboard, karbor-dashboard, > group-based-policy-ui > > Monasca-ui has python3 support however the CI hasn't been enabled. > global-requirements and upper-constraints changes are also proposed. > Considering good progress in general, I believe we can land requirements > changes soon. > https://review.openstack.org/#/q/topic:django-version+( > status:open+OR+status:merged) > > Detail progress is found at https://etherpad.openstack. > org/p/django20-support > > Thanks, > Akihiro > > 2018年5月15日(火) 4:21 Ivan Kolodyazhny : > >> Hi all, >> >> From the Horizon's perspective, it would be good to support Django 1.11 >> as long as we can since it's an LTS release [2]. 
>> Django 2.0 support is also extremely important because of it's the first >> step in a python3-only environment and step forward on supporting >> next Django 2.2 LTS release which will be released next April. >> >> We have to be careful to not break existing plugins and deployments by >> introducing new Django version requirement. >> We need to work more closely with plugins teams to getting everything >> ready for Django 2.0+ before we change our requirements.txt. >> I don't want to introduce any breaking changes for current plugins so we >> need to to be sure that each plugin supports Django 2.0. It means >> plugins have to have voting Django 2.0 jobs on their gates at least. I'll >> do my best on this effort and will work with plugins teams to do as >> much as we can in Rocky timeframe. >> >> [2] https://www.djangoproject.com/download/ >> >> Regards, >> Ivan Kolodyazhny, >> http://blog.e0ne.info/ >> >> On Mon, May 14, 2018 at 4:30 PM, Akihiro Motoki >> wrote: >> >>> >>> >>> 2018年5月14日(月) 21:42 Doug Hellmann : >>> >>>> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900: >>>> > 2018年5月12日(土) 3:04 Doug Hellmann : >>>> > >>>> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900: >>>> > > > Hi zigo and horizon plugin maintainers, >>>> > > > >>>> > > > Horizon itself already supports Django 2.0 and horizon unit test >>>> covers >>>> > > > Django 2.0 with Python 3.5. >>>> > > > >>>> > > > A question to all is whether we change the upper bound of Django >>>> from >>>> > > <2.0 >>>> > > > to <2.1. >>>> > > > My proposal is to bump the upper bound of Django to <2.1 in >>>> Rocky-2. >>>> > > > (Note that Django 1.11 will continue to be used for python 2.7 >>>> > > environment.) >>>> > > >>>> > > Do we need to cap it at all? We've been trying to express our >>>> > > dependencies without caps and rely on the constraints list to >>>> > > test using a common version because this offers the most >>>> flexibility as >>>> > > we move to newer versions over time. >>>> > > >>>> > >>>> > The main reason we cap django version so far is that django minor >>>> version >>>> > releases >>>> > contain some backward incompatible changes and also drop deprecated >>>> > features. >>>> > A new django minor version release like 1.11 usually breaks horizon >>>> and >>>> > plugins >>>> > as horizon developers are not always checking django deprecations. >>>> >>>> OK. Having the cap in place makes it more complicated to test >>>> upgrading, and then upgrade. Because we no longer synchronize >>>> requirements, changing openstack/requirements does not trigger the >>>> bot to propose the same change to all of the projects using the >>>> dependency. Someone will have to do that by hand in the future, as we >>>> are doing with eventlet right now >>>> (https://review.openstack.org/#/q/topic:uncap-eventlet). >>>> >>>> Without the cap, we can test the upgrade by proposing a constraint >>>> update and running the horizon (and/or plugin) unit tests. When those >>>> tests pass, we can then step forward all at once by approving the >>>> constraint change. >>>> >>> >>> Thanks for the detail context. >>> >>> Honestly I am not sure which is better to cap or uncap the django >>> version. >>> We can try uncapping now and see what happens in the community. >>> >>> cross-horizon-(py27|py35) jobs of openstack/requirements checks >>> if horizon works with a new version. it works for horizon, but perhaps >>> it potentially >>> break horizon plugins as it takes time to catch up with such changes. 
>>> On the other hand, a version bump in upper-constraints.txt would be >>> a good trigger for horizon plugin maintainers to sync all requirements. >>> >>> In addition, requirements are not synchronized automatically, >>> so it seems not feasible to propose requirements changes per django >>> version change. >>> >>> >>>> >>>> > >>>> > I have a question on uncapping the django version. >>>> > How can users/operators know which versions are supported? >>>> > Do they need to check upper-constraints.txt? >>>> >>>> We do tell downstream consumers that the upper-constraints.txt file is >>>> the set of things we test with, and that any other combination of >>>> packages would need to be tested on their systems separately. >>>> >>>> > >>>> > > > There are several points we should consider: >>>> > > > - If we change it in global-requirements.txt, it means Django 2.0 >>>> will be >>>> > > > used for python3.5 environment. >>>> > > > - Not a small number of horizon plugins still do not support >>>> Django 2.0, >>>> > > so >>>> > > > bumping the upper bound to <2.1 will break their py35 tests. >>>> > > > - From my experience of Django 2.0 support in some plugins, the >>>> required >>>> > > > changes are relatively simple like [1]. >>>> > > > >>>> > > > I created an etherpad page to track Django 2.0 support in horizon >>>> > > plugins. >>>> > > > https://etherpad.openstack.org/p/django20-support >>>> > > > >>>> > > > I proposed Django 2.0 support patches to several projects which I >>>> think >>>> > > are >>>> > > > major. >>>> > > > # Do not blame me if I don't cover your project :) >>>> > > > >>>> > > > Thought? >>>> > > >>>> > > It seems like a good goal for the horizon-plugin author community >>>> > > to bring those projects up to date by supporting a current version >>>> > > of Django (and any other dependencies), especially as we discuss >>>> > > the impending switch over to python-3-first and then python-3-only. >>>> > > >>>> > >>>> > Yes, python 3 support is an important topic. >>>> > We also need to switch the default python version in mod_wsgi in >>>> DevStack >>>> > environment sooner or later. >>>> >>>> Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor >>>> the variable that tells devstack to use Python 3? >>>> >>> >>> Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi >>> (libapache2-mod-wsgi >>> and libapache2-mod-wsgi-py3) and as a quick look the only difference is >>> a module >>> specified in LoadModule apache directive. >>> I haven't tested it yet, but it seems worth explored. >>> >>> Akihiro >>> >>> >>>> > >>>> > > If this is an area where teams need help, updating that etherpad >>>> > > with notes and requests for assistance will help us split up the >>>> > > work. >>>> > > >>>> > >>>> > Each team can help testing in Django 2.0 and/or python 3 support. >>>> > We need to enable corresponding server projects in development >>>> environments, >>>> > but it is not easy to setup all projects by horizon team. Individual >>>> > projects must be >>>> > more familiar with their own projects. >>>> > I sent several patches, but I actually tested them by unit tests. 
>>>> > >>>> > Thanks, >>>> > Akihiro >>>> > >>>> > > >>>> > > Doug >>>> > > >>>> > > > >>>> > > > Thanks, >>>> > > > Akihiro >>>> > > > >>>> > > > [1] https://review.openstack.org/#/c/566476/ >>>> > > > >>>> > > > 2018年5月8日(火) 17:45 Thomas Goirand : >>>> > > > >>>> > > > > Hi, >>>> > > > > >>>> > > > > It has been decided that, in Debian, we'll switch to Django 2.0 >>>> after >>>> > > > > Buster will be released. Buster is to be frozen next February. >>>> This >>>> > > > > means that we have roughly one more year before Django 1.x goes >>>> away. >>>> > > > > >>>> > > > > Hopefully, Horizon will be ready for it, right? >>>> > > > > >>>> > > > > Hoping this helps, >>>> > > > > Cheers, >>>> > > > > >>>> > > > > Thomas Goirand (zigo) >>>> > > > > >>>> > > > > >>>> > > ____________________________________________________________ >>>> ______________ >>>> > > > > OpenStack Development Mailing List (not for usage questions) >>>> > > > > Unsubscribe: >>>> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>> openstack-dev >>>> > > > > >>>> > > >>>> > > ____________________________________________________________ >>>> ______________ >>>> > > OpenStack Development Mailing List (not for usage questions) >>>> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>> unsubscribe >>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > > >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>> unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Sun Jun 3 02:45:51 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Sun, 3 Jun 2018 11:45:51 +0900 Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0 In-Reply-To: References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> <1526061568-sup-5500@lrrr.local> <1526301210-sup-5803@lrrr.local> Message-ID: 2018年6月3日(日) 10:56 Chuck Short : > Hi > > On Sat, Jun 2, 2018 at 9:50 PM, Akihiro Motoki wrote: > >> Updates on Django 2.0 support. >> >> * 18 of 29 affected repositories now support Django 2.0 >> * 4 repositories have pending patches. >> * 3 repositories below need help from individual project teams as I don't >> have actual running environments of them. 
>> * heat-dashboard https://review.openstack.org/#/c/567591/ >> * murano-dashboard https://review.openstack.org/#/c/571950/ >> * watcher-dashboard >> * 4 repositories below needs more help as there seems no python3 support >> or projects looks inactive. >> monasca-ui, cloudkitty-dashboard, karbor-dashboard, >> group-based-policy-ui >> >> > Monasca-ui has python3 support however the CI hasn't been enabled. > Considering my bandwidth, it would be nice if monasca-ui team can work on django2.0 support. > > >> global-requirements and upper-constraints changes are also proposed. >> Considering good progress in general, I believe we can land requirements >> changes soon. >> >> https://review.openstack.org/#/q/topic:django-version+(status:open+OR+status:merged) >> >> Detail progress is found at >> https://etherpad.openstack.org/p/django20-support >> >> Thanks, >> Akihiro >> >> 2018年5月15日(火) 4:21 Ivan Kolodyazhny : >> >>> Hi all, >>> >>> From the Horizon's perspective, it would be good to support Django 1.11 >>> as long as we can since it's an LTS release [2]. >>> Django 2.0 support is also extremely important because of it's the first >>> step in a python3-only environment and step forward on supporting >>> next Django 2.2 LTS release which will be released next April. >>> >>> We have to be careful to not break existing plugins and deployments by >>> introducing new Django version requirement. >>> We need to work more closely with plugins teams to getting everything >>> ready for Django 2.0+ before we change our requirements.txt. >>> I don't want to introduce any breaking changes for current plugins so we >>> need to to be sure that each plugin supports Django 2.0. It means >>> plugins have to have voting Django 2.0 jobs on their gates at least. >>> I'll do my best on this effort and will work with plugins teams to do as >>> much as we can in Rocky timeframe. >>> >>> [2] https://www.djangoproject.com/download/ >>> >>> Regards, >>> Ivan Kolodyazhny, >>> http://blog.e0ne.info/ >>> >>> On Mon, May 14, 2018 at 4:30 PM, Akihiro Motoki >>> wrote: >>> >>>> >>>> >>>> 2018年5月14日(月) 21:42 Doug Hellmann : >>>> >>>>> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900: >>>>> > 2018年5月12日(土) 3:04 Doug Hellmann : >>>>> > >>>>> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 >>>>> +0900: >>>>> > > > Hi zigo and horizon plugin maintainers, >>>>> > > > >>>>> > > > Horizon itself already supports Django 2.0 and horizon unit test >>>>> covers >>>>> > > > Django 2.0 with Python 3.5. >>>>> > > > >>>>> > > > A question to all is whether we change the upper bound of Django >>>>> from >>>>> > > <2.0 >>>>> > > > to <2.1. >>>>> > > > My proposal is to bump the upper bound of Django to <2.1 in >>>>> Rocky-2. >>>>> > > > (Note that Django 1.11 will continue to be used for python 2.7 >>>>> > > environment.) >>>>> > > >>>>> > > Do we need to cap it at all? We've been trying to express our >>>>> > > dependencies without caps and rely on the constraints list to >>>>> > > test using a common version because this offers the most >>>>> flexibility as >>>>> > > we move to newer versions over time. >>>>> > > >>>>> > >>>>> > The main reason we cap django version so far is that django minor >>>>> version >>>>> > releases >>>>> > contain some backward incompatible changes and also drop deprecated >>>>> > features. >>>>> > A new django minor version release like 1.11 usually breaks horizon >>>>> and >>>>> > plugins >>>>> > as horizon developers are not always checking django deprecations. >>>>> >>>>> OK. 
Having the cap in place makes it more complicated to test >>>>> upgrading, and then upgrade. Because we no longer synchronize >>>>> requirements, changing openstack/requirements does not trigger the >>>>> bot to propose the same change to all of the projects using the >>>>> dependency. Someone will have to do that by hand in the future, as we >>>>> are doing with eventlet right now >>>>> (https://review.openstack.org/#/q/topic:uncap-eventlet). >>>>> >>>>> Without the cap, we can test the upgrade by proposing a constraint >>>>> update and running the horizon (and/or plugin) unit tests. When those >>>>> tests pass, we can then step forward all at once by approving the >>>>> constraint change. >>>>> >>>> >>>> Thanks for the detail context. >>>> >>>> Honestly I am not sure which is better to cap or uncap the django >>>> version. >>>> We can try uncapping now and see what happens in the community. >>>> >>>> cross-horizon-(py27|py35) jobs of openstack/requirements checks >>>> if horizon works with a new version. it works for horizon, but perhaps >>>> it potentially >>>> break horizon plugins as it takes time to catch up with such changes. >>>> On the other hand, a version bump in upper-constraints.txt would be >>>> a good trigger for horizon plugin maintainers to sync all requirements. >>>> >>>> In addition, requirements are not synchronized automatically, >>>> so it seems not feasible to propose requirements changes per django >>>> version change. >>>> >>>> >>>>> >>>>> > >>>>> > I have a question on uncapping the django version. >>>>> > How can users/operators know which versions are supported? >>>>> > Do they need to check upper-constraints.txt? >>>>> >>>>> We do tell downstream consumers that the upper-constraints.txt file is >>>>> the set of things we test with, and that any other combination of >>>>> packages would need to be tested on their systems separately. >>>>> >>>>> > >>>>> > > > There are several points we should consider: >>>>> > > > - If we change it in global-requirements.txt, it means Django >>>>> 2.0 will be >>>>> > > > used for python3.5 environment. >>>>> > > > - Not a small number of horizon plugins still do not support >>>>> Django 2.0, >>>>> > > so >>>>> > > > bumping the upper bound to <2.1 will break their py35 tests. >>>>> > > > - From my experience of Django 2.0 support in some plugins, the >>>>> required >>>>> > > > changes are relatively simple like [1]. >>>>> > > > >>>>> > > > I created an etherpad page to track Django 2.0 support in horizon >>>>> > > plugins. >>>>> > > > https://etherpad.openstack.org/p/django20-support >>>>> > > > >>>>> > > > I proposed Django 2.0 support patches to several projects which >>>>> I think >>>>> > > are >>>>> > > > major. >>>>> > > > # Do not blame me if I don't cover your project :) >>>>> > > > >>>>> > > > Thought? >>>>> > > >>>>> > > It seems like a good goal for the horizon-plugin author community >>>>> > > to bring those projects up to date by supporting a current version >>>>> > > of Django (and any other dependencies), especially as we discuss >>>>> > > the impending switch over to python-3-first and then python-3-only. >>>>> > > >>>>> > >>>>> > Yes, python 3 support is an important topic. >>>>> > We also need to switch the default python version in mod_wsgi in >>>>> DevStack >>>>> > environment sooner or later. >>>>> >>>>> Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor >>>>> the variable that tells devstack to use Python 3? 
>>>>> >>>> >>>> Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi >>>> (libapache2-mod-wsgi >>>> and libapache2-mod-wsgi-py3) and as a quick look the only difference is >>>> a module >>>> specified in LoadModule apache directive. >>>> I haven't tested it yet, but it seems worth explored. >>>> >>>> Akihiro >>>> >>>> >>>>> > >>>>> > > If this is an area where teams need help, updating that etherpad >>>>> > > with notes and requests for assistance will help us split up the >>>>> > > work. >>>>> > > >>>>> > >>>>> > Each team can help testing in Django 2.0 and/or python 3 support. >>>>> > We need to enable corresponding server projects in development >>>>> environments, >>>>> > but it is not easy to setup all projects by horizon team. Individual >>>>> > projects must be >>>>> > more familiar with their own projects. >>>>> > I sent several patches, but I actually tested them by unit tests. >>>>> > >>>>> > Thanks, >>>>> > Akihiro >>>>> > >>>>> > > >>>>> > > Doug >>>>> > > >>>>> > > > >>>>> > > > Thanks, >>>>> > > > Akihiro >>>>> > > > >>>>> > > > [1] https://review.openstack.org/#/c/566476/ >>>>> > > > >>>>> > > > 2018年5月8日(火) 17:45 Thomas Goirand : >>>>> > > > >>>>> > > > > Hi, >>>>> > > > > >>>>> > > > > It has been decided that, in Debian, we'll switch to Django >>>>> 2.0 after >>>>> > > > > Buster will be released. Buster is to be frozen next February. >>>>> This >>>>> > > > > means that we have roughly one more year before Django 1.x >>>>> goes away. >>>>> > > > > >>>>> > > > > Hopefully, Horizon will be ready for it, right? >>>>> > > > > >>>>> > > > > Hoping this helps, >>>>> > > > > Cheers, >>>>> > > > > >>>>> > > > > Thomas Goirand (zigo) >>>>> > > > > >>>>> > > > > >>>>> > > >>>>> __________________________________________________________________________ >>>>> > > > > OpenStack Development Mailing List (not for usage questions) >>>>> > > > > Unsubscribe: >>>>> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> > > > > >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> > > > > >>>>> > > >>>>> > > >>>>> __________________________________________________________________________ >>>>> > > OpenStack Development Mailing List (not for usage questions) >>>>> > > Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> > > >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> 
OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oaanson at gmail.com Sun Jun 3 06:10:26 2018 From: oaanson at gmail.com (Omer Anson) Date: Sun, 3 Jun 2018 09:10:26 +0300 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project In-Reply-To: <20180601175423.GA1364@sm-xps> References: <20180601163136.GA29961@sm-xps> <1527873028-sup-1636@lrrr.local> <20180601175423.GA1364@sm-xps> Message-ID: Hi, If the issue is just the tagging, I'll tag the releases today/tomorrow. I figured that since Dragonflow has an independent release cycle, and we have very little manpower, regular tagging makes less sense and would save us a little time. Thanks, Omer On Fri, 1 Jun 2018 at 20:54, Sean McGinnis wrote: > On Fri, Jun 01, 2018 at 01:29:41PM -0400, Doug Hellmann wrote: > > That presentation says "Users should do their own tagging/release > > management" (6:31). I don't think that's really an approach we want > > to be encouraging project teams to take. > > > I hadn't had a chance to watch the presentation yet. It also states right > aroung there that there is only one dev on the project. That really > concerns > me. > > And in very strong agreement - we definitely do not want to be encouraging > project consumers to be the ones tagging and doing their own releases. > > We would certainly welcome anyone interested to get involved in the > project and > be added as an official release liaison so they can request official > releases > though. > > > I would suggest placing Dragonflow in maintenance mode, but if the > > team doesn't have the resources to participate in the normal community > > processes, maybe it should be moved out of the official project > > list instead? > > > > Do we have any sort of indication of how many deployments rely on > > Dragonflow? Does the neutron team have capacity to bring Dragonflow > > back in to their list of managed repos and help them with releases > > and other common process tasks? > > > > Excerpts from Miguel Lavalle's message of 2018-06-01 11:38:53 -0500: > > > There was an project update presentation in Vancouver: > > > > https://www.openstack.org/videos/vancouver-2018/dragonflow-project-update-2 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhu.bingbing at 99cloud.net Mon Jun 4 06:23:25 2018 From: zhu.bingbing at 99cloud.net (zhubingbing) Date: Mon, 4 Jun 2018 14:23:25 +0800 (CST) Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: <488ad9c8.386a.163c976f0f0.Coremail.zhu.bingbing@99cloud.net> +1 At 2018-06-01 01:02:27, "Borne Mace" wrote: >Greetings all, > >I would like to propose the addition of Steve Noyes to the kolla-cli >core reviewer team. Consider this nomination as my personal +1. > >Steve has a long history with the kolla-cli and should be considered its >co-creator as probably half or more of the existing code was due to his >efforts. He has now been working diligently since it was pushed >upstream to improve the stability and testability of the cli and has the >second most commits on the project. > >The kolla core team consists of 19 people, and the kolla-cli team of 2, >for a total of 21. Steve therefore requires a minimum of 11 votes (so >just 10 more after my +1), with no veto -2 votes within a 7 day voting >window to end on June 6th. Voting will be closed immediately on a veto >or in the case of a unanimous vote. > >As I'm not sure how active all of the 19 kolla cores are, your attention >and timely vote is much appreciated. > >Thanks! > >-- Borne > > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon Jun 4 06:43:07 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 3 Jun 2018 23:43:07 -0700 Subject: [openstack-dev] [tripleo] Containerized Undercloud by default In-Reply-To: References: Message-ID: On Thu, May 31, 2018 at 9:13 PM, Emilien Macchi wrote: > > - all multinode scenarios - current blocked by 1774297 as well but also >> https://review.openstack.org/#/c/571566/ >> > This part is done and ready for review (CI team + others): https://review.openstack.org/#/c/571529/ Thanks! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From duonghq at vn.fujitsu.com Mon Jun 4 08:30:26 2018 From: duonghq at vn.fujitsu.com (Ha Quang, Duong) Date: Mon, 4 Jun 2018 08:30:26 +0000 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: <87af9d91b6e84b83a8f5f639bdf11cb1@G07SGEXCMSGPS05.g07.fujitsu.local> Hi, +1 from me, thanks for your works. Duong > -----Original Message----- > From: Borne Mace [mailto:borne.mace at oracle.com] > Sent: Friday, June 01, 2018 12:02 AM > To: openstack-dev > Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli > core reviewer > > Greetings all, > > I would like to propose the addition of Steve Noyes to the kolla-cli core > reviewer team. Consider this nomination as my personal +1. > > Steve has a long history with the kolla-cli and should be considered its co- > creator as probably half or more of the existing code was due to his efforts. 
> He has now been working diligently since it was pushed upstream to improve > the stability and testability of the cli and has the second most commits on the > project. > > The kolla core team consists of 19 people, and the kolla-cli team of 2, for a > total of 21. Steve therefore requires a minimum of 11 votes (so just 10 more > after my +1), with no veto -2 votes within a 7 day voting window to end on > June 6th. Voting will be closed immediately on a veto or in the case of a > unanimous vote. > > As I'm not sure how active all of the 19 kolla cores are, your attention and > timely vote is much appreciated. > > Thanks! > > -- Borne > > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kysung at devstack.co.kr Mon Jun 4 09:39:39 2018 From: kysung at devstack.co.kr (KiYoun Sung) Date: Mon, 4 Jun 2018 18:39:39 +0900 Subject: [openstack-dev] [queens][ceilometer][gnocchi] no network resource, after installation openstack queens Message-ID: Hello. I installed Openstack Queens version using Openstack-ansible. and I set gnocchi, ceilometer for metering. After installation, I got metric from instance, image, swift, etc. but, there is no metric by network I did by gnocchi cli, like this $ gnocchi resource-type list the network resource-type is exist. but, "$ gnocchi resource list" is empty. I made a external network and created floating ip. but, there is no network resource. neutron.conf has [oslo_messaging_notifications] field below: [oslo_messaging_notifications] notification_topics = notifications driver = messagingv2 transport_url = rabbit://neutron:xxxx at 172.29.238.44:5671//neutron How can I get network resoruce(especially floating ip)? What is problem? Thank you. Best regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Jun 4 09:51:30 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 4 Jun 2018 11:51:30 +0200 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project In-Reply-To: References: <20180601163136.GA29961@sm-xps> <1527873028-sup-1636@lrrr.local> <20180601175423.GA1364@sm-xps> Message-ID: Omer Anson wrote: > If the issue is just the tagging, I'll tag the releases today/tomorrow. > I figured that since Dragonflow has an independent release cycle, and we > have very little manpower, regular tagging makes less sense and would > save us a little time. Thanks Omer! For tagging, I suggest you use a change proposed to the openstack/releases repository, so that we can test that the release will work. 
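As a rough illustration only (the file name, version number, type, and hash below are placeholders, not Dragonflow's actual values), such a change is a small YAML edit to the deliverable file for an independent project:

```yaml
# deliverables/_independent/dragonflow.yaml -- illustrative sketch only
launchpad: dragonflow
team: dragonflow
type: other
release-model: independent
releases:
  - version: 4.0.0            # placeholder version
    projects:
      - repo: openstack/dragonflow
        hash: 0123456789abcdef0123456789abcdef01234567  # commit to tag
```

The validation jobs on the proposed change then check that the tag can actually be built and published.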
Don't hesitate to ping us on #openstack-releases or read the doc at: http://git.openstack.org/cgit/openstack/releases/tree/README.rst -- Thierry Carrez (ttx) From thierry at openstack.org Mon Jun 4 10:06:49 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 4 Jun 2018 12:06:49 +0200 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <034301d3fab3$4bc83ba0$e358b2e0$@gmail.com> References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> <1527614177-sup-1244@lrrr.local> <20180529214325.2scxi6od6o7o6ss4@yuggoth.org> <021001d3faa4$cf6ddaa0$6e498fe0$@gmail.com> <1527970997-sup-2369@lrrr.local> <034301d3fab3$4bc83ba0$e358b2e0$@gmail.com> Message-ID: amrith.kumar at gmail.com wrote: >> -----Original Message----- >> From: Doug Hellmann >> Sent: Saturday, June 2, 2018 4:26 PM >> To: openstack-dev >> Subject: Re: [openstack-dev] [tc] Organizational diversity tag >> >> Excerpts from amrith.kumar's message of 2018-06-02 15:06:27 -0400: >>> Every project on the one-way-trip to inactivity starts with what some >>> people will wishfully call a 'transient period' of reduced activity. >>> Once the transient nature is no longer the case (either it becomes >>> active or the transient becomes permanent) the normal process of >>> eviction can begin. As the guy who came up with the maintenance-mode >>> tag, so as to apply it to Trove, I believe that both the diversity tag >>> and the maintenance mode tag have a good reason to exist, and should >>> both be retained independent of each other. >>> >>> The logic always was, and should remain, that diversity is a measure >>> of wide multi-organizational support for a project; not measured in >>> the total volume of commits but the fraction of commits. There was >>> much discussion about the knobs in the diversity tag measurement when >>> Flavio made the changes some years back. I'm sorry I didn't attend the >>> session in Vancouver but I'll try and tune in to a TC office hours >>> session and maybe get a rundown of what precipitated this decision to >> move away from the diversity tag. >> >> We're talking about how to improve reporting on diversity, not stop doing it. > > Why not just automate the thing that we have right now and have something kick a review automatically if the diversity in a team changes (per current formula)? That is what we did: get the thing we have right now to propose changes. But we always had a quick human pass to check that what the script proposed corresponded to a reality. Lately (with lower activity in a number of teams), more and more automatically-proposed changes did not match a reality anymore, to the point where a majority of the proposed changes need to be dropped. Example: a low-activity single-vendor project team suddenly loses the tag because one person pushes a patch to fix zuul jobs and another pushes a doc build fix. Example 2: a team with 3 core reveiwers flaps between diverse affiliation and single-vendor depending on who does the core reviewing on its 3 patches per month. Hence the suggestion to either improve our metrics to better support low-activity teams, or switch to a more qualitative/prose report instead of quantitative/tags. 
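To make the flapping concrete, here is a minimal sketch of a commit-fraction check (the 90% threshold and the helper name are assumptions for illustration, not the actual governance tooling):

```python
from collections import Counter

def looks_single_vendor(commit_affiliations, threshold=0.9):
    """Return True when one organization authored more than `threshold`
    of the commits in the sampling window. Threshold is illustrative."""
    if not commit_affiliations:
        return False
    _org, top_count = Counter(commit_affiliations).most_common(1)[0]
    return top_count / len(commit_affiliations) > threshold

# A quiet team: three patches in the window, all from one company.
print(looks_single_vendor(["OrgA", "OrgA", "OrgA"]))                   # True
# One zuul-job fix and one doc fix from other orgs flip the result.
print(looks_single_vendor(["OrgA", "OrgA", "OrgA", "OrgB", "OrgC"]))   # False
```

With so few commits in the window, a couple of drive-by fixes move the fraction across the threshold, which is why the automatically-proposed tag changes flap for low-activity teams.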
-- Thierry Carrez (ttx) From witold.bedyk at est.fujitsu.com Mon Jun 4 11:27:14 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Mon, 4 Jun 2018 11:27:14 +0000 Subject: [openstack-dev] [monasca] Nominating Doug Szumski as Monasca core Message-ID: <04d9c25994e041869ca51607c47feb67@R01UKEXCASM126.r01.fujitsu.local> Hello Monasca team, I would like to nominate Doug Szumski (dougsz) for Monasca core team. He actively contributes to the project and works on adding Monasca to kolla-ansible. He has good project overview which he shares in his reviews. Best greetings Witek From gdubreui at redhat.com Mon Jun 4 11:57:33 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Mon, 4 Jun 2018 21:57:33 +1000 Subject: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core) Message-ID: <5f993fb7-d4c9-15a1-c192-61d6d5562a53@redhat.com> Hi, Can someone from the core team request infra to create a feature branch for the Proof of Concept we agreed to do during API SIG forum session [1] a Vancouver? Thanks, Gilles [1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum From jungleboyj at gmail.com Mon Jun 4 12:05:00 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 4 Jun 2018 07:05:00 -0500 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1527966421-sup-6019@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> <1527966421-sup-6019@lrrr.local> Message-ID: <2ed6661c-3020-6f70-20c4-e56855aeb326@gmail.com> On 6/2/2018 2:08 PM, Doug Hellmann wrote: > Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +0000: >> On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote: >> [...] >>> It feels like we would be saying that we don't trust 2 core reviewers >>> from the same company to put the project's goals or priorities over >>> their employer's. And that doesn't feel like an assumption I would >>> want us to encourage through a tag meant to show the health of the >>> project. >> [...] >> >> That's one way of putting it. On the other hand, if we ostensibly >> have that sort of guideline (say, two core reviewers shouldn't be >> the only ones to review a change submitted by someone else from >> their same organization if the team is large and diverse enough to >> support such a pattern) then it gives our reviewers a better >> argument to push back on their management _if_ they're being >> strongly urged to review/approve certain patches. At least then they >> can say, "this really isn't going to fly because we have to get a >> reviewer from another organization to agree it's in the best >> interests of the project" rather than "fire me if you want but I'm >> not approving that change, no matter how much your product launch is >> going to be delayed." > Do we have that problem? I honestly don't know how much pressure other > folks are feeling. My impression is that we've mostly become good at > finding the necessary compromises, but my experience doesn't cover all > of our teams. In my experience this hasn't been a problem for quite some time.  In the past, at least for Cinder, there were some minor cases of this but as projects have matured this has been less of an issue. >> While I'd like to think a lot of us have the ability to push back on >> those sorts of adverse influences directly, I have a feeling not >> everyone can comfortably do so. 
On the other hand, it might also >> just be easy enough to give one of your fellow reviewers in another >> org a heads up that maybe they should take a look at that patch over >> there and provide some quick feedback... > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Mon Jun 4 12:20:33 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 4 Jun 2018 08:20:33 -0400 Subject: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core) In-Reply-To: <5f993fb7-d4c9-15a1-c192-61d6d5562a53@redhat.com> References: <5f993fb7-d4c9-15a1-c192-61d6d5562a53@redhat.com> Message-ID: <69FC568F-D687-4EC8-AAEE-9FB3C5695F1A@doughellmann.com> > On Jun 4, 2018, at 7:57 AM, Gilles Dubreuil wrote: > > Hi, > > Can someone from the core team request infra to create a feature branch for the Proof of Concept we agreed to do during API SIG forum session [1] a Vancouver? > > Thanks, > Gilles > > [1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum You can do this through the releases repo now. See the README for instructions. Doug From sean.mcginnis at gmx.com Mon Jun 4 12:25:14 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 4 Jun 2018 07:25:14 -0500 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project In-Reply-To: References: <20180601163136.GA29961@sm-xps> <1527873028-sup-1636@lrrr.local> <20180601175423.GA1364@sm-xps> Message-ID: <20180604122513.GA63851@smcginnis-mbp.local> On Sun, Jun 03, 2018 at 09:10:26AM +0300, Omer Anson wrote: > Hi, > > If the issue is just the tagging, I'll tag the releases today/tomorrow. I > figured that since Dragonflow has an independent release cycle, and we have > very little manpower, regular tagging makes less sense and would save us a > little time. > > Thanks, > Omer > Thanks Omer. I part of the concern, and the thing that caught our attention, was that although the project is using the independent release model it still had stable branches created up until queens. So this was mostly a check to make sure nothing else has changed and that the project should still be considered "active". Since it has been quite a while since the last official release from the project, it would be good if you proposed a release to make sure nothing has broken with the release process for DragonFlow and to make all of the code changes since the last release available to potential consumers. Sean From jungleboyj at gmail.com Mon Jun 4 12:57:50 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 4 Jun 2018 07:57:50 -0500 Subject: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad In-Reply-To: References: Message-ID: <3cf75c77-d513-c779-7c74-3211ec9724e8@gmail.com> On 6/1/2018 7:28 PM, Chris Dent wrote: > On Wed, 9 May 2018, Chris Dent wrote: > >> I've started an etherpad for the forum session in Vancouver devoted >> to discussing the possibility of tracking and allocation resources >> in Cinder using the Placement service. This is not a done deal. >> Instead the session is to discuss if it could work and how to make >> it happen if it seems like a good idea. >> >> The etherpad is at >> >>    https://etherpad.openstack.org/p/YVR-cinder-placement > > The session went well. 
Some of the members of the cinder team who > might have had more questions had not been able to be at summit so > we were unable to get their input. > > We clarified some of the things that cinder wants to be able to > accomplish (run multiple schedulers in active-active and avoid race > conditions) and the fact that this is what placement is built for. > We also made it clear that placement itself can be highly available > (and scalable) because of its nature as a dead-simple web app over a > database. > > The next steps are for the cinder team to talk amongst themselves > and socialize the capabilities of placement (with the help of > placement people) and see if it will be suitable. It is unlikely > there will be much visible progress in this area before Stein. Chris, Thanks for this update.  I have it on the agenda for the Cinder team to discuss this further.  We ran out of time in last week's meeting but will hopefully get some time to discuss it this week.  We will keep you updated as to how things progress on our end and pull in the placement guys as necessary. Jay > > See the etherpad for a bit more detail. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Jun 4 13:13:47 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 4 Jun 2018 15:13:47 +0200 Subject: [openstack-dev] [stable][EM] Summary of forum session(s) on extended maintenance Message-ID: <9401d333-64a1-6a5c-3f0a-04fb1880f5bd@openstack.org> Hi! We had a double session on extended maintenance at the Forum in Vancouver, here is a late summary of it. Feel free to add to it if you remember extra things. The first part of the session was to present the Extended Maintenance process as implemented after the discussion at the PTG in Dublin, and answer questions around it. The process was generally well received, with question on how to sign up (no real sign up required, just start helping and join #openstack-stable). There were also a number of questions around the need to maintain all releases up to an old maintained release, with explanation of the FFU process and the need to avoid regressions from release to release. The second part of the session was taking a step back and discuss extended maintenance in the context of release cycles and upgrade pain. A summary of the Dublin discussion was given. Some questions were raised on the need for fast-forward upgrades (vs. skip-level upgrades), as well as a bit of a brainstorm around how to encourage people to gather around popular EM releases (a wiki page was considered a good trade-off). The EM process mandates that no releases would be tagged after the end of the 18-month official "maintenance" period. There was a standing question on the need to still release libraries (since tests of HEAD changes are by default run against released versions of libraries). The consensus in the room was that when extended maintenance starts, we should switch to testing stable/$foo HEAD changes against stable/$foo HEAD of libraries. This should be first done when Ocata switches to extended maintenance in August. 
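As a rough sketch of how that switch could be wired in CI (the job and library names are illustrative, not an agreed implementation), the stable-branch unit test jobs could list the libraries as required-projects so Zuul checks them out from the tip of the same stable branch instead of installing released versions:

```yaml
# Illustrative only: a stable-branch variant of a unit test job that
# pulls selected libraries from source rather than from releases.
- job:
    name: openstack-tox-py27-lib-heads   # assumed job name
    parent: openstack-tox-py27
    required-projects:
      - openstack/oslo.config
      - openstack/oslo.messaging
```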
The discussion then switched to how to further ease upgrade pain, with reports of progress on the Upgrades SIG on better documenting the Fast Forward Upgrade process. We discussed how minimal cold upgrades capabilities should be considered the minimum to be considered an official OpenStack component, and whether we could use the Goals mechanism to push it. We also discussed testing database migrations with real production data (what turbo-hipster did) and the challenges to share deidentified data to that purpose. Cheers, -- Thierry Carrez (ttx) From bodenvmw at gmail.com Mon Jun 4 13:27:16 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Mon, 4 Jun 2018 07:27:16 -0600 Subject: [openstack-dev] [neutron] Bug deputy report May 28 - June 3 Message-ID: <4db08079-ea80-e46a-e3a5-321dbd393413@gmail.com> Last week we had a total of 14 bugs come in [1]; 2 of which are RFEs. Only 1 defect is high priority [2] and is already in progress. There are still a few bugs under discussion/investigation: - 1774257 "neutron-openvswitch-agent RuntimeError: Switch connection timeout" could use some input from folks skilled with OVS and affects multiple people. - 1773551 "Error loading interface driver 'neutron.agent.linux.interface.BridgeInterfaceDriver'" is still waiting for input from the submitter. - 1773282 "errors occured when create vpnservice with flavor_id:Flavors plugin not Found" still under investigation and could use an eye from the VPNaaS team. [1] https://bugs.launchpad.net/neutron/+bugs?orderby=-datecreated&start=0 [2] https://bugs.launchpad.net/neutron/+bug/1774006 From oaanson at gmail.com Mon Jun 4 13:53:54 2018 From: oaanson at gmail.com (Omer Anson) Date: Mon, 4 Jun 2018 16:53:54 +0300 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project In-Reply-To: <20180604122513.GA63851@smcginnis-mbp.local> References: <20180601163136.GA29961@sm-xps> <1527873028-sup-1636@lrrr.local> <20180601175423.GA1364@sm-xps> <20180604122513.GA63851@smcginnis-mbp.local> Message-ID: Sure. No worries. The project is still active :) I tagged and branched out Queens. Thanks, Omer. On Mon, 4 Jun 2018 at 15:25, Sean McGinnis wrote: > On Sun, Jun 03, 2018 at 09:10:26AM +0300, Omer Anson wrote: > > Hi, > > > > If the issue is just the tagging, I'll tag the releases today/tomorrow. I > > figured that since Dragonflow has an independent release cycle, and we > have > > very little manpower, regular tagging makes less sense and would save us > a > > little time. > > > > Thanks, > > Omer > > > > Thanks Omer. I part of the concern, and the thing that caught our > attention, > was that although the project is using the independent release model it > still > had stable branches created up until queens. > > So this was mostly a check to make sure nothing else has changed and that > the > project should still be considered "active". > > Since it has been quite a while since the last official release from the > project, it would be good if you proposed a release to make sure nothing > has > broken with the release process for DragonFlow and to make all of the code > changes since the last release available to potential consumers. > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sam47priya at gmail.com Mon Jun 4 14:08:12 2018 From: sam47priya at gmail.com (Sam P) Date: Mon, 4 Jun 2018 23:08:12 +0900 Subject: [openstack-dev] [masakari] weekly meeting time changed Message-ID: Hi All, Gentle reminder about next Masakari meeting time. Form next meeting (5th June), meeting will be start at 0300UTC. Please find more details at [1]. [1] http://eavesdrop.openstack.org/#Masakari_Team_Meeting --- Regards, Sampath -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Jun 4 14:13:48 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 4 Jun 2018 10:13:48 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> On 31/05/18 14:35, Julia Kreger wrote: > Back to the topic of nitpicking! > > I virtually sat down with Doug today and we hammered out the positive > aspects that we feel like are the things that we as a community want > to see as part of reviews coming out of this effort. The principles > change[1] in governance has been updated as a result. > > I think we are at a point where we have to state high level > principles, and then also update guidelines or other context providing > documentation to re-enforce some of items covered in this > discussion... not just to educate new contributors, but to serve as a > checkpoint for existing reviewers when making the decision as to how > to vote change set. The question then becomes where would such > guidelines or documentation best fit? I think the contributor guide is the logical place for it. Kendall pointed out this existing section: https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes It could go in there, or perhaps we separate out the parts about when to use which review scores into a separate page from the mechanics of how to use Gerrit. > Should we explicitly detail the > cause/effect that occurs? Should we convey contributor perceptions, or > maybe even just link to this thread as there has been a massive amount > of feedback raising valid cases, points, and frustrations. > > Personally, I'd lean towards a blended approach, but the question of > where is one I'm unsure of. Thoughts? Let's crowdsource a set of heuristics that reviewers and contributors should keep in mind when they're reviewing or having their changes reviewed. I made a start on collecting ideas from this and past threads, as well as my own reviewing experience, into a document that I've presumptuously titled "How to Review Changes the OpenStack Way" (but might be more accurately called "The Frank Sinatra Guide to Code Review" at the moment): https://etherpad.openstack.org/p/review-the-openstack-way It's in an etherpad to make it easier for everyone to add their suggestions and comments (folks in #openstack-tc have made some tweaks already). After a suitable interval has passed to collect feedback, I'll turn this into a contributor guide change. Have at it! cheers, Zane. > -Julia > > [1]: https://review.openstack.org/#/c/570940/ From doug at doughellmann.com Mon Jun 4 14:14:54 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 10:14:54 -0400 Subject: [openstack-dev] [Release-job-failures][release][dragonflow] Release of openstack/dragonflow failed In-Reply-To: References: Message-ID: <1528121647-sup-6533@lrrr.local> Excerpts from zuul's message of 2018-06-04 14:01:19 +0000: > Build failed. 
> > - release-openstack-python http://logs.openstack.org/3b/3b7ca98ce56d1e71efe95eaa10d0884487411307/release/release-openstack-python/c381399/ : FAILURE in 3m 46s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > It looks like Dragonflow has some extra dependencies that are not available under the current release job. http://logs.openstack.org/3b/3b7ca98ce56d1e71efe95eaa10d0884487411307/release/release-openstack-python/c381399/job-output.txt.gz#_2018-06-04_14_00_35_073390 From doug at doughellmann.com Mon Jun 4 14:15:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 10:15:53 -0400 Subject: [openstack-dev] [DragonFlow][TC] State of the DragonFlow project In-Reply-To: References: <20180601163136.GA29961@sm-xps> <1527873028-sup-1636@lrrr.local> <20180601175423.GA1364@sm-xps> <20180604122513.GA63851@smcginnis-mbp.local> Message-ID: <1528121725-sup-4967@lrrr.local> Excerpts from Omer Anson's message of 2018-06-04 16:53:54 +0300: > Sure. No worries. The project is still active :) > > I tagged and branched out Queens. The build failed, see the other email thread for details. Doug > > Thanks, > Omer. > > On Mon, 4 Jun 2018 at 15:25, Sean McGinnis wrote: > > > On Sun, Jun 03, 2018 at 09:10:26AM +0300, Omer Anson wrote: > > > Hi, > > > > > > If the issue is just the tagging, I'll tag the releases today/tomorrow. I > > > figured that since Dragonflow has an independent release cycle, and we > > have > > > very little manpower, regular tagging makes less sense and would save us > > a > > > little time. > > > > > > Thanks, > > > Omer > > > > > > > Thanks Omer. I part of the concern, and the thing that caught our > > attention, > > was that although the project is using the independent release model it > > still > > had stable branches created up until queens. > > > > So this was mostly a check to make sure nothing else has changed and that > > the > > project should still be considered "active". > > > > Since it has been quite a while since the last official release from the > > project, it would be good if you proposed a release to make sure nothing > > has > > broken with the release process for DragonFlow and to make all of the code > > changes since the last release available to potential consumers. > > > > Sean > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From amy at demarco.com Mon Jun 4 14:19:22 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 4 Jun 2018 07:19:22 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> Message-ID: Zane, I'll read in more detail, but do we want to add rollcall-vote? Amy (spotz) On Mon, Jun 4, 2018 at 7:13 AM, Zane Bitter wrote: > On 31/05/18 14:35, Julia Kreger wrote: > >> Back to the topic of nitpicking! >> >> I virtually sat down with Doug today and we hammered out the positive >> aspects that we feel like are the things that we as a community want >> to see as part of reviews coming out of this effort. The principles >> change[1] in governance has been updated as a result. 
>> >> I think we are at a point where we have to state high level >> principles, and then also update guidelines or other context providing >> documentation to re-enforce some of items covered in this >> discussion... not just to educate new contributors, but to serve as a >> checkpoint for existing reviewers when making the decision as to how >> to vote change set. The question then becomes where would such >> guidelines or documentation best fit? >> > > I think the contributor guide is the logical place for it. Kendall pointed > out this existing section: > > https://docs.openstack.org/contributors/code-and-documentati > on/using-gerrit.html#reviewing-changes > > It could go in there, or perhaps we separate out the parts about when to > use which review scores into a separate page from the mechanics of how to > use Gerrit. > > Should we explicitly detail the >> cause/effect that occurs? Should we convey contributor perceptions, or >> maybe even just link to this thread as there has been a massive amount >> of feedback raising valid cases, points, and frustrations. >> >> Personally, I'd lean towards a blended approach, but the question of >> where is one I'm unsure of. Thoughts? >> > > Let's crowdsource a set of heuristics that reviewers and contributors > should keep in mind when they're reviewing or having their changes > reviewed. I made a start on collecting ideas from this and past threads, as > well as my own reviewing experience, into a document that I've > presumptuously titled "How to Review Changes the OpenStack Way" (but might > be more accurately called "The Frank Sinatra Guide to Code Review" at the > moment): > > https://etherpad.openstack.org/p/review-the-openstack-way > > It's in an etherpad to make it easier for everyone to add their > suggestions and comments (folks in #openstack-tc have made some tweaks > already). After a suitable interval has passed to collect feedback, I'll > turn this into a contributor guide change. > > Have at it! > > cheers, > Zane. > > > -Julia >> >> [1]: https://review.openstack.org/#/c/570940/ >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Mon Jun 4 14:26:41 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Mon, 4 Jun 2018 14:26:41 +0000 Subject: [openstack-dev] [openstack-helm] OSH Storyboard Migration Message-ID: <7C64A75C21BB8D43BD75BB18635E4D8965E0157F@MOSTLS1MSGUSRFF.ITServices.sbc.com> OpenStack-Helm team, Heads-up: we are targeting migration of OpenStack-Helm into Storyboard for this Friday, 6/8! We've been discussing this for a while, and will sync on it in tomorrow's team meeting to ensure there are no surprises as we move. Following this move, please use Storyboard instead of Launchpad for OpenStack-Helm. Thanks, Matt McEuen From zbitter at redhat.com Mon Jun 4 14:29:33 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 4 Jun 2018 10:29:33 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> Message-ID: On 04/06/18 10:19, Amy Marrich wrote: > Zane, > > I'll read in more detail, but do we want to add rollcall-vote? Is it used anywhere other than in the governance repo? 
We certainly could add it, but it didn't seem like a top priority. - ZB > Amy (spotz) > > On Mon, Jun 4, 2018 at 7:13 AM, Zane Bitter > wrote: > > On 31/05/18 14:35, Julia Kreger wrote: > > Back to the topic of nitpicking! > > I virtually sat down with Doug today and we hammered out the > positive > aspects that we feel like are the things that we as a community want > to see as part of reviews coming out of this effort. The principles > change[1] in governance has been updated as a result. > > I think we are at a point where we have to state high level > principles, and then also update guidelines or other context > providing > documentation to re-enforce some of items covered in this > discussion... not just to educate new contributors, but to serve > as a > checkpoint for existing reviewers when making the decision as to how > to vote change set. The question then becomes where would such > guidelines or documentation best fit? > > > I think the contributor guide is the logical place for it. Kendall > pointed out this existing section: > > https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes > > > It could go in there, or perhaps we separate out the parts about > when to use which review scores into a separate page from the > mechanics of how to use Gerrit. > > Should we explicitly detail the > cause/effect that occurs? Should we convey contributor > perceptions, or > maybe even just link to this thread as there has been a massive > amount > of feedback raising valid cases, points, and frustrations. > > Personally, I'd lean towards a blended approach, but the question of > where is one I'm unsure of. Thoughts? > > > Let's crowdsource a set of heuristics that reviewers and > contributors should keep in mind when they're reviewing or having > their changes reviewed. I made a start on collecting ideas from this > and past threads, as well as my own reviewing experience, into a > document that I've presumptuously titled "How to Review Changes the > OpenStack Way" (but might be more accurately called "The Frank > Sinatra Guide to Code Review" at the moment): > > https://etherpad.openstack.org/p/review-the-openstack-way > > > It's in an etherpad to make it easier for everyone to add their > suggestions and comments (folks in #openstack-tc have made some > tweaks already). After a suitable interval has passed to collect > feedback, I'll turn this into a contributor guide change. > > Have at it! > > cheers, > Zane. > > > -Julia > > [1]: https://review.openstack.org/#/c/570940/ > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Mon Jun 4 15:30:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 11:30:17 -0400 Subject: [openstack-dev] [tc] Technical Committee Update, 4 June Message-ID: <1528125921-sup-618@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. 
The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at:https://storyboard.openstack.org/#!/project/923 == Recent Activity == Project updates: * Import ansible-role-tripleo-modify-image https://review.openstack.org/568727 * retire tripleo-incubator https://review.openstack.org/#/c/565843/ * charms: add Glance Simplestreams Sync charm https://review.openstack.org/566958 * PowerVMStackers following stable policy https://review.openstack.org/562591 Other approved changes: * Include a rationale for tracking base services https://review.openstack.org/#/c/568941/ * Note that the old incubation/graduation process is obsolete https://review.openstack.org/#/c/569164/ * Provide more detail about the expecations we place on goal champions https://review.openstack.org/#/c/564060/ Office hour logs from this week: * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-30-01.00.html * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-31-15.00.html The board agreed that we should resolve the long-standing issue with the section of the bylaws that describes the TC electorate. I will provide more detail about that in the summary from the joint leadership session, which I intend to send separately in the next day or two. == Ongoing Discussions == Zane summarized the Forum discussion about the Adjutant team's application as a comment on the review. The work to update the scope/mission statement for the project is ongoing. * https://review.openstack.org/#/c/553643/ We discussed the Python 2 deprecation timeline at the Forum. I have prepared a resolution describing the outcome, and discussion is continuing on the review. Based on the recent feedback, I need to update the resolution to add an explicit deadline for supporting Python 3. Graham also needs to update the PTI documentation change to differentiate between old and new projects. * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130824.html * https://review.openstack.org/571011 python 2 deprecation timeline * https://review.openstack.org/#/c/561922/ PTI documentation change There are two separate discussions about project team affiliation diversity. Zane's proposal to update the new project requirements has some discussion in gerrit, and Mohammed's thread on the mailing list about changing the way we apply the current diversity tag has a couple of competing proposals under consideration. TC members, please review both and provide your input. * https://review.openstack.org/#/c/567944/ * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130776.html and http://lists.openstack.org/pipermail/openstack-dev/2018-June/131029.html Jeremy has revived the thread about adding a secret/key store to our base services via the mailing list. We discussed the topic extensively in the most recent TC office hour, as well. I think we are close to agreeing that although saying a "castellan supported" database is insufficient for all desirable use cases, it is sufficient for a useful number of use cases and would be a reasonable first step. Jeremy, please correct me if I have misremembered that outcome. * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-31-15.00.html The operators who have agreed to take over managing the content in the Operations Manual have decided to move the content back from the wiki into gerrit. 
They plan to establish a SIG to "own" the repository to ensure the content can be published to docs.openstack.org again. * http://lists.openstack.org/pipermail/openstack-operators/2018-May/015318.html Mohammed and Emilien are working with the StarlingX team to import their repositories following the plan we discussed at the Forum. * http://lists.openstack.org/pipermail/openstack-operators/2018-May/015318.html * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130913.html Zane has started a discussion about the terms of service for hosted projects. James Blair started a separate thread to discuss the future of the infrastructure team as it starts to support multiple foundation project areas. TC members, these are both important threads, so please check in and provide your feedback. * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130807.html * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130896.html Based on feedback from the joint leadership meeting at the summit, Dims has started working on a template for describing roles we need filled in various areas. The next step is to convert some of the existing requests for help into the new format and get more feedback about the content. * https://etherpad.openstack.org/p/job-description-template == TC member actions/focus/discussions for the coming week(s) == 1. I am still looking for consensus among TC members about how to record project updates to "old" goals so we can decide what to do with the patch for the Kolla project. https://review.openstack.org/557863 2. Team diversity measurement (see above). 3. We have several items on our backlog that need owners. TC members, please review the storyboard list and consider taking on one of the tasks that we agreed we would do. https://storyboard.openstack.org/#!/project/923 == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at:https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From doug at doughellmann.com Mon Jun 4 15:32:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 11:32:51 -0400 Subject: [openstack-dev] [tc] how shall we track status updates? Message-ID: <1528126233-sup-8954@lrrr.local> During the retrospective at the forum we talked about having each group working on an initiative send regular status updates. I would like to start doing that this week, and would like to talk about logistics Should we send emails directly to this list, or the TC list? How often should we post updates? 
Doug From mriedemos at gmail.com Mon Jun 4 15:53:43 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 4 Jun 2018 10:53:43 -0500 Subject: [openstack-dev] [nova][glance] Deprecation of nova.image.download.modules extension point In-Reply-To: <6992a8851a8349eeb194664c267a1a63@garmin.com> References: <6992a8851a8349eeb194664c267a1a63@garmin.com> Message-ID: +openstack-operators to see if others have the same use case On 5/31/2018 5:14 PM, Moore, Curt wrote: > We recently upgraded from Liberty to Pike and looking ahead to the code > in Queens, noticed the image download deprecation notice with > instructions to post here if this interface was in use.  As such, I’d > like to explain our use case and see if there is a better way of > accomplishing our goal or lobby for the "un-deprecation" of this > extension point. Thanks for speaking up - this is much easier *before* code is removed. > > As with many installations, we are using Ceph for both our Glance image > store and VM instance disks.  In a normal workflow when both Glance and > libvirt are configured to use Ceph, libvirt reacts to the direct_url > field on the Glance image and performs an in-place clone of the RAW disk > image from the images pool into the vms pool all within Ceph.  The > snapshot creation process is very fast and is thinly provisioned as it’s > a COW snapshot. > > This underlying workflow itself works great, the issue is with > performance of the VM’s disk within Ceph, especially as the number of > nodes within the cluster grows.  We have found, especially with Windows > VMs (largely as a result of I/O for the Windows pagefile), that the > performance of the Ceph cluster as a whole takes a very large hit in > keeping up with all of this I/O thrashing, especially when Windows is > booting.  This is not the case with Linux VMs as they do not use swap as > frequently as do Windows nodes with their pagefiles.  Windows can be run > without a pagefile but that leads to other odditites within Windows. > > I should also mention that in our case, the nodes themselves are > ephemeral and we do not care about live migration, etc., we just want > raw performance. > > As an aside on our Ceph setup without getting into too many details, we > have very fast SSD based Ceph nodes for this pool (separate crush root, > SSDs for both OSD and journals, 2 replicas), interconnected on the same > switch backplane, each with bonded 10GB uplinks to the switch.  Our Nova > nodes are within the same datacenter (also have bonded 10GB uplinks to > their switches) but are distributed across different switches.  We could > move the Nova nodes to the same switch as the Ceph nodes but that is a > larger logistical challenge to rearrange many servers to make space. > > Back to our use case, in order to isolate this heavy I/O, a subset of > our compute nodes have a local SSD and are set to use qcow2 images > instead of rbd so that libvirt will pull the image down from Glance into > the node’s local image cache and run the VM from the local SSD.  This > allows Windows VMs to boot and perform their initial cloudbase-init > setup/reboot within ~20 sec vs 4-5 min, regardless of overall Ceph > cluster load.  Additionally, this prevents us from "wasting" IOPS and > instead keep them local to the Nova node, reclaiming the network > bandwidth and Ceph IOPS for use by Cinder volumes.  
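An illustrative aside on what the direct-from-RBD fetch described further
down in this message boils down to: instead of streaming the image over the
Glance HTTP API, the bytes are read straight out of the Ceph images pool
with the python rados/rbd bindings. The sketch below is not the code
referenced in this thread; the conffile, user, pool and snapshot names are
placeholders that assume the usual Glance RBD store layout (image UUID as
the RBD image name), so adjust them for a real deployment.

import rados
import rbd

CHUNK = 8 * 1024 * 1024  # copy in 8 MiB reads

def fetch_image_via_rbd(image_id, dest_path,
                        conffile='/etc/ceph/ceph.conf',
                        user='glance', pool='images'):
    # Connect with the same credentials the Glance RBD store uses.
    cluster = rados.Rados(conffile=conffile, rados_id=user)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            # Glance names the RBD image after the image UUID and keeps a
            # protected snapshot (commonly 'snap') that clones are made from.
            with rbd.Image(ioctx, image_id, snapshot='snap',
                           read_only=True) as image:
                size = image.size()
                with open(dest_path, 'wb') as dest:
                    offset = 0
                    while offset < size:
                        length = min(CHUNK, size - offset)
                        dest.write(image.read(offset, length))
                        offset += length
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

A handler wired in through the nova.image.download.modules entry point would
do roughly this in place of the HTTP transfer, which is where the ~30 min vs
~30 sec difference quoted below comes from.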
This is essentially > the use case outlined here in the "Do designate some non-Ceph compute > hosts with low-latency local storage" section: > > https://ceph.com/planet/the-dos-and-donts-for-ceph-for-openstack/ > > The challenge is that transferring the Glance image transfer is > _glacially slow_ when using the Glance HTTP API (~30 min for a 50GB > Windows image (It’s Windows, it’s huge with all of the necessary tools > installed)).  If libvirt can instead perform an RBD export on the image > using the image download functionality, it is able to download the same > image in ~30 sec.  We have code that is performing the direct download > from Glance over RBD and it works great in our use case which is very > similar to the code in this older patch: > > https://review.openstack.org/#/c/44321/ It looks like at the time this had general approval (i.e. it wasn't considered crazy) but was blocked simply due to the Havana feature freeze. That's good to know. > > We could look at attaching an additional ephemeral disk to the instance > and have cloudbase-init use it as the pagefile but it appears that if > libvirt is using rbd for its images_type, _all_ disks must then come > from Ceph, there is no way at present to allow the VM image to run from > Ceph and have an ephemeral disk mapped in from node-local storage.  Even > still, this would have the effect of "wasting" Ceph IOPS for the VM disk > itself which could be better used for other purposes. When you mentioned the swap above I was thinking similar to this, attaching a swap device but as you've pointed out, all disks local to the compute host are going to use the same image type backend, so you can't have the root disk and swap/ephemeral disks using different image backends. > > Based on what I have explained about our use case, is there a > better/different way to accomplish the same goal without using the > deprecated image download functionality?  If not, can we work to > "un-deprecate" the download extension point? Should I work to get the > code for this RBD download into the upstream repository? > I think you should propose your changes upstream with a blueprint, the docs for the blueprint process are here: https://docs.openstack.org/nova/latest/contributor/blueprints.html Since it's not an API change, this might just be a specless blueprint, but you'd need to write up the blueprint and probably post the PoC code to Gerrit and then bring it up during the "Open Discussion" section of the weekly nova meeting. Once we can take a look at the code change, we can go from there on whether or not to add that in-tree or go some alternative route. Until that happens, I think we'll just say we won't remove that deprecated image download extension code, but that's not going to be an unlimited amount of time if you don't propose your changes upstream. Is there going to be anything blocking or slowing you down on your end with regard to contributing this change, like legal approval, license agreements, etc? If so, please be up front about that. -- Thanks, Matt From sean.mcginnis at gmx.com Mon Jun 4 16:06:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 4 Jun 2018 11:06:53 -0500 Subject: [openstack-dev] [tc] how shall we track status updates? 
In-Reply-To: <1528126233-sup-8954@lrrr.local> References: <1528126233-sup-8954@lrrr.local> Message-ID: <1f052d2e-e2cd-4111-2a0a-b1b4233daff5@gmx.com> On 06/04/2018 10:32 AM, Doug Hellmann wrote: > During the retrospective at the forum we talked about having each group > working on an initiative send regular status updates. I would like to > start doing that this week, and would like to talk about logistics > > Should we send emails directly to this list, or the TC list? > > How often should we post updates? > > Doug Sending to openstack-dev would have the nice benefit of raising awareness, but at the risk of adding to the noise level and being ignored. I'm not sure which would be best, but either seems acceptable to me. Monthly or every other week seems like enough. I don't think anything is urgent or critical enough for weekly updates. Sean From fungi at yuggoth.org Mon Jun 4 17:16:53 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Jun 2018 17:16:53 +0000 Subject: [openstack-dev] [tc] Technical Committee Update, 4 June In-Reply-To: <1528125921-sup-618@lrrr.local> References: <1528125921-sup-618@lrrr.local> Message-ID: <20180604171653.all54an2rdykmrva@yuggoth.org> On 2018-06-04 11:30:17 -0400 (-0400), Doug Hellmann wrote: [...] > Jeremy has revived the thread about adding a secret/key store to > our base services via the mailing list. We discussed the topic > extensively in the most recent TC office hour, as well. I think we > are close to agreeing that although saying a "castellan supported" > database is insufficient for all desirable use cases, it is sufficient > for a useful number of use cases and would be a reasonable first > step. Jeremy, please correct me if I have misremembered that outcome. > > * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html > * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-31-15.00.html [...] Basically correct (I think we said "a Castellan-compatible key store"). I intend to have a change up for review to append this to the base services list in the next day or so as the mailing list discussion didn't highlight any new concerns and indicated that the previous blockers we identified have been resolved in the intervening year. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mbayer at redhat.com Mon Jun 4 17:24:35 2018 From: mbayer at redhat.com (Michael Bayer) Date: Mon, 4 Jun 2018 13:24:35 -0400 Subject: [openstack-dev] [keystone] [tripleo] multi-region Keystone via stretched Galera cluster Message-ID: Hey list - as mentioned in the May 11 Keystone meeting, I am working on one of the canonical approaches to producing a multi-region Keystone service, which is by having overcloud-specific keystone services interact with a Galera database that is running masters across multiple overclouds. The work here is to be integrated into tripleo and at [1] we discuss the production of a multi-region Keystone service, deployable across multiple tripleo overclouds. As the spec is being discussed I continue to work on the proof of concept [2] which in its master branch is being developed to deploy the second galera DB as well as haproxy/vip/keystone completely from tripleo heat templates; the changes being patched here are to be proposed as changes to tripleo itself once this version of the POC is working. 
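(A rough illustration, not code from the POC repository: "masters across
multiple overclouds" means every region's Galera node should report the same
wsrep cluster size and a Primary cluster status. A minimal sanity check along
those lines, with placeholder hostnames and credentials, could look like the
following.)

from sqlalchemy import create_engine, text

REGION_DB_URLS = [
    'mysql+pymysql://clustercheck:secret@overcloud-a-galera-vip/',
    'mysql+pymysql://clustercheck:secret@overcloud-b-galera-vip/',
]

def wsrep_status(url):
    # Each node answers for the whole cluster, so asking one node per
    # region is enough to spot a partition between the overclouds.
    engine = create_engine(url)
    with engine.connect() as conn:
        size = conn.execute(
            text("SHOW STATUS LIKE 'wsrep_cluster_size'")).fetchone()
        status = conn.execute(
            text("SHOW STATUS LIKE 'wsrep_cluster_status'")).fetchone()
    engine.dispose()
    return int(size[1]), status[1]

for url in REGION_DB_URLS:
    size, status = wsrep_status(url)
    print('%s -> size=%d status=%s' % (url, size, status))

A node reporting a non-Primary status has dropped out of the primary
component, and the keystone endpoints behind that region's VIP will start
erroring until it rejoins.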
The "standard_tripleo_version" branch is the previous iteration, which provides a fully working proof of concept that adds a second Galera instance to a pair of already deployed overclouds. Comments on the review welcome. [1] https://review.openstack.org/#/c/566448/ [2] https://github.com/zzzeek/stretch_cluster From sundar.nadathur at intel.com Mon Jun 4 17:49:28 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 4 Jun 2018 10:49:28 -0700 Subject: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs Message-ID: Hi,      Cyborg needs to create RCs and traits for accelerators. The original plan was to do that with nested RPs. To avoid rushing the Nova developers, I had proposed that Cyborg could start by applying the traits to the compute node RP, and accept the resulting caveats for Rocky, till we get nested RP support. That proposal did not find many takers, and Cyborg has essentially been in waiting mode. Since it is June already, and there is a risk of not delivering anything meaningful in Rocky, I am reviving my older proposal, which is summarized as below: * Cyborg shall create the RCs and traits as per spec (https://review.openstack.org/#/c/554717/), both in Rocky and beyond. Only the RPs will change post-Rocky. * In Rocky: o Cyborg will not create nested RPs. It shall apply the device traits to the compute node RP. o Cyborg will document the resulting caveat, i.e., all devices in the same host should have the same traits. In particular, we cannot have a GPU and a FPGA, or 2 FPGAs of different types, in the same host. o Cyborg will document that upgrades to post-Rocky releases will require operator intervention (as described below). *  For upgrade to post-Rocky world with nested RPs: o The operator needs to stop all running instances that use an accelerator. o The operator needs to run a script that removes the Cyborg traits and the inventory for Cyborg RCs from compute node RPs. o The operator can then perform the upgrade. The new Cyborg agent/driver(s) shall created nested RPs and publish inventory/traits as specified. IMHO, it is acceptable for Cyborg to do this because it is new and we can set expectations for the (lack of) upgrade plan. The alternative is that potentially no meaningful use cases get addressed in Rocky for Cyborg. Please LMK what you think. Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Mon Jun 4 18:04:01 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Mon, 4 Jun 2018 14:04:01 -0400 Subject: [openstack-dev] [tc] Technical Committee Update, 4 June In-Reply-To: <20180604171653.all54an2rdykmrva@yuggoth.org> References: <1528125921-sup-618@lrrr.local> <20180604171653.all54an2rdykmrva@yuggoth.org> Message-ID: On Mon, Jun 4, 2018 at 1:16 PM, Jeremy Stanley wrote: > On 2018-06-04 11:30:17 -0400 (-0400), Doug Hellmann wrote: > [...] >> Jeremy has revived the thread about adding a secret/key store to >> our base services via the mailing list. We discussed the topic >> extensively in the most recent TC office hour, as well. I think we >> are close to agreeing that although saying a "castellan supported" >> database is insufficient for all desirable use cases, it is sufficient >> for a useful number of use cases and would be a reasonable first >> step. Jeremy, please correct me if I have misremembered that outcome. 
>> >> * http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html >> * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-31-15.00.html > [...] > > Basically correct (I think we said "a Castellan-compatible key > store"). I intend to have a change up for review to append this to > the base services list in the next day or so as the mailing list > discussion didn't highlight any new concerns and indicated that the > previous blockers we identified have been resolved in the > intervening year. +100 Jeremy! > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims From sean.mcginnis at gmx.com Mon Jun 4 18:07:43 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 4 Jun 2018 13:07:43 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection Message-ID: <20180604180742.GA6404@sm-xps> Hi everyone, This is to continue the discussion of the goal selection for the Stein release. I had previously sent out a recap of our discussion at the Forum here: http://lists.openstack.org/pipermail/openstack-dev/2018-May/130999.html Now we need to actually narrow things down and pick a couple goals. Strawman ======== Just to set a starting point for debate, I propose the following two as goals for Stein: - Cold Upgade Support - Python 3 first As a reminder of other ideas, here is the link to the backlog of goal ideas we've kept so far: https://etherpad.openstack.org/p/community-goals Feel free to add more to that list, and if you have been involved in any of the things that have been completed there, please remove things you don't think should still be there. This is by no means an exhaustive list of what we could or should do for goals. With that, I'll go over the choices that I've proposed for the strawman. Python 3 First ============== One of the things brought up in the session was picking things that bring excitement and are obvious benefits to deployers and users of OpenStack services. While this one is maybe not as immediately obvious, I think this is something that will end up helping deployers and also falls into the tech debt reduction category that will help us move quicker long term. Python 2 is going away soon, so I think we need something to help compel folks to work on making sure we are ready to transition. This will also be a good point to help switch the mindset over to Python 3 being the default used everywhere, with our Python 2 compatibility being just to continue legacy support. Cold Upgrade Support ==================== The other suggestion in the Forum session related to upgrades was the addition of "upgrade check" CLIs for each project, and I was tempted to suggest that as my second strawman choice. For some projects that would be a very minimal or NOOP check, so it would probably be easy to complete the goal. But ultimately what I think would bring the most value would be the work on supporting cold upgrade, even if it will be more of a stretch for some projects to accomplish. Upgrades have been a major focus of discussion lately, especially as our operators have been trying to get closer to the latest work upstream. This has been an ongoing challenge. There has also been a lot of talk about LTS releases. 
We've landed on fast forward upgrade to get between several releases, but I think improving upgrades eases the way both for easier and more frequent upgrades and also getting to the point some day where maybe we can think about upgrading over several releases to be able to do something like an LTS to LTS upgrade. Neither one of these upgrade goals really has a clearly defined plan that projects can pick up now and start working on, but I think with those involved in these areas we should be able to come up with a perscriptive plan for projects to follow. And it would really move our fast forward upgrade story forward. Next Steps ========== I'm hoping with a strawman proposal we have a basis for debating the merits of these and getting closer to being able to officially select Stein goals. We still have some time, but I would like to avoid making late-cycle selections so teams can start planning ahead for what will need to be done in Stein. Please feel free to promote other ideas for goals. That would be a good way for us to weigh the pro's and con's between these and whatever else you have in mind. Then hopefully we can come to some consensus and work towards clearly defining what needs to be done and getting things well documented for teams to pick up as soon as they wrap up Rocky (or sooner). --- Sean (smcginnis) From openstack at fried.cc Mon Jun 4 18:24:30 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 4 Jun 2018 13:24:30 -0500 Subject: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs In-Reply-To: References: Message-ID: <74c7cc87-8218-f350-a8c2-ab55c8714f2f@fried.cc> Sundar- We've been discussing the upgrade path on another thread [1] and are working toward a solution [2][3] that would not require downtime or special scripts (other than whatever's normally required for an upgrade). We still hope to have all of that ready for Rocky, but if you're concerned about timing, this work should make it a viable option for you to start out modeling everything in the compute RP as you say, and then move it over later. Thanks, Eric [1] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html [2] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131045.html [3] https://etherpad.openstack.org/p/placement-migrate-operations On 06/04/2018 12:49 PM, Nadathur, Sundar wrote: > Hi, >      Cyborg needs to create RCs and traits for accelerators. The > original plan was to do that with nested RPs. To avoid rushing the Nova > developers, I had proposed that Cyborg could start by applying the > traits to the compute node RP, and accept the resulting caveats for > Rocky, till we get nested RP support. That proposal did not find many > takers, and Cyborg has essentially been in waiting mode. > > Since it is June already, and there is a risk of not delivering anything > meaningful in Rocky, I am reviving my older proposal, which is > summarized as below: > > * Cyborg shall create the RCs and traits as per spec > (https://review.openstack.org/#/c/554717/), both in Rocky and > beyond. Only the RPs will change post-Rocky. > * In Rocky: > o Cyborg will not create nested RPs. It shall apply the device > traits to the compute node RP. > o Cyborg will document the resulting caveat, i.e., all devices in > the same host should have the same traits. In particular, we > cannot have a GPU and a FPGA, or 2 FPGAs of different types, in > the same host. > o Cyborg will document that upgrades to post-Rocky releases will > require operator intervention (as described below). 
> *  For upgrade to post-Rocky world with nested RPs: > o The operator needs to stop all running instances that use an > accelerator. > o The operator needs to run a script that removes the Cyborg > traits and the inventory for Cyborg RCs from compute node RPs. > o The operator can then perform the upgrade. The new Cyborg > agent/driver(s) shall created nested RPs and publish > inventory/traits as specified. > > IMHO, it is acceptable for Cyborg to do this because it is new and we > can set expectations for the (lack of) upgrade plan. The alternative is > that potentially no meaningful use cases get addressed in Rocky for Cyborg. > > Please LMK what you think. > > Regards, > Sundar > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ed at leafe.com Mon Jun 4 18:41:36 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 4 Jun 2018 13:41:36 -0500 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <2ed6661c-3020-6f70-20c4-e56855aeb326@gmail.com> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> <1527966421-sup-6019@lrrr.local> <2ed6661c-3020-6f70-20c4-e56855aeb326@gmail.com> Message-ID: <99108EBA-63AE-47C0-8AF9-18961ADAC9FF@leafe.com> On Jun 4, 2018, at 7:05 AM, Jay S Bryant wrote: >> Do we have that problem? I honestly don't know how much pressure other >> folks are feeling. My impression is that we've mostly become good at >> finding the necessary compromises, but my experience doesn't cover all >> of our teams. > In my experience this hasn't been a problem for quite some time. In the past, at least for Cinder, there were some minor cases of this but as projects have matured this has been less of an issue. Those rules were added because we wanted to avoid the appearance of one company implementing features that would only be beneficial to it. This arose from concerns in the early days when Rackspace was the dominant contributor: many of the other companies involved in OpenStack were worried that they would be investing their workers in a project that would only benefit Rackspace. As far as I know, there were never specific cases where Rackspace or any other company tried to push features in that no one else supported.. So even if now it doesn't seem that there is a problem, and we could remove these restrictions without ill effect, it just seems prudent to keep them. If a project is so small that the majority of its contributors/cores are from one company, maybe it should be an internal project for that company, and not a community project. -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From e0ne at e0ne.info Mon Jun 4 18:59:49 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 4 Jun 2018 21:59:49 +0300 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <20180604180742.GA6404@sm-xps> References: <20180604180742.GA6404@sm-xps> Message-ID: Hi Sean, These goals look reasonable for me. On Mon, Jun 4, 2018 at 9:07 PM, Sean McGinnis wrote: > Hi everyone, > > This is to continue the discussion of the goal selection for the Stein > release. 
> I had previously sent out a recap of our discussion at the Forum here: > > http://lists.openstack.org/pipermail/openstack-dev/2018-May/130999.html > > Now we need to actually narrow things down and pick a couple goals. > > Strawman > ======== > > Just to set a starting point for debate, I propose the following two as > goals > for Stein: > > - Cold Upgade Support > - Python 3 first > > As a reminder of other ideas, here is the link to the backlog of goal ideas > we've kept so far: > > https://etherpad.openstack.org/p/community-goals > > Feel free to add more to that list, and if you have been involved in any > of the > things that have been completed there, please remove things you don't think > should still be there. > > This is by no means an exhaustive list of what we could or should do for > goals. > > With that, I'll go over the choices that I've proposed for the strawman. > > Python 3 First > ============== > > One of the things brought up in the session was picking things that bring > excitement and are obvious benefits to deployers and users of OpenStack > services. While this one is maybe not as immediately obvious, I think this > is something that will end up helping deployers and also falls into the > tech > debt reduction category that will help us move quicker long term. > > Python 2 is going away soon, so I think we need something to help compel > folks > to work on making sure we are ready to transition. This will also be a good > point to help switch the mindset over to Python 3 being the default used > everywhere, with our Python 2 compatibility being just to continue legacy > support. > I hope we'll have Ubuntu 18.04 LTS on our gates for this activity soon. It becomes important not only for developers but for operators and vendors too. > Cold Upgrade Support > ==================== > > The other suggestion in the Forum session related to upgrades was the > addition > of "upgrade check" CLIs for each project, and I was tempted to suggest > that as > my second strawman choice. For some projects that would be a very minimal > or > NOOP check, so it would probably be easy to complete the goal. But > ultimately > what I think would bring the most value would be the work on supporting > cold > upgrade, even if it will be more of a stretch for some projects to > accomplish. > > Upgrades have been a major focus of discussion lately, especially as our > operators have been trying to get closer to the latest work upstream. This > has > been an ongoing challenge. > > A big +1 from my side on it. There has also been a lot of talk about LTS releases. We've landed on fast > forward upgrade to get between several releases, but I think improving > upgrades > eases the way both for easier and more frequent upgrades and also getting > to > the point some day where maybe we can think about upgrading over several > releases to be able to do something like an LTS to LTS upgrade. > > Neither one of these upgrade goals really has a clearly defined plan that > projects can pick up now and start working on, but I think with those > involved > in these areas we should be able to come up with a perscriptive plan for > projects to follow. > > And it would really move our fast forward upgrade story forward. > > Next Steps > ========== > > I'm hoping with a strawman proposal we have a basis for debating the > merits of > these and getting closer to being able to officially select Stein goals. 
We > still have some time, but I would like to avoid making late-cycle > selections so > teams can start planning ahead for what will need to be done in Stein. > > Please feel free to promote other ideas for goals. That would be a good > way for > us to weigh the pro's and con's between these and whatever else you have in > mind. Then hopefully we can come to some consensus and work towards clearly > defining what needs to be done and getting things well documented for > teams to > pick up as soon as they wrap up Rocky (or sooner). > > --- > Sean (smcginnis) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Mon Jun 4 19:06:24 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 4 Jun 2018 14:06:24 -0500 Subject: [openstack-dev] [requirements][daisycloud][freezer][fuel][solum][tatu][trove] pycrypto is dead and insecure, you should migrate part 2 In-Reply-To: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org> References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org> Message-ID: <20180604190624.tjki5sydsoj45sgo@gentoo.org> On 18-05-13 12:22:06, Matthew Thode wrote: > This is a reminder to the projects called out that they are using old, > unmaintained and probably insecure libraries (it's been dead since > 2014). Please migrate off to use the cryptography library. We'd like > to drop pycrypto from requirements for rocky. > > See also, the bug, which has most of you cc'd already. > > https://bugs.launchpad.net/openstack-requirements/+bug/1749574 > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ | Repository | Filename | Line | Text | +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ | daisycloud-core | code/daisy/requirements.txt | 17 | pycrypto>=2.6 # Public Domain | | freezer | requirements.txt | 21 | pycrypto>=2.6 # Public Domain | | fuel-dev-tools | contrib/fuel-setup/requirements.txt | 5 | pycrypto==2.6.1 | | fuel-web | nailgun/requirements.txt | 24 | pycrypto>=2.6.1 | | solum | requirements.txt | 24 | pycrypto # Public Domain | | tatu | requirements.txt | 7 | pycrypto>=2.6.1 | | tatu | test-requirements.txt | 7 | pycrypto>=2.6.1 | | trove | integration/scripts/files/requirements/fedora-requirements.txt | 30 | pycrypto>=2.6 # Public Domain | | trove | integration/scripts/files/requirements/ubuntu-requirements.txt | 29 | pycrypto>=2.6 # Public Domain | | trove | requirements.txt | 47 | pycrypto>=2.6 # Public Domain | +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ In order by name, notes follow. daisycloud-core - looks like AES / random functions are used freezer - looks like AES / random functions are used solum - looks like AES / RSA functions are used trove - has a review!!! https://review.openstack.org/#/c/560292/ The following projects are not tracked so we won't wait on them. 
fuel-dev-tools, fuel-web, tatu so it looks like progress is being made, so we have that going for us, which is nice. What can I do to help move this forward? -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From openstack-dev at storpool.com Mon Jun 4 19:07:19 2018 From: openstack-dev at storpool.com (Peter Penchev) Date: Mon, 4 Jun 2018 22:07:19 +0300 Subject: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ... In-Reply-To: References: Message-ID: <20180604190719.GA26100@straylight.m.ringlet.net> On Sat, Jun 02, 2018 at 06:13:23PM -0500, Jay S Bryant wrote: > All, > > This note is to make everyone aware that I have submitted patches for a > number of drivers that have not run 3rd party CI in 60 or more days.  The > following is a list of the drivers, how long since their CI last ran and > links to the patches which mark them as unsupported drivers: > > * DataCore CI – 99 Days - https://review.openstack.org/571533 > * Dell EMC CorpHD CI – 121 Days - https://review.openstack.org/571555 > * HGST Solutions CI – 306 Days - https://review.openstack.org/571560 > * IBM GPFS CI – 212 Days - https://review.openstack.org/571590 > * Itri Disco CI – 110 Days - https://review.openstack.org/571592 > * Nimble Storage CI – 78 Days - https://review.openstack.org/571599 > * StorPool – Unknown - https://review.openstack.org/571935 > * Vedams –HPMSA – 442 Days - https://review.openstack.org/571940 > * Brocade OpenStack – CI – 261 Days - https://review.openstack.org/571943 > > All of these drivers will be marked unsupported for the Rocky release and > will be removed in the Stein release if the 3rd Party CI is not returned to > a working state. > > If your driver is on the list and you have questions please respond to this > thread and we can discuss what needs to be done to return support for your > driver. > > Thank you for your attention to this matter! Hi, Thanks for taking care of Cinder by culling the herd. The StorPool CI is, understandably, on your list, since we had some problems with the CI infrastructure (not the driver itself) during the month of May. However, we fixed them on May 31st and our CI had several successful runs then, quickly followed by a slew of failures because of the "pip version 1" strip()/split() problem in devstack. Our CI has been chugging along since June 2nd (not really related to the timing of your e-mail, it just so happened that we fixed another small problem there). You can see the logs at http://logs.ci-openstack.storpool.com/ I am thinking of scheduling a rerun for all the changes that our CI failed on (and that have not been merged or abandoned yet); this may happen in the next couple of days. So, thanks again for your work on this, and hopefully this message will serve as a "we're still alive" sign. If there is anything more we should do, like reschedule the failed builds, please let us know. Best regards, Peter -- Peter Pentchev roam@{ringlet.net,debian.org,FreeBSD.org} pp at storpool.com PGP key: http://people.FreeBSD.org/~roam/roam.key.asc Key fingerprint 2EE7 A7A5 17FC 124C F115 C354 651E EFB0 2527 DF13 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Mon Jun 4 19:10:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 15:10:42 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <99108EBA-63AE-47C0-8AF9-18961ADAC9FF@leafe.com> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> <1527966421-sup-6019@lrrr.local> <2ed6661c-3020-6f70-20c4-e56855aeb326@gmail.com> <99108EBA-63AE-47C0-8AF9-18961ADAC9FF@leafe.com> Message-ID: <1528139384-sup-1453@lrrr.local> Excerpts from Ed Leafe's message of 2018-06-04 13:41:36 -0500: > On Jun 4, 2018, at 7:05 AM, Jay S Bryant wrote: > > >> Do we have that problem? I honestly don't know how much pressure other > >> folks are feeling. My impression is that we've mostly become good at > >> finding the necessary compromises, but my experience doesn't cover all > >> of our teams. > > In my experience this hasn't been a problem for quite some time. In the past, at least for Cinder, there were some minor cases of this but as projects have matured this has been less of an issue. > > Those rules were added because we wanted to avoid the appearance of one company implementing features that would only be beneficial to it. This arose from concerns in the early days when Rackspace was the dominant contributor: many of the other companies involved in OpenStack were worried that they would be investing their workers in a project that would only benefit Rackspace. As far as I know, there were never specific cases where Rackspace or any other company tried to push features in that no one else supported.. > > So even if now it doesn't seem that there is a problem, and we could remove these restrictions without ill effect, it just seems prudent to keep them. If a project is so small that the majority of its contributors/cores are from one company, maybe it should be an internal project for that company, and not a community project. > > -- Ed Leafe Where was the rule added, though? I am aware of some individual teams with the rule, but AFAIK it was never a global rule. It's certainly not in any of the projects for which I am currently a core reviewer. Doug From lbragstad at gmail.com Mon Jun 4 19:28:21 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 4 Jun 2018 14:28:21 -0500 Subject: [openstack-dev] [keystone] test storyboard environment Message-ID: Hi all, The StoryBoard team was nice enough to migrate existing content for all keystone-related launchpad projects to a dev environment [0]. This gives us the opportunity to use  StoryBoard with real content. Log in and check it out. I'm curious to know what the rest of the team thinks. [0] https://storyboard-dev.openstack.org/#!/project_group/46 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From jungleboyj at gmail.com Mon Jun 4 19:34:28 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 4 Jun 2018 14:34:28 -0500 Subject: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ... In-Reply-To: <20180604190719.GA26100@straylight.m.ringlet.net> References: <20180604190719.GA26100@straylight.m.ringlet.net> Message-ID: Peter, Thank you for the update.  We are investigating why this shows up as failing in our CI tracking list. I will hold off on the associated patch. 
Thank you for the quick response! Jay On 6/4/2018 2:07 PM, Peter Penchev wrote: > On Sat, Jun 02, 2018 at 06:13:23PM -0500, Jay S Bryant wrote: >> All, >> >> This note is to make everyone aware that I have submitted patches for a >> number of drivers that have not run 3rd party CI in 60 or more days.  The >> following is a list of the drivers, how long since their CI last ran and >> links to the patches which mark them as unsupported drivers: >> >> * DataCore CI – 99 Days - https://review.openstack.org/571533 >> * Dell EMC CorpHD CI – 121 Days - https://review.openstack.org/571555 >> * HGST Solutions CI – 306 Days - https://review.openstack.org/571560 >> * IBM GPFS CI – 212 Days - https://review.openstack.org/571590 >> * Itri Disco CI – 110 Days - https://review.openstack.org/571592 >> * Nimble Storage CI – 78 Days - https://review.openstack.org/571599 >> * StorPool – Unknown - https://review.openstack.org/571935 >> * Vedams –HPMSA – 442 Days - https://review.openstack.org/571940 >> * Brocade OpenStack – CI – 261 Days - https://review.openstack.org/571943 >> >> All of these drivers will be marked unsupported for the Rocky release and >> will be removed in the Stein release if the 3rd Party CI is not returned to >> a working state. >> >> If your driver is on the list and you have questions please respond to this >> thread and we can discuss what needs to be done to return support for your >> driver. >> >> Thank you for your attention to this matter! > Hi, > > Thanks for taking care of Cinder by culling the herd. > > The StorPool CI is, understandably, on your list, since we had some > problems with the CI infrastructure (not the driver itself) during > the month of May. However, we fixed them on May 31st and our CI had > several successful runs then, quickly followed by a slew of failures > because of the "pip version 1" strip()/split() problem in devstack. > > Our CI has been chugging along since June 2nd (not really related to > the timing of your e-mail, it just so happened that we fixed another > small problem there). You can see the logs at > > http://logs.ci-openstack.storpool.com/ > > I am thinking of scheduling a rerun for all the changes that our CI > failed on (and that have not been merged or abandoned yet); this may > happen in the next couple of days. > > So, thanks again for your work on this, and hopefully this message will > serve as a "we're still alive" sign. If there is anything more we > should do, like reschedule the failed builds, please let us know. > > Best regards, > Peter > From sean.mcginnis at gmx.com Mon Jun 4 19:40:09 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 4 Jun 2018 14:40:09 -0500 Subject: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ... In-Reply-To: <20180604190719.GA26100@straylight.m.ringlet.net> References: <20180604190719.GA26100@straylight.m.ringlet.net> Message-ID: <20180604194009.GA13935@sm-xps> > > Our CI has been chugging along since June 2nd (not really related to > the timing of your e-mail, it just so happened that we fixed another > small problem there). You can see the logs at > > http://logs.ci-openstack.storpool.com/ > Thanks Peter. It looks like the reason the report run doesn't show Storpool reporting is a due to a mismatch on name. The officially list account is "StorPool CI" according to https://wiki.openstack.org/wiki/ThirdPartySystems/StorPool_CI But it appears on looking into this that the real CI account is "StorPool distributed storage CI". Is that correct? 
If so, can you update the wiki with the correct account name? Thanks, Sean From ekuvaja at redhat.com Mon Jun 4 20:09:02 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 4 Jun 2018 21:09:02 +0100 Subject: [openstack-dev] [Glace] Cores review open changes in glance-specs, please Message-ID: Hi all, This week is the deadline week for Glance specs for Rocky! I'd like to get ack (in form of review in gerrit) for open specs [0] and I will start Workflow -+1 them as appropriate during the week. Thu meeting will be last call and anything hanging after that will be considered again for S cycle. Thanks, Erno [0] https://review.openstack.org/#/q/project:openstack/glance-specs+status:open From mjturek at linux.vnet.ibm.com Mon Jun 4 20:22:21 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 4 Jun 2018 16:22:21 -0400 Subject: [openstack-dev] [ironic] Bug Day June 7th @ 1:00 PM to 2:00 PM UTC Message-ID: <93202e3d-3d1f-86f6-cfb3-77ab4d7a4ad8@linux.vnet.ibm.com> Hey all, The first Thursday of the month is approaching which means it's time for a bug day yet again! As we discussed last time, we will shorten the call to an hour. Below is a proposed agenda, location, and time [0]. If you'd like to adjust or propose topics, please let me know. Thanks! Mike Turek [0] https://etherpad.openstack.org/p/ironic-bug-day-june-2018 From mriedemos at gmail.com Mon Jun 4 20:26:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 4 Jun 2018 15:26:28 -0500 Subject: [openstack-dev] Forum Recap - Stein Release Goals In-Reply-To: <20180531205942.GA18176@sm-xps> References: <20180531205942.GA18176@sm-xps> Message-ID: <54dcfa53-c7b1-1e88-9c0e-19920b169fa7@gmail.com> On 5/31/2018 3:59 PM, Sean McGinnis wrote: > We were also able to already identify some possible goals for the T cycle: > > - Move all CLIs to python-openstackclient My understanding was this is something we could do for Stein provided some heavy refactoring in the SDK and OSC got done first in Rocky. Or is that being too aggressive? -- Thanks, Matt From ihrachys at redhat.com Mon Jun 4 20:31:11 2018 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Mon, 4 Jun 2018 13:31:11 -0700 Subject: [openstack-dev] [neutron][stable] Stepping down from core Message-ID: Hi neutrinos and all, As some of you've already noticed, the last several months I was scaling down my involvement in Neutron and, more generally, OpenStack. I am at a point where I feel confident my disappearance won't disturb the project, and so I am ready to make it official. I am stepping down from all administrative roles I so far accumulated in Neutron and Stable teams. I shifted my focus to another project, and so I just removed myself from all relevant admin groups to reflect the change. It was a nice 4.5 year ride for me. I am very happy with what we achieved in all these years and a bit sad to leave. The community is the most brilliant and compassionate and dedicated to openness group of people I was lucky to work with, and I am reminded daily how awesome it is. I am far from leaving the industry, or networking, or the promise of open source infrastructure, so I am sure we will cross our paths once in a while with most of you. :) I also plan to hang out in our IRC channels and make snarky comments, be aware! 
Thanks for the fish, Ihar From mriedemos at gmail.com Mon Jun 4 20:38:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 4 Jun 2018 15:38:48 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <20180604180742.GA6404@sm-xps> References: <20180604180742.GA6404@sm-xps> Message-ID: On 6/4/2018 1:07 PM, Sean McGinnis wrote: > Python 3 First > ============== > > One of the things brought up in the session was picking things that bring > excitement and are obvious benefits to deployers and users of OpenStack > services. While this one is maybe not as immediately obvious, I think this > is something that will end up helping deployers and also falls into the tech > debt reduction category that will help us move quicker long term. > > Python 2 is going away soon, so I think we need something to help compel folks > to work on making sure we are ready to transition. This will also be a good > point to help switch the mindset over to Python 3 being the default used > everywhere, with our Python 2 compatibility being just to continue legacy > support. I still don't really know what this goal means - we have python 3 support across the projects for the most part don't we? Based on that, this doesn't seem like much to take an entire "goal slot" for the release. > > Cold Upgrade Support > ==================== > > The other suggestion in the Forum session related to upgrades was the addition > of "upgrade check" CLIs for each project, and I was tempted to suggest that as > my second strawman choice. For some projects that would be a very minimal or > NOOP check, so it would probably be easy to complete the goal. But ultimately > what I think would bring the most value would be the work on supporting cold > upgrade, even if it will be more of a stretch for some projects to accomplish. I think you might be mixing two concepts here. The cold upgrade support, per my understanding, is about getting the assert:supports-upgrade tag: https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html Which to me basically means the project runs a grenade job. There was discussion in the room about grenade not being a great tool for all projects, but no one is working on a replacement for that, so I don't think it's really justification at this point for *not* making it a goal. The "upgrade check" CLIs is a different thing though, which is more about automating as much of the upgrade release notes as possible. See the nova docs for examples on how we have used it: https://docs.openstack.org/nova/latest/cli/nova-status.html I'm not sure what projects you had in mind when you said, "For some projects that would be a very minimal or NOOP check, so it would probably be easy to complete the goal." I would expect that projects aren't meeting the goal if they are noop'ing everything. But what can be automated like this isn't necessarily black and white either. > > Upgrades have been a major focus of discussion lately, especially as our > operators have been trying to get closer to the latest work upstream. This has > been an ongoing challenge. > > There has also been a lot of talk about LTS releases. We've landed on fast > forward upgrade to get between several releases, but I think improving upgrades > eases the way both for easier and more frequent upgrades and also getting to > the point some day where maybe we can think about upgrading over several > releases to be able to do something like an LTS to LTS upgrade. 
> > Neither one of these upgrade goals really has a clearly defined plan that > projects can pick up now and start working on, but I think with those involved > in these areas we should be able to come up with a perscriptive plan for > projects to follow. > > And it would really move our fast forward upgrade story forward. Agreed. In the FFU Forum session at the summit I mentioned the 'nova-status upgrade check' CLI and a lot of people in the room had never heard of it because they are still on Mitaka before we added that CLI (new in Ocata). But they sounded really interested in it and said they wished other projects were doing that to help ease upgrades so they won't be stuck on older unmaintained releases for so long. So anything we can do to improve upgrades, including our testing for them, will help make FFU better. > > Next Steps > ========== > > I'm hoping with a strawman proposal we have a basis for debating the merits of > these and getting closer to being able to officially select Stein goals. We > still have some time, but I would like to avoid making late-cycle selections so > teams can start planning ahead for what will need to be done in Stein. > > Please feel free to promote other ideas for goals. That would be a good way for > us to weigh the pro's and con's between these and whatever else you have in > mind. Then hopefully we can come to some consensus and work towards clearly > defining what needs to be done and getting things well documented for teams to > pick up as soon as they wrap up Rocky (or sooner). I still want to lobby for a push to move off the old per-project CLIs and close the gap on using python-openstackclient CLI for everything, but I'm unclear on what the roadmap is for the major refactor with the SDK Monty was talking about in Vancouver. From a new user perspective, the 2000 individual CLIs to get anything done in OpenStack has to be a major turn off so we should make this a higher priority - including modernizing our per-project documentation to give OSC examples instead of per-project (e.g. nova boot) examples. -- Thanks, Matt From mriedemos at gmail.com Mon Jun 4 20:41:17 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 4 Jun 2018 15:41:17 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: References: <20180604180742.GA6404@sm-xps> Message-ID: <34fa9615-2add-4a93-e9fc-2823340357a1@gmail.com> +openstack-operators since we need to have more operator feedback in our community-wide goals decisions. +Melvin as my elected user committee person for the same reasons as adding operators into the discussion. On 6/4/2018 3:38 PM, Matt Riedemann wrote: > On 6/4/2018 1:07 PM, Sean McGinnis wrote: >> Python 3 First >> ============== >> >> One of the things brought up in the session was picking things that bring >> excitement and are obvious benefits to deployers and users of OpenStack >> services. While this one is maybe not as immediately obvious, I think >> this >> is something that will end up helping deployers and also falls into >> the tech >> debt reduction category that will help us move quicker long term. >> >> Python 2 is going away soon, so I think we need something to help >> compel folks >> to work on making sure we are ready to transition. This will also be a >> good >> point to help switch the mindset over to Python 3 being the default used >> everywhere, with our Python 2 compatibility being just to continue legacy >> support. 
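As a concrete illustration of what "Python 3 first" tends to mean mechanically for a single repo: the tooling environments that today default to Python 2, such as the linter and docs jobs, get pinned to Python 3 in tox.ini. The snippet below is a generic, assumed example rather than any particular project's actual file, and it assumes flake8 and Sphinx are already in the test requirements.

[testenv:pep8]
basepython = python3
commands = flake8 {posargs}

[testenv:docs]
basepython = python3
commands = sphinx-build -W -b html doc/source doc/build/html

The per-interpreter unit test environments (py27, py35, and so on) stay as they are, so Python 2 coverage is not dropped; only the default interpreter for everything else changes.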
> > I still don't really know what this goal means - we have python 3 > support across the projects for the most part don't we? Based on that, > this doesn't seem like much to take an entire "goal slot" for the release. > >> >> Cold Upgrade Support >> ==================== >> >> The other suggestion in the Forum session related to upgrades was the >> addition >> of "upgrade check" CLIs for each project, and I was tempted to suggest >> that as >> my second strawman choice. For some projects that would be a very >> minimal or >> NOOP check, so it would probably be easy to complete the goal. But >> ultimately >> what I think would bring the most value would be the work on >> supporting cold >> upgrade, even if it will be more of a stretch for some projects to >> accomplish. > > I think you might be mixing two concepts here. > > The cold upgrade support, per my understanding, is about getting the > assert:supports-upgrade tag: > > https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html > > > Which to me basically means the project runs a grenade job. There was > discussion in the room about grenade not being a great tool for all > projects, but no one is working on a replacement for that, so I don't > think it's really justification at this point for *not* making it a goal. > > The "upgrade check" CLIs is a different thing though, which is more > about automating as much of the upgrade release notes as possible. See > the nova docs for examples on how we have used it: > > https://docs.openstack.org/nova/latest/cli/nova-status.html > > I'm not sure what projects you had in mind when you said, "For some > projects that would be a very minimal or NOOP check, so it would > probably be easy to complete the goal." I would expect that projects > aren't meeting the goal if they are noop'ing everything. But what can be > automated like this isn't necessarily black and white either. > >> >> Upgrades have been a major focus of discussion lately, especially as our >> operators have been trying to get closer to the latest work upstream. >> This has >> been an ongoing challenge. >> >> There has also been a lot of talk about LTS releases. We've landed on >> fast >> forward upgrade to get between several releases, but I think improving >> upgrades >> eases the way both for easier and more frequent upgrades and also >> getting to >> the point some day where maybe we can think about upgrading over several >> releases to be able to do something like an LTS to LTS upgrade. >> >> Neither one of these upgrade goals really has a clearly defined plan that >> projects can pick up now and start working on, but I think with those >> involved >> in these areas we should be able to come up with a perscriptive plan for >> projects to follow. >> >> And it would really move our fast forward upgrade story forward. > > Agreed. In the FFU Forum session at the summit I mentioned the > 'nova-status upgrade check' CLI and a lot of people in the room had > never heard of it because they are still on Mitaka before we added that > CLI (new in Ocata). But they sounded really interested in it and said > they wished other projects were doing that to help ease upgrades so they > won't be stuck on older unmaintained releases for so long. So anything > we can do to improve upgrades, including our testing for them, will help > make FFU better. 
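To make the "upgrade check" idea concrete: the pattern is a small command that runs a list of release-specific checks, prints the result of each one, and uses its exit code to tell deployment tooling whether it is safe to proceed. The sketch below is a minimal, hypothetical version of such a command; the function names and the deprecated-option example are invented for illustration and this is not the actual nova-status code (the nova docs linked above describe the real implementation).

import sys

# Result levels; the worst one wins and becomes the exit code.
SUCCESS, WARNING, FAILURE = 0, 1, 2
LABELS = {SUCCESS: 'Success', WARNING: 'Warning', FAILURE: 'Failure'}


def check_schema_migrations():
    # A real check would inspect the database or configuration here.
    # This placeholder always passes.
    return SUCCESS, None


def check_deprecated_options():
    # Example of a non-fatal finding that only warns.
    return WARNING, 'option [foo]/bar is deprecated, use [foo]/baz'


def main():
    checks = [
        ('Schema migrations', check_schema_migrations),
        ('Deprecated options', check_deprecated_options),
    ]
    worst = SUCCESS
    for name, func in checks:
        code, details = func()
        worst = max(worst, code)
        print('%s: %s' % (name, LABELS[code]))
        if details:
            print('  details: %s' % details)
    # 0 means safe to proceed; anything else needs operator attention.
    return worst


if __name__ == '__main__':
    sys.exit(main())

The real nova-status command behaves along these lines, reporting each check and signalling the overall result through the exit code, which is what makes it easy to wire into deployment tooling and CI.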
> >> >> Next Steps >> ========== >> >> I'm hoping with a strawman proposal we have a basis for debating the >> merits of >> these and getting closer to being able to officially select Stein >> goals. We >> still have some time, but I would like to avoid making late-cycle >> selections so >> teams can start planning ahead for what will need to be done in Stein. >> >> Please feel free to promote other ideas for goals. That would be a >> good way for >> us to weigh the pro's and con's between these and whatever else you >> have in >> mind. Then hopefully we can come to some consensus and work towards >> clearly >> defining what needs to be done and getting things well documented for >> teams to >> pick up as soon as they wrap up Rocky (or sooner). > > I still want to lobby for a push to move off the old per-project CLIs > and close the gap on using python-openstackclient CLI for everything, > but I'm unclear on what the roadmap is for the major refactor with the > SDK Monty was talking about in Vancouver. From a new user perspective, > the 2000 individual CLIs to get anything done in OpenStack has to be a > major turn off so we should make this a higher priority - including > modernizing our per-project documentation to give OSC examples instead > of per-project (e.g. nova boot) examples. > -- Thanks, Matt From doug at doughellmann.com Mon Jun 4 21:02:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 17:02:16 -0400 Subject: [openstack-dev] [tc] summary of joint leadership meeting from 20 May Message-ID: <1528145294-sup-9010@lrrr.local> On 20 May, 2018 the OpenStack foundation staff, board, technical committee, and user committee held a joint leadership meeting at the summit venue in Vancouver to discuss current events and issues related to the OpenStack community. Alan Clark, Melvin Hillsman, and I chaired the meeting together, though Alan did most of the coordination work during the meeting. Because the board was present, I want to give the disclaimer that this summary is from my perspective and based on my notes. It does not in any way reflect an official summary of the meeting. I will also give a further disclaimer that some of these notes may be out of order because the actual agenda [1] was changed on-site to accommodate some participants who could not be present for the entire day. We opened the day by welcoming and introducing new members of the 3 groups. Ruan HE is a new board member from Tencent. Graham Hayes, Mohammed Naser, and Zane Bitter are the 3 newly elected TC members this term. Amy Marrich and Yih Leong Sun are newly elected to the UC. In particular, the discussion of fixing a long-standing issue with a mistake in the bylaws was moved to the start of the day. The problem has to do with section 3.b.i of the appendix that describes the Technical Committee, which reads "An Individual Member is an ATC who has..." but should read "An ATC is an Individual Member who has..."[2] Everyone agreed that is clearly a mistake, but because of where it appears in the bylaws actually fixing it will require a vote of the foundation membership. The board approved a resolution authorizing the secretary to include a fix on the ballot for the next board elections. There may be other bylaws changes at the same time to address the expansion of the foundation into other strategic areas, since the current bylaws do not currently cover the governance structure for any projects other than OpenStack itself. None of those other changes have been discussed in detail, yet. 
Next, the foundation executive staff gave an update on several foundation- and event-related topics. I didn't take a lot of notes during this section, but a few things stood out for me: 1. They are going to change the user survey to be an annual event, in part to avoid survey fatigue. 2. The call for proposals for the Berlin Summit is open already. 3. After Berlin, the next summit will be in Denver, but downtown at the convention center, not the site of the PTG. During this section of the meeting Kandan Kathirvel of AT&T mentioned a desire to lower the cost of platinum membership because platinum members are already contributing significant developer resources. This was not discussed at any real length, but it may come up again at a regular board meeting, where a change like that could be considered formally. After the foundation update, Melvin and Dims gave an update on OpenLab [3], a project to test the integration of various cloud ecosystem tools, including running Kubernetes on OpenStack and various cloud management libraries that communicate with OpenStack. Next, I gave a presentation discussing contribution levels in individual projects to highlight the impact an individual contributor can have on OpenStack [4]. The purpose of raising this topic was to get input into why the community's "help wanted" areas are not seeing significant contributions. We spent a good amount of time talking about the issue, and several ideas were put forward. These ranged from us not emphasizing the importance and value of mentoring enough to not explaining to contributing companies why these gaps in the community were important from a business perspective. At the end of the discussion we had volunteers from the board (Allison Randal, Alan Clark, Prakash Ramchandran, Chris Price, and Johan Christenson) and TC (Sean, Graham, Dims, and Julia) ready to work on reframing the contribution gaps in terms of "job descriptions" that explain in more detail what is needed and what benefit those roles will provide. As mentioned in this week's TC update, Dims has started working on a template already. Next, Melvin and Matt Van Winkle gave a presentation on the work the user committee has been doing [5]. They covered the status of the UC-led working groups and both short and long term goals. Next, the foundation staff covered their plans for in-person meetings during 2019. The most significant point of interest to the contributor community from this section of the meeting was the apparently overwhelming interest from companies employing contributors, as well as 2/3 of the contributors to recent releases who responded to the survey, to bring the PTG and summit back together as a single event. This had come up at the meeting in Dublin as well, but in the time since then the discussions progressed and it looks much more likely that we will at least try re-combining the two events. We discussed several reasons, including travel expense, travel visa difficulties, time away from home and family, and sponsorship of the events themselves. There are a few plans under consideration, and no firm decisions have been made, yet. We discussed a strawman proposal to combine the summit and PTG in April, in Denver, that would look much like our older Summit events (from the Folsom/Grizzly time frame) with a few days of conference and a few days of design summit, with some overlap in the middle of the week. The dates, overlap, and arrangements will depend on venue availability. 
The remainder of the joint meeting was spent on revising the foundation mission statement [6]. This isn't the OpenStack mission statement, but the one for the foundation itself. The current mission is: "The OpenStack Foundation is an independent body providing shared resources to help achieve the OpenStack Mission by Protecting, Empowering, and Promoting OpenStack software and the community around it, including users, developers and the entire ecosystem." We considered 7 perspectives on what the mission statement should include 1. Who do we serve? 2. What do we wish to achieve? 3. What are the attributes of what we seek to achieve? 4. What needs do we fulfill? 5. What is the market scope? 6. What makes us unique? 7. and what else is missing? The exercise was very tactile, and involved moving around the room and placing post-it notes with short phrases into groups under each of those sections. As such, it didn't lend itself well to a summary for someone who doesn't have all of the bits of paper involved. Since Alan collected those, I imagine we will see a summary from him before the next joint leadership meeting. Following the joint meeting, the board held a meeting in which they discussed committee updates. I missed this portion of the day due to hallway conversations about other topics. Doug [1] https://wiki.openstack.org/wiki/Governance/Foundation/20May2018BoardMeeting [2] https://www.openstack.org/legal/technical-committee-member-policy/ [3] https://docs.google.com/presentation/d/12y6iTTvff4fzHiN31KApoO8JayZOsJySZugtLBc6XnE/edit#slide=id.g39da8667de_0_6 [4] https://docs.google.com/presentation/d/1wsPfeGC83I5C8s6-9tlsxz49bBRY5PHdIdXGxutVqgA/edit#slide=id.p [5] https://docs.google.com/presentation/d/1-xQApxg3xY3jBagID3PO1MCJiFYPaAw8yHmFdtMj8Pc/edit#slide=id.g33768e8068_2_244 [6] https://etherpad.openstack.org/p/UnofficialBoardNotes-May20-2018 From arnaud.morin at gmail.com Mon Jun 4 21:05:14 2018 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 4 Jun 2018 23:05:14 +0200 Subject: [openstack-dev] [tripleo][puppet] Hello all, puppet modules In-Reply-To: <9300F696-8743-46DF-8E73-EC4A78DD12B2@cern.ch> References: <9300F696-8743-46DF-8E73-EC4A78DD12B2@cern.ch> Message-ID: <20180604210514.aroacbpmmpb6n23w@grocaca> Hey, OVH is also using them as well as some custom ansible playbooks to manage the deployment. But as for red had, the configuration part is handled by puppet. We are also doing some upstream contribution from time to time. For us, the puppet modules are very stable and works very fine. Cheers, -- Arnaud Morin On 31.05.18 - 15:36, Tim Bell wrote: > CERN use these puppet modules too and contributes any missing functionality we need upstream. > > Tim > > From: Alex Schultz > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 31 May 2018 at 16:24 > To: "OpenStack Development Mailing List (not for usage questions)" > Subject: Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules > > > > On Wed, May 30, 2018 at 3:18 PM, Remo Mattei > wrote: > Hello all, > I have talked to several people about this and I would love to get this finalized once and for all. I have checked the OpenStack puppet modules which are mostly developed by the Red Hat team, as of right now, TripleO is using a combo of Ansible and puppet to deploy but in the next couple of releases, the plan is to move away from the puppet option. 
> > > So the OpenStack puppet modules are maintained by others other than Red Hat, however we have been a major contributor since TripleO has relied on them for some time. That being said, as TripleO has migrated to containers built with Kolla, we've adapted our deployment mechanism to include Ansible and we really only use puppet for configuration generation. Our goal for TripleO is to eventually be fully containerized which isn't something the puppet modules support today and I'm not sure is on the road map. > > > So consequently, what will be the plan of TripleO and the puppet modules? > > > As TripleO moves forward, we may continue to support deployments via puppet modules but the amount of testing that we'll be including upstream will mostly exercise external Ansible integrations (example, ceph-ansible, openshift-ansible, etc) and Kolla containers. As of Queens, most of the services deployed via TripleO are deployed via containers and not on baremetal via puppet. We no longer support deploying OpenStack services on baremetal via the puppet modules and will likely be removing this support in the code in Stein. The end goal will likely be moving away from puppet modules within TripleO if we can solve the backwards compatibility and configuration generation via other mechanism. We will likely recommend leveraging external Ansible role calls rather than including puppet modules and using those to deploy services that are not inherently supported by TripleO. I can't really give a time frame as we are still working out the details, but it is likely that over the next several cycles we'll see a reduction in the dependence of puppet in TripleO and an increase in leveraging available Ansible roles. > > > From the Puppet OpenStack standpoint, others are stepping up to continue to ensure the modules are available and I know I'll keep an eye on them for as long as TripleO leverages some of the functionality. The Puppet OpenStack modules are very stable but I'm not sure without additional community folks stepping up that there will be support for newer functionality being added by the various OpenStack projects. I'm sure others can chime in here on their usage/plans for the Puppet OpenStack modules. > > > Hope that helps. > > > Thanks, > -Alex > > > Thanks > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Mon Jun 4 21:17:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 17:17:18 -0400 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: References: <20180604180742.GA6404@sm-xps> Message-ID: <1528146144-sup-2183@lrrr.local> Excerpts from Matt Riedemann's message of 2018-06-04 15:38:48 -0500: > On 6/4/2018 1:07 PM, Sean McGinnis wrote: > > Python 3 First > > ============== > > > > One of the things brought up in the session was picking things that bring > > excitement and are obvious benefits to deployers and users of OpenStack > > services. While this one is maybe not as immediately obvious, I think this > > is something that will end up helping deployers and also falls into the tech > > debt reduction category that will help us move quicker long term. > > > > Python 2 is going away soon, so I think we need something to help compel folks > > to work on making sure we are ready to transition. This will also be a good > > point to help switch the mindset over to Python 3 being the default used > > everywhere, with our Python 2 compatibility being just to continue legacy > > support. > > I still don't really know what this goal means - we have python 3 > support across the projects for the most part don't we? Based on that, > this doesn't seem like much to take an entire "goal slot" for the release. We still run docs, linters, functional tests, and other jobs under python 2 by default. Perhaps a better framing would be to call this "Python 3 by default", because the point is to change all of those jobs to use Python 3, and to set up all future jobs using Python 3 unless we specifically need to run them under Python 2. This seems like a small thing, but when we did it for Oslo we did find code issues because the linters apply different rules and we did find documentation build issues. The fixes were all straightforward, so I don't expect it to mean a lot of work, but it's more than a single patch per project. I also think using a goal is a good way to start shifting the mindset of the contributor base into this new perspective. > > > > Cold Upgrade Support > > ==================== > > > > The other suggestion in the Forum session related to upgrades was the addition > > of "upgrade check" CLIs for each project, and I was tempted to suggest that as > > my second strawman choice. For some projects that would be a very minimal or > > NOOP check, so it would probably be easy to complete the goal. But ultimately > > what I think would bring the most value would be the work on supporting cold > > upgrade, even if it will be more of a stretch for some projects to accomplish. > > I think you might be mixing two concepts here. > > The cold upgrade support, per my understanding, is about getting the > assert:supports-upgrade tag: > > https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html > > Which to me basically means the project runs a grenade job. There was > discussion in the room about grenade not being a great tool for all > projects, but no one is working on a replacement for that, so I don't > think it's really justification at this point for *not* making it a goal. > > The "upgrade check" CLIs is a different thing though, which is more > about automating as much of the upgrade release notes as possible. 
See > the nova docs for examples on how we have used it: > > https://docs.openstack.org/nova/latest/cli/nova-status.html > > I'm not sure what projects you had in mind when you said, "For some > projects that would be a very minimal or NOOP check, so it would > probably be easy to complete the goal." I would expect that projects > aren't meeting the goal if they are noop'ing everything. But what can be > automated like this isn't necessarily black and white either. What I remember from the discussion in the room was that not all projects are going to have anything to do by hand that would block an upgrade, but we still want all projects to have the test command. That means many of those commands could potentially be no-ops, right? Unless they're all going to do something like verify the schema has been updated somehow? > > > > > Upgrades have been a major focus of discussion lately, especially as our > > operators have been trying to get closer to the latest work upstream. This has > > been an ongoing challenge. > > > > There has also been a lot of talk about LTS releases. We've landed on fast > > forward upgrade to get between several releases, but I think improving upgrades > > eases the way both for easier and more frequent upgrades and also getting to > > the point some day where maybe we can think about upgrading over several > > releases to be able to do something like an LTS to LTS upgrade. > > > > Neither one of these upgrade goals really has a clearly defined plan that > > projects can pick up now and start working on, but I think with those involved > > in these areas we should be able to come up with a perscriptive plan for > > projects to follow. > > > > And it would really move our fast forward upgrade story forward. > > Agreed. In the FFU Forum session at the summit I mentioned the > 'nova-status upgrade check' CLI and a lot of people in the room had > never heard of it because they are still on Mitaka before we added that > CLI (new in Ocata). But they sounded really interested in it and said > they wished other projects were doing that to help ease upgrades so they > won't be stuck on older unmaintained releases for so long. So anything > we can do to improve upgrades, including our testing for them, will help > make FFU better. > > > > > Next Steps > > ========== > > > > I'm hoping with a strawman proposal we have a basis for debating the merits of > > these and getting closer to being able to officially select Stein goals. We > > still have some time, but I would like to avoid making late-cycle selections so > > teams can start planning ahead for what will need to be done in Stein. > > > > Please feel free to promote other ideas for goals. That would be a good way for > > us to weigh the pro's and con's between these and whatever else you have in > > mind. Then hopefully we can come to some consensus and work towards clearly > > defining what needs to be done and getting things well documented for teams to > > pick up as soon as they wrap up Rocky (or sooner). > > I still want to lobby for a push to move off the old per-project CLIs > and close the gap on using python-openstackclient CLI for everything, > but I'm unclear on what the roadmap is for the major refactor with the > SDK Monty was talking about in Vancouver. 
From a new user perspective, > the 2000 individual CLIs to get anything done in OpenStack has to be a > major turn off so we should make this a higher priority - including > modernizing our per-project documentation to give OSC examples instead > of per-project (e.g. nova boot) examples. I support this one, too. We're going to need more contributors working on the CLI team, I think, to make it happen, though. Dean is way over his capacity, I'm basically not present, and we've lost Steve. That leaves Akihiro and Rui to do most of the review work, which isn't enough. Doug From doug at doughellmann.com Mon Jun 4 21:20:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 17:20:06 -0400 Subject: [openstack-dev] Forum Recap - Stein Release Goals In-Reply-To: <54dcfa53-c7b1-1e88-9c0e-19920b169fa7@gmail.com> References: <20180531205942.GA18176@sm-xps> <54dcfa53-c7b1-1e88-9c0e-19920b169fa7@gmail.com> Message-ID: <1528147141-sup-7778@lrrr.local> Excerpts from Matt Riedemann's message of 2018-06-04 15:26:28 -0500: > On 5/31/2018 3:59 PM, Sean McGinnis wrote: > > We were also able to already identify some possible goals for the T cycle: > > > > - Move all CLIs to python-openstackclient > > My understanding was this is something we could do for Stein provided > some heavy refactoring in the SDK and OSC got done first in Rocky. Or is > that being too aggressive? > See my comments on the other part of the thread, but I think this is too optimistic until we add a couple of people to the review team on OSC. Others from the OSC team who have a better perspective on how much work is actually left may have a different opinion though? Doug From amy at demarco.com Mon Jun 4 21:28:28 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 4 Jun 2018 14:28:28 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> Message-ID: Zane, Not sure it is to be honest.:) Amy (spotz) On Mon, Jun 4, 2018 at 7:29 AM, Zane Bitter wrote: > On 04/06/18 10:19, Amy Marrich wrote: > >> Zane, >> >> I'll read in more detail, but do we want to add rollcall-vote? >> > > Is it used anywhere other than in the governance repo? We certainly could > add it, but it didn't seem like a top priority. > > - ZB > > Amy (spotz) >> >> >> On Mon, Jun 4, 2018 at 7:13 AM, Zane Bitter > zbitter at redhat.com>> wrote: >> >> On 31/05/18 14:35, Julia Kreger wrote: >> >> Back to the topic of nitpicking! >> >> I virtually sat down with Doug today and we hammered out the >> positive >> aspects that we feel like are the things that we as a community >> want >> to see as part of reviews coming out of this effort. The >> principles >> change[1] in governance has been updated as a result. >> >> I think we are at a point where we have to state high level >> principles, and then also update guidelines or other context >> providing >> documentation to re-enforce some of items covered in this >> discussion... not just to educate new contributors, but to serve >> as a >> checkpoint for existing reviewers when making the decision as to >> how >> to vote change set. The question then becomes where would such >> guidelines or documentation best fit? >> >> >> I think the contributor guide is the logical place for it. 
Kendall >> pointed out this existing section: >> >> https://docs.openstack.org/contributors/code-and-documentati >> on/using-gerrit.html#reviewing-changes >> > ion/using-gerrit.html#reviewing-changes> >> >> It could go in there, or perhaps we separate out the parts about >> when to use which review scores into a separate page from the >> mechanics of how to use Gerrit. >> >> Should we explicitly detail the >> cause/effect that occurs? Should we convey contributor >> perceptions, or >> maybe even just link to this thread as there has been a massive >> amount >> of feedback raising valid cases, points, and frustrations. >> >> Personally, I'd lean towards a blended approach, but the question >> of >> where is one I'm unsure of. Thoughts? >> >> >> Let's crowdsource a set of heuristics that reviewers and >> contributors should keep in mind when they're reviewing or having >> their changes reviewed. I made a start on collecting ideas from this >> and past threads, as well as my own reviewing experience, into a >> document that I've presumptuously titled "How to Review Changes the >> OpenStack Way" (but might be more accurately called "The Frank >> Sinatra Guide to Code Review" at the moment): >> >> https://etherpad.openstack.org/p/review-the-openstack-way >> >> >> It's in an etherpad to make it easier for everyone to add their >> suggestions and comments (folks in #openstack-tc have made some >> tweaks already). After a suitable interval has passed to collect >> feedback, I'll turn this into a contributor guide change. >> >> Have at it! >> >> cheers, >> Zane. >> >> >> -Julia >> >> [1]: https://review.openstack.org/#/c/570940/ >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Jun 4 21:41:10 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 4 Jun 2018 17:41:10 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1527960022-sup-7990@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> Message-ID: On 02/06/18 13:23, Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: >> On 01/06/18 12:18, Doug Hellmann wrote: > > [snip] > >>> Is that rule a sign of a healthy team dynamic, that we would want >>> to spread to the whole community? >> >> Yeah, this part I am pretty unsure about too. For some projects it >> probably is. 
For others it may just be an unnecessary obstacle, although >> I don't think it'd actually be *un*healthy for any project, assuming a >> big enough and diverse enough team (which should be a goal for the whole >> community). > > It feels like we would be saying that we don't trust 2 core reviewers > from the same company to put the project's goals or priorities over > their employer's. And that doesn't feel like an assumption I would > want us to encourage through a tag meant to show the health of the > project. Another way to look at it would be that the perception of a conflict of interest can be just as damaging to a community as somebody actually acting on a conflict of interest, and thus having clearly-defined rules to manage conflicts of interest helps protect everybody (and especially the people who could be perceived to have a conflict of interest but aren't, in fact, acting on it). Apparently enough people see it the way you described that this is probably not something we want to actively spread to other projects at the moment. The appealing part of the idea to me was that we could stop pretending that the results of our mindless script are objective - despite the fact that both the subset of information to rely on and the limits in the script were chosen by someone, in an essentially arbitrary way - and let the decision rest on the expertise of those who are closest to the project (and therefore have the most information), while aligning their incentives with the needs of users so that they're not being asked to keep their own score. I'm always on the lookout for opportunities to do that, so I felt like I had to at least float it. The alignment goes both ways though, and if we'd be creating an incentive to extend the coverage of a policy that is already controversial then this is not the way forward. cheers, Zane. From doug at doughellmann.com Mon Jun 4 21:52:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 04 Jun 2018 17:52:28 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> Message-ID: <1528148963-sup-59@lrrr.local> Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400: > On 02/06/18 13:23, Doug Hellmann wrote: > > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: > >> On 01/06/18 12:18, Doug Hellmann wrote: > > > > [snip] > > > >>> Is that rule a sign of a healthy team dynamic, that we would want > >>> to spread to the whole community? > >> > >> Yeah, this part I am pretty unsure about too. For some projects it > >> probably is. For others it may just be an unnecessary obstacle, although > >> I don't think it'd actually be *un*healthy for any project, assuming a > >> big enough and diverse enough team (which should be a goal for the whole > >> community). > > > > It feels like we would be saying that we don't trust 2 core reviewers > > from the same company to put the project's goals or priorities over > > their employer's. And that doesn't feel like an assumption I would > > want us to encourage through a tag meant to show the health of the > > project. 
> > Another way to look at it would be that the perception of a conflict of > interest can be just as damaging to a community as somebody actually > acting on a conflict of interest, and thus having clearly-defined rules > to manage conflicts of interest helps protect everybody (and especially > the people who could be perceived to have a conflict of interest but > aren't, in fact, acting on it). That's a reasonable perspective. Thanks for expanding on your original statement. > Apparently enough people see it the way you described that this is > probably not something we want to actively spread to other projects at > the moment. I am still curious to know which teams have the policy. If it is more widespread than I realized, maybe it's reasonable to extend it and use it as the basis for a health check after all. > The appealing part of the idea to me was that we could stop pretending > that the results of our mindless script are objective - despite the fact > that both the subset of information to rely on and the limits in the > script were chosen by someone, in an essentially arbitrary way - and let > the decision rest on the expertise of those who are closest to the > project (and therefore have the most information), while aligning their > incentives with the needs of users so that they're not being asked to > keep their own score. I'm always on the lookout for opportunities to do > that, so I felt like I had to at least float it. > > The alignment goes both ways though, and if we'd be creating an > incentive to extend the coverage of a policy that is already > controversial then this is not the way forward. > > cheers, > Zane. > From sean.mcginnis at gmx.com Mon Jun 4 22:04:26 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 4 Jun 2018 17:04:26 -0500 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1528148963-sup-59@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <1528148963-sup-59@lrrr.local> Message-ID: I am still curious to know which teams have the policy. If it is more > widespread than I realized, maybe it's reasonable to extend it and use > it as the basis for a health check after all. > > I think it's been an unwritten "guideline" in Cinder, but not a hard rule. From sean.mcginnis at gmx.com Mon Jun 4 22:13:32 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 4 Jun 2018 17:13:32 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <1528146144-sup-2183@lrrr.local> References: <20180604180742.GA6404@sm-xps> <1528146144-sup-2183@lrrr.local> Message-ID: <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> On 06/04/2018 04:17 PM, Doug Hellmann wrote: > Excerpts from Matt Riedemann's message of 2018-06-04 15:38:48 -0500: >> On 6/4/2018 1:07 PM, Sean McGinnis wrote: >>> Python 3 First >>> ============== >>> >>> One of the things brought up in the session was picking things that bring >>> excitement and are obvious benefits to deployers and users of OpenStack >>> services. While this one is maybe not as immediately obvious, I think this >>> is something that will end up helping deployers and also falls into the tech >>> debt reduction category that will help us move quicker long term. >>> >>> Python 2 is going away soon, so I think we need something to help compel folks >>> to work on making sure we are ready to transition. 
This will also be a good >>> point to help switch the mindset over to Python 3 being the default used >>> everywhere, with our Python 2 compatibility being just to continue legacy >>> support. >> I still don't really know what this goal means - we have python 3 >> support across the projects for the most part don't we? Based on that, >> this doesn't seem like much to take an entire "goal slot" for the release. > We still run docs, linters, functional tests, and other jobs under > python 2 by default. Perhaps a better framing would be to call this > "Python 3 by default", because the point is to change all of those jobs > to use Python 3, and to set up all future jobs using Python 3 unless we > specifically need to run them under Python 2. > > This seems like a small thing, but when we did it for Oslo we did find > code issues because the linters apply different rules and we did find > documentation build issues. The fixes were all straightforward, so I > don't expect it to mean a lot of work, but it's more than a single patch > per project. I also think using a goal is a good way to start shifting > the mindset of the contributor base into this new perspective. Yes, that's probably a better way to word it to properly convey the goal. Basically, all things running under Python3, project code and tooling, as the default unless specifically geared towards Python2. >>> Cold Upgrade Support >>> ==================== >>> >>> The other suggestion in the Forum session related to upgrades was the addition >>> of "upgrade check" CLIs for each project, and I was tempted to suggest that as >>> my second strawman choice. For some projects that would be a very minimal or >>> NOOP check, so it would probably be easy to complete the goal. But ultimately >>> what I think would bring the most value would be the work on supporting cold >>> upgrade, even if it will be more of a stretch for some projects to accomplish. >> I think you might be mixing two concepts here. Not so much mixing as discussing the two and the reason why I personally thought the one was a better goal, if you read through what was said about it. >> >> The cold upgrade support, per my understanding, is about getting the >> assert:supports-upgrade tag: >> >> https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html >> >> Which to me basically means the project runs a grenade job. There was >> discussion in the room about grenade not being a great tool for all >> projects, but no one is working on a replacement for that, so I don't >> think it's really justification at this point for *not* making it a goal. >> >> The "upgrade check" CLIs is a different thing though, which is more >> about automating as much of the upgrade release notes as possible. See >> the nova docs for examples on how we have used it: >> >> https://docs.openstack.org/nova/latest/cli/nova-status.html >> >> I'm not sure what projects you had in mind when you said, "For some >> projects that would be a very minimal or NOOP check, so it would >> probably be easy to complete the goal." I would expect that projects >> aren't meeting the goal if they are noop'ing everything. But what can be >> automated like this isn't necessarily black and white either. > What I remember from the discussion in the room was that not all > projects are going to have anything to do by hand that would block > an upgrade, but we still want all projects to have the test command. > That means many of those commands could potentially be no-ops, > right? 
Unless they're all going to do something like verify the > schema has been updated somehow? Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would check here. We don't have to see if placement has been set up or if cell0 has been configured. Maybe once we have the facility in place we would find some things worth checking, but at present I don't know what that would be. Which also makes me wonder, should this be an oslo thing that projects just plug in to for their specific checks? >>> Upgrades have been a major focus of discussion lately, especially as our >>> operators have been trying to get closer to the latest work upstream. This has >>> been an ongoing challenge. >>> >>> There has also been a lot of talk about LTS releases. We've landed on fast >>> forward upgrade to get between several releases, but I think improving upgrades >>> eases the way both for easier and more frequent upgrades and also getting to >>> the point some day where maybe we can think about upgrading over several >>> releases to be able to do something like an LTS to LTS upgrade. >>> >>> Neither one of these upgrade goals really has a clearly defined plan that >>> projects can pick up now and start working on, but I think with those involved >>> in these areas we should be able to come up with a perscriptive plan for >>> projects to follow. >>> >>> And it would really move our fast forward upgrade story forward. >> Agreed. In the FFU Forum session at the summit I mentioned the >> 'nova-status upgrade check' CLI and a lot of people in the room had >> never heard of it because they are still on Mitaka before we added that >> CLI (new in Ocata). But they sounded really interested in it and said >> they wished other projects were doing that to help ease upgrades so they >> won't be stuck on older unmaintained releases for so long. So anything >> we can do to improve upgrades, including our testing for them, will help >> make FFU better. >> >>> Next Steps >>> ========== >>> >>> I'm hoping with a strawman proposal we have a basis for debating the merits of >>> these and getting closer to being able to officially select Stein goals. We >>> still have some time, but I would like to avoid making late-cycle selections so >>> teams can start planning ahead for what will need to be done in Stein. >>> >>> Please feel free to promote other ideas for goals. That would be a good way for >>> us to weigh the pro's and con's between these and whatever else you have in >>> mind. Then hopefully we can come to some consensus and work towards clearly >>> defining what needs to be done and getting things well documented for teams to >>> pick up as soon as they wrap up Rocky (or sooner). >> I still want to lobby for a push to move off the old per-project CLIs >> and close the gap on using python-openstackclient CLI for everything, >> but I'm unclear on what the roadmap is for the major refactor with the >> SDK Monty was talking about in Vancouver. From a new user perspective, >> the 2000 individual CLIs to get anything done in OpenStack has to be a >> major turn off so we should make this a higher priority - including >> modernizing our per-project documentation to give OSC examples instead >> of per-project (e.g. nova boot) examples. > I support this one, too. We're going to need more contributors > working on the CLI team, I think, to make it happen, though. Dean > is way over his capacity, I'm basically not present, and we've lost > Steve. That leaves Akihiro and Rui to do most of the review work, > which isn't enough. 
> > Doug I was tempted to go with the OSC one too, but I was afraid resource constraints would make that unlikely. I haven't checked lately, but last I heard neither Cinder v3 nor microversions were supported yet. Maybe this has changed, but my impression is that a lot of work needs to be done before we can reasonably expect this to be a goal that we have a chance of getting near completion in a cycle. From tpb at dyncloud.net Mon Jun 4 22:16:08 2018 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 4 Jun 2018 18:16:08 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1528148963-sup-59@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <1528148963-sup-59@lrrr.local> Message-ID: <20180604221608.nzqo6jdauq4l26ju@barron.net> On 04/06/18 17:52 -0400, Doug Hellmann wrote: >Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400: >> On 02/06/18 13:23, Doug Hellmann wrote: >> > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: >> >> On 01/06/18 12:18, Doug Hellmann wrote: >> > >> > [snip] >> > >> >>> Is that rule a sign of a healthy team dynamic, that we would want >> >>> to spread to the whole community? >> >> >> >> Yeah, this part I am pretty unsure about too. For some projects it >> >> probably is. For others it may just be an unnecessary obstacle, although >> >> I don't think it'd actually be *un*healthy for any project, assuming a >> >> big enough and diverse enough team (which should be a goal for the whole >> >> community). >> > >> > It feels like we would be saying that we don't trust 2 core reviewers >> > from the same company to put the project's goals or priorities over >> > their employer's. And that doesn't feel like an assumption I would >> > want us to encourage through a tag meant to show the health of the >> > project. >> >> Another way to look at it would be that the perception of a conflict of >> interest can be just as damaging to a community as somebody actually >> acting on a conflict of interest, and thus having clearly-defined rules >> to manage conflicts of interest helps protect everybody (and especially >> the people who could be perceived to have a conflict of interest but >> aren't, in fact, acting on it). > >That's a reasonable perspective. Thanks for expanding on your original >statement. > >> Apparently enough people see it the way you described that this is >> probably not something we want to actively spread to other projects at >> the moment. > >I am still curious to know which teams have the policy. If it is more >widespread than I realized, maybe it's reasonable to extend it and use >it as the basis for a health check after all. Just some data. Manila has the policy (except for very trivial or urgent commits, where one +2 +W can be sufficient). When the project originated NetApp cores and a Mirantis core who was a contractor for NetApp predominated. I doubt that there was any perception of biased decisions -- the PTL at the time, Ben Swartzlander, is the kind of guy who is quite good at doing what he thinks is best for the project and not listening to any folks within his own company who might suggest otherwise, not that I have any evidence of anything like that either :). But at some point someone suggested that our +2 +W rule, already in place, be augmented with a requirement that the two +2s come from different affiliations and the rule was adopted. 
So far that seems to work OK though affiliations have shifted and NetApp cores are no longer quantitatively dominant in the project. There are three companies with two cores and so far as I can see they don't tend to vote together more than any other two cores, on the one hand, but on the other hand it isn't hard to get another core +2 if a change is ready to be merged. None of this is intended as an argument that this rule be expanded to other projects, it's just data as I said. -- Tom From sean.mcginnis at gmx.com Mon Jun 4 22:19:55 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 4 Jun 2018 17:19:55 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> References: <20180604180742.GA6404@sm-xps> <1528146144-sup-2183@lrrr.local> <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> Message-ID: <240736b2-d066-829f-55c0-3e46ffdebc0b@gmx.com> Adding back the openstack-operators list that Matt added. On 06/04/2018 05:13 PM, Sean McGinnis wrote: > On 06/04/2018 04:17 PM, Doug Hellmann wrote: >> Excerpts from Matt Riedemann's message of 2018-06-04 15:38:48 -0500: >>> On 6/4/2018 1:07 PM, Sean McGinnis wrote: >>>> Python 3 First >>>> ============== >>>> >>>> One of the things brought up in the session was picking things that >>>> bring >>>> excitement and are obvious benefits to deployers and users of >>>> OpenStack >>>> services. While this one is maybe not as immediately obvious, I >>>> think this >>>> is something that will end up helping deployers and also falls into >>>> the tech >>>> debt reduction category that will help us move quicker long term. >>>> >>>> Python 2 is going away soon, so I think we need something to help >>>> compel folks >>>> to work on making sure we are ready to transition. This will also >>>> be a good >>>> point to help switch the mindset over to Python 3 being the default >>>> used >>>> everywhere, with our Python 2 compatibility being just to continue >>>> legacy >>>> support. >>> I still don't really know what this goal means - we have python 3 >>> support across the projects for the most part don't we? Based on that, >>> this doesn't seem like much to take an entire "goal slot" for the >>> release. >> We still run docs, linters, functional tests, and other jobs under >> python 2 by default. Perhaps a better framing would be to call this >> "Python 3 by default", because the point is to change all of those jobs >> to use Python 3, and to set up all future jobs using Python 3 unless we >> specifically need to run them under Python 2. >> >> This seems like a small thing, but when we did it for Oslo we did find >> code issues because the linters apply different rules and we did find >> documentation build issues. The fixes were all straightforward, so I >> don't expect it to mean a lot of work, but it's more than a single patch >> per project. I also think using a goal is a good way to start shifting >> the mindset of the contributor base into this new perspective. > Yes, that's probably a better way to word it to properly convey the goal. > Basically, all things running under Python3, project code and tooling, as > the default unless specifically geared towards Python2. > >>>> Cold Upgrade Support >>>> ==================== >>>> >>>> The other suggestion in the Forum session related to upgrades was >>>> the addition >>>> of "upgrade check" CLIs for each project, and I was tempted to >>>> suggest that as >>>> my second strawman choice. 
For some projects that would be a very >>>> minimal or >>>> NOOP check, so it would probably be easy to complete the goal. But >>>> ultimately >>>> what I think would bring the most value would be the work on >>>> supporting cold >>>> upgrade, even if it will be more of a stretch for some projects to >>>> accomplish. >>> I think you might be mixing two concepts here. > Not so much mixing as discussing the two and the reason why I > personally thought > the one was a better goal, if you read through what was said about it. >>> >>> The cold upgrade support, per my understanding, is about getting the >>> assert:supports-upgrade tag: >>> >>> https://governance.openstack.org/tc/reference/tags/assert_supports-upgrade.html >>> >>> >>> Which to me basically means the project runs a grenade job. There was >>> discussion in the room about grenade not being a great tool for all >>> projects, but no one is working on a replacement for that, so I don't >>> think it's really justification at this point for *not* making it a >>> goal. >>> >>> The "upgrade check" CLIs is a different thing though, which is more >>> about automating as much of the upgrade release notes as possible. See >>> the nova docs for examples on how we have used it: >>> >>> https://docs.openstack.org/nova/latest/cli/nova-status.html >>> >>> I'm not sure what projects you had in mind when you said, "For some >>> projects that would be a very minimal or NOOP check, so it would >>> probably be easy to complete the goal." I would expect that projects >>> aren't meeting the goal if they are noop'ing everything. But what >>> can be >>> automated like this isn't necessarily black and white either. >> What I remember from the discussion in the room was that not all >> projects are going to have anything to do by hand that would block >> an upgrade, but we still want all projects to have the test command. >> That means many of those commands could potentially be no-ops, >> right? Unless they're all going to do something like verify the >> schema has been updated somehow? > > Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would > check here. We don't have to see if placement has been set up or if cell0 > has been configured. Maybe once we have the facility in place we would > find some things worth checking, but at present I don't know what that > would be. > > Which also makes me wonder, should this be an oslo thing that projects > just plug in to for their specific checks? > >>>> Upgrades have been a major focus of discussion lately, especially >>>> as our >>>> operators have been trying to get closer to the latest work >>>> upstream. This has >>>> been an ongoing challenge. >>>> >>>> There has also been a lot of talk about LTS releases. We've landed >>>> on fast >>>> forward upgrade to get between several releases, but I think >>>> improving upgrades >>>> eases the way both for easier and more frequent upgrades and also >>>> getting to >>>> the point some day where maybe we can think about upgrading over >>>> several >>>> releases to be able to do something like an LTS to LTS upgrade. >>>> >>>> Neither one of these upgrade goals really has a clearly defined >>>> plan that >>>> projects can pick up now and start working on, but I think with >>>> those involved >>>> in these areas we should be able to come up with a perscriptive >>>> plan for >>>> projects to follow. >>>> >>>> And it would really move our fast forward upgrade story forward. >>> Agreed. 
In the FFU Forum session at the summit I mentioned the >>> 'nova-status upgrade check' CLI and a lot of people in the room had >>> never heard of it because they are still on Mitaka before we added that >>> CLI (new in Ocata). But they sounded really interested in it and said >>> they wished other projects were doing that to help ease upgrades so >>> they >>> won't be stuck on older unmaintained releases for so long. So anything >>> we can do to improve upgrades, including our testing for them, will >>> help >>> make FFU better. >>> >>>> Next Steps >>>> ========== >>>> >>>> I'm hoping with a strawman proposal we have a basis for debating >>>> the merits of >>>> these and getting closer to being able to officially select Stein >>>> goals. We >>>> still have some time, but I would like to avoid making late-cycle >>>> selections so >>>> teams can start planning ahead for what will need to be done in Stein. >>>> >>>> Please feel free to promote other ideas for goals. That would be a >>>> good way for >>>> us to weigh the pro's and con's between these and whatever else you >>>> have in >>>> mind. Then hopefully we can come to some consensus and work towards >>>> clearly >>>> defining what needs to be done and getting things well documented >>>> for teams to >>>> pick up as soon as they wrap up Rocky (or sooner). >>> I still want to lobby for a push to move off the old per-project CLIs >>> and close the gap on using python-openstackclient CLI for everything, >>> but I'm unclear on what the roadmap is for the major refactor with the >>> SDK Monty was talking about in Vancouver. From a new user perspective, >>> the 2000 individual CLIs to get anything done in OpenStack has to be a >>> major turn off so we should make this a higher priority - including >>> modernizing our per-project documentation to give OSC examples instead >>> of per-project (e.g. nova boot) examples. >> I support this one, too. We're going to need more contributors >> working on the CLI team, I think, to make it happen, though. Dean >> is way over his capacity, I'm basically not present, and we've lost >> Steve. That leaves Akihiro and Rui to do most of the review work, >> which isn't enough. >> >> Doug > I was tempted to go with the OSC one too, but I was afraid resource > constraints would make that unlikely. I haven't checked lately, but > last I heard neither Cinder v3 nor microversions were supported yet. > > Maybe this has changed, but my impression is that a lot of work needs > to be done before we can reasonably expect this to be a goal that we > have a chance of getting near completion in a cycle. 
> > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Mon Jun 4 22:25:33 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 4 Jun 2018 18:25:33 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1528148963-sup-59@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <1528148963-sup-59@lrrr.local> Message-ID: On 04/06/18 17:52, Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400: >> On 02/06/18 13:23, Doug Hellmann wrote: >>> Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: >>>> On 01/06/18 12:18, Doug Hellmann wrote: >>> >>> [snip] >>> >>>>> Is that rule a sign of a healthy team dynamic, that we would want >>>>> to spread to the whole community? >>>> >>>> Yeah, this part I am pretty unsure about too. For some projects it >>>> probably is. For others it may just be an unnecessary obstacle, although >>>> I don't think it'd actually be *un*healthy for any project, assuming a >>>> big enough and diverse enough team (which should be a goal for the whole >>>> community). >>> >>> It feels like we would be saying that we don't trust 2 core reviewers >>> from the same company to put the project's goals or priorities over >>> their employer's. And that doesn't feel like an assumption I would >>> want us to encourage through a tag meant to show the health of the >>> project. >> >> Another way to look at it would be that the perception of a conflict of >> interest can be just as damaging to a community as somebody actually >> acting on a conflict of interest, and thus having clearly-defined rules >> to manage conflicts of interest helps protect everybody (and especially >> the people who could be perceived to have a conflict of interest but >> aren't, in fact, acting on it). > > That's a reasonable perspective. Thanks for expanding on your original > statement. > >> Apparently enough people see it the way you described that this is >> probably not something we want to actively spread to other projects at >> the moment. > > I am still curious to know which teams have the policy. If it is more > widespread than I realized, maybe it's reasonable to extend it and use > it as the basis for a health check after all. At least Nova still does, judging by this comment from Matt Riedemann in January: "For the record, it's not cool for two cores from the same company to be the sole +2s on a change contributed by the same company. Pretty standard operating procedure." (on https://review.openstack.org/#/c/523958/18) When this thread started I looked for somewhere that was documented more permanently, but I didn't find it. >> The appealing part of the idea to me was that we could stop pretending >> that the results of our mindless script are objective - despite the fact >> that both the subset of information to rely on and the limits in the >> script were chosen by someone, in an essentially arbitrary way - and let >> the decision rest on the expertise of those who are closest to the >> project (and therefore have the most information), while aligning their >> incentives with the needs of users so that they're not being asked to >> keep their own score. 
I'm always on the lookout for opportunities to do >> that, so I felt like I had to at least float it. >> >> The alignment goes both ways though, and if we'd be creating an >> incentive to extend the coverage of a policy that is already >> controversial then this is not the way forward. >> >> cheers, >> Zane. >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Mon Jun 4 22:47:22 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 4 Jun 2018 18:47:22 -0400 Subject: [openstack-dev] [tc] summary of joint leadership meeting from 20 May In-Reply-To: <1528145294-sup-9010@lrrr.local> References: <1528145294-sup-9010@lrrr.local> Message-ID: <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> On 06/04/2018 05:02 PM, Doug Hellmann wrote: > The most significant point of interest to the contributor > community from this section of the meeting was the apparently > overwhelming interest from companies employing contributors, as > well as 2/3 of the contributors to recent releases who responded > to the survey, to bring the PTG and summit back together as a single > event. This had come up at the meeting in Dublin as well, but in > the time since then the discussions progressed and it looks much > more likely that we will at least try re-combining the two events. OK, so will we return to having eleventy billion different mid-cycle events for each project? Personally, I've very much enjoyed the separate PTGs because I've actually been able to get work done at them; something that was much harder when the design summits were part of the overall conference. In fact I haven't gone to the last two summit events because of what I perceive to be a continued trend of the summits being focused on marketing, buzzwords and vendor pitches/sales. An extra spoonful of the "edge", anyone? > We discussed several reasons, including travel expense, travel visa > difficulties, time away from home and family, and sponsorship of > the events themselves. > > There are a few plans under consideration, and no firm decisions > have been made, yet. We discussed a strawman proposal to combine > the summit and PTG in April, in Denver, that would look much like > our older Summit events (from the Folsom/Grizzly time frame) with > a few days of conference and a few days of design summit, with some > overlap in the middle of the week. The dates, overlap, and > arrangements will depend on venue availability. Has the option of doing a single conference a year been addressed? Seems to me that we (the collective we) could save a lot of money not having to put on multiple giant events per year and instead have one. Just my two cents, but the OpenStack and Linux foundations seem to be pumping out new "open events" at a pretty regular clip -- OpenStack Summit, OpenDev, Open Networking Summit, OpenStack Days, OpenInfra Days, OpenNFV summit, the list keeps growing... at some point, do we think that the industry as a whole is just going to get event overload? 
Best, -jay From openstack at fried.cc Mon Jun 4 23:00:48 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 4 Jun 2018 18:00:48 -0500 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <46c5cb94-61ba-4f3b-fa13-0456463fb485@gmail.com> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> <4a867428-1203-63b7-9b74-86fda468047c@fried.cc> <46c5cb94-61ba-4f3b-fa13-0456463fb485@gmail.com> Message-ID: <3442ae9b-9b77-7a6a-8ff9-3a159fd5999f@fried.cc> There has been much discussion. We've gotten to a point of an initial proposal and are ready for more (hopefully smaller, hopefully conclusive) discussion. To that end, there will be a HANGOUT tomorrow (TUESDAY, JUNE 5TH) at 1500 UTC. Be in #openstack-placement to get the link to join. The strawpeople outlined below and discussed in the referenced etherpad have been consolidated/distilled into a new etherpad [1] around which the hangout discussion will be centered. [1] https://etherpad.openstack.org/p/placement-making-the-(up)grade Thanks, efried On 06/01/2018 01:12 PM, Jay Pipes wrote: > On 05/31/2018 02:26 PM, Eric Fried wrote: >>> 1. Make everything perform the pivot on compute node start (which can be >>>     re-used by a CLI tool for the offline case) >>> 2. Make everything default to non-nested inventory at first, and provide >>>     a way to migrate a compute node and its instances one at a time (in >>>     place) to roll through. >> >> I agree that it sure would be nice to do ^ rather than requiring the >> "slide puzzle" thing. >> >> But how would this be accomplished, in light of the current "separation >> of responsibilities" drawn at the virt driver interface, whereby the >> virt driver isn't supposed to talk to placement directly, or know >> anything about allocations? > FWIW, I don't have a problem with the virt driver "knowing about > allocations". What I have a problem with is the virt driver *claiming > resources for an instance*. > > That's what the whole placement claims resources things was all about, > and I'm not interested in stepping back to the days of long racy claim > operations by having the compute nodes be responsible for claiming > resources. > > That said, once the consumer generation microversion lands [1], it > should be possible to *safely* modify an allocation set for a consumer > (instance) and move allocation records for an instance from one provider > to another. > > [1] https://review.openstack.org/#/c/565604/ > >> Here's a first pass: >> >> The virt driver, via the return value from update_provider_tree, tells >> the resource tracker that "inventory of resource class A on provider B >> have moved to provider C" for all applicable AxBxC.  E.g. 
>> >> [ { 'from_resource_provider': , >>      'moved_resources': [VGPU: 4], >>      'to_resource_provider': >>    }, >>    { 'from_resource_provider': , >>      'moved_resources': [VGPU: 4], >>      'to_resource_provider': >>    }, >>    { 'from_resource_provider': , >>      'moved_resources': [ >>          SRIOV_NET_VF: 2, >>          NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000, >>          NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000, >>      ], >>      'to_resource_provider': >>    } >> ] >> >> As today, the resource tracker takes the updated provider tree and >> invokes [1] the report client method update_from_provider_tree [2] to >> flush the changes to placement.  But now update_from_provider_tree also >> accepts the return value from update_provider_tree and, for each "move": >> >> - Creates provider C (as described in the provider_tree) if it doesn't >> already exist. >> - Creates/updates provider C's inventory as described in the >> provider_tree (without yet updating provider B's inventory).  This ought >> to create the inventory of resource class A on provider C. > > Unfortunately, right here you'll introduce a race condition. As soon as > this operation completes, the scheduler will have the ability to throw > new instances on provider C and consume the inventory from it that you > intend to give to the existing instance that is consuming from provider B. > >> - Discovers allocations of rc A on rp B and POSTs to move them to rp C*. > > For each consumer of resources on rp B, right? > >> - Updates provider B's inventory. > > Again, this is problematic because the scheduler will have already begun > to place new instances on B's inventory, which could very well result in > incorrect resource accounting on the node. > > We basically need to have one giant new REST API call that accepts the > list of "move instructions" and performs all of the instructions in a > single transaction. :( > >> (*There's a hole here: if we're splitting a glommed-together inventory >> across multiple new child providers, as the VGPUs in the example, we >> don't know which allocations to put where.  The virt driver should know >> which instances own which specific inventory units, and would be able to >> report that info within the data structure.  That's getting kinda close >> to the virt driver mucking with allocations, but maybe it fits well >> enough into this model to be acceptable?) > > Well, it's not really the virt driver *itself* mucking with the > allocations. It's more that the virt driver is telling something *else* > the move instructions that it feels are needed... > >> Note that the return value from update_provider_tree is optional, and >> only used when the virt driver is indicating a "move" of this ilk.  If >> it's None/[] then the RT/update_from_provider_tree flow is the same as >> it is today. >> >> If we can do it this way, we don't need a migration tool.  In fact, we >> don't even need to restrict provider tree "reshaping" to release >> boundaries.  As long as the virt driver understands its own data model >> migrations and reports them properly via update_provider_tree, it can >> shuffle its tree around whenever it wants. 
> > Due to the many race conditions we would have in trying to fudge > inventory amounts (the reserved/total thing) and allocation movement for >>1 consumer at a time, I'm pretty sure the only safe thing to do is have > a single new HTTP endpoint that would take this list of move operations > and perform them atomically (on the placement server side of course). > > Here's a strawman for how that HTTP endpoint might look like: > > https://etherpad.openstack.org/p/placement-migrate-operations > > feel free to markup and destroy. > > Best, > -jay > >> Thoughts? >> >> -efried >> >> [1] >> https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/compute/resource_tracker.py#L890 >> >> [2] >> https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/scheduler/client/report.py#L1341 >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Mon Jun 4 23:05:27 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Jun 2018 23:05:27 +0000 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> Message-ID: <20180604230527.v4u7tdsapu53rel2@yuggoth.org> On 2018-06-04 14:28:28 -0700 (-0700), Amy Marrich wrote: > On Mon, Jun 4, 2018 at 7:29 AM, Zane Bitter wrote: > > On 04/06/18 10:19, Amy Marrich wrote: > > > [...] > > > I'll read in more detail, but do we want to add rollcall-vote? > > > > Is it used anywhere other than in the governance repo? We certainly could > > add it, but it didn't seem like a top priority. > > Not sure it is to be honest.:) The infra-specs repo uses it to solicit Infra Council votes; the governance-uc, openstack-specs and transparency-policy repos also use it for similar reasons to the governance repo. But no, it's not common enough I'd bother to mention it in any sort of documentation aimed at the general reviewer base. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Jun 4 23:12:33 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Jun 2018 23:12:33 +0000 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1528148963-sup-59@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <1528148963-sup-59@lrrr.local> Message-ID: <20180604231233.fcvmi2bktkbq37c4@yuggoth.org> On 2018-06-04 17:52:28 -0400 (-0400), Doug Hellmann wrote: [...] > I am still curious to know which teams have the policy. If it is more > widespread than I realized, maybe it's reasonable to extend it and use > it as the basis for a health check after all. [...] 
Not team-wide, but I have a personal policy that I try to avoid approving a change if both the author and any other core reviews on that change are from people paid by the same organization which funds my time (unless I have a very good reason, and then I leave a clear review comment when approving in such situations). It's not so much a matter of a lack of trust on anyone's part, as a desire for me to keep and further improve on that trust I've already built. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pc2929 at att.com Mon Jun 4 23:24:20 2018 From: pc2929 at att.com (CARVER, PAUL) Date: Mon, 4 Jun 2018 23:24:20 +0000 Subject: [openstack-dev] [tc] summary of joint leadership meeting from 20 May In-Reply-To: <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> References: <1528145294-sup-9010@lrrr.local> <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> Message-ID: On Monday, June 04, 2018 18:47, Jay Pipes wrote: >Just my two cents, but the OpenStack and Linux foundations seem to be pumping out new "open events" at a pretty regular clip -- >OpenStack Summit, OpenDev, Open Networking Summit, OpenStack Days, OpenInfra Days, OpenNFV summit, the list keeps >growing... at some point, do we think that the industry as a whole is just going to get event overload? Future tense? I think you could re-write "going to get event overload" into past tense and not be wrong. We may be past the shoe event horizon. From mriedemos at gmail.com Mon Jun 4 23:44:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 4 Jun 2018 18:44:15 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> References: <20180604180742.GA6404@sm-xps> <1528146144-sup-2183@lrrr.local> <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> Message-ID: <61b4f192-3b48-83b4-9a21-e159fec67fcf@gmail.com> On 6/4/2018 5:13 PM, Sean McGinnis wrote: > Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would > check here. We don't have to see if placement has been set up or if cell0 > has been configured. Maybe once we have the facility in place we would > find some things worth checking, but at present I don't know what that > would be. Here is an example from the Cinder Queens upgrade release notes: "RBD/Ceph backends should adjust max_over_subscription_ratio to take into account that the driver is no longer reporting volume’s physical usage but it’s provisioned size." I'm assuming you could check if rbd is configured as a storage backend and if so, is max_over_subscription_ratio set? If not, is it fatal? Does the operator need to configure it before upgrading to Rocky? Or is it something they should consider but don't necessary have to do - if that, there is a 'WARNING' status for those types of things. Things that are good candidates for automating are anything that would stop the cinder-volume service from starting, or things that require data migrations before you can roll forward. In nova we've had blocking DB schema migrations for stuff like this which basically mean "you haven't run the online data migrations CLI yet so we're not letting you go any further until your homework is done". Like I said, it's not black and white, but chances are good there are things that fall into these categories. 
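To make that concrete, here is a minimal sketch of the shape such a check could take, using the RBD release note quoted above as the example. The class and option names are illustrative only -- this is not the actual nova-status or cinder code, just the pattern of a check that reports SUCCESS/WARNING/FAILURE and a runner that exits with the worst result:

import enum


class Code(enum.IntEnum):
    # Illustrative result codes, ordered so the worst one wins.
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2


def check_rbd_over_subscription(conf):
    # Hypothetical check based on the Queens RBD release note above.
    # 'conf' is a plain dict {backend_name: {option: value}} standing in
    # for however the real project reads its configuration.
    for name, backend in conf.items():
        is_rbd = 'rbd' in backend.get('volume_driver', '')
        if is_rbd and 'max_over_subscription_ratio' not in backend:
            return (Code.WARNING,
                    'RBD backend "%s" does not set '
                    'max_over_subscription_ratio; review it before '
                    'upgrading.' % name)
    return (Code.SUCCESS, None)


def run_upgrade_checks(conf, checks):
    # Run every check and report the worst status seen, so an operator
    # or a deployment tool can gate the upgrade on the exit code.
    results = [check(conf) for check in checks]
    for code, details in results:
        print('%s: %s' % (code.name, details or 'ok'))
    return int(max(code for code, _ in results))


if __name__ == '__main__':
    sample = {'ceph1': {'volume_driver': 'cinder.volume.drivers.rbd.RBDDriver'}}
    raise SystemExit(run_upgrade_checks(sample, [check_rbd_over_subscription]))

The aggregation is the useful part: one command runs every check a project ships and the return code says whether it is safe to proceed, which is roughly how the nova-status command mentioned earlier behaves.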
-- Thanks, Matt From mriedemos at gmail.com Mon Jun 4 23:46:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 4 Jun 2018 18:46:15 -0500 Subject: [openstack-dev] Forum Recap - Stein Release Goals In-Reply-To: <1528147141-sup-7778@lrrr.local> References: <20180531205942.GA18176@sm-xps> <54dcfa53-c7b1-1e88-9c0e-19920b169fa7@gmail.com> <1528147141-sup-7778@lrrr.local> Message-ID: On 6/4/2018 4:20 PM, Doug Hellmann wrote: > See my comments on the other part of the thread, but I think this is too > optimistic until we add a couple of people to the review team on OSC. > > Others from the OSC team who have a better perspective on how much work > is actually left may have a different opinion though? Yeah that is definitely something I was thinking about in Vancouver. Would a more realistic goal be to decentralize the OSC code, like the previous goal about how tempest plugins were done? Or similar to the docs being decentralized? That would spread the review load onto the projects that are actually writing CLIs for their resources - which they are already doing in their per-project clients, e.g. python-novaclient and python-cinderclient. -- Thanks, Matt From mriedemos at gmail.com Mon Jun 4 23:50:17 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 4 Jun 2018 18:50:17 -0500 Subject: [openstack-dev] [neutron][stable] Stepping down from core In-Reply-To: References: Message-ID: On 6/4/2018 3:31 PM, Ihar Hrachyshka wrote: > Hi neutrinos and all, > > As some of you've already noticed, the last several months I was > scaling down my involvement in Neutron and, more generally, OpenStack. > I am at a point where I feel confident my disappearance won't disturb > the project, and so I am ready to make it official. > > I am stepping down from all administrative roles I so far accumulated > in Neutron and Stable teams. I shifted my focus to another project, > and so I just removed myself from all relevant admin groups to reflect > the change. > > It was a nice 4.5 year ride for me. I am very happy with what we > achieved in all these years and a bit sad to leave. The community is > the most brilliant and compassionate and dedicated to openness group > of people I was lucky to work with, and I am reminded daily how > awesome it is. > > I am far from leaving the industry, or networking, or the promise of > open source infrastructure, so I am sure we will cross our paths once > in a while with most of you.:) I also plan to hang out in our IRC > channels and make snarky comments, be aware! > > Thanks for the fish, > Ihar Ihar, I think we mostly crossed paths over QA and stable maintenance stuff, but it was always a pleasure working with you and you were/are an extremely valuable contributor to OpenStack. I wish you the best in your new endeavors. -- Thanks, Matt From emilien at redhat.com Tue Jun 5 00:26:09 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 4 Jun 2018 17:26:09 -0700 Subject: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One) Message-ID: TL;DR: we made nice progress and you can checkout this demo: https://asciinema.org/a/185533 We started the discussion back in Dublin during the last PTG. The idea of Standalone (aka All-In-One, but can be mistaken with all-in-one overcloud) is to deploy a single node OpenStack where the provisioning happens on the same node (there is no notion of {under/over}cloud). 
A kind of a "packstack" or "devstack" but using TripleO which has can offer: - composable containerized services - composable upgrades - composable roles - Ansible driven deployment One of the key features we have been focusing so far are: - low bar to be able to dev/test TripleO (single machine: VM), with simpler tooling - make it fast (being able to deploy OpenStack in minutes) - being able to make a change in OpenStack (e.g. Keystone) and test the change immediately The workflow that we're currently targeting is: - deploy the system by yourself (centos7 or rhel7) - deploy the repos, install python-tripleoclient - run 'openstack tripleo deploy (+ few args) - (optional) modify your container with a Dockerfile + Ansible - Test your change Status: - tripleoclient was refactored in a way that the undercloud is actually a special configuration of the standalone deployment (still work in progress). We basically refactored the containerized undercloud to be more generic and configurable for standalone. - we can now deploy a standalone OpenStack with just Keystone + dependencies - which takes 12 minutes total (demo here: https://asciinema.org/a/185533 and doc in progress: http://logs.openstack.org/27/571827/6/check/build-openstack-sphinx-docs/1885304/html/install/containers_deployment/standalone.html ) - we have an Ansible role to push modifications to containers via a Docker file: https://github.com/openstack/ansible-role-tripleo-modify-image/ What's next: - Documentation: as you can see the documentation is still in progress ( https://review.openstack.org/#/c/571827/) - Continuous Integration: we're working on a new CI job: tripleo-ci-centos-7-standalone https://trello.com/c/HInL8pNm/7-upstream-ci-testing - Working on the standalone configuration interface, still WIP: https://review.openstack.org/#/c/569535/ - Investigate the use case where a developer wants to prepare the containers before the deployment I hope this update was useful, feel free to give feedback or ask any questions, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Jun 5 00:47:13 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 4 Jun 2018 17:47:13 -0700 Subject: [openstack-dev] [nova] spec review day next week Tuesday 2018-06-05 In-Reply-To: <61bd1858-93b4-1d1d-5106-0aaf2074c8b0@gmail.com> References: <61bd1858-93b4-1d1d-5106-0aaf2074c8b0@gmail.com> Message-ID: On Wed, 30 May 2018 12:22:20 -0700, Melanie Witt wrote: > Howdy all, > > This cycle, we have our spec freeze later than usual at milestone r-2 > June 7 because of the review runways system we've been trying out. We > wanted to allow more time for spec approvals as blueprints were > completed via runways. > > So, ahead of the spec freeze, let's have a spec review day next week > Tuesday June 5 to ensure we get what spec approvals we can over the line > before the freeze. Please try to make some time on Tuesday to review > some specs and thanks in advance for participating! Reminder: the spec review day is TODAY Tuesday June 5 (or tomorrow depending on your time zone). Please take some time to review some nova specs today if you can! Cheers, -melanie From luo.lujin at jp.fujitsu.com Tue Jun 5 00:52:08 2018 From: luo.lujin at jp.fujitsu.com (Luo, Lujin) Date: Tue, 5 Jun 2018 00:52:08 +0000 Subject: [openstack-dev] [neutron][stable] Stepping down from core In-Reply-To: References: Message-ID: Hi Ihar, I still cannot believe that you are leaving OpenStack world. 
Words can hardly express how I appreciate your guidance either in OVO work or Neutron as a whole. It was valuable experience to me both technically and non-technically to get to work with you. Please do hang out in the channels and let the comments bloom! I wish you all the best in your next endeavors. Thanks, Lujin > -----Original Message----- > From: Ihar Hrachyshka [mailto:ihrachys at redhat.com] > Sent: Tuesday, June 5, 2018 5:31 AM > To: OpenStack Development Mailing List (not for usage questions) > > Cc: Miguel Lavalle > Subject: [openstack-dev] [neutron][stable] Stepping down from core > > Hi neutrinos and all, > > As some of you've already noticed, the last several months I was scaling > down my involvement in Neutron and, more generally, OpenStack. > I am at a point where I feel confident my disappearance won't disturb the > project, and so I am ready to make it official. > > I am stepping down from all administrative roles I so far accumulated in > Neutron and Stable teams. I shifted my focus to another project, and so I just > removed myself from all relevant admin groups to reflect the change. > > It was a nice 4.5 year ride for me. I am very happy with what we achieved in > all these years and a bit sad to leave. The community is the most brilliant and > compassionate and dedicated to openness group of people I was lucky to > work with, and I am reminded daily how awesome it is. > > I am far from leaving the industry, or networking, or the promise of open > source infrastructure, so I am sure we will cross our paths once in a while > with most of you. :) I also plan to hang out in our IRC channels and make > snarky comments, be aware! > > Thanks for the fish, > Ihar > > ________________________________________________________________ > __________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rochelle.grober at huawei.com Tue Jun 5 01:27:01 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Tue, 5 Jun 2018 01:27:01 +0000 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> Message-ID: Zane Bitter wrote: > On 31/05/18 14:35, Julia Kreger wrote: > > Back to the topic of nitpicking! > > > > I virtually sat down with Doug today and we hammered out the positive > > aspects that we feel like are the things that we as a community want > > to see as part of reviews coming out of this effort. The principles > > change[1] in governance has been updated as a result. > > > > I think we are at a point where we have to state high level > > principles, and then also update guidelines or other context providing > > documentation to re-enforce some of items covered in this > > discussion... not just to educate new contributors, but to serve as a > > checkpoint for existing reviewers when making the decision as to how > > to vote change set. The question then becomes where would such > > guidelines or documentation best fit? > > I think the contributor guide is the logical place for it. 
Kendall pointed out this > existing section: > > https://docs.openstack.org/contributors/code-and-documentation/using- > gerrit.html#reviewing-changes > > It could go in there, or perhaps we separate out the parts about when to use > which review scores into a separate page from the mechanics of how to use > Gerrit. > > > Should we explicitly detail the > > cause/effect that occurs? Should we convey contributor perceptions, or > > maybe even just link to this thread as there has been a massive amount > > of feedback raising valid cases, points, and frustrations. > > > > Personally, I'd lean towards a blended approach, but the question of > > where is one I'm unsure of. Thoughts? > > Let's crowdsource a set of heuristics that reviewers and contributors should > keep in mind when they're reviewing or having their changes reviewed. I > made a start on collecting ideas from this and past threads, as well as my own > reviewing experience, into a document that I've presumptuously titled "How > to Review Changes the OpenStack Way" (but might be more accurately called > "The Frank Sinatra Guide to Code Review" > at the moment): > > https://etherpad.openstack.org/p/review-the-openstack-way > > It's in an etherpad to make it easier for everyone to add their suggestions > and comments (folks in #openstack-tc have made some tweaks already). > After a suitable interval has passed to collect feedback, I'll turn this into a > contributor guide change. I offer the suggestion that there are some real examples of Good/Not Good in the document or maybe an addendum. Since we have many non-native speakers in our community, examples are like pictures -- worth a thousand foreign words;-) Maybe Zhipeng has a few favorites to supply. I would suggest both score and comment to go with score. In some cases, the example would show how to score and avoid nitpicking, in others, valid scores, but comments that are reasonable or not for the score. --Rocky > Have at it! > > cheers, > Zane. > > > -Julia > > > > [1]: https://review.openstack.org/#/c/570940/ > > __________________________________________________________ > ________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gdubreui at redhat.com Tue Jun 5 01:45:58 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Tue, 5 Jun 2018 11:45:58 +1000 Subject: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core) In-Reply-To: <69FC568F-D687-4EC8-AAEE-9FB3C5695F1A@doughellmann.com> References: <5f993fb7-d4c9-15a1-c192-61d6d5562a53@redhat.com> <69FC568F-D687-4EC8-AAEE-9FB3C5695F1A@doughellmann.com> Message-ID: On 04/06/18 22:20, Doug Hellmann wrote: >> On Jun 4, 2018, at 7:57 AM, Gilles Dubreuil wrote: >> >> Hi, >> >> Can someone from the core team request infra to create a feature branch for the Proof of Concept we agreed to do during API SIG forum session [1] a Vancouver? >> >> Thanks, >> Gilles >> >> [1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum > You can do this through the releases repo now. See the README for instructions. > > Doug Great, thanks Doug! What about the UUID associated? 
Do I generate one?:

branches:
  - name: feature/graphql
    location:
      openstack/neutron:

From ed at leafe.com Tue Jun 5 02:56:47 2018 From: ed at leafe.com (Ed Leafe) Date: Mon, 4 Jun 2018 21:56:47 -0500 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1528139384-sup-1453@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> <1527966421-sup-6019@lrrr.local> <2ed6661c-3020-6f70-20c4-e56855aeb326@gmail.com> <99108EBA-63AE-47C0-8AF9-18961ADAC9FF@leafe.com> <1528139384-sup-1453@lrrr.local> Message-ID: On Jun 4, 2018, at 2:10 PM, Doug Hellmann wrote: > >> Those rules were added because we wanted to avoid the appearance of one company implementing features that would only be beneficial to it. This arose from concerns in the early days when Rackspace was the dominant contributor: many of the other companies involved in OpenStack were worried that they would be investing their workers in a project that would only benefit Rackspace. As far as I know, there were never specific cases where Rackspace or any other company tried to push features in that no one else supported. >> >> So even if now it doesn't seem that there is a problem, and we could remove these restrictions without ill effect, it just seems prudent to keep them. If a project is so small that the majority of its contributors/cores are from one company, maybe it should be an internal project for that company, and not a community project. >> >> -- Ed Leafe > > Where was the rule added, though? I am aware of some individual teams > with the rule, but AFAIK it was never a global rule. It's certainly not > in any of the projects for which I am currently a core reviewer. If you're looking for a reference to a particular bit of governance, I can't help you there. But being one of the Nova cores who worked for Rackspace back then, I was part of many such discussions, and can tell you that Rackspace was very conscious of not wanting to appear to be dictating the direction, and that this agreement not to approve code committed by other Rackers was an important part of that. -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From amotoki at gmail.com Tue Jun 5 03:02:19 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 5 Jun 2018 12:02:19 +0900 Subject: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core) In-Reply-To: References: <5f993fb7-d4c9-15a1-c192-61d6d5562a53@redhat.com> <69FC568F-D687-4EC8-AAEE-9FB3C5695F1A@doughellmann.com> Message-ID: Hi Gilles, On Tue, 5 Jun 2018 at 10:46, Gilles Dubreuil wrote: > > > On 04/06/18 22:20, Doug Hellmann wrote: > >> On Jun 4, 2018, at 7:57 AM, Gilles Dubreuil > wrote: > >> > >> Hi, > >> > >> Can someone from the core team request infra to create a feature branch > for the Proof of Concept we agreed to do during API SIG forum session [1] a > Vancouver? > >> > >> Thanks, > >> Gilles > >> > >> [1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum > > You can do this through the releases repo now. See the README for > instructions. > > > > Doug > > Great, thanks Doug! > > What about the UUID associated? Do I generate one?: > > branches: > - name: feature/graphql > location: > openstack/neutron: > This needs to be a valid commit hash. You can specify the latest commit ID of the neutron repo.
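In other words, it is a git commit SHA rather than a UUID to generate. A sketch of the finished stanza in the openstack/releases deliverable file would look like the following, where the hash is only a placeholder to be replaced with the current tip of neutron's master branch (for example the output of 'git log -1 --format=%H' in an up-to-date checkout):

branches:
  - name: feature/graphql
    location:
      # Placeholder value -- substitute a real neutron commit SHA here.
      openstack/neutron: 9d0e7d6c82c1eaa3eb53fd2ea64d18e08e3c8a6f

A randomly generated ID will not work because the feature branch has to be cut from a commit that actually exists in the neutron repository.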
Akihiro > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Tue Jun 5 03:04:33 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Tue, 5 Jun 2018 13:04:33 +1000 Subject: [openstack-dev] [neutron][api][graphql] Feature branch creation please (PTL/Core) In-Reply-To: References: <5f993fb7-d4c9-15a1-c192-61d6d5562a53@redhat.com> <69FC568F-D687-4EC8-AAEE-9FB3C5695F1A@doughellmann.com> Message-ID: <39a6f22e-aae8-76a9-436c-2618a7edf492@redhat.com> On 05/06/18 13:02, Akihiro Motoki wrote: > Hi Gilles, > > 2018年6月5日(火) 10:46 Gilles Dubreuil >: > > > > On 04/06/18 22:20, Doug Hellmann wrote: > >> On Jun 4, 2018, at 7:57 AM, Gilles Dubreuil > > wrote: > >> > >> Hi, > >> > >> Can someone from the core team request infra to create a > feature branch for the Proof of Concept we agreed to do during API > SIG forum session [1] a Vancouver? > >> > >> Thanks, > >> Gilles > >> > >> [1] https://etherpad.openstack.org/p/YVR18-API-SIG-forum > > You can do this through the releases repo now. See the README > for instructions. > > > > Doug > > Great, thanks Doug! > > What about the UUID associated? Do I generate one?: > > branches: >    - name: feature/graphql >      location: >        openstack/neutron: > > > This needs to be a valid commit hash. > You can specify the latest conmit ID of the neutron repo. > > Akihiro Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdake at cisco.com Tue Jun 5 05:17:59 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Tue, 5 Jun 2018 05:17:59 +0000 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: +1 On 5/31/18, 10:08 AM, "Borne Mace" wrote: Greetings all, I would like to propose the addition of Steve Noyes to the kolla-cli core reviewer team. Consider this nomination as my personal +1. Steve has a long history with the kolla-cli and should be considered its co-creator as probably half or more of the existing code was due to his efforts. He has now been working diligently since it was pushed upstream to improve the stability and testability of the cli and has the second most commits on the project. The kolla core team consists of 19 people, and the kolla-cli team of 2, for a total of 21. Steve therefore requires a minimum of 11 votes (so just 10 more after my +1), with no veto -2 votes within a 7 day voting window to end on June 6th. Voting will be closed immediately on a veto or in the case of a unanimous vote. As I'm not sure how active all of the 19 kolla cores are, your attention and timely vote is much appreciated. Thanks! 
-- Borne __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bdobreli at redhat.com Tue Jun 5 06:55:00 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 5 Jun 2018 08:55:00 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal Message-ID: <8a1a0879-5d47-c2a7-786e-e8719c3ee384@redhat.com> The proposed undercloud installation jobs dependency [0] worked, see the jobs start time [1], [2] confirms that. The resulting delay for the full pipeline is an ~80 minutes, as it was expected. So PTAL folks, I propose to try it out in real gating and see how the tripleo zuul queue gets relieved. The remaining patch [1] adding a dependency on tox/linting didn't work, I'll need some help please to figure out why. Thank you Tristan and James and y'all folks for helping! [0] https://review.openstack.org/#/c/568536/ [1] http://logs.openstack.org/36/568536/6/check/tripleo-ci-centos-7-undercloud-containers/cfebec0/ara-report/ [2] http://logs.openstack.org/36/568536/6/check/tripleo-ci-centos-7-containers-multinode/1a211bb/ara-report/ [3] https://review.openstack.org/#/c/568543/ > > Perhaps this has something to do with jobs evaluation order, it may be > worth trying to add the dependencies list in the project-templates, like > it is done here for example: > http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n9799 > > It also easier to read dependencies from pipelines definition imo. > > -Tristan -- Best regards, Bogdan Dobrelya, Irc #bogdando From tobias.urdin at crystone.com Tue Jun 5 07:00:38 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Tue, 5 Jun 2018 07:00:38 +0000 Subject: [openstack-dev] [tripleo][puppet] Hello all, puppet modules References: <9300F696-8743-46DF-8E73-EC4A78DD12B2@cern.ch> <20180604210514.aroacbpmmpb6n23w@grocaca> Message-ID: <7715b96938794858b5d3b6c103c0e011@mb01.staff.ognet.se> We are using them for one of our deployments and are working on moving our other one to use the same modules :) Best regards On 06/04/2018 11:06 PM, Arnaud Morin wrote: > Hey, > > OVH is also using them as well as some custom ansible playbooks to manage the deployment. But as for red had, the configuration part is handled by puppet. > We are also doing some upstream contribution from time to time. > > For us, the puppet modules are very stable and works very fine. > > Cheers, > From skaplons at redhat.com Tue Jun 5 07:18:58 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 5 Jun 2018 09:18:58 +0200 Subject: [openstack-dev] [neutron][stable] Stepping down from core In-Reply-To: References: Message-ID: Hi Ihar, Thanks for everything what You did for OpenStack and Neutron especially. I remember that You were one of first people which I met in OpenStack community. Thanks for all Your help then and during all time when we worked together :) Good luck in Your new project :) > Wiadomość napisana przez Ihar Hrachyshka w dniu 04.06.2018, o godz. 22:31: > > Hi neutrinos and all, > > As some of you've already noticed, the last several months I was > scaling down my involvement in Neutron and, more generally, OpenStack. > I am at a point where I feel confident my disappearance won't disturb > the project, and so I am ready to make it official. 
> > I am stepping down from all administrative roles I so far accumulated > in Neutron and Stable teams. I shifted my focus to another project, > and so I just removed myself from all relevant admin groups to reflect > the change. > > It was a nice 4.5 year ride for me. I am very happy with what we > achieved in all these years and a bit sad to leave. The community is > the most brilliant and compassionate and dedicated to openness group > of people I was lucky to work with, and I am reminded daily how > awesome it is. > > I am far from leaving the industry, or networking, or the promise of > open source infrastructure, so I am sure we will cross our paths once > in a while with most of you. :) I also plan to hang out in our IRC > channels and make snarky comments, be aware! > > Thanks for the fish, > Ihar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From rasca at redhat.com Tue Jun 5 07:30:07 2018 From: rasca at redhat.com (Raoul Scarazzini) Date: Tue, 5 Jun 2018 09:30:07 +0200 Subject: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One) In-Reply-To: References: Message-ID: On 05/06/2018 02:26, Emilien Macchi wrote: [...] > I hope this update was useful, feel free to give feedback or ask any > questions, [...] I'm no prophet here, but I see a bright future for this approach. I can imagine how useful this can be on the testing and much more the learning side. Thanks for sharing! -- Raoul Scarazzini rasca at redhat.com From jean-philippe at evrard.me Tue Jun 5 07:36:25 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 5 Jun 2018 09:36:25 +0200 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> <1527966421-sup-6019@lrrr.local> <2ed6661c-3020-6f70-20c4-e56855aeb326@gmail.com> <99108EBA-63AE-47C0-8AF9-18961ADAC9FF@leafe.com> <1528139384-sup-1453@lrrr.local> Message-ID: Sorry if I missed/repeat something already said in this thread... When I am looking at diversity, I generally like to know: 1) what's going on "right now", and 2) what happened in the cycle x. I think these 2 are different problems to solve. And tags are, IMO, best applied to the second case. So if I focus on the second: What if we are only tagging once per cycle, after the release? (I am pushing the idea further than the quarter basically). It would avoid flappiness (if that's a proper term?). For me, a cycle has a clear meaning. And involvements can balance out in a cycle. This would be, IMO, good enough to promote/declare diversity after the facts (and is an answer to the "what happened during the cycle"). Jean-Philippe Evrard (evrardjp) From jean-philippe at evrard.me Tue Jun 5 08:14:12 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 5 Jun 2018 10:14:12 +0200 Subject: [openstack-dev] [openstack-ansible][releases][governance] Change in OSA roles tagging Message-ID: Hello, *TL:DR;* If you are an openstack-ansible user, consuming our roles directly, with tags, without using openstack-ansible plays or integrated repo, then things will change for you. Start using git shas instead of tags. 
All other openstack-ansible users should not see a difference, even if they use openstack-ansible tags. During the summit, I had a discussion with dhellman (and smcginnis) to change how openstack-ansible does its releases. Currently, we tag openstack-ansible + many roles under our umbrella every two weeks. As far as I know, nobody requested to have roles tagged every two weeks. Only OpenStack-Ansible need to be tagged for consumption. Even people using our roles directly outside openstack-ansible generally use sha for roles. We don't rely on ansible galaxy. Because there is no need to tag the roles, there is no need to make them part of the "openstack-ansible" deliverable [1][2]. I will therefore clarify the governance repo for that, separating the roles, each of them with their own deliverable, instead of grouping some roles within openstack-ansible, and some others outside it. With this done, a release of openstack-ansible becomes straightforward using the standard release tooling. The releases of openstack-ansible becomes far simpler to request, review, and will not have timeouts anymore :p There are a few issues I see from the change. Still according to the discussion, it seems we can overcome those. 1. As this will be applied on all the branches, we may reach some issues with releasing in the next days. While the validate tooling of releases has shown me that it wouldn't be a problem (just warning) to not have all the repos in the deliverable, I would expect a governance change could be impactful. However, that is only impacting openstack-ansible, releases, and governance team: Keep in mind, openstack-ansible will not change for its users, and will still be tagged as you know it. 2. We will keep branching our roles the same way we do now. What we liked for roles being part of this deliverable, is the ability of having them automatically branched and their files adapted. To what I heard, it is still possible to do so, by having a devstack-like behavior, which branches on a sha, instead of branching on tag. So I guess it means all our roles will now be part of release files like this one [3], or even on a single release file, similar to it. What I would like to have, from this email, is: 1. Raise awareness to all the involved parties; 2. Confirmation we can go ahead, from a governance standpoint; 3. Confirmation we can still benefit from this automatic branch tooling. Thank you in advance. Jean-Philippe Evrard (evrardjp) [1]: https://github.com/openstack/governance/blob/8215c5fd9b464b332b310bbb767812fefc5d9174/reference/projects.yaml#L2493-L2540 [2]: https://github.com/openstack/releases/blob/9db5991707458bbf26a4dd9f55c2a01fee96a45d/deliverables/queens/openstack-ansible.yaml#L768-L851 [3]: https://github.com/openstack/releases/blob/9db5991707458bbf26a4dd9f55c2a01fee96a45d/deliverables/queens/devstack.yaml From aschadin at sbcloud.ru Tue Jun 5 08:27:13 2018 From: aschadin at sbcloud.ru (=?utf-8?B?0KfQsNC00LjQvSDQkNC70LXQutGB0LDQvdC00YAg0KHQtdGA0LPQtdC10LI=?= =?utf-8?B?0LjRhw==?=) Date: Tue, 5 Jun 2018 08:27:13 +0000 Subject: [openstack-dev] [watcher] Nominating suzhengwei as Watcher core Message-ID: <11B68DFD-B14E-4A3C-BAA0-3D6182DB90E5@sbcloud.ru> Hi Watchers, I’d like to nominate suzhengwei for Watcher Core team. suzhengwei makes great contribution to the Watcher project including code reviews and implementations. Please vote +1/-1. Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Tue Jun 5 09:21:16 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 5 Jun 2018 11:21:16 +0200 Subject: [openstack-dev] [tc] summary of joint leadership meeting from 20 May In-Reply-To: <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> References: <1528145294-sup-9010@lrrr.local> <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> Message-ID: Jay Pipes wrote: > On 06/04/2018 05:02 PM, Doug Hellmann wrote: >> [...]> > Personally, I've very much enjoyed the separate PTGs because I've actually been able to get work done at them; something that was much harder when the design summits were part of the overall conference. Right, the trick is to try to preserve that productivity while making it easier to travel to... One way would be to make sure the PTG remains a separate event (separate days, separate venues, separate registration), just co-located in same city and week. > [...] >> There are a few plans under consideration, and no firm decisions >> have been made, yet. We discussed a strawman proposal to combine >> the summit and PTG in April, in Denver, that would look much like >> our older Summit events (from the Folsom/Grizzly time frame) with >> a few days of conference and a few days of design summit, with some >> overlap in the middle of the week.  The dates, overlap, and >> arrangements will depend on venue availability. > > Has the option of doing a single conference a year been addressed? Seems > to me that we (the collective we) could save a lot of money not having > to put on multiple giant events per year and instead have one. Yes, the same strawman proposal included the idea of leveraging an existing international "OpenStack Day" event and raising its profile rather than organizing a full second summit every year. The second PTG of the year could then be kept as a separate event, or put next to that "upgraded" OpenStack Day. Thinking on this is still very much work in progress. -- Thierry Carrez (ttx) From amotoki at gmail.com Tue Jun 5 11:59:49 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 5 Jun 2018 20:59:49 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi, Sorry for re-using the ancient ML thread. Looking at recent xstatic-* repo reviews, I am a bit afraid that xstatic-cores do not have a common understanding on the principle of xstatic packages. I hope all xstatic-cores re-read "Packing Software" in the horizon contributor docs [1], especially "Minified Javascript policy" [2], carefully. Thanks, Akihiro [1] https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html [2] https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html#minified-javascript-policy 2018年4月4日(水) 14:35 Xinni Ge : > Hi Ivan and other Horizon team member, > > Thanks for adding us into xstatic-core group. > But I still need your opinion and help to release the newly-added xstatic > packages to pypi index. > > Current `xstatic-core` group doesn't have the permission to PUSH SIGNED > TAG, and I cannot release the first non-trivial version. > > If I (or maybe Kaz) could be added into xstatic-release group, we can > release all the 8 packages by ourselves. > > Or, we are very appreciate if any member of xstatic-release could help to > do it. > > Just for your quick access, here is the link of access permission page of > one xstatic package. 
> > https://review.openstack.org/#/admin/projects/openstack/xstatic-angular-material,access > > > -- > Best Regards, > Xinni > > On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara > wrote: > >> Hi Ivan, >> >> >> Thank you very much. >> I've confirmed that all of us have been added to xstatic-core. >> >> As discussed, we will focus on the followings what we added for >> heat-dashboard, will not touch other xstatic repos as core. >> >> xstatic-angular-material >> xstatic-angular-notify >> xstatic-angular-uuid >> xstatic-angular-vis >> xstatic-filesaver >> xstatic-js-yaml >> xstatic-json2yaml >> xstatic-vis >> >> Regards, >> Kaz >> >> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny : >> > Hi Kuz, >> > >> > Don't worry, we're on the same page with you. I added both you, Xinni >> and >> > Keichii to the xstatic-core group. Thank you for your contributions! >> > >> > Regards, >> > Ivan Kolodyazhny, >> > http://blog.e0ne.info/ >> > >> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara >> wrote: >> >> >> >> Hi Ivan & Horizon folks >> >> >> >> >> >> AFAIK, Horizon team had conclusion that you will add the specific >> >> members to xstatic-core, correct ? >> >> Can I ask you to add the following members ? >> >> # All of tree are heat-dashboard core. >> >> >> >> Kazunori Shinohara / ksnhr.tech at gmail.com #myself >> >> Xinni Ge / xinni.ge1990 at gmail.com >> >> Keiichi Hikita / keiichi.hikita at gmail.com >> >> >> >> Please give me a shout, if we are not on same page or any concern. >> >> >> >> Regards, >> >> Kaz >> >> >> >> >> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara : >> >> > Hi Ivan, Akihiro, >> >> > >> >> > >> >> > Thanks for your kind arrangement. >> >> > Looking forward to hearing your decision soon. >> >> > >> >> > Regards, >> >> > Kaz >> >> > >> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : >> >> >> HI Team, >> >> >> >> >> >> From my perspective, I'm OK both with #2 and #3 options. I agree >> that >> >> >> #4 >> >> >> could be too complicated for us. Anyway, we've got this topic on the >> >> >> meeting >> >> >> agenda [1] so we'll discuss it there too. I'll share our decision >> after >> >> >> the >> >> >> meeting. >> >> >> >> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon >> >> >> >> >> >> >> >> >> >> >> >> Regards, >> >> >> Ivan Kolodyazhny, >> >> >> http://blog.e0ne.info/ >> >> >> >> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki > > >> >> >> wrote: >> >> >>> >> >> >>> Hi Kaz and Ivan, >> >> >>> >> >> >>> Yeah, it is worth discussed officially in the horizon team meeting >> or >> >> >>> the >> >> >>> mailing list thread to get a consensus. >> >> >>> Hopefully you can add this topic to the horizon meeting agenda. >> >> >>> >> >> >>> After sending the previous mail, I noticed anther option. I see >> there >> >> >>> are >> >> >>> several options now. >> >> >>> (1) Keep xstatic-core and horizon-core same. >> >> >>> (2) Add specific members to xstatic-core >> >> >>> (3) Add specific horizon-plugin core to xstatic-core >> >> >>> (4) Split core membership into per-repo basis (perhaps too >> >> >>> complicated!!) >> >> >>> >> >> >>> My current vote is (2) as xstatic-core needs to understand what is >> >> >>> xstatic >> >> >>> and how it is maintained. >> >> >>> >> >> >>> Thanks, >> >> >>> Akihiro >> >> >>> >> >> >>> >> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : >> >> >>>> >> >> >>>> Hi Akihiro, >> >> >>>> >> >> >>>> >> >> >>>> Thanks for your comment. 
>> >> >>>> The background of my request to add us to xstatic-core comes from >> >> >>>> Ivan's comment in last PTG's etherpad for heat-dashboard >> discussion. >> >> >>>> >> >> >>>> >> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion >> >> >>>> Line135, "we can share ownership if needed - e0ne" >> >> >>>> >> >> >>>> Just in case, could you guys confirm unified opinion on this >> matter >> >> >>>> as >> >> >>>> Horizon team ? >> >> >>>> >> >> >>>> Frankly speaking I'm feeling the benefit to make us xstatic-core >> >> >>>> because it's easier & smoother to manage what we are taking for >> >> >>>> heat-dashboard. >> >> >>>> On the other hand, I can understand what Akihiro you are saying, >> the >> >> >>>> newly added repos belong to Horizon project & being managed by not >> >> >>>> Horizon core is not consistent. >> >> >>>> Also having exception might make unexpected confusion in near >> future. >> >> >>>> >> >> >>>> Eventually we will follow your opinion, let me hear Horizon team's >> >> >>>> conclusion. >> >> >>>> >> >> >>>> Regards, >> >> >>>> Kaz >> >> >>>> >> >> >>>> >> >> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >> >> >>>> > Hi Kaz, >> >> >>>> > >> >> >>>> > These repositories are under horizon project. It looks better to >> >> >>>> > keep >> >> >>>> > the >> >> >>>> > current core team. >> >> >>>> > It potentially brings some confusion if we treat some horizon >> >> >>>> > plugin >> >> >>>> > team >> >> >>>> > specially. >> >> >>>> > Reviewing xstatic repos would be a small burden, wo I think it >> >> >>>> > would >> >> >>>> > work >> >> >>>> > without problem even if only horizon-core can approve xstatic >> >> >>>> > reviews. >> >> >>>> > >> >> >>>> > >> >> >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara > >: >> >> >>>> >> >> >> >>>> >> Hi Ivan, Horizon folks, >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been >> >> >>>> >> landed. >> >> >>>> >> >> >> >>>> >> In project-config for them, I've set same acl-config as the >> >> >>>> >> existing >> >> >>>> >> xstatic repos. >> >> >>>> >> It means only "xstatic-core" can manage the newly created >> repos on >> >> >>>> >> gerrit. >> >> >>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" >> >> >>>> >> like as >> >> >>>> >> what horizon-core is doing ? >> >> >>>> >> >> >> >>>> >> xstatic-core >> >> >>>> >> https://review.openstack.org/#/admin/groups/385,members >> >> >>>> >> >> >> >>>> >> heat-dashboard-core >> >> >>>> >> https://review.openstack.org/#/admin/groups/1844,members >> >> >>>> >> >> >> >>>> >> Of course, we will surely touch only what we made, just would >> like >> >> >>>> >> to >> >> >>>> >> manage them smoothly by ourselves. >> >> >>>> >> In case we need to touch the other ones, will ask Horizon team >> for >> >> >>>> >> help. >> >> >>>> >> >> >> >>>> >> Thanks in advance. >> >> >>>> >> >> >> >>>> >> Regards, >> >> >>>> >> Kaz >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : >> >> >>>> >> > Hi Horizon Team, >> >> >>>> >> > >> >> >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin >> >> >>>> >> > option, >> >> >>>> >> > and submitted a patch for it. >> >> >>>> >> > Could you please help to review the patch. >> >> >>>> >> > >> >> >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >> >> >>>> >> > https://review.openstack.org/#/c/552259/ >> >> >>>> >> > >> >> >>>> >> > Thank you very much. 
>> >> >>>> >> > >> >> >>>> >> > Best Regards, >> >> >>>> >> > Xinni >> >> >>>> >> > >> >> >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >> >> >>>> >> > >> >> >>>> >> > wrote: >> >> >>>> >> >> >> >> >>>> >> >> Hi Kaz, >> >> >>>> >> >> >> >> >>>> >> >> Thanks for cleaning this up. I put +1 on both of these >> patches >> >> >>>> >> >> >> >> >>>> >> >> Regards, >> >> >>>> >> >> Ivan Kolodyazhny, >> >> >>>> >> >> http://blog.e0ne.info/ >> >> >>>> >> >> >> >> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >> >> >>>> >> >> >> >> >>>> >> >> wrote: >> >> >>>> >> >>> >> >> >>>> >> >>> Hi Ivan & Horizon folks, >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> Now we are submitting a couple of patches to have the new >> >> >>>> >> >>> xstatic >> >> >>>> >> >>> modules. >> >> >>>> >> >>> Let me request you to have review the following patches. >> >> >>>> >> >>> We need Horizon PTL's +1 to move these forward. >> >> >>>> >> >>> >> >> >>>> >> >>> project-config >> >> >>>> >> >>> https://review.openstack.org/#/c/551978/ >> >> >>>> >> >>> >> >> >>>> >> >>> governance >> >> >>>> >> >>> https://review.openstack.org/#/c/551980/ >> >> >>>> >> >>> >> >> >>>> >> >>> Thanks in advance:) >> >> >>>> >> >>> >> >> >>>> >> >>> Regards, >> >> >>>> >> >>> Kaz >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >> >> >>>> >> >>> : >> >> >>>> >> >>> > Yes, please do that. We can then discuss in the review >> about >> >> >>>> >> >>> > technical >> >> >>>> >> >>> > details. >> >> >>>> >> >>> > >> >> >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >> >> >>>> >> >>> > >> >> >>>> >> >>> > wrote: >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Hi, Akihiro >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Thanks for the quick reply. >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES >> should >> >> >>>> >> >>> >> not >> >> >>>> >> >>> >> be >> >> >>>> >> >>> >> modified. >> >> >>>> >> >>> >> It is much better to enhance horizon plugin settings, >> >> >>>> >> >>> >> and I think maybe there could be one option like >> >> >>>> >> >>> >> ADD_XSTATIC_MODULES. >> >> >>>> >> >>> >> This option adds the plugin's xstatic files in >> >> >>>> >> >>> >> STATICFILES_DIRS. >> >> >>>> >> >>> >> I am considering to add a bug report to describe it at >> >> >>>> >> >>> >> first, >> >> >>>> >> >>> >> and >> >> >>>> >> >>> >> give >> >> >>>> >> >>> >> a >> >> >>>> >> >>> >> patch later maybe. >> >> >>>> >> >>> >> Is that ok with the Horizon team? >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Best Regards. >> >> >>>> >> >>> >> Xinni >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> wrote: >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> Hi Xinni, >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge >> >> >>>> >> >>> >>> : >> >> >>>> >> >>> >>> > Hello Horizon Team, >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > I would like to hear about your opinions about how to >> >> >>>> >> >>> >>> > add >> >> >>>> >> >>> >>> > new >> >> >>>> >> >>> >>> > xstatic >> >> >>>> >> >>> >>> > modules to horizon settings. 
>> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party >> files >> >> >>>> >> >>> >>> > issue, >> >> >>>> >> >>> >>> > thanks >> >> >>>> >> >>> >>> > for >> >> >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing them >> and >> >> >>>> >> >>> >>> > referencing as >> >> >>>> >> >>> >>> > new >> >> >>>> >> >>> >>> > xstatic-* libs. >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> Thanks for moving this forward. >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> > So we installed the new xstatic files (not uploaded >> as >> >> >>>> >> >>> >>> > openstack >> >> >>>> >> >>> >>> > official >> >> >>>> >> >>> >>> > repos yet) in our development environment now, but >> >> >>>> >> >>> >>> > hesitate >> >> >>>> >> >>> >>> > to >> >> >>>> >> >>> >>> > decide >> >> >>>> >> >>> >>> > how to >> >> >>>> >> >>> >>> > add the new installed xstatic lib path to >> >> >>>> >> >>> >>> > STATICFILES_DIRS >> >> >>>> >> >>> >>> > in >> >> >>>> >> >>> >>> > openstack_dashboard.settings so that the static files >> >> >>>> >> >>> >>> > could >> >> >>>> >> >>> >>> > be >> >> >>>> >> >>> >>> > automatically >> >> >>>> >> >>> >>> > collected by *collectstatic* process. >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >> >> >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the >> relevant >> >> >>>> >> >>> >>> > static >> >> >>>> >> >>> >>> > fils >> >> >>>> >> >>> >>> > are >> >> >>>> >> >>> >>> > added >> >> >>>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon >> plugin >> >> >>>> >> >>> >>> > dashboard. >> >> >>>> >> >>> >>> > We may want new plugin setting keywords ( something >> >> >>>> >> >>> >>> > similar >> >> >>>> >> >>> >>> > to >> >> >>>> >> >>> >>> > ADD_JS_FILES) >> >> >>>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update >> >> >>>> >> >>> >>> > STATICFILES_DIRS). >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add >> xstatic >> >> >>>> >> >>> >>> modules >> >> >>>> >> >>> >>> through horizon plugin settings. I don't think it is a >> >> >>>> >> >>> >>> good >> >> >>>> >> >>> >>> idea >> >> >>>> >> >>> >>> to >> >> >>>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on >> horizon >> >> >>>> >> >>> >>> plugin >> >> >>>> >> >>> >>> usages. It makes difficult to track why and where a >> >> >>>> >> >>> >>> xstatic >> >> >>>> >> >>> >>> module >> >> >>>> >> >>> >>> in >> >> >>>> >> >>> >>> BASE_XSTATIC_MODULES is used. >> >> >>>> >> >>> >>> Multiple horizon plugins can add a same entry, so >> horizon >> >> >>>> >> >>> >>> code >> >> >>>> >> >>> >>> to >> >> >>>> >> >>> >>> handle plugin settings should merge multiple entries >> to a >> >> >>>> >> >>> >>> single >> >> >>>> >> >>> >>> one >> >> >>>> >> >>> >>> hopefully. >> >> >>>> >> >>> >>> My vote is to enhance the horizon plugin settings. 
>> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> Akihiro >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > Looking forward to hearing any suggestions from you >> >> >>>> >> >>> >>> > guys, >> >> >>>> >> >>> >>> > and >> >> >>>> >> >>> >>> > Best Regards, >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > Xinni Ge >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> __________________________________________________________________________ >> >> >>>> >> >>> >>> > OpenStack Development Mailing List (not for usage >> >> >>>> >> >>> >>> > questions) >> >> >>>> >> >>> >>> > Unsubscribe: >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> __________________________________________________________________________ >> >> >>>> >> >>> >>> OpenStack Development Mailing List (not for usage >> >> >>>> >> >>> >>> questions) >> >> >>>> >> >>> >>> Unsubscribe: >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> -- >> >> >>>> >> >>> >> 葛馨霓 Xinni Ge >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> __________________________________________________________________________ >> >> >>>> >> >>> >> OpenStack Development Mailing List (not for usage >> >> >>>> >> >>> >> questions) >> >> >>>> >> >>> >> Unsubscribe: >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >>>> >> >>> >> >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> __________________________________________________________________________ >> >> >>>> >> >>> > OpenStack Development Mailing List (not for usage >> questions) >> >> >>>> >> >>> > Unsubscribe: >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >>>> >> >>> > >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> >> __________________________________________________________________________ >> >> >>>> >> >>> OpenStack Development Mailing List (not for usage >> questions) >> >> >>>> >> >>> Unsubscribe: >> >> >>>> >> >>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>>> >> >>> >> >> >>>> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >>>> >> >> >> >> >>>> >> >> >> >> >>>> >> >> >> >> 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > 葛馨霓 Xinni Ge > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lajos.katona at ericsson.com Tue Jun 5 12:06:32 2018 From: lajos.katona at ericsson.com (Lajos Katona) Date: Tue, 5 Jun 2018 14:06:32 +0200 Subject: [openstack-dev] [heat][neutron] Extraroute support In-Reply-To: References: <9a7cbe15-e678-24e6-9e77-86fdc1429dc6@ericsson.com> Message-ID: Thanks for the answer. On 2018-06-01 18:04, Kevin Benton wrote: > The neutron API now supports compare and swap updates with an If-Match > header so the race condition can be avoided. > https://bugs.launchpad.net/neutron/+bug/1703234 > > > > On Fri, Jun 1, 2018, 04:57 Rabi Mishra > wrote: > > > On Fri, Jun 1, 2018 at 3:57 PM, Lajos Katona > > wrote: > > Hi, > > Could somebody help me out with Neutron's Extraroute support > in Hot templates. > The support status of the Extraroute is support.UNSUPPORTED in > heat, and only create and delete are the supported operations. > see: > https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/neutron/extraroute.py#LC35 > > As I see the unsupported tag was added when the feature was > moved from the contrib folder to in-tree > (https://review.openstack.org/186608) > Perhaps you can help me out why only create and delete are > supported and update not. > > > I think most of the resources when moved from contrib to in-tree > are marked as unsupported. Adding routes to an existing router by > multiple stacks can be racy and is probably the reason use of this > resource is not encouraged and hence it's not supported. You can > see the discussion in the original patch that proposed this > resource https://review.openstack.org/#/c/41044/ > > Not sure if things have changed on neutron side for us to revisit > the concerns. > > Also it does not have any update_allowed properties, hence no > handle_update(). It would be replaced if you change any property. > > Hope it helps. > > Thanks in advance for  the help. 
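As a rough illustration of the compare-and-swap update mentioned above: the
client reads the router's current revision_number and sends it back in an
If-Match header, so a concurrent change makes the update fail with a 412
instead of silently clobbering the routes. The endpoint, token and route
values below are made up for the sketch; they are not from the original thread.

    import requests

    NEUTRON = 'http://controller:9696/v2.0'  # made-up endpoint
    TOKEN = '...'                            # keystone token, obtained elsewhere
    ROUTER = 'ROUTER_UUID'                   # router to update
    HEADERS = {'X-Auth-Token': TOKEN}

    # Read the router, including its current revision_number.
    router = requests.get('%s/routers/%s' % (NEUTRON, ROUTER),
                          headers=HEADERS).json()['router']

    # Send the revision_number back in If-Match so the update only applies
    # if nobody else has modified the router in the meantime.
    resp = requests.put(
        '%s/routers/%s' % (NEUTRON, ROUTER),
        headers=dict(HEADERS,
                     **{'If-Match': 'revision_number=%d'
                                    % router['revision_number']}),
        json={'router': {'routes': router['routes'] + [
            {'destination': '10.0.2.0/24', 'nexthop': '10.0.0.10'}]}})

    if resp.status_code == 412:
        # Another writer updated the router first: re-read and retry
        # instead of overwriting its routes.
        pass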
> > Regards > Lajos > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Regards, > Rabi Mishra > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Jun 5 12:26:27 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 5 Jun 2018 07:26:27 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <61b4f192-3b48-83b4-9a21-e159fec67fcf@gmail.com> References: <20180604180742.GA6404@sm-xps> <1528146144-sup-2183@lrrr.local> <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> <61b4f192-3b48-83b4-9a21-e159fec67fcf@gmail.com> Message-ID: <20180605122627.GA75119@smcginnis-mbp.local> On Mon, Jun 04, 2018 at 06:44:15PM -0500, Matt Riedemann wrote: > On 6/4/2018 5:13 PM, Sean McGinnis wrote: > > Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would > > check here. We don't have to see if placement has been set up or if cell0 > > has been configured. Maybe once we have the facility in place we would > > find some things worth checking, but at present I don't know what that > > would be. > > Here is an example from the Cinder Queens upgrade release notes: > > "RBD/Ceph backends should adjust max_over_subscription_ratio to take into > account that the driver is no longer reporting volume’s physical usage but > it’s provisioned size." > > I'm assuming you could check if rbd is configured as a storage backend and > if so, is max_over_subscription_ratio set? If not, is it fatal? Does the > operator need to configure it before upgrading to Rocky? Or is it something > they should consider but don't necessary have to do - if that, there is a > 'WARNING' status for those types of things. > > Things that are good candidates for automating are anything that would stop > the cinder-volume service from starting, or things that require data > migrations before you can roll forward. In nova we've had blocking DB schema > migrations for stuff like this which basically mean "you haven't run the > online data migrations CLI yet so we're not letting you go any further until > your homework is done". > Thanks, I suppose we probably could find some things to at least WARN on. Maybe that would be useful. I suppose as far as a series goal goes, even if each project doesn't come up with a comprehensive set of checks, this would be a known thing deployers could use and potentially build some additional tooling around. This could be a good win for the overall ease of upgrade story. > Like I said, it's not black and white, but chances are good there are things > that fall into these categories. 
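To make that concrete, a standalone sketch of the kind of check being
discussed might look like the following. The file path, severity values and
the check logic itself are only illustrative -- there is no agreed framework
or interface yet -- but the option names match the Cinder example above.

    # Rough sketch of a pre-upgrade check: warn if an RBD backend is enabled
    # but max_over_subscription_ratio has not been reviewed/set.
    import configparser

    SUCCESS, WARNING = 0, 1

    def check_rbd_over_subscription(conf_file='/etc/cinder/cinder.conf'):
        conf = configparser.ConfigParser()
        conf.read(conf_file)
        backends = conf.get('DEFAULT', 'enabled_backends', fallback='')
        warnings = []
        for backend in (b.strip() for b in backends.split(',') if b.strip()):
            if not conf.has_section(backend):
                continue
            driver = conf.get(backend, 'volume_driver', fallback='')
            if 'rbd' in driver.lower() and not conf.has_option(
                    backend, 'max_over_subscription_ratio'):
                warnings.append(
                    'Backend "%s" uses the RBD driver but does not set '
                    'max_over_subscription_ratio; review the Queens release '
                    'notes before upgrading.' % backend)
        return (WARNING, warnings) if warnings else (SUCCESS, [])

Something this small could run as a NOOP-ish starting point and grow real
checks later, which is more or less the point of making it a series goal.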
> > -- > > Thanks, > > Matt From sean.mcginnis at gmx.com Tue Jun 5 12:31:27 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 5 Jun 2018 07:31:27 -0500 Subject: [openstack-dev] [tc] summary of joint leadership meeting from 20 May In-Reply-To: References: <1528145294-sup-9010@lrrr.local> <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> Message-ID: <20180605123127.GB75119@smcginnis-mbp.local> On Tue, Jun 05, 2018 at 11:21:16AM +0200, Thierry Carrez wrote: > Jay Pipes wrote: > > On 06/04/2018 05:02 PM, Doug Hellmann wrote: > >> [...]> > > Personally, I've very much enjoyed the separate PTGs because I've actually > > been able to get work done at them; something that was much harder when the > > design summits were part of the overall conference. > > Right, the trick is to try to preserve that productivity while making it > easier to travel to... One way would be to make sure the PTG remains a > separate event (separate days, separate venues, separate registration), > just co-located in same city and week. > > > [...] > >> There are a few plans under consideration, and no firm decisions > >> have been made, yet. We discussed a strawman proposal to combine > >> the summit and PTG in April, in Denver, that would look much like > >> our older Summit events (from the Folsom/Grizzly time frame) with > >> a few days of conference and a few days of design summit, with some > >> overlap in the middle of the week.  The dates, overlap, and > >> arrangements will depend on venue availability. > > > > Has the option of doing a single conference a year been addressed? Seems > > to me that we (the collective we) could save a lot of money not having > > to put on multiple giant events per year and instead have one. > > Yes, the same strawman proposal included the idea of leveraging an > existing international "OpenStack Day" event and raising its profile > rather than organizing a full second summit every year. The second PTG > of the year could then be kept as a separate event, or put next to that > "upgraded" OpenStack Day. > I actually really like this idea. As things slow down, there just aren't enough of big splashy new things to announce every 6 months. I think it could work well to have one Summit a year, while using the OSD events as a way to reach those folks that can't make it to the one big event for the year due to timing or location. It could help concentrate efforts to have bigger goals ready by the Summit and keep things on a better cadence. And if we can still do a PTG-like event along side one of the OSD events, it would allow development to still get the valuable face-to-face time that we've come to expect. > Thinking on this is still very much work in progress. > > -- > Thierry Carrez (ttx) > From sfinucan at redhat.com Tue Jun 5 12:50:52 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 05 Jun 2018 13:50:52 +0100 Subject: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs In-Reply-To: References: Message-ID: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> On Mon, 2018-06-04 at 10:49 -0700, Nadathur, Sundar wrote: > Hi, > > Cyborg needs to create RCs and traits for accelerators. The > original plan was to do that with nested RPs. To avoid rushing > the > Nova developers, I had proposed that Cyborg could start by > applying > the traits to the compute node RP, and accept the resulting > caveats > for Rocky, till we get nested RP support. That proposal did not > find > many takers, and Cyborg has essentially been in waiting mode. 
> > > > Since it is June already, and there is a risk of not delivering > anything meaningful in Rocky, I am reviving my older proposal, > which > is summarized as below: > > > Cyborg shall create the RCs and traits as per spec > (https://review.openstack.org/#/c/554717/), both in Rocky and > beyond. Only the RPs will change post-Rocky. > > > In Rocky: > > Cyborg will not create nested RPs. It shall apply the device > traits to the compute node RP. > Cyborg will document the resulting caveat, i.e., all devices > in the same host should have the same traits. In > particular, > we cannot have a GPU and a FPGA, or 2 FPGAs of different > types, in the same host. > Cyborg will document that upgrades to post-Rocky releases > will require operator intervention (as described below). > > > > For upgrade to post-Rocky world with nested RPs: > > The operator needs to stop all running instances that use an > accelerator. > The operator needs to run a script that removes the Cyborg > traits and the inventory for Cyborg RCs from compute node > RPs. > The operator can then perform the upgrade. The new Cyborg > agent/driver(s) shall created nested RPs and publish > inventory/traits as specified. > > > IMHO, it is acceptable for Cyborg to do this because it is new > and we can set expectations for the (lack of) upgrade plan. The > alternative is that potentially no meaningful use cases get > addressed in Rocky for Cyborg. > > > > Please LMK what you think. I thought nested resource providers were already supported by placement? To the best of my knowledge, what is not supported is virt drivers using these to report NUMA topologies but I doubt that affects you. The placement guys will need to weigh in on this as I could be missing something but it sounds like you can start using this functionality right now. Stephen > > > Regards, > > Sundar > > > > _____________________________________________________________________ > _____OpenStack Development Mailing List (not for usage > questions)Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribehttp://lists.openstack > .org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Jun 5 13:29:37 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 5 Jun 2018 09:29:37 -0400 Subject: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs In-Reply-To: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> References: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> Message-ID: On 06/05/2018 08:50 AM, Stephen Finucane wrote: > I thought nested resource providers were already supported by placement? > To the best of my knowledge, what is /not/ supported is virt drivers > using these to report NUMA topologies but I doubt that affects you. The > placement guys will need to weigh in on this as I could be missing > something but it sounds like you can start using this functionality > right now. To be clear, this is what placement and nova *currently* support with regards to nested resource providers: 1) When creating a resource provider in placement, you can specify a parent_provider_uuid and thus create trees of providers. This was placement API microversion 1.14. Also included in this microversion was support for displaying the parent and root provider UUID for resource providers. 
2) The nova "scheduler report client" (terrible name, it's mostly just the placement client at this point) understands how to call placement API 1.14 and create resource providers with a parent provider. 3) The nova scheduler report client uses a ProviderTree object [1] to cache information about the hierarchy of providers that it knows about. For nova-compute workers managing hypervisors, that means the ProviderTree object contained in the report client is rooted in a resource provider that represents the compute node itself (the hypervisor). For nova-compute workers managing baremetal, that means the ProviderTree object contains many root providers, each representing an Ironic baremetal node. 4) The placement API's GET /allocation_candidates endpoint now understands the concept of granular request groups [2]. Granular request groups are only relevant when a user wants to specify that child providers in a provider tree should be used to satisfy part of an overall scheduling request. However, this support is yet incomplete -- see #5 below. The following parts of the nested resource providers modeling are *NOT* yet complete, however: 5) GET /allocation_candidates does not currently return *results* when granular request groups are specified. So, while the placement service understands the *request* for granular groups, it doesn't yet have the ability to constrain the returned candidates appropriately. Tetsuro is actively working on this functionality in this patch series: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates 6) The virt drivers need to implement the update_provider_tree() interface [3] and construct the tree of resource providers along with appropriate inventory records for each child provider in the tree. Both libvirt and XenAPI virt drivers have patch series up that begin to take advantage of the nested provider modeling. However, a number of concerns [4] about in-place nova-compute upgrades when moving from a single resource provider to a nested provider tree model were raised, and we have begun brainstorming how to handle the migration of existing data in the single-provider model to the nested provider model. [5] We are blocking any reviews on patch series that modify the local provider modeling until these migration concerns are fully resolved. 7) The scheduler does not currently pass granular request groups to placement. Once #5 and #6 are resolved, and once the migration/upgrade path is resolved, clearly we will need to have the scheduler start making requests to placement that represent the granular request groups and have the scheduler pass the resulting allocation candidates to its filters and weighers. Hope this helps highlight where we currently are and the work still left to do (in Rocky) on nested resource providers. 
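To make point 1 concrete for the Cyborg case discussed above -- the endpoint,
token handling, provider name, resource class and trait below are all invented
for illustration, and real code would go through the scheduler report client
rather than raw REST calls -- creating a child provider under a compute node
and giving it inventory and a device trait looks roughly like this:

    import uuid
    import requests

    PLACEMENT = 'http://placement.example.com/placement'  # made-up endpoint
    TOKEN = '...'                                          # keystone token
    HEADERS = {
        'X-Auth-Token': TOKEN,
        # parent_provider_uuid needs placement microversion >= 1.14
        'OpenStack-API-Version': 'placement 1.14',
    }

    compute_rp_uuid = '...'  # UUID of the existing compute node provider
    fpga_rp_uuid = str(uuid.uuid4())

    # Custom resource classes and traits must exist before they are used.
    requests.put(PLACEMENT + '/resource_classes/CUSTOM_ACCELERATOR_FPGA',
                 headers=HEADERS)
    requests.put(PLACEMENT + '/traits/CUSTOM_FPGA_REGION_MODEL_X',
                 headers=HEADERS)

    # 1) Create a child provider hanging off the compute node provider.
    requests.post(PLACEMENT + '/resource_providers', headers=HEADERS, json={
        'uuid': fpga_rp_uuid,
        'name': 'compute-1_fpga_0',
        'parent_provider_uuid': compute_rp_uuid,
    })

    # 2) Report inventory of the custom resource class on the child.
    #    A freshly created provider starts at generation 0.
    inv = requests.put(
        PLACEMENT + '/resource_providers/%s/inventories' % fpga_rp_uuid,
        headers=HEADERS,
        json={'resource_provider_generation': 0,
              'inventories': {'CUSTOM_ACCELERATOR_FPGA': {'total': 1}}}).json()

    # 3) Tag the child provider with the device trait.
    requests.put(
        PLACEMENT + '/resource_providers/%s/traits' % fpga_rp_uuid,
        headers=HEADERS,
        json={'resource_provider_generation':
                  inv['resource_provider_generation'],
              'traits': ['CUSTOM_FPGA_REGION_MODEL_X']})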
Best, -jay [1] https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py [2] https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html [3] https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833 [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade From e0ne at e0ne.info Tue Jun 5 13:32:22 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Tue, 5 Jun 2018 16:32:22 +0300 Subject: [openstack-dev] [horizon][plugins][heat][searchlight][murano][sahara][watcher] Use default Django test runner instead of nose Message-ID: Hi team, In Horizon, we're going to get rid of unsupported Nose and use Django Test Runner instead of it [1]. Nose has some issues and limitations which blocks us in our testing improvement efforts. Nose has different test discovery mechanism than Django does. So, there was a chance to break some Horizon Plugins:(. Unfortunately, we haven't cross-project CI yet (TBH, I'm working on it and it's one of the first steps to get it done), that's why I tested this change [2] against all known plugins [3]. Most of the projects don't need any changes. I proposed few changed to plugins repositories [4] and most of them are merged already. Thanks a lot to everybody who helped me with it. Patches for heat-dashboard [5] and searchlight-ui [6] are under review. Additional efforts are needed for murano-dashboard, sahara-dashboard, and watcher-dashboard projects. murano-dashboard has Nose test runner enabled in the config, so Horizon change won't affect it. I proposed patches for sahara-dashboard [7] and watcher-dashboard [8] to explicitly enable Nose test runner there until we'll fix all related issues. I hope we'll have a good number of cross-project activities with these teams. Once all patches above will be merged, we'll be ready to the next step to make Horizon and plugins CI better. [1] https://review.openstack.org/#/c/544296/ [2] https://docs.google.com/spreadsheets/d/17Yiso6JLeRHBSqJhAiQYkqIAvQhvNFM8NgTkrPxovMo/edit?usp=sharing [3] https://docs.openstack.org/horizon/latest/install/plugin-registry.html [4] https://review.openstack.org/#/q/topic:bp/improve-horizon-testing+(status:open+OR+status:merged) [5] https://review.openstack.org/572095 [6] https://review.openstack.org/572124 [7] https://review.openstack.org/572390 [8] https://review.openstack.org/572391 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Jun 5 13:46:46 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 5 Jun 2018 09:46:46 -0400 Subject: [openstack-dev] [tc] StarlingX project status update In-Reply-To: References: Message-ID: Hi everyone: This email is just to provide an update to the initial email regarding the state of StarlingX. The team has proposed a set of repositories to be imported[1] which are completely new projects (not forks of OpenStack or any other open source software). Importing those projects will help us on-board the new StarlingX contributors to our community, using the same tools we use for developing our other projects. [1]: https://review.openstack.org/#/c/569562/ If anyone has any questions, I'd be more than happy to address them. 
Regards, Mohammed On Wed, May 30, 2018 at 4:23 PM, Mohammed Naser wrote: > Hi everyone: > > Over the past week in the summit, there was a lot of discussion > regarding StarlingX > and members of the technical commitee had a few productive discussions regarding > the best approach to deal with a proposed new pilot project for > incubation in the OSF's Edge > Computing strategic focus area: StarlingX. > > If you're not aware, StarlingX includes forks of some OpenStack > components and other open source software > which contain certain features that are specific to edge and > industrial IoT computing use cases. The code > behind the project is from Wind River (and is used to build a product > called "Titanium > Cloud"). > > At the moment, the goal of StarlingX hosting their projects on the > community infrastructure > is to get the developers used to the Gerrit workflow. The intention > is to evenutally > work with upstream teams in order to bring the features and bug fixes which are > specific to the fork back upstream, with an ideal goal of bringing all > the differences > upstream. > > We've discussed around all the different ways that we can approach > this and how to > help the StarlingX team be part of our community. If we can > succesfully do this, it would > be a big success for our community as well as our community gaining > contributors from > the Wind River team. In an ideal world, it's a win-win. > > The plan at the moment is the following: > - StarlingX will have the first import of code that is not forked, > simply other software that > they've developed to help deliver their product. This code can be > hosted with no problems. > - StarlingX will generate a list of patches to be brought upstream and > the StarlingX team > will work together with upstream teams in order to start backporting > and upstreaming the > codebase. Emilien Macchi (EmilienM) and I have volunteered to take > on the responsibility of > monitoring the progress upstreaming these patches. > - StarlingX contains a few forks of other non-OpenStack software. The > StarlingX team will work > with the authors of the original projects to ensure that they do not > mind us hosting a fork > of their software. If they don't, we'll proceed to host those > projects. If they prefer > something else (hosting it themselves, placing it on another hosting > service, etc.), > the StarlingX team will work with them in that way. > > We discussed approaches for cases where patches aren't acceptable > upstream, because they > diverge from the project mission or aren't comprehensive. Ideally all > of those could be turned > into acceptable changes that meet both team's criteria. In some cases, > adding plugin interfaces > or driver interfaces may be the best alternative. Only as a last > resort would we retain the > forks for a long period of time. > > From what was brought up, the team from Wind River is hoping to > on-board roughly 50 new full > time contributors. In combination with the features that they've > built that we can hopefully > upstream, I am hopeful that we can come to a win-win situation for > everyone in this. 
> > Regards, > Mohammed From doug at doughellmann.com Tue Jun 5 13:51:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 09:51:04 -0400 Subject: [openstack-dev] [horizon][plugins][heat][searchlight][murano][sahara][watcher] Use default Django test runner instead of nose In-Reply-To: References: Message-ID: <1528206617-sup-8376@lrrr.local> Excerpts from Ivan Kolodyazhny's message of 2018-06-05 16:32:22 +0300: > Hi team, > > In Horizon, we're going to get rid of unsupported Nose and use Django Test > Runner instead of it [1]. Nose has some issues and limitations which blocks > us in our testing improvement efforts. > > Nose has different test discovery mechanism than Django does. So, there was > a chance to break some Horizon Plugins:(. Unfortunately, we haven't > cross-project CI yet (TBH, I'm working on it and it's one of the first > steps to get it done), that's why I tested this change [2] against all > known plugins [3]. > > Most of the projects don't need any changes. I proposed few changed to > plugins repositories [4] and most of them are merged already. Thanks a lot > to everybody who helped me with it. Patches for heat-dashboard [5] and > searchlight-ui [6] are under review. > > Additional efforts are needed for murano-dashboard, sahara-dashboard, and > watcher-dashboard projects. murano-dashboard has Nose test runner enabled > in the config, so Horizon change won't affect it. > > I proposed patches for sahara-dashboard [7] and watcher-dashboard [8] to > explicitly enable Nose test runner there until we'll fix all related > issues. I hope we'll have a good number of cross-project activities with > these teams. > > Once all patches above will be merged, we'll be ready to the next step to > make Horizon and plugins CI better. > > > [1] https://review.openstack.org/#/c/544296/ > [2] > https://docs.google.com/spreadsheets/d/17Yiso6JLeRHBSqJhAiQYkqIAvQhvNFM8NgTkrPxovMo/edit?usp=sharing > [3] https://docs.openstack.org/horizon/latest/install/plugin-registry.html > [4] > https://review.openstack.org/#/q/topic:bp/improve-horizon-testing+(status:open+OR+status:merged) > [5] https://review.openstack.org/572095 > [6] https://review.openstack.org/572124 > [7] https://review.openstack.org/572390 > [8] https://review.openstack.org/572391 > > > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ Nice work! Thanks for taking the initiative on updating our tooling. Doug From openstack at fried.cc Tue Jun 5 13:56:25 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 5 Jun 2018 08:56:25 -0500 Subject: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs In-Reply-To: References: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> Message-ID: To summarize: cyborg could model things nested-wise, but there would be no way to schedule them yet. Couple of clarifications inline. On 06/05/2018 08:29 AM, Jay Pipes wrote: > On 06/05/2018 08:50 AM, Stephen Finucane wrote: >> I thought nested resource providers were already supported by >> placement? To the best of my knowledge, what is /not/ supported is >> virt drivers using these to report NUMA topologies but I doubt that >> affects you. The placement guys will need to weigh in on this as I >> could be missing something but it sounds like you can start using this >> functionality right now. 
> > To be clear, this is what placement and nova *currently* support with > regards to nested resource providers: > > 1) When creating a resource provider in placement, you can specify a > parent_provider_uuid and thus create trees of providers. This was > placement API microversion 1.14. Also included in this microversion was > support for displaying the parent and root provider UUID for resource > providers. > > 2) The nova "scheduler report client" (terrible name, it's mostly just > the placement client at this point) understands how to call placement > API 1.14 and create resource providers with a parent provider. > > 3) The nova scheduler report client uses a ProviderTree object [1] to > cache information about the hierarchy of providers that it knows about. > For nova-compute workers managing hypervisors, that means the > ProviderTree object contained in the report client is rooted in a > resource provider that represents the compute node itself (the > hypervisor). For nova-compute workers managing baremetal, that means the > ProviderTree object contains many root providers, each representing an > Ironic baremetal node. > > 4) The placement API's GET /allocation_candidates endpoint now > understands the concept of granular request groups [2]. Granular request > groups are only relevant when a user wants to specify that child > providers in a provider tree should be used to satisfy part of an > overall scheduling request. However, this support is yet incomplete -- > see #5 below. Granular request groups are also usable/useful when sharing providers are in play. That functionality is complete on both the placement side and the report client side (see below). > The following parts of the nested resource providers modeling are *NOT* > yet complete, however: > > 5) GET /allocation_candidates does not currently return *results* when > granular request groups are specified. So, while the placement service > understands the *request* for granular groups, it doesn't yet have the > ability to constrain the returned candidates appropriately. Tetsuro is > actively working on this functionality in this patch series: > > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates > > > 6) The virt drivers need to implement the update_provider_tree() > interface [3] and construct the tree of resource providers along with > appropriate inventory records for each child provider in the tree. Both > libvirt and XenAPI virt drivers have patch series up that begin to take > advantage of the nested provider modeling. However, a number of concerns > [4] about in-place nova-compute upgrades when moving from a single > resource provider to a nested provider tree model were raised, and we > have begun brainstorming how to handle the migration of existing data in > the single-provider model to the nested provider model. [5] We are > blocking any reviews on patch series that modify the local provider > modeling until these migration concerns are fully resolved. > > 7) The scheduler does not currently pass granular request groups to > placement. The code is in place to do this [6] - so the scheduler *will* pass granular request groups to placement if your flavor specifies them. As noted above, such flavors will be limited to exploiting sharing providers until Tetsuro's series merges. But no further code work is required on the scheduler side. 
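For reference, the granular syntax referred to here is just numbered groups in
the flavor extra specs; a made-up example that asks for one FPGA carrying a
specific trait, both satisfied by a single provider, would look like:

    # Illustrative flavor extra specs using a numbered (granular) request
    # group; the resource class and trait names are invented.
    extra_specs = {
        'resources1:CUSTOM_ACCELERATOR_FPGA': '1',
        'trait1:CUSTOM_FPGA_REGION_MODEL_X': 'required',
    }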
[6] https://review.openstack.org/#/c/515811/ > Once #5 and #6 are resolved, and once the migration/upgrade > path is resolved, clearly we will need to have the scheduler start > making requests to placement that represent the granular request groups > and have the scheduler pass the resulting allocation candidates to its > filters and weighers. > > Hope this helps highlight where we currently are and the work still left > to do (in Rocky) on nested resource providers. > > Best, > -jay > > > [1] > https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py > > [2] > https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html > > > [3] > https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833 > > > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html > > [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amotoki at gmail.com Tue Jun 5 13:59:06 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 5 Jun 2018 22:59:06 +0900 Subject: [openstack-dev] [horizon][plugins][heat][searchlight][murano][sahara][watcher] Use default Django test runner instead of nose In-Reply-To: <1528206617-sup-8376@lrrr.local> References: <1528206617-sup-8376@lrrr.local> Message-ID: This is an important step to drop nose and nosehtmloutput :) We plan to switch the test runner and then re-enable integration tests (with selenium) for cross project testing. In addition, we horizon team are trying to minimize gate breakage in horizon plugins for recent changes (this and django 2.0). Hopefully pending related patches will land soon. 2018年6月5日(火) 22:52 Doug Hellmann : > Excerpts from Ivan Kolodyazhny's message of 2018-06-05 16:32:22 +0300: > > Hi team, > > > > In Horizon, we're going to get rid of unsupported Nose and use Django > Test > > Runner instead of it [1]. Nose has some issues and limitations which > blocks > > us in our testing improvement efforts. > > > > Nose has different test discovery mechanism than Django does. So, there > was > > a chance to break some Horizon Plugins:(. Unfortunately, we haven't > > cross-project CI yet (TBH, I'm working on it and it's one of the first > > steps to get it done), that's why I tested this change [2] against all > > known plugins [3]. > > > > Most of the projects don't need any changes. I proposed few changed to > > plugins repositories [4] and most of them are merged already. Thanks a > lot > > to everybody who helped me with it. Patches for heat-dashboard [5] and > > searchlight-ui [6] are under review. > > > > Additional efforts are needed for murano-dashboard, sahara-dashboard, and > > watcher-dashboard projects. murano-dashboard has Nose test runner enabled > > in the config, so Horizon change won't affect it. > > > > I proposed patches for sahara-dashboard [7] and watcher-dashboard [8] to > > explicitly enable Nose test runner there until we'll fix all related > > issues. I hope we'll have a good number of cross-project activities with > > these teams. > > > > Once all patches above will be merged, we'll be ready to the next step to > > make Horizon and plugins CI better. 
> > > > > > [1] https://review.openstack.org/#/c/544296/ > > [2] > > > https://docs.google.com/spreadsheets/d/17Yiso6JLeRHBSqJhAiQYkqIAvQhvNFM8NgTkrPxovMo/edit?usp=sharing > > [3] > https://docs.openstack.org/horizon/latest/install/plugin-registry.html > > [4] > > > https://review.openstack.org/#/q/topic:bp/improve-horizon-testing+(status:open+OR+status:merged) > > [5] https://review.openstack.org/572095 > > [6] https://review.openstack.org/572124 > > [7] https://review.openstack.org/572390 > > [8] https://review.openstack.org/572391 > > > > > > > > Regards, > > Ivan Kolodyazhny, > > http://blog.e0ne.info/ > > Nice work! Thanks for taking the initiative on updating our tooling. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Tue Jun 5 14:05:20 2018 From: soulxu at gmail.com (Alex Xu) Date: Tue, 5 Jun 2018 22:05:20 +0800 Subject: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs In-Reply-To: References: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> Message-ID: Maybe I missed something. Is there anyway the nova-compute can know the resources are allocated from which child resource provider? For example, the host has two PFs. The request is asking one VF, then the nova-compute needs to know the VF is allocated from which PF (resource provider). As my understand, currently we only return a list of alternative resource provider to the nova-compute, those alternative is root resource provider. 2018-06-05 21:29 GMT+08:00 Jay Pipes : > On 06/05/2018 08:50 AM, Stephen Finucane wrote: > >> I thought nested resource providers were already supported by placement? >> To the best of my knowledge, what is /not/ supported is virt drivers using >> these to report NUMA topologies but I doubt that affects you. The placement >> guys will need to weigh in on this as I could be missing something but it >> sounds like you can start using this functionality right now. >> > > To be clear, this is what placement and nova *currently* support with > regards to nested resource providers: > > 1) When creating a resource provider in placement, you can specify a > parent_provider_uuid and thus create trees of providers. This was placement > API microversion 1.14. Also included in this microversion was support for > displaying the parent and root provider UUID for resource providers. > > 2) The nova "scheduler report client" (terrible name, it's mostly just the > placement client at this point) understands how to call placement API 1.14 > and create resource providers with a parent provider. > > 3) The nova scheduler report client uses a ProviderTree object [1] to > cache information about the hierarchy of providers that it knows about. For > nova-compute workers managing hypervisors, that means the ProviderTree > object contained in the report client is rooted in a resource provider that > represents the compute node itself (the hypervisor). For nova-compute > workers managing baremetal, that means the ProviderTree object contains > many root providers, each representing an Ironic baremetal node. > > 4) The placement API's GET /allocation_candidates endpoint now understands > the concept of granular request groups [2]. 
Granular request groups are > only relevant when a user wants to specify that child providers in a > provider tree should be used to satisfy part of an overall scheduling > request. However, this support is yet incomplete -- see #5 below. > > The following parts of the nested resource providers modeling are *NOT* > yet complete, however: > > 5) GET /allocation_candidates does not currently return *results* when > granular request groups are specified. So, while the placement service > understands the *request* for granular groups, it doesn't yet have the > ability to constrain the returned candidates appropriately. Tetsuro is > actively working on this functionality in this patch series: > > https://review.openstack.org/#/q/status:open+project:opensta > ck/nova+branch:master+topic:bp/nested-resource-providers- > allocation-candidates > > 6) The virt drivers need to implement the update_provider_tree() interface > [3] and construct the tree of resource providers along with appropriate > inventory records for each child provider in the tree. Both libvirt and > XenAPI virt drivers have patch series up that begin to take advantage of > the nested provider modeling. However, a number of concerns [4] about > in-place nova-compute upgrades when moving from a single resource provider > to a nested provider tree model were raised, and we have begun > brainstorming how to handle the migration of existing data in the > single-provider model to the nested provider model. [5] We are blocking any > reviews on patch series that modify the local provider modeling until these > migration concerns are fully resolved. > > 7) The scheduler does not currently pass granular request groups to > placement. Once #5 and #6 are resolved, and once the migration/upgrade path > is resolved, clearly we will need to have the scheduler start making > requests to placement that represent the granular request groups and have > the scheduler pass the resulting allocation candidates to its filters and > weighers. > > Hope this helps highlight where we currently are and the work still left > to do (in Rocky) on nested resource providers. > > Best, > -jay > > > [1] https://github.com/openstack/nova/blob/master/nova/compute/p > rovider_tree.py > > [2] https://specs.openstack.org/openstack/nova-specs/specs/queen > s/approved/granular-resource-requests.html > > [3] https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a > 7aca73d185775d43df2/nova/virt/driver.py#L833 > > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/ > 130783.html > > [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From henrynash9 at mac.com Tue Jun 5 14:09:55 2018 From: henrynash9 at mac.com (Henry Nash) Date: Tue, 05 Jun 2018 15:09:55 +0100 Subject: [openstack-dev] Unsubscribe In-Reply-To: References: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> Message-ID: <0123F142-0697-46AC-9D72-6535F0023F17@mac.com> > On 5 Jun 2018, at 14:56, Eric Fried wrote: > > To summarize: cyborg could model things nested-wise, but there would be > no way to schedule them yet. > > Couple of clarifications inline. 
> > On 06/05/2018 08:29 AM, Jay Pipes wrote: >> On 06/05/2018 08:50 AM, Stephen Finucane wrote: >>> I thought nested resource providers were already supported by >>> placement? To the best of my knowledge, what is /not/ supported is >>> virt drivers using these to report NUMA topologies but I doubt that >>> affects you. The placement guys will need to weigh in on this as I >>> could be missing something but it sounds like you can start using this >>> functionality right now. >> >> To be clear, this is what placement and nova *currently* support with >> regards to nested resource providers: >> >> 1) When creating a resource provider in placement, you can specify a >> parent_provider_uuid and thus create trees of providers. This was >> placement API microversion 1.14. Also included in this microversion was >> support for displaying the parent and root provider UUID for resource >> providers. >> >> 2) The nova "scheduler report client" (terrible name, it's mostly just >> the placement client at this point) understands how to call placement >> API 1.14 and create resource providers with a parent provider. >> >> 3) The nova scheduler report client uses a ProviderTree object [1] to >> cache information about the hierarchy of providers that it knows about. >> For nova-compute workers managing hypervisors, that means the >> ProviderTree object contained in the report client is rooted in a >> resource provider that represents the compute node itself (the >> hypervisor). For nova-compute workers managing baremetal, that means the >> ProviderTree object contains many root providers, each representing an >> Ironic baremetal node. >> >> 4) The placement API's GET /allocation_candidates endpoint now >> understands the concept of granular request groups [2]. Granular request >> groups are only relevant when a user wants to specify that child >> providers in a provider tree should be used to satisfy part of an >> overall scheduling request. However, this support is yet incomplete -- >> see #5 below. > > Granular request groups are also usable/useful when sharing providers > are in play. That functionality is complete on both the placement side > and the report client side (see below). > >> The following parts of the nested resource providers modeling are *NOT* >> yet complete, however: >> >> 5) GET /allocation_candidates does not currently return *results* when >> granular request groups are specified. So, while the placement service >> understands the *request* for granular groups, it doesn't yet have the >> ability to constrain the returned candidates appropriately. Tetsuro is >> actively working on this functionality in this patch series: >> >> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates >> >> >> 6) The virt drivers need to implement the update_provider_tree() >> interface [3] and construct the tree of resource providers along with >> appropriate inventory records for each child provider in the tree. Both >> libvirt and XenAPI virt drivers have patch series up that begin to take >> advantage of the nested provider modeling. However, a number of concerns >> [4] about in-place nova-compute upgrades when moving from a single >> resource provider to a nested provider tree model were raised, and we >> have begun brainstorming how to handle the migration of existing data in >> the single-provider model to the nested provider model. 
[5] We are >> blocking any reviews on patch series that modify the local provider >> modeling until these migration concerns are fully resolved. >> >> 7) The scheduler does not currently pass granular request groups to >> placement. > > The code is in place to do this [6] - so the scheduler *will* pass > granular request groups to placement if your flavor specifies them. As > noted above, such flavors will be limited to exploiting sharing > providers until Tetsuro's series merges. But no further code work is > required on the scheduler side. > > [6] https://review.openstack.org/#/c/515811/ > >> Once #5 and #6 are resolved, and once the migration/upgrade >> path is resolved, clearly we will need to have the scheduler start >> making requests to placement that represent the granular request groups >> and have the scheduler pass the resulting allocation candidates to its >> filters and weighers. >> >> Hope this helps highlight where we currently are and the work still left >> to do (in Rocky) on nested resource providers. >> >> Best, >> -jay >> >> >> [1] >> https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py >> >> [2] >> https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html >> >> >> [3] >> https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833 >> >> >> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html >> >> [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Jun 5 14:10:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 10:10:35 -0400 Subject: [openstack-dev] [tc] summary of joint leadership meeting from 20 May In-Reply-To: <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> References: <1528145294-sup-9010@lrrr.local> <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> Message-ID: <1528207132-sup-6054@lrrr.local> Excerpts from Jay Pipes's message of 2018-06-04 18:47:22 -0400: > On 06/04/2018 05:02 PM, Doug Hellmann wrote: > > The most significant point of interest to the contributor > > community from this section of the meeting was the apparently > > overwhelming interest from companies employing contributors, as > > well as 2/3 of the contributors to recent releases who responded > > to the survey, to bring the PTG and summit back together as a single > > event. This had come up at the meeting in Dublin as well, but in > > the time since then the discussions progressed and it looks much > > more likely that we will at least try re-combining the two events. > > OK, so will we return to having eleventy billion different mid-cycle > events for each project? Given that the main objections seem to be funding the travel (not the events themselves), or participants not *wanting* to go to that many events, I suspect not. 
> Personally, I've very much enjoyed the separate PTGs because I've > actually been able to get work done at them; something that was much > harder when the design summits were part of the overall conference. Yes, me, too. I find it useful to separate the discussions focused on internal team planning as opposed to setting priorities for the community more broadly. If we recombine the events I hope we can find a way to retain both types of conversations. > In fact I haven't gone to the last two summit events because of what I > perceive to be a continued trend of the summits being focused on > marketing, buzzwords and vendor pitches/sales. An extra spoonful of the > "edge", anyone? I've found the Forums significantly more useful the last 2 times. We definitely felt your absence in a few sessions. I don't think I've attended a regular talk at a summit in years. Maybe if we're going to combine the events again we can get a track set aside for contributor-focused talks, though. Not onboarding, or how-to-use-it talks, but deep-dives into how things like the new placement system was designed or how to build a driver for oslo.config, or whatever. The sort of thing you would expect to find at a tech-focused conference with contributors attending. > > We discussed several reasons, including travel expense, travel visa > > difficulties, time away from home and family, and sponsorship of > > the events themselves. > > > > There are a few plans under consideration, and no firm decisions > > have been made, yet. We discussed a strawman proposal to combine > > the summit and PTG in April, in Denver, that would look much like > > our older Summit events (from the Folsom/Grizzly time frame) with > > a few days of conference and a few days of design summit, with some > > overlap in the middle of the week. The dates, overlap, and > > arrangements will depend on venue availability. > > Has the option of doing a single conference a year been addressed? Seems > to me that we (the collective we) could save a lot of money not having > to put on multiple giant events per year and instead have one. > > Just my two cents, but the OpenStack and Linux foundations seem to be > pumping out new "open events" at a pretty regular clip -- OpenStack > Summit, OpenDev, Open Networking Summit, OpenStack Days, OpenInfra Days, > OpenNFV summit, the list keeps growing... at some point, do we think > that the industry as a whole is just going to get event overload? Thierry addressed this a bit, but I want to emphasize that the cost savings were less focused on the event itself or foundation costs and more on the travel expenses for all of the people going to the event. So, yes, having fewer events (or focusing more on regional events) would help with that, some. It's not clear to me how much of a critical mass of contributors we would get at regional events, though, unless we planned for it explicitly. 
Doug From doug at doughellmann.com Tue Jun 5 14:14:33 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 10:14:33 -0400 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <20180605122627.GA75119@smcginnis-mbp.local> References: <20180604180742.GA6404@sm-xps> <1528146144-sup-2183@lrrr.local> <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> <61b4f192-3b48-83b4-9a21-e159fec67fcf@gmail.com> <20180605122627.GA75119@smcginnis-mbp.local> Message-ID: <1528207912-sup-6956@lrrr.local> Excerpts from Sean McGinnis's message of 2018-06-05 07:26:27 -0500: > On Mon, Jun 04, 2018 at 06:44:15PM -0500, Matt Riedemann wrote: > > On 6/4/2018 5:13 PM, Sean McGinnis wrote: > > > Yes, exactly what I meant by the NOOP. I'm not sure what Cinder would > > > check here. We don't have to see if placement has been set up or if cell0 > > > has been configured. Maybe once we have the facility in place we would > > > find some things worth checking, but at present I don't know what that > > > would be. > > > > Here is an example from the Cinder Queens upgrade release notes: > > > > "RBD/Ceph backends should adjust max_over_subscription_ratio to take into > > account that the driver is no longer reporting volume’s physical usage but > > it’s provisioned size." > > > > I'm assuming you could check if rbd is configured as a storage backend and > > if so, is max_over_subscription_ratio set? If not, is it fatal? Does the > > operator need to configure it before upgrading to Rocky? Or is it something > > they should consider but don't necessary have to do - if that, there is a > > 'WARNING' status for those types of things. > > > > Things that are good candidates for automating are anything that would stop > > the cinder-volume service from starting, or things that require data > > migrations before you can roll forward. In nova we've had blocking DB schema > > migrations for stuff like this which basically mean "you haven't run the > > online data migrations CLI yet so we're not letting you go any further until > > your homework is done". > > > > Thanks, I suppose we probably could find some things to at least WARN on. Maybe > that would be useful. > > I suppose as far as a series goal goes, even if each project doesn't come up > with a comprehensive set of checks, this would be a known thing deployers could > use and potentially build some additional tooling around. This could be a good > win for the overall ease of upgrade story. > > > Like I said, it's not black and white, but chances are good there are things > > that fall into these categories. In the past when we've had questions about how broadly a goal is going to affect projects, we did a little data collection work up front. Maybe that's the next step for this one? Does someone want to volunteer to go around and talk to some of the project teams that are likely candidates for these sorts of upgrade blockers to start making lists? Doug > > > > -- > > > > Thanks, > > > > Matt > From sean.mcginnis at gmx.com Tue Jun 5 14:16:43 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 5 Jun 2018 09:16:43 -0500 Subject: [openstack-dev] Unsubscribe In-Reply-To: <0123F142-0697-46AC-9D72-6535F0023F17@mac.com> References: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> <0123F142-0697-46AC-9D72-6535F0023F17@mac.com> Message-ID: <996d689f-c55b-c325-ff04-bc3ab11c92a0@gmx.com> Hey Henry, see footer on every message for how to unsubscribe. 
On 06/05/2018 09:09 AM, Henry Nash wrote: > >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Jun 5 14:39:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 10:39:59 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1527966421-sup-6019@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> <1527966421-sup-6019@lrrr.local> Message-ID: <1528209520-sup-5595@lrrr.local> Excerpts from Doug Hellmann's message of 2018-06-02 15:08:28 -0400: > Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +0000: > > On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote: > > [...] > > > It feels like we would be saying that we don't trust 2 core reviewers > > > from the same company to put the project's goals or priorities over > > > their employer's. And that doesn't feel like an assumption I would > > > want us to encourage through a tag meant to show the health of the > > > project. > > [...] > > > > That's one way of putting it. On the other hand, if we ostensibly > > have that sort of guideline (say, two core reviewers shouldn't be > > the only ones to review a change submitted by someone else from > > their same organization if the team is large and diverse enough to > > support such a pattern) then it gives our reviewers a better > > argument to push back on their management _if_ they're being > > strongly urged to review/approve certain patches. At least then they > > can say, "this really isn't going to fly because we have to get a > > reviewer from another organization to agree it's in the best > > interests of the project" rather than "fire me if you want but I'm > > not approving that change, no matter how much your product launch is > > going to be delayed." > > Do we have that problem? I honestly don't know how much pressure other > folks are feeling. My impression is that we've mostly become good at > finding the necessary compromises, but my experience doesn't cover all > of our teams. To all of the people who have replied to me privately that they have experienced this problem: We can't really start to address it until it's out here in the open. Please post to the list. Doug From cjeanner at redhat.com Tue Jun 5 14:44:03 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 5 Jun 2018 16:44:03 +0200 Subject: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud Message-ID: Hello guys! I'm currently working on python-tripleoclient in order to squash the dreadful "NOPASSWD:ALL" allowed to the "stack" user. 
The start was an issue with the rights on some files being wrong (owner by root instead of stack, in stack home). After some digging and poking, it appears the undercloud deployment is called with a "sudo openstack tripleo deploy" command - this, of course, creates some major issues regarding both security and rights management.

I see a couple of ways to correct that bad situation:
- leave the global "sudo" call, and play with setuid/setgid when we actually don't need the root access (as it's mentioned in this comment¹)
- drop that global sudo call, and replace all the necessary calls with some "sudo" when needed. This involves the replacement of native Python code, like "os.mkdir" and the like.

The first one isn't a solution - code maintenance will not be possible, having to think "darn, os.setuid() before calling that, because I don't need root"; that is the current way, and it just doesn't apply.

So I started the second one. It's, of course, longer, not really nice and painful, but at least this will end in a good state, and a not-so-bad solution. This also meets the current work of the Security Squad about "limiting sudo rights and accesses".

For now I don't have a proper patch to show, but it will most probably appear shortly, as a Work In Progress (I don't think it will be mergeable before some time, due to all the constraints we have regarding version portability, new sudoer integration and so on).

I'll post the relevant review link as an answer to this thread when I have something I can show.

Cheers,

C.

¹ https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/tripleo_deploy.py#L827-L829

--
Cédric Jeanneret
Software Engineer
DFG:DF

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From kgiusti at gmail.com Tue Jun 5 14:47:17 2018
From: kgiusti at gmail.com (Ken Giusti)
Date: Tue, 5 Jun 2018 10:47:17 -0400
Subject: [openstack-dev] [heat][ci][infra] telemetry test broken on oslo.messaging stable/queens
Message-ID:

Hi,

The telemetry integration test for oslo.messaging has started failing on the stable/queens branch [0].

A quick review of the logs points to a change in heat-tempest-plugin that is incompatible with the version of gabbi from queens upper constraints (1.40.0) [1][2].

The job definition [3] includes required-projects that do not have stable/queens branches - including heat-tempest-plugin.

My question - how do I prevent this job from breaking when these unbranched projects introduce changes that are incompatible with upper-constraints for a particular branch?

I've tried to use override-checkout in the job definition, but that seems a bit hacky in this case since the tagged versions don't appear to work and I've resorted to a hardcoded ref [4].

Advice appreciated, thanks!
[0] https://review.openstack.org/#/c/567124/ [1] http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624 [2] http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332 [3] https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250 [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml -- Ken Giusti (kgiusti at gmail.com) From lebre.adrien at free.fr Tue Jun 5 14:48:55 2018 From: lebre.adrien at free.fr (lebre.adrien at free.fr) Date: Tue, 5 Jun 2018 16:48:55 +0200 (CEST) Subject: [openstack-dev] [FEMDC] meetings suspended until further notice In-Reply-To: <67878378.160787292.1528209018026.JavaMail.root@zimbra29-e5.priv.proxad.net> Message-ID: <596449522.160890842.1528210135736.JavaMail.root@zimbra29-e5.priv.proxad.net> Dear all, Following the exchanges we had during the Vancouver summit, in particular the non-sense to maintain/animate two groups targeting similar challenges (ie., the FEMDC SIG [1] and the new Edge Computing Working group [2]), FEMDC meetings are suspended until further notice. If you are interested by Edge Computing discussions, please see information available on the new edge wiki page [2]. Thanks ad_ri3n_ [1]https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds [2]https://wiki.openstack.org/wiki/Edge_Computing_Group From openstack at fried.cc Tue Jun 5 14:53:00 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 5 Jun 2018 09:53:00 -0500 Subject: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs In-Reply-To: References: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> Message-ID: Alex- Allocations for an instance are pulled down by the compute manager and passed into the virt driver's spawn method since [1]. An allocation comprises a consumer, provider, resource class, and amount. Once we can schedule to trees, the allocations pulled down by the compute manager will span the tree as appropriate. So in that sense, yes, nova-compute knows which amounts of which resource classes come from which providers. However, if you're asking about the situation where we have two different allocations of the same resource class coming from two separate providers: Yes, we can still tell which RCxAMOUNT is associated with which provider; but No, we still have no inherent way to correlate a specific one of those allocations with the part of the *request* it came from. If just the provider UUID isn't enough for the virt driver to figure out what to do, it may have to figure it out by looking at the flavor (and/or image metadata), inspecting the traits on the providers associated with the allocations, etc. (The theory here is that, if the virt driver can't tell the difference at that point, then it actually doesn't matter.) [1] https://review.openstack.org/#/c/511879/ On 06/05/2018 09:05 AM, Alex Xu wrote: > Maybe I missed something. Is there anyway the nova-compute can know the > resources are allocated from which child resource provider? For example, > the host has two PFs. The request is asking one VF, then the > nova-compute needs to know the VF is allocated from which PF (resource > provider). As my understand, currently we only return a list of > alternative resource provider to the nova-compute, those alternative is > root resource provider. 
> > 2018-06-05 21:29 GMT+08:00 Jay Pipes >: > > On 06/05/2018 08:50 AM, Stephen Finucane wrote: > > I thought nested resource providers were already supported by > placement? To the best of my knowledge, what is /not/ supported > is virt drivers using these to report NUMA topologies but I > doubt that affects you. The placement guys will need to weigh in > on this as I could be missing something but it sounds like you > can start using this functionality right now. > > > To be clear, this is what placement and nova *currently* support > with regards to nested resource providers: > > 1) When creating a resource provider in placement, you can specify a > parent_provider_uuid and thus create trees of providers. This was > placement API microversion 1.14. Also included in this microversion > was support for displaying the parent and root provider UUID for > resource providers. > > 2) The nova "scheduler report client" (terrible name, it's mostly > just the placement client at this point) understands how to call > placement API 1.14 and create resource providers with a parent provider. > > 3) The nova scheduler report client uses a ProviderTree object [1] > to cache information about the hierarchy of providers that it knows > about. For nova-compute workers managing hypervisors, that means the > ProviderTree object contained in the report client is rooted in a > resource provider that represents the compute node itself (the > hypervisor). For nova-compute workers managing baremetal, that means > the ProviderTree object contains many root providers, each > representing an Ironic baremetal node. > > 4) The placement API's GET /allocation_candidates endpoint now > understands the concept of granular request groups [2]. Granular > request groups are only relevant when a user wants to specify that > child providers in a provider tree should be used to satisfy part of > an overall scheduling request. However, this support is yet > incomplete -- see #5 below. > > The following parts of the nested resource providers modeling are > *NOT* yet complete, however: > > 5) GET /allocation_candidates does not currently return *results* > when granular request groups are specified. So, while the placement > service understands the *request* for granular groups, it doesn't > yet have the ability to constrain the returned candidates > appropriately. Tetsuro is actively working on this functionality in > this patch series: > > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers-allocation-candidates > > > 6) The virt drivers need to implement the update_provider_tree() > interface [3] and construct the tree of resource providers along > with appropriate inventory records for each child provider in the > tree. Both libvirt and XenAPI virt drivers have patch series up that > begin to take advantage of the nested provider modeling. However, a > number of concerns [4] about in-place nova-compute upgrades when > moving from a single resource provider to a nested provider tree > model were raised, and we have begun brainstorming how to handle the > migration of existing data in the single-provider model to the > nested provider model. [5] We are blocking any reviews on patch > series that modify the local provider modeling until these migration > concerns are fully resolved. > > 7) The scheduler does not currently pass granular request groups to > placement. 
Once #5 and #6 are resolved, and once the > migration/upgrade path is resolved, clearly we will need to have the > scheduler start making requests to placement that represent the > granular request groups and have the scheduler pass the resulting > allocation candidates to its filters and weighers. > > Hope this helps highlight where we currently are and the work still > left to do (in Rocky) on nested resource providers. > > Best, > -jay > > > [1] > https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py > > > [2] > https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/granular-resource-requests.html > > > [3] > https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833 > > > [4] > http://lists.openstack.org/pipermail/openstack-dev/2018-May/130783.html > > > [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jungleboyj at gmail.com Tue Jun 5 15:22:12 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 5 Jun 2018 10:22:12 -0500 Subject: [openstack-dev] [tc] summary of joint leadership meeting from 20 May In-Reply-To: <20180605123127.GB75119@smcginnis-mbp.local> References: <1528145294-sup-9010@lrrr.local> <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> <20180605123127.GB75119@smcginnis-mbp.local> Message-ID: On 6/5/2018 7:31 AM, Sean McGinnis wrote: > On Tue, Jun 05, 2018 at 11:21:16AM +0200, Thierry Carrez wrote: >> Jay Pipes wrote: >>> On 06/04/2018 05:02 PM, Doug Hellmann wrote: >>>> [...]> >>> Personally, I've very much enjoyed the separate PTGs because I've actually >>> been able to get work done at them; something that was much harder when the >>> design summits were part of the overall conference. >> Right, the trick is to try to preserve that productivity while making it >> easier to travel to... One way would be to make sure the PTG remains a >> separate event (separate days, separate venues, separate registration), >> just co-located in same city and week. >> >>> [...] >>>> There are a few plans under consideration, and no firm decisions >>>> have been made, yet. We discussed a strawman proposal to combine >>>> the summit and PTG in April, in Denver, that would look much like >>>> our older Summit events (from the Folsom/Grizzly time frame) with >>>> a few days of conference and a few days of design summit, with some >>>> overlap in the middle of the week.  The dates, overlap, and >>>> arrangements will depend on venue availability. >>> Has the option of doing a single conference a year been addressed? Seems >>> to me that we (the collective we) could save a lot of money not having >>> to put on multiple giant events per year and instead have one. >> Yes, the same strawman proposal included the idea of leveraging an >> existing international "OpenStack Day" event and raising its profile >> rather than organizing a full second summit every year. 
The second PTG >> of the year could then be kept as a separate event, or put next to that >> "upgraded" OpenStack Day. >> > I actually really like this idea. As things slow down, there just aren't enough > of big splashy new things to announce every 6 months. I think it could work > well to have one Summit a year, while using the OSD events as a way to reach > those folks that can't make it to the one big event for the year due to timing > or location. > > It could help concentrate efforts to have bigger goals ready by the Summit and > keep things on a better cadence. And if we can still do a PTG-like event along > side one of the OSD events, it would allow development to still get the > valuable face-to-face time that we've come to expect. I think going to one large summit event a year could be good with a co-located PTG type event.  I think, however, that we would still need to have a Separate PTG type event at some other point in the year.  I think it is going to be hard to keep development momentum going in the projects without a couple of face-to-face meetings a year. >> Thinking on this is still very much work in progress. >> >> -- >> Thierry Carrez (ttx) >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gr at ham.ie Tue Jun 5 15:37:01 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 5 Jun 2018 16:37:01 +0100 Subject: [openstack-dev] [tc] summary of joint leadership meeting from 20 May In-Reply-To: References: <1528145294-sup-9010@lrrr.local> <07ae6fc4-659b-d883-a7b7-d880ed6c3a74@gmail.com> <20180605123127.GB75119@smcginnis-mbp.local> Message-ID: <02a92786-e86e-02d4-256e-4a8d945c2bac@ham.ie> On 05/06/18 16:22, Jay S Bryant wrote: > > > On 6/5/2018 7:31 AM, Sean McGinnis wrote: >> On Tue, Jun 05, 2018 at 11:21:16AM +0200, Thierry Carrez wrote: >>> Jay Pipes wrote: >>>> On 06/04/2018 05:02 PM, Doug Hellmann wrote: >>>>> [...]> >>>> Personally, I've very much enjoyed the separate PTGs because I've >>>> actually >>>> been able to get work done at them; something that was much harder >>>> when the >>>> design summits were part of the overall conference. >>> Right, the trick is to try to preserve that productivity while making it >>> easier to travel to... One way would be to make sure the PTG remains a >>> separate event (separate days, separate venues, separate registration), >>> just co-located in same city and week. >>> >>>> [...] >>>>> There are a few plans under consideration, and no firm decisions >>>>> have been made, yet. We discussed a strawman proposal to combine >>>>> the summit and PTG in April, in Denver, that would look much like >>>>> our older Summit events (from the Folsom/Grizzly time frame) with >>>>> a few days of conference and a few days of design summit, with some >>>>> overlap in the middle of the week.  The dates, overlap, and >>>>> arrangements will depend on venue availability. >>>> Has the option of doing a single conference a year been addressed? >>>> Seems >>>> to me that we (the collective we) could save a lot of money not having >>>> to put on multiple giant events per year and instead have one. >>> Yes, the same strawman proposal included the idea of leveraging an >>> existing international "OpenStack Day" event and raising its profile >>> rather than organizing a full second summit every year. 
The second PTG >>> of the year could then be kept as a separate event, or put next to that >>> "upgraded" OpenStack Day. >>> >> I actually really like this idea. As things slow down, there just >> aren't enough >> of big splashy new things to announce every 6 months. I think it could >> work >> well to have one Summit a year, while using the OSD events as a way to >> reach >> those folks that can't make it to the one big event for the year due >> to timing >> or location. >> >> It could help concentrate efforts to have bigger goals ready by the >> Summit and >> keep things on a better cadence. And if we can still do a PTG-like >> event along >> side one of the OSD events, it would allow development to still get the >> valuable face-to-face time that we've come to expect. > I think going to one large summit event a year could be good with a > co-located PTG type event.  I think, however, that we would still need > to have a Separate PTG type event at some other point in the year.  I > think it is going to be hard to keep development momentum going in the > projects without a couple of face-to-face meetings a year. >
I personally think a single summit (with a PTG / Ops Mid Cycle before or after) + a separate PTG / Ops Mid Cycle would be the best solution. We don't need to rotate locations - while my airmiles balance has really appreciated places like Tokyo / Sydney - we can just reuse locations. For example, saying that the week of May 20 something every year in Vancouver[1] (or $NORTH_AMERICAN_CITY) is the OpenStack Summit + Kata + OpenDev + Edge conference massively reduces planning / scouting. Then having the PTG (or OpenStack Foundation Developer & Ops Team Gathering to make any new groups feel as welcomed, and not "tacked on") in Denver + $NON_NORTH_AMERICAN_CITY[2] (again, as static locations to reduce planning / scouting) seems like a good idea.
1 - But let's just say Vancouver please :)
2 - not sure if this is actually a good idea, but I don't have the same visa problems that some of our contributors do, or have knowledge of how tax write-offs work, so if I am wrong please tell me.
>>> Thinking on this is still very much work in progress. >>> >>> -- >>> Thierry Carrez (ttx) >>> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doug at doughellmann.com Tue Jun 5 15:41:29 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 05 Jun 2018 11:41:29 -0400
Subject: Re: [openstack-dev] [heat][ci][infra] telemetry test broken on oslo.messaging stable/queens
In-Reply-To:
References:
Message-ID: <1528213168-sup-5113@lrrr.local>

Excerpts from Ken Giusti's message of 2018-06-05 10:47:17 -0400: > Hi, > > The telemetry integration test for oslo.messaging has started failing > on the stable/queens branch [0]. > > A quick review of the logs points to a change in heat-tempest-plugin > that is incompatible with the version of gabbi from queens upper > constraints (1.40.0) [1][2].
> > The job definition [3] includes required-projects that do not have > stable/queens branches - including heat-tempest-plugin. > > My question - how do I prevent this job from breaking when these > unbranched projects introduce changes that are incompatible with > upper-constrants for a particular branch? Aren't those projects co-gating on the oslo.messaging test job? How are the tests working for heat's stable/queens branch? Or telemetry? (whichever project is pulling in that tempest repo) > > I've tried to use override-checkout in the job definition, but that > seems a bit hacky in this case since the tagged versions don't appear > to work and I've resorted to a hardcoded ref [4]. > > Advice appreciated, thanks! > > [0] https://review.openstack.org/#/c/567124/ > [1] http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624 > [2] http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332 > [3] https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250 > [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml From gr at ham.ie Tue Jun 5 15:42:45 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 5 Jun 2018 16:42:45 +0100 Subject: [openstack-dev] [tc] StarlingX project status update In-Reply-To: References: Message-ID: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie> On 30/05/18 21:23, Mohammed Naser wrote: > Hi everyone: > > Over the past week in the summit, there was a lot of discussion > regarding StarlingX > and members of the technical commitee had a few productive discussions regarding > the best approach to deal with a proposed new pilot project for > incubation in the OSF's Edge > Computing strategic focus area: StarlingX. > > If you're not aware, StarlingX includes forks of some OpenStack > components and other open source software > which contain certain features that are specific to edge and > industrial IoT computing use cases. The code > behind the project is from Wind River (and is used to build a product > called "Titanium > Cloud"). > > At the moment, the goal of StarlingX hosting their projects on the > community infrastructure > is to get the developers used to the Gerrit workflow. The intention > is to evenutally > work with upstream teams in order to bring the features and bug fixes which are > specific to the fork back upstream, with an ideal goal of bringing all > the differences > upstream. > > We've discussed around all the different ways that we can approach > this and how to > help the StarlingX team be part of our community. If we can > succesfully do this, it would > be a big success for our community as well as our community gaining > contributors from > the Wind River team. In an ideal world, it's a win-win. > > The plan at the moment is the following: > - StarlingX will have the first import of code that is not forked, > simply other software that > they've developed to help deliver their product. This code can be > hosted with no problems. > - StarlingX will generate a list of patches to be brought upstream and > the StarlingX team > will work together with upstream teams in order to start backporting > and upstreaming the > codebase. Emilien Macchi (EmilienM) and I have volunteered to take > on the responsibility of > monitoring the progress upstreaming these patches. 
> - StarlingX contains a few forks of other non-OpenStack software. The > StarlingX team will work > with the authors of the original projects to ensure that they do not > mind us hosting a fork > of their software. If they don't, we'll proceed to host those > projects. If they prefer > something else (hosting it themselves, placing it on another hosting > service, etc.), > the StarlingX team will work with them in that way. > > We discussed approaches for cases where patches aren't acceptable > upstream, because they > diverge from the project mission or aren't comprehensive. Ideally all > of those could be turned > into acceptable changes that meet both team's criteria. In some cases, > adding plugin interfaces > or driver interfaces may be the best alternative. Only as a last > resort would we retain the > forks for a long period of time. I honestly think that these forks should never be inside the foundation. If there is a big enough disagreement between project teams and the fork, we (as the TC of the OpenStack project) and the board (of *OpenStack* Foundation) should support our current teams, who have been working in the open. There is plenty of companies who would have loved certain features in OpenStack over the years that an extra driver extension point would have enabled, but when the upstream team pushed back, they redesigned the feature to work with the community vision. We should not reward companies that didn't. > From what was brought up, the team from Wind River is hoping to > on-board roughly 50 new full > time contributors. In combination with the features that they've > built that we can hopefully > upstream, I am hopeful that we can come to a win-win situation for > everyone in this. > > Regards, > Mohammed > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Tue Jun 5 15:47:25 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 11:47:25 -0400 Subject: [openstack-dev] Forum Recap - Stein Release Goals In-Reply-To: References: <20180531205942.GA18176@sm-xps> <54dcfa53-c7b1-1e88-9c0e-19920b169fa7@gmail.com> <1528147141-sup-7778@lrrr.local> Message-ID: <1528213473-sup-4542@lrrr.local> Excerpts from Matt Riedemann's message of 2018-06-04 18:46:15 -0500: > On 6/4/2018 4:20 PM, Doug Hellmann wrote: > > See my comments on the other part of the thread, but I think this is too > > optimistic until we add a couple of people to the review team on OSC. > > > > Others from the OSC team who have a better perspective on how much work > > is actually left may have a different opinion though? > > Yeah that is definitely something I was thinking about in Vancouver. > > Would a more realistic goal be to decentralize the OSC code, like the > previous goal about how tempest plugins were done? Or similar to the > docs being decentralized? That would spread the review load onto the > projects that are actually writing CLIs for their resources - which they > are already doing in their per-project clients, e.g. python-novaclient > and python-cinderclient. > In the past we've tried to avoid that because we wanted some consistency in the UI design. 
I don't know if it's time to give up on that and reconsider dividing the commands into multiple repos, or just ask that people participate in building this tool for our users. I don't think it would be any more complicated to do the work in the OSC repo and gain some minimal experience that could let folks become cores than it would be to do the same work in a repo where they are already core. The plugin APIs are relatively stable so it's basically the same code. Doug From remo at rm.ht Tue Jun 5 15:59:18 2018 From: remo at rm.ht (Remo Mattei) Date: Tue, 5 Jun 2018 08:59:18 -0700 Subject: [openstack-dev] [tc] StarlingX project status update In-Reply-To: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie> References: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie> Message-ID: <4A494BF8-3FE6-4C8F-BEA0-618FBB451AB0@rm.ht> I agree with Graham +1 Remo > On Jun 5, 2018, at 8:42 AM, Graham Hayes wrote: > > > > On 30/05/18 21:23, Mohammed Naser wrote: >> Hi everyone: >> >> Over the past week in the summit, there was a lot of discussion >> regarding StarlingX >> and members of the technical commitee had a few productive discussions regarding >> the best approach to deal with a proposed new pilot project for >> incubation in the OSF's Edge >> Computing strategic focus area: StarlingX. >> >> If you're not aware, StarlingX includes forks of some OpenStack >> components and other open source software >> which contain certain features that are specific to edge and >> industrial IoT computing use cases. The code >> behind the project is from Wind River (and is used to build a product >> called "Titanium >> Cloud"). >> >> At the moment, the goal of StarlingX hosting their projects on the >> community infrastructure >> is to get the developers used to the Gerrit workflow. The intention >> is to evenutally >> work with upstream teams in order to bring the features and bug fixes which are >> specific to the fork back upstream, with an ideal goal of bringing all >> the differences >> upstream. >> >> We've discussed around all the different ways that we can approach >> this and how to >> help the StarlingX team be part of our community. If we can >> succesfully do this, it would >> be a big success for our community as well as our community gaining >> contributors from >> the Wind River team. In an ideal world, it's a win-win. >> >> The plan at the moment is the following: >> - StarlingX will have the first import of code that is not forked, >> simply other software that >> they've developed to help deliver their product. This code can be >> hosted with no problems. >> - StarlingX will generate a list of patches to be brought upstream and >> the StarlingX team >> will work together with upstream teams in order to start backporting >> and upstreaming the >> codebase. Emilien Macchi (EmilienM) and I have volunteered to take >> on the responsibility of >> monitoring the progress upstreaming these patches. >> - StarlingX contains a few forks of other non-OpenStack software. The >> StarlingX team will work >> with the authors of the original projects to ensure that they do not >> mind us hosting a fork >> of their software. If they don't, we'll proceed to host those >> projects. If they prefer >> something else (hosting it themselves, placing it on another hosting >> service, etc.), >> the StarlingX team will work with them in that way. >> >> We discussed approaches for cases where patches aren't acceptable >> upstream, because they >> diverge from the project mission or aren't comprehensive. 
Ideally all >> of those could be turned >> into acceptable changes that meet both team's criteria. In some cases, >> adding plugin interfaces >> or driver interfaces may be the best alternative. Only as a last >> resort would we retain the >> forks for a long period of time. > > I honestly think that these forks should never be inside the foundation. > If there is a big enough disagreement between project teams and the > fork, we (as the TC of the OpenStack project) and the board (of > *OpenStack* Foundation) should support our current teams, who have > been working in the open. > > There is plenty of companies who would have loved certain features in > OpenStack over the years that an extra driver extension point would > have enabled, but when the upstream team pushed back, they redesigned > the feature to work with the community vision. We should not reward > companies that didn't. > >> From what was brought up, the team from Wind River is hoping to >> on-board roughly 50 new full >> time contributors. In combination with the features that they've >> built that we can hopefully >> upstream, I am hopeful that we can come to a win-win situation for >> everyone in this. >> >> Regards, >> Mohammed >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Tue Jun 5 16:08:20 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 5 Jun 2018 17:08:20 +0100 Subject: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud In-Reply-To: References: Message-ID: On Tue, Jun 5, 2018 at 3:44 PM, Cédric Jeanneret wrote: > Hello guys! > > I'm currently working on python-tripleoclient in order to squash the > dreadful "NOPASSWD:ALL" allowed to the "stack" user. > > The start was an issue with the rights on some files being wrong (owner > by root instead of stack, in stack home). After some digging and poking, > it appears the undercloud deployment is called with a "sudo openstack > tripleo deploy" command - this, of course, creates some major issues > regarding both security and right management. > > I see a couple of ways to correct that bad situation: > - let the global "sudo" call, and play with setuid/setgid when we > actually don't need the root access (as it's mentioned in this comment¹) > > - drop that global sudo call, and replace all the necessary calls by > some "sudo" when needed. This involves the replacement of native python > code, like "os.mkdir" and the like. > > The first one isn't a solution - code maintenance will not be possible, > having to thing "darn, os.setuid() before calling that, because I don't > need root" is the current way, and it just doesn't apply. > > So I started the second one. It's, of course, longer, not really nice > and painful, but at least this will end to a good status, and not so bad > solution. > > This also meets the current work of the Security Squad about "limiting > sudo rights and accesses". 
> > For now I don't have a proper patch to show, but it will most probably > appear shortly, as a Work In Progress (I don't think it will be > mergeable before some time, due to all the constraints we have regarding > version portability, new sudoer integration and so on). > > I'll post the relevant review link as an answer of this thread when I > have something I can show. > > Cheers, > > C. > > Hi Cédric, Pleased to hear you are willing to take this on. It makes sense we should co-ordinate efforts here as I have been looking at the same item, but planned to start with heat-admin over on the overcloud. Due to the complexity / level of coverage in the use of sudo, it makes sense to have a spec where we can then get community consensus on the approach selected. This is important as it looks like we will need to have some sort of white list to maintain and make considerations around functional test coverage in CI (in case someone writes something new wrapped in sudo). In regards to your suggested positions within python code such as the client, its worth looking at oslo.privsep [1] where a decorator can be used for when needing to setuid. Let's discuss this also in the squad meeting tomorrow and try to synergize approach for all tripleo nix accounts. [1] https://github.com/openstack/oslo.privsep Cheers, Luke > ¹ > https://github.com/openstack/python-tripleoclient/blob/ > master/tripleoclient/v1/tripleo_deploy.py#L827-L829 > > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-de > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Jun 5 16:08:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 12:08:26 -0400 Subject: [openstack-dev] [tc] StarlingX project status update In-Reply-To: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie> References: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie> Message-ID: <1528214025-sup-920@lrrr.local> Excerpts from Graham Hayes's message of 2018-06-05 16:42:45 +0100: > > On 30/05/18 21:23, Mohammed Naser wrote: > > Hi everyone: > > > > Over the past week in the summit, there was a lot of discussion > > regarding StarlingX > > and members of the technical commitee had a few productive discussions regarding > > the best approach to deal with a proposed new pilot project for > > incubation in the OSF's Edge > > Computing strategic focus area: StarlingX. > > > > If you're not aware, StarlingX includes forks of some OpenStack > > components and other open source software > > which contain certain features that are specific to edge and > > industrial IoT computing use cases. The code > > behind the project is from Wind River (and is used to build a product > > called "Titanium > > Cloud"). > > > > At the moment, the goal of StarlingX hosting their projects on the > > community infrastructure > > is to get the developers used to the Gerrit workflow. The intention > > is to evenutally > > work with upstream teams in order to bring the features and bug fixes which are > > specific to the fork back upstream, with an ideal goal of bringing all > > the differences > > upstream. 
> > > > We've discussed around all the different ways that we can approach > > this and how to > > help the StarlingX team be part of our community. If we can > > succesfully do this, it would > > be a big success for our community as well as our community gaining > > contributors from > > the Wind River team. In an ideal world, it's a win-win. > > > > The plan at the moment is the following: > > - StarlingX will have the first import of code that is not forked, > > simply other software that > > they've developed to help deliver their product. This code can be > > hosted with no problems. > > - StarlingX will generate a list of patches to be brought upstream and > > the StarlingX team > > will work together with upstream teams in order to start backporting > > and upstreaming the > > codebase. Emilien Macchi (EmilienM) and I have volunteered to take > > on the responsibility of > > monitoring the progress upstreaming these patches. > > - StarlingX contains a few forks of other non-OpenStack software. The > > StarlingX team will work > > with the authors of the original projects to ensure that they do not > > mind us hosting a fork > > of their software. If they don't, we'll proceed to host those > > projects. If they prefer > > something else (hosting it themselves, placing it on another hosting > > service, etc.), > > the StarlingX team will work with them in that way. > > > > We discussed approaches for cases where patches aren't acceptable > > upstream, because they > > diverge from the project mission or aren't comprehensive. Ideally all > > of those could be turned > > into acceptable changes that meet both team's criteria. In some cases, > > adding plugin interfaces > > or driver interfaces may be the best alternative. Only as a last > > resort would we retain the > > forks for a long period of time. > > I honestly think that these forks should never be inside the foundation. > If there is a big enough disagreement between project teams and the > fork, we (as the TC of the OpenStack project) and the board (of > *OpenStack* Foundation) should support our current teams, who have > been working in the open. > > There is plenty of companies who would have loved certain features in > OpenStack over the years that an extra driver extension point would > have enabled, but when the upstream team pushed back, they redesigned > the feature to work with the community vision. We should not reward > companies that didn't. I can understand that point of view. I even somewhat agree. But saying that we don't welcome contributions now, because they didn't do things the right way when someone else was in charge of their project, doesn't strike the right tone for me. The conversations I've had with the folks involved with StarlingX have convinced me they have learned, the hard way, the error of a closed-source fork and they are trying to do better for the future. The first step of that is to bring what they already have out into the open, where it will be easier to figure out what can be introduced into projects to eliminate the forks, what can be discarded, and what will need to be worked around. When all of this is done, a viable project with real users will be open source instead of closed source. Those contributors, and users, will be a part of our community instead of looking in from the outside. The path is ugly, long, and clearly not ideal. But, I consider the result a win, overall. 
> > > From what was brought up, the team from Wind River is hoping to > > on-board roughly 50 new full > > time contributors. In combination with the features that they've > > built that we can hopefully > > upstream, I am hopeful that we can come to a win-win situation for > > everyone in this. > > > > Regards, > > Mohammed > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From Kevin.Fox at pnnl.gov Tue Jun 5 16:09:24 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 5 Jun 2018 16:09:24 +0000 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1528209520-sup-5595@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> <1527966421-sup-6019@lrrr.local>,<1528209520-sup-5595@lrrr.local> Message-ID: <1A3C52DFCD06494D8528644858247BF01C0DB05A@EX10MBOX03.pnnl.gov> That might not be a good idea. That may just push the problem underground as people are afraid to speak up publicly. Perhaps an anonymous poll kind of thing, so that it can be counted publicly but doesn't cause people to fear retaliation? Thanks, Kevin ________________________________________ From: Doug Hellmann [doug at doughellmann.com] Sent: Tuesday, June 05, 2018 7:39 AM To: openstack-dev Subject: Re: [openstack-dev] [tc] Organizational diversity tag Excerpts from Doug Hellmann's message of 2018-06-02 15:08:28 -0400: > Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +0000: > > On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote: > > [...] > > > It feels like we would be saying that we don't trust 2 core reviewers > > > from the same company to put the project's goals or priorities over > > > their employer's. And that doesn't feel like an assumption I would > > > want us to encourage through a tag meant to show the health of the > > > project. > > [...] > > > > That's one way of putting it. On the other hand, if we ostensibly > > have that sort of guideline (say, two core reviewers shouldn't be > > the only ones to review a change submitted by someone else from > > their same organization if the team is large and diverse enough to > > support such a pattern) then it gives our reviewers a better > > argument to push back on their management _if_ they're being > > strongly urged to review/approve certain patches. At least then they > > can say, "this really isn't going to fly because we have to get a > > reviewer from another organization to agree it's in the best > > interests of the project" rather than "fire me if you want but I'm > > not approving that change, no matter how much your product launch is > > going to be delayed." > > Do we have that problem? I honestly don't know how much pressure other > folks are feeling. My impression is that we've mostly become good at > finding the necessary compromises, but my experience doesn't cover all > of our teams. To all of the people who have replied to me privately that they have experienced this problem: We can't really start to address it until it's out here in the open. Please post to the list. 
Doug __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Jun 5 16:14:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 12:14:19 -0400 Subject: [openstack-dev] [tc] proposing changes to the project-team-guide-core review team Message-ID: <1528214965-sup-1044@lrrr.local> The review team [1] for the project-team-guide repository (managed by the TC) hasn't been updated for a while. I would like to propose removing a few reviewers who are no longer active, and adding one new reviewer. My understanding is that Kyle Mestery and Nikhil Komawar have both moved on from OpenStack to other projects, so I propose that we remove them from the core team. Chris Dent has been active with reviews lately and has expressed willingness to help manage the guide. I propose that we add him to the team. Please let me know what you think, Doug [1] https://review.openstack.org/#/admin/groups/953,members From zbitter at redhat.com Tue Jun 5 16:19:10 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 5 Jun 2018 12:19:10 -0400 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker Message-ID: I've been doing some investigation into the Service Catalog in Kubernetes and how we can get OpenStack resources to show up in the catalog for use by applications running in Kubernetes. (The Big 3 public clouds already support this.) The short answer is via an implementation of something called the Open Service Broker API, but there are shortcuts available to make it easier to do. I'm convinced that this is readily achievable and something we ought to do as a community. I've put together a (long-winded) FAQ below to answer all of your questions about it. Would you be interested in working on a new project to implement this integration? Reply to this thread and let's collect a list of volunteers to form the initial core review team. cheers, Zane. What is the Open Service Broker API? ------------------------------------ The Open Service Broker API[1] is a standard way to expose external resources to applications running in a PaaS. It was originally developed in the context of CloudFoundry, but the same standard was adopted by Kubernetes (and hence OpenShift) in the form of the Service Catalog extension[2]. (The Service Catalog in Kubernetes is the component that calls out to a service broker.) So a single implementation can cover the most popular open-source PaaS offerings. In many cases, the services take the form of simply a pre-packaged application that also runs inside the PaaS. But they don't have to be - services can be anything. Provisioning via the service broker ensures that the services requested are tied in to the PaaS's orchestration of the application's lifecycle. (This is certainly not the be-all and end-all of integration between OpenStack and containers - we also need ways to tie PaaS-based applications into the OpenStack's orchestration of a larger group of resources. Some applications may even use both. But it's an important part of the story.) What sorts of services would OpenStack expose? ---------------------------------------------- Some example use cases might be: * The application needs a reliable message queue. 
Rather than spinning up multiple storage-backed containers with anti-affinity policies and dealing with the overhead of managing e.g. RabbitMQ, the application requests a Zaqar queue from an OpenStack cloud. The overhead of running the queueing service is amortised across all of the applications in the cloud. The queue gets cleaned up correctly when the application is removed, since it is tied into the application definition. * The application needs a database. Rather than spinning one up in a storage-backed container and dealing with the overhead of managing it, the application requests a Trove DB from an OpenStack cloud. * The application includes a service that needs to run on bare metal for performance reasons (e.g. could also be a database). The application requests a bare-metal server from Nova w/ Ironic for the purpose. (The same applies to requesting a VM, but there are alternatives like KubeVirt - which also operates through the Service Catalog - available for getting a VM in Kubernetes. There are no non-proprietary alternatives for getting a bare-metal server.) AWS[3], Azure[4], and GCP[5] all have service brokers available that support these and many more services that they provide. I don't know of any reason in principle not to expose every type of resource that OpenStack provides via a service broker. How is this different from cloud-provider-openstack? ---------------------------------------------------- The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself to access features of the cloud to provide its service. For example, if k8s needs persistent storage for a container then it can request that from Cinder through cloud-provider-openstack[7]. It can also request a load balancer from Octavia instead of having to start a container running HAProxy to load balance between multiple instances of an application container (thus enabling use of hardware load balancers via the cloud's abstraction for them). In contrast, the Service Catalog interface allows the *application* running on Kubernetes to access features of the cloud. What does a service broker look like? ------------------------------------- A service broker provides an HTTP API with 5 actions: * List the services provided by the broker * Create an instance of a resource * Bind the resource into an instance of the application * Unbind the resource from an instance of the application * Delete the resource The binding step is used for things like providing a set of DB credentials to a container. You can rotate credentials when replacing a container by revoking the existing credentials on unbind and creating a new set on bind, without replacing the entire resource. Is there an easier way? ----------------------- Yes! Folks from OpenShift came up with a project called the Automation Broker[8]. To add support for a service to Automation Broker you just create a container with an Ansible playbook to handle each of the actions (create/bind/unbind/delete). This eliminates the need to write another implementation of the service broker API, and allows us to simply write Ansible playbooks instead.[9] (Aside: Heat uses a comparable method to allow users to manage an external resource using Mistral workflows: the OS::Mistral::ExternalResource resource type.) Support for accessing AWS resources through a service broker is also implemented using these Ansible Playbook Bundles.[3] Does this mean maintaining another client interface? ---------------------------------------------------- Maybe not. 
We already have per-project Python libraries, (deprecated) per-project CLIs, openstackclient CLIs, openstack-sdk, shade, Heat resource plugins, and Horizon dashboards. (Mistral actions are generated automatically from the clients.) Some consolidation is already planned, but it would be great not to require projects to maintain yet another interface. One option is to implement a tool that generates a set of playbooks for each of the resources already exposed (via shade) in the OpenStack Ansible modules. Then in theory we'd only need to implement the common parts once, and then every service with support in shade would get this for free. Ideally the same broker could be used against any OpenStack cloud (so e.g. k8s might be running in your private cloud, but you may want its service catalog to allow you to connect to resources in one or more public clouds) - using shade is an advantage there because it is designed to abstract the differences between clouds. Another option might be to write or generate Heat templates for each resource type we want to expose. Then we'd only need to implement a common way of creating a Heat stack, and just have a different template for each resource type. This is the approach taken by the AWS playbook bundles (except with CloudFormation, obviously). An advantage is that this allows Heat to do any checking and type conversion required on the input parameters. Heat templates can also be made to be fairly cloud-independent, mainly because they make it easier to be explicit about things like ports and subnets than on the command line, where it's more tempting to allow things to happen in a magical but cloud-specific way. I'd prefer to go with the pure-Ansible autogenerated way so we can have support for everything, but looking at the GCP[5]/Azure[4]/AWS[3] brokers they have 10, 11 and 17 services respectively, so arguably we could get a comparable number of features exposed without investing crazy amounts of time if we had to write templates explicitly. How would authentication work? ------------------------------ There are two main deployment topologies we need to consider: Kubernetes deployed by an OpenStack tenant (Magnum-style, though not necessarily using Magnum) and accessing resources in that tenant's project in the local cloud, or accessing resources in some remote OpenStack cloud. We also need to take into account that in the second case, the Kubernetes cluster may 'belong' to a single cloud tenant (as in the first case) or may be shared by applications that each need to authenticate to different OpenStack tenants. (Kubernetes has traditionally assumed the former, but I expect it to move in the direction of allowing the latter, and it's already fairly common for OpenShift deployments.) The way e.g. the AWS broker[3] works is that you can either use the credentials provisioned to the VM that k8s is installed on (a 'Role' in AWS parlance - note that this is completely different to a Keystone Role), or supply credentials to authenticate to AWS remotely. OpenStack doesn't yet support per-instance credentials, although we're working on it. (One thing to keep in mind is that ideally we'll want a way to provide different permissions to the service broker and cloud-provider-openstack.) An option in the meantime might be to provide a way to set up credentials as part of the k8s installation. We'd also need to have a way to specify credentials manually. Unlike for proprietary clouds, the credentials also need to include the Keystone auth_url. 
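Just to make that concrete, the sort of thing the broker would need to
be handed for a remote cloud looks roughly like this (sketched in the
clouds.yaml layout discussed below; the cloud name, endpoint and
credential values are invented):

  clouds:
    remote-openstack:
      auth_type: v3applicationcredential
      auth:
        auth_url: https://keystone.example.com:5000/v3
        application_credential_id: <id>
        application_credential_secret: <secret>
      region_name: RegionOne
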
We should try to reuse openstacksdk's clouds.yaml/secure.yaml format[10] if possible. The OpenShift Ansible Broker works by starting up an Ansible container on k8s to run a playbook from the bundle, so presumably credentials can be passed as regular k8s secrets. In all cases we'll want to encourage users to authenticate using Keystone Application Credentials[11]. How would network integration work? ----------------------------------- Kuryr[12] allows us to connect application containers in Kubernetes to Neutron networks in OpenStack. It would be desirable if, when the user requests a VM or bare-metal server through the service broker, it were possible to choose between attaching to the same network as Kubernetes pods, or to a different network. [1] https://www.openservicebrokerapi.org/ [2] https://kubernetes.io/docs/concepts/service-catalog/ [3] https://github.com/awslabs/aws-servicebroker#aws-service-broker [4] https://github.com/Azure/open-service-broker-azure#open-service-broker-for-azure [5] https://github.com/GoogleCloudPlatform/gcp-service-broker#cloud-foundry-service-broker-for-google-cloud-platform [6] https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md#remove-cloud-provider-code-from-kubernetes-core [7] https://github.com/kubernetes/cloud-provider-openstack#openstack-cloud-controller-manager [8] http://automationbroker.io/ [9] https://docs.openshift.org/latest/apb_devel/index.html [10] https://docs.openstack.org/openstacksdk/latest/user/config/configuration.html#config-files [11] https://docs.openstack.org/keystone/latest/user/application_credentials.html [12] https://docs.openstack.org/kuryr/latest/devref/goals_and_use_cases.html From emilien at redhat.com Tue Jun 5 16:25:05 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 5 Jun 2018 09:25:05 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 23th Edition Message-ID: Welcome to the twenty third edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-May/130926.html +---------------------------------+ | General announcements | +---------------------------------+ +--> This week is Rocky Milestone 2. +------------------------------+ | Continuous Integration | +------------------------------+ +--> Ruck is arxcruz and Rover is rlandy. Please let them know any new CI issue. +--> Master promotion is 1 day, Queens is 0 day, Pike is 0 day and Ocata is 0 day. Really nice work CI folks! +--> Sprint 14 is ongoing: Checkout https://trello.com/c/1W62zvhh/770-sprint-14-goals but focus is to finish upgrade CI work. 
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> Reviews are requested on different topics: CI, Newton, FFU +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> Good progress done on All-In-One blueprint, update sent on the ML: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131135.html +--> Still working on Containerized undercloud upgrades (bug with rabbitmq upgrade: https://review.openstack.org/#/c/572449/) +--> Enabling the containerized undercloud everywhere in CI +--> Working on updating containers in CI when deploying a containerized undercloud so we can test changes in all repos +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> checkout the new command: "openstack overcloud failures" for better deployment failures output +--> Documentation was improved with recent changes +--> UI integration is still in progress +--> More: https://etherpad.openstack.org/p/tripleo-config-downlo ad-squad-status +--------------+ | Integration | +--------------+ +--> Working on : "Persist ceph-ansible fetch_directory", check it out: https://review.openstack.org/#/c/567782/ +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Beginning trial of using storyboard not just for bugs but also for stories/epics +--> Review of existing config-download patches that still need to merge. Hoping to finalize this week. +--> Network config initial patches are up - very cool so far! +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> Custom validations work +--> Nova event callback validation +--> OpenShift on OpenStack validation work +--> Mistral workflow plugin +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> Public TLS is being refactored +--> Working on limiting sudoers rights +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Owls were once a sign of victory in battle. In ancient Greece, the Little Owl was the companion of Athena, the Greek goddess of wisdom, which is one reason why owls symbolize learning and knowledge. But Athena was also a warrior goddess and the owl was considered the protector of armies going into war. If Greek soldiers saw an owl fly by during battle, they took it as a sign of coming victory. Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls Thank you all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Tue Jun 5 16:29:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 12:29:26 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0DB05A@EX10MBOX03.pnnl.gov> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <20180602185147.b45pc4kpmohcqcx4@yuggoth.org> <1527966421-sup-6019@lrrr.local> <1528209520-sup-5595@lrrr.local> <1A3C52DFCD06494D8528644858247BF01C0DB05A@EX10MBOX03.pnnl.gov> Message-ID: <1528215728-sup-9184@lrrr.local> Excerpts from Fox, Kevin M's message of 2018-06-05 16:09:24 +0000: > That might not be a good idea. That may just push the problem underground as people are afraid to speak up publicly. > > Perhaps an anonymous poll kind of thing, so that it can be counted publicly but doesn't cause people to fear retaliation? I have no idea how to judge the outcome of any sort of anonymous poll. And I really don't want my inbox to become one. :-) We do our best to make governance decisions openly, based on the information we have. But in more cases than I like we end up making assumptions based on extrapolating from a small number of experiences relayed privately. I don't want to base a review diversity policy that may end up making it harder to accept contribution on assumptions. Maybe if folks aren't comfortable talking publicly, they can talk to their PTLs privately? Then we can get a sense of which teams feel this sort of pressure, overall, instead of individuals. > > Thanks, > Kevin > ________________________________________ > From: Doug Hellmann [doug at doughellmann.com] > Sent: Tuesday, June 05, 2018 7:39 AM > To: openstack-dev > Subject: Re: [openstack-dev] [tc] Organizational diversity tag > > Excerpts from Doug Hellmann's message of 2018-06-02 15:08:28 -0400: > > Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +0000: > > > On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote: > > > [...] > > > > It feels like we would be saying that we don't trust 2 core reviewers > > > > from the same company to put the project's goals or priorities over > > > > their employer's. And that doesn't feel like an assumption I would > > > > want us to encourage through a tag meant to show the health of the > > > > project. > > > [...] > > > > > > That's one way of putting it. On the other hand, if we ostensibly > > > have that sort of guideline (say, two core reviewers shouldn't be > > > the only ones to review a change submitted by someone else from > > > their same organization if the team is large and diverse enough to > > > support such a pattern) then it gives our reviewers a better > > > argument to push back on their management _if_ they're being > > > strongly urged to review/approve certain patches. At least then they > > > can say, "this really isn't going to fly because we have to get a > > > reviewer from another organization to agree it's in the best > > > interests of the project" rather than "fire me if you want but I'm > > > not approving that change, no matter how much your product launch is > > > going to be delayed." > > > > Do we have that problem? I honestly don't know how much pressure other > > folks are feeling. My impression is that we've mostly become good at > > finding the necessary compromises, but my experience doesn't cover all > > of our teams. 
> > To all of the people who have replied to me privately that they have > experienced this problem: > > We can't really start to address it until it's out here in the open. > Please post to the list. > > Doug > From thierry at openstack.org Tue Jun 5 16:42:31 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 5 Jun 2018 18:42:31 +0200 Subject: [openstack-dev] [tc] proposing changes to the project-team-guide-core review team In-Reply-To: <1528214965-sup-1044@lrrr.local> References: <1528214965-sup-1044@lrrr.local> Message-ID: Doug Hellmann wrote: > The review team [1] for the project-team-guide repository (managed > by the TC) hasn't been updated for a while. I would like to propose > removing a few reviewers who are no longer active, and adding one > new reviewer. > > My understanding is that Kyle Mestery and Nikhil Komawar have both > moved on from OpenStack to other projects, so I propose that we > remove them from the core team. > > Chris Dent has been active with reviews lately and has expressed > willingness to help manage the guide. I propose that we add him to > the team. > > Please let me know what you think, +1 -- Thierry Carrez (ttx) From sean.mcginnis at gmx.com Tue Jun 5 16:47:20 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 5 Jun 2018 11:47:20 -0500 Subject: [openstack-dev] [tc] proposing changes to the project-team-guide-core review team In-Reply-To: <1528214965-sup-1044@lrrr.local> References: <1528214965-sup-1044@lrrr.local> Message-ID: <0e3d420f-08f0-b6db-b199-e0fa38596c6d@gmx.com> On 06/05/2018 11:14 AM, Doug Hellmann wrote: > [snip] > > My understanding is that Kyle Mestery and Nikhil Komawar have both > moved on from OpenStack to other projects, so I propose that we > remove them from the core team. > > Chris Dent has been active with reviews lately and has expressed > willingness to help manage the guide. I propose that we add him to > the team. > > Please let me know what you think, > Doug +1 from me. I think Chris would be a good addition. From remo at rm.ht Tue Jun 5 16:52:12 2018 From: remo at rm.ht (Remo Mattei) Date: Tue, 5 Jun 2018 09:52:12 -0700 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker In-Reply-To: References: Message-ID: <09B8E7CB-BB9A-4B02-BEA6-21E4B1EF17D7@rm.ht> I will be happy to check it out. Remo > On Jun 5, 2018, at 9:19 AM, Zane Bitter wrote: > > I've been doing some investigation into the Service Catalog in Kubernetes and how we can get OpenStack resources to show up in the catalog for use by applications running in Kubernetes. (The Big 3 public clouds already support this.) The short answer is via an implementation of something called the Open Service Broker API, but there are shortcuts available to make it easier to do. > > I'm convinced that this is readily achievable and something we ought to do as a community. > > I've put together a (long-winded) FAQ below to answer all of your questions about it. > > Would you be interested in working on a new project to implement this integration? Reply to this thread and let's collect a list of volunteers to form the initial core review team. > > cheers, > Zane. > > > What is the Open Service Broker API? > ------------------------------------ > > The Open Service Broker API[1] is a standard way to expose external resources to applications running in a PaaS. It was originally developed in the context of CloudFoundry, but the same standard was adopted by Kubernetes (and hence OpenShift) in the form of the Service Catalog extension[2]. 
(The Service Catalog in Kubernetes is the component that calls out to a service broker.) So a single implementation can cover the most popular open-source PaaS offerings. > > In many cases, the services take the form of simply a pre-packaged application that also runs inside the PaaS. But they don't have to be - services can be anything. Provisioning via the service broker ensures that the services requested are tied in to the PaaS's orchestration of the application's lifecycle. > > (This is certainly not the be-all and end-all of integration between OpenStack and containers - we also need ways to tie PaaS-based applications into the OpenStack's orchestration of a larger group of resources. Some applications may even use both. But it's an important part of the story.) > > What sorts of services would OpenStack expose? > ---------------------------------------------- > > Some example use cases might be: > > * The application needs a reliable message queue. Rather than spinning up multiple storage-backed containers with anti-affinity policies and dealing with the overhead of managing e.g. RabbitMQ, the application requests a Zaqar queue from an OpenStack cloud. The overhead of running the queueing service is amortised across all of the applications in the cloud. The queue gets cleaned up correctly when the application is removed, since it is tied into the application definition. > > * The application needs a database. Rather than spinning one up in a storage-backed container and dealing with the overhead of managing it, the application requests a Trove DB from an OpenStack cloud. > > * The application includes a service that needs to run on bare metal for performance reasons (e.g. could also be a database). The application requests a bare-metal server from Nova w/ Ironic for the purpose. (The same applies to requesting a VM, but there are alternatives like KubeVirt - which also operates through the Service Catalog - available for getting a VM in Kubernetes. There are no non-proprietary alternatives for getting a bare-metal server.) > > AWS[3], Azure[4], and GCP[5] all have service brokers available that support these and many more services that they provide. I don't know of any reason in principle not to expose every type of resource that OpenStack provides via a service broker. > > How is this different from cloud-provider-openstack? > ---------------------------------------------------- > > The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself to access features of the cloud to provide its service. For example, if k8s needs persistent storage for a container then it can request that from Cinder through cloud-provider-openstack[7]. It can also request a load balancer from Octavia instead of having to start a container running HAProxy to load balance between multiple instances of an application container (thus enabling use of hardware load balancers via the cloud's abstraction for them). > > In contrast, the Service Catalog interface allows the *application* running on Kubernetes to access features of the cloud. > > What does a service broker look like? > ------------------------------------- > > A service broker provides an HTTP API with 5 actions: > > * List the services provided by the broker > * Create an instance of a resource > * Bind the resource into an instance of the application > * Unbind the resource from an instance of the application > * Delete the resource > > The binding step is used for things like providing a set of DB credentials to a container. 
You can rotate credentials when replacing a container by revoking the existing credentials on unbind and creating a new set on bind, without replacing the entire resource. > > Is there an easier way? > ----------------------- > > Yes! Folks from OpenShift came up with a project called the Automation Broker[8]. To add support for a service to Automation Broker you just create a container with an Ansible playbook to handle each of the actions (create/bind/unbind/delete). This eliminates the need to write another implementation of the service broker API, and allows us to simply write Ansible playbooks instead.[9] > > (Aside: Heat uses a comparable method to allow users to manage an external resource using Mistral workflows: the OS::Mistral::ExternalResource resource type.) > > Support for accessing AWS resources through a service broker is also implemented using these Ansible Playbook Bundles.[3] > > Does this mean maintaining another client interface? > ---------------------------------------------------- > > Maybe not. We already have per-project Python libraries, (deprecated) per-project CLIs, openstackclient CLIs, openstack-sdk, shade, Heat resource plugins, and Horizon dashboards. (Mistral actions are generated automatically from the clients.) Some consolidation is already planned, but it would be great not to require projects to maintain yet another interface. > > One option is to implement a tool that generates a set of playbooks for each of the resources already exposed (via shade) in the OpenStack Ansible modules. Then in theory we'd only need to implement the common parts once, and then every service with support in shade would get this for free. Ideally the same broker could be used against any OpenStack cloud (so e.g. k8s might be running in your private cloud, but you may want its service catalog to allow you to connect to resources in one or more public clouds) - using shade is an advantage there because it is designed to abstract the differences between clouds. > > Another option might be to write or generate Heat templates for each resource type we want to expose. Then we'd only need to implement a common way of creating a Heat stack, and just have a different template for each resource type. This is the approach taken by the AWS playbook bundles (except with CloudFormation, obviously). An advantage is that this allows Heat to do any checking and type conversion required on the input parameters. Heat templates can also be made to be fairly cloud-independent, mainly because they make it easier to be explicit about things like ports and subnets than on the command line, where it's more tempting to allow things to happen in a magical but cloud-specific way. > > I'd prefer to go with the pure-Ansible autogenerated way so we can have support for everything, but looking at the GCP[5]/Azure[4]/AWS[3] brokers they have 10, 11 and 17 services respectively, so arguably we could get a comparable number of features exposed without investing crazy amounts of time if we had to write templates explicitly. > > How would authentication work? > ------------------------------ > > There are two main deployment topologies we need to consider: Kubernetes deployed by an OpenStack tenant (Magnum-style, though not necessarily using Magnum) and accessing resources in that tenant's project in the local cloud, or accessing resources in some remote OpenStack cloud. 
> > We also need to take into account that in the second case, the Kubernetes cluster may 'belong' to a single cloud tenant (as in the first case) or may be shared by applications that each need to authenticate to different OpenStack tenants. (Kubernetes has traditionally assumed the former, but I expect it to move in the direction of allowing the latter, and it's already fairly common for OpenShift deployments.) > > The way e.g. the AWS broker[3] works is that you can either use the credentials provisioned to the VM that k8s is installed on (a 'Role' in AWS parlance - note that this is completely different to a Keystone Role), or supply credentials to authenticate to AWS remotely. > > OpenStack doesn't yet support per-instance credentials, although we're working on it. (One thing to keep in mind is that ideally we'll want a way to provide different permissions to the service broker and cloud-provider-openstack.) An option in the meantime might be to provide a way to set up credentials as part of the k8s installation. We'd also need to have a way to specify credentials manually. Unlike for proprietary clouds, the credentials also need to include the Keystone auth_url. We should try to reuse openstacksdk's clouds.yaml/secure.yaml format[10] if possible. > > The OpenShift Ansible Broker works by starting up an Ansible container on k8s to run a playbook from the bundle, so presumably credentials can be passed as regular k8s secrets. > > In all cases we'll want to encourage users to authenticate using Keystone Application Credentials[11]. > > How would network integration work? > ----------------------------------- > > Kuryr[12] allows us to connect application containers in Kubernetes to Neutron networks in OpenStack. It would be desirable if, when the user requests a VM or bare-metal server through the service broker, it were possible to choose between attaching to the same network as Kubernetes pods, or to a different network. > > > [1] https://www.openservicebrokerapi.org/ > [2] https://kubernetes.io/docs/concepts/service-catalog/ > [3] https://github.com/awslabs/aws-servicebroker#aws-service-broker > [4] https://github.com/Azure/open-service-broker-azure#open-service-broker-for-azure > [5] https://github.com/GoogleCloudPlatform/gcp-service-broker#cloud-foundry-service-broker-for-google-cloud-platform > [6] https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md#remove-cloud-provider-code-from-kubernetes-core > [7] https://github.com/kubernetes/cloud-provider-openstack#openstack-cloud-controller-manager > [8] http://automationbroker.io/ > [9] https://docs.openshift.org/latest/apb_devel/index.html > [10] https://docs.openstack.org/openstacksdk/latest/user/config/configuration.html#config-files > [11] https://docs.openstack.org/keystone/latest/user/application_credentials.html > [12] https://docs.openstack.org/kuryr/latest/devref/goals_and_use_cases.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From artem.goncharov at gmail.com Tue Jun 5 17:20:12 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 5 Jun 2018 19:20:12 +0200 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker Message-ID: Hi Zane, > Would you be interested in working on a new project to implement this > integration? Reply to this thread and let's collect a list of volunteers > to form the initial core review team. Yes, I would also like to join. That's exactly what I am looking at in my company as part of K8 over OpenStack offering. Regards, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Jun 5 18:23:36 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 5 Jun 2018 12:23:36 -0600 Subject: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One) In-Reply-To: References: Message-ID: On Tue, Jun 5, 2018 at 3:31 AM Raoul Scarazzini wrote: > On 05/06/2018 02:26, Emilien Macchi wrote: > [...] > > I hope this update was useful, feel free to give feedback or ask any > > questions, > [...] > > I'm no prophet here, but I see a bright future for this approach. I can > imagine how useful this can be on the testing and much more the learning > side. Thanks for sharing! > > -- > Raoul Scarazzini > rasca at redhat.com Real big +10000 to everyone who has contributed to the standalone installer. >From an end user experience, this is simple, fast! This is going to be the base for some really cool work. Emilien, the CI is working, enjoy your PTO :) http://logs.openstack.org/17/572217/6/check/tripleo-ci-centos-7-standalone/b2eb1b7/logs/ara_oooq/result/bb49965e-4fb7-43ea-a9e3-c227702c17de/ Thanks! > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtreinish at kortar.org Tue Jun 5 18:42:13 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 5 Jun 2018 14:42:13 -0400 Subject: [openstack-dev] [heat][ci][infra] telemetry test broken on oslo.messaging stable/queens In-Reply-To: References: Message-ID: <20180605184213.GA14592@sinanju.localdomain> On Tue, Jun 05, 2018 at 10:47:17AM -0400, Ken Giusti wrote: > Hi, > > The telemetry integration test for oslo.messaging has started failing > on the stable/queens branch [0]. > > A quick review of the logs points to a change in heat-tempest-plugin > that is incompatible with the version of gabbi from queens upper > constraints (1.40.0) [1][2]. > > The job definition [3] includes required-projects that do not have > stable/queens branches - including heat-tempest-plugin. > > My question - how do I prevent this job from breaking when these > unbranched projects introduce changes that are incompatible with > upper-constrants for a particular branch? Tempest and plugins should be installed in a venv to isolate it's requirements from the rest of what devstack is installing during the job. This should be happening by default, the only place it gets installed on system python and where there is a potential conflict is if INSTALL_TEMPEST is set to True. 
See: https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/tempest#n57 That flag only exists so we test tempest coinstallability in the gate, as well as for local devstack users. We don't install branchless projects on system python in stable jobs exactly because they're is a likely conflict between the stable branch's requirements and master's (which is what branchless projects follow). -Matt Treinish > > I've tried to use override-checkout in the job definition, but that > seems a bit hacky in this case since the tagged versions don't appear > to work and I've resorted to a hardcoded ref [4]. > > Advice appreciated, thanks! > > [0] https://review.openstack.org/#/c/567124/ > [1] http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624 > [2] http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332 > [3] https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250 > [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dprince at redhat.com Tue Jun 5 19:35:59 2018 From: dprince at redhat.com (Dan Prince) Date: Tue, 5 Jun 2018 15:35:59 -0400 Subject: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One) In-Reply-To: References: Message-ID: On Mon, Jun 4, 2018 at 8:26 PM, Emilien Macchi wrote: > TL;DR: we made nice progress and you can checkout this demo: > https://asciinema.org/a/185533 > > We started the discussion back in Dublin during the last PTG. The idea of > Standalone (aka All-In-One, but can be mistaken with all-in-one overcloud) > is to deploy a single node OpenStack where the provisioning happens on the > same node (there is no notion of {under/over}cloud). > > A kind of a "packstack" or "devstack" but using TripleO which has can offer: > - composable containerized services > - composable upgrades > - composable roles > - Ansible driven deployment > > One of the key features we have been focusing so far are: > - low bar to be able to dev/test TripleO (single machine: VM), with simpler > tooling One idea might be worth considering adding to this list is the idea of "zero-footprint". Right now you can use a VM to isolate the installation of the all-in-one installer on your laptop which is cool and you can always use a VM to isolate things. But now that we have containers it might also be cool to have the installer itself ran in a container rather than require the end user to install python-tripleoclient at all. A few of us tried out a similar sort of idea in Pike with the undercloud_deploy interface (docker in docker, etc.). At the time we didn't have config-download working so it had to all be done inside the container. But now that we have config download working with the undercloud/all-in-one installers the Ansible which is generated can run anywhere so long as the relevant hooks are installed. (paunch, etc.) The benefit here is that the requirements are even less... the developer can just use the framework to generate Ansible that spins up containers on his/her laptop directly. Again, only the required Ansible/heat hooks would need to be installed. I mentioned a few months ago my old attempt was here (uses undercloud_deploy) [1]. 
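To sketch what I mean - the image name and paths below are completely
made up, nothing like this exists today:

  # pull (or build) an image that only carries the client tooling
  docker pull example.org/tripleoclient-dev

  # generate the config-download Ansible from inside the container,
  # writing it to a directory shared with the host
  # (plus whatever other args the real interface ends up needing)
  docker run --rm -it --net=host \
      -v $HOME/standalone-output:/output \
      example.org/tripleoclient-dev \
      openstack tripleo deploy --standalone --output-dir /output

  # then run the generated playbooks on the host, which only needs
  # ansible plus the hooks (paunch etc.) installed
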
Also, worth mentioning that I got it working without installing Puppet on my laptop too [2]. The idea being that now that our containers have all the puppet-modules in them no real need to bind mount them in from the host anymore unless you are using the last few (HA??!!) services that require puppet modules on baremetal. Perhaps we should switch to installing the required puppet modules there dynamically instead of requiring them for any old undercloud/all-in-one installer which largely focus on non-HA deployments anyway I think. Is anyone else interested in the zero-footprint idea? Perhaps this is the next iteration of the all-in-one installer?... but the one I'm perhaps most interested in as a developer. [1] https://github.com/dprince/talon [2] https://review.openstack.org/#/c/550848/ (Add DockerPuppetMountHostPuppet parameter) Dan > - make it fast (being able to deploy OpenStack in minutes) > - being able to make a change in OpenStack (e.g. Keystone) and test the > change immediately > > The workflow that we're currently targeting is: > - deploy the system by yourself (centos7 or rhel7) > - deploy the repos, install python-tripleoclient > - run 'openstack tripleo deploy (+ few args) > - (optional) modify your container with a Dockerfile + Ansible > - Test your change > > Status: > - tripleoclient was refactored in a way that the undercloud is actually a > special configuration of the standalone deployment (still work in progress). > We basically refactored the containerized undercloud to be more generic and > configurable for standalone. > - we can now deploy a standalone OpenStack with just Keystone + dependencies Fwiw you could always do this with undercloud_deploy as well. But the new interface is much nicer I agree. :) > - which takes 12 minutes total (demo here: https://asciinema.org/a/185533 > and doc in progress: > http://logs.openstack.org/27/571827/6/check/build-openstack-sphinx-docs/1885304/html/install/containers_deployment/standalone.html) > - we have an Ansible role to push modifications to containers via a Docker > file: https://github.com/openstack/ansible-role-tripleo-modify-image/ > > What's next: > - Documentation: as you can see the documentation is still in progress > (https://review.openstack.org/#/c/571827/) > - Continuous Integration: we're working on a new CI job: > tripleo-ci-centos-7-standalone > https://trello.com/c/HInL8pNm/7-upstream-ci-testing > - Working on the standalone configuration interface, still WIP: > https://review.openstack.org/#/c/569535/ > - Investigate the use case where a developer wants to prepare the containers > before the deployment > > I hope this update was useful, feel free to give feedback or ask any > questions, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Tue Jun 5 19:55:49 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 5 Jun 2018 15:55:49 -0400 Subject: [openstack-dev] [all][python3][tc][infra] Python 3.6 Message-ID: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> We've talked a bit about migrating to Python 3, but (unless I missed it) not a lot about which version of Python 3. Currently all projects that support Python 3 are gating against 3.5. However, Ubuntu Artful and Fedora 26 already ship Python 3.6 by default. 
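For anyone who hasn't tried it, the workflow quoted above boils down to
roughly this on a throwaway centos7 VM (illustrative only - the exact
flags come from the in-progress docs and may still change before Rocky):

  # after setting up the current RDO/TripleO repos
  sudo yum install -y python-tripleoclient

  sudo openstack tripleo deploy \
      --templates \
      --standalone \
      --local-ip 192.168.24.2/24 \
      -e /usr/share/openstack-tripleo-heat-templates/environments/standalone.yaml \
      --output-dir $HOME/standalone-ansible
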
(And Bionic and F28 have been released since then.) The one time it did come up in a thread, we decided it was blocked on the availability of 3.6 in Ubuntu to run on the test nodes, so it's time to discuss it again. AIUI we're planning to switch the test nodes to Bionic, since it's the latest LTS release, so I'd assume that means that when we talk about running docs jobs, pep8 &c. with Python3 (under the python3-first project-wide goal) that means 3.6. And while 3.5 jobs should continue to work, it seems like we ought to start testing ASAP with the version that users are going to get by default if they choose to use our Python3 packages. The list of breaking changes in 3.6 is quite short (although not zero), so I wouldn't expect too many roadblocks: https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6 I think we can split the problem into two parts: * How can we detect any issues ASAP. Would it be sane to give all projects with a py35 unit tests job a non-voting py36 job so that they can start fixing any issues right away? Like this: https://review.openstack.org/572535 * How can we ensure every project fixes any issues and migrates to voting gates, including for functional test jobs? Would it make sense to make this part of the 'python3-first' project-wide goal? cheers, Zane. (Disclaimer for the conspiracy-minded: you might assume that I'm cleverly concealing inside knowledge of which version of Python 3 will replace Python 2 in the next major release of RHEL/CentOS, but in fact you would be mistaken. The truth is I've been too lazy to find out, so I'm as much in the dark as anybody. Really. Anyway, this isn't about that, it's about testing within upstream OpenStack.) From fungi at yuggoth.org Tue Jun 5 20:29:20 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 5 Jun 2018 20:29:20 +0000 Subject: [openstack-dev] [all][python3][tc][infra] Python 3.6 In-Reply-To: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> References: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> Message-ID: <20180605202919.ywalxrjxjt3vaaby@yuggoth.org> On 2018-06-05 15:55:49 -0400 (-0400), Zane Bitter wrote: [...] > AIUI we're planning to switch the test nodes to Bionic, since it's > the latest LTS release, so I'd assume that means that when we talk > about running docs jobs, pep8 &c. with Python3 (under the > python3-first project-wide goal) that means 3.6. And while 3.5 > jobs should continue to work, it seems like we ought to start > testing ASAP with the version that users are going to get by > default if they choose to use our Python3 packages. [...] Yes, though to clarify it's sanest to interpret our LTS distro statement as testing on whatever the latest LTS release is at the _start_ of the development cycle. Switching default testing platforms has proven to be extremely disruptive to the development process so we want that to happen as soon after the coordinated release as feasible. That means the plan is to have the mandatory PTI jobs for the Rocky cycle stick with Ubuntu 16.04 LTS (our ubuntu-xenial nodes) which provides Python 3.5, but encourage teams to add jobs running on Ubuntu 18.04 LTS (our ubuntu-bionic nodes) as soon as they can to get a leg up on any potential disruption (including the Python 3.6 it provides) before we force the PTI jobs over to it at the start of the Stein cycle. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Tue Jun 5 20:38:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 05 Jun 2018 16:38:51 -0400 Subject: [openstack-dev] [all][python3][tc][infra] Python 3.6 In-Reply-To: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> References: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> Message-ID: <1528230987-sup-1889@lrrr.local> Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400: > We've talked a bit about migrating to Python 3, but (unless I missed it) > not a lot about which version of Python 3. Currently all projects that > support Python 3 are gating against 3.5. However, Ubuntu Artful and > Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have > been released since then.) The one time it did come up in a thread, we > decided it was blocked on the availability of 3.6 in Ubuntu to run on > the test nodes, so it's time to discuss it again. > > AIUI we're planning to switch the test nodes to Bionic, since it's the > latest LTS release, so I'd assume that means that when we talk about > running docs jobs, pep8 &c. with Python3 (under the python3-first > project-wide goal) that means 3.6. And while 3.5 jobs should continue to > work, it seems like we ought to start testing ASAP with the version that > users are going to get by default if they choose to use our Python3 > packages. > > The list of breaking changes in 3.6 is quite short (although not zero), > so I wouldn't expect too many roadblocks: > https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6 > > I think we can split the problem into two parts: > > * How can we detect any issues ASAP. > > Would it be sane to give all projects with a py35 unit tests job a > non-voting py36 job so that they can start fixing any issues right away? > Like this: https://review.openstack.org/572535 That seems like a good way to start. Maybe we want to rename that project template to openstack-python3-jobs to keep it version-agnostic? > > * How can we ensure every project fixes any issues and migrates to > voting gates, including for functional test jobs? > > Would it make sense to make this part of the 'python3-first' > project-wide goal? Yes, that seems like a good idea. We can be specific about the version of python 3 to be used to achieve that goal (assuming it is selected as a goal). The instructions I've been putting together are based on just using "python3" in the tox.ini file because I didn't want to have to update that every time we update to a new version of python. Do you think we should be more specific there, too? Doug > > cheers, > Zane. > > > (Disclaimer for the conspiracy-minded: you might assume that I'm > cleverly concealing inside knowledge of which version of Python 3 will > replace Python 2 in the next major release of RHEL/CentOS, but in fact > you would be mistaken. The truth is I've been too lazy to find out, so > I'm as much in the dark as anybody. Really. Anyway, this isn't about > that, it's about testing within upstream OpenStack.) 
> From zbitter at redhat.com Tue Jun 5 20:48:00 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 5 Jun 2018 16:48:00 -0400 Subject: [openstack-dev] [all][python3][tc][infra] Python 3.6 In-Reply-To: <1528230987-sup-1889@lrrr.local> References: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> <1528230987-sup-1889@lrrr.local> Message-ID: <654866a4-0a53-7a61-e2e3-962be0d6d45d@redhat.com> On 05/06/18 16:38, Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400: >> We've talked a bit about migrating to Python 3, but (unless I missed it) >> not a lot about which version of Python 3. Currently all projects that >> support Python 3 are gating against 3.5. However, Ubuntu Artful and >> Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have >> been released since then.) The one time it did come up in a thread, we >> decided it was blocked on the availability of 3.6 in Ubuntu to run on >> the test nodes, so it's time to discuss it again. >> >> AIUI we're planning to switch the test nodes to Bionic, since it's the >> latest LTS release, so I'd assume that means that when we talk about >> running docs jobs, pep8 &c. with Python3 (under the python3-first >> project-wide goal) that means 3.6. And while 3.5 jobs should continue to >> work, it seems like we ought to start testing ASAP with the version that >> users are going to get by default if they choose to use our Python3 >> packages. >> >> The list of breaking changes in 3.6 is quite short (although not zero), >> so I wouldn't expect too many roadblocks: >> https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6 >> >> I think we can split the problem into two parts: >> >> * How can we detect any issues ASAP. >> >> Would it be sane to give all projects with a py35 unit tests job a >> non-voting py36 job so that they can start fixing any issues right away? >> Like this: https://review.openstack.org/572535 > > That seems like a good way to start. > > Maybe we want to rename that project template to openstack-python3-jobs > to keep it version-agnostic? You mean the 35_36 one? Actually, let's discuss this on the review. >> >> * How can we ensure every project fixes any issues and migrates to >> voting gates, including for functional test jobs? >> >> Would it make sense to make this part of the 'python3-first' >> project-wide goal? > > Yes, that seems like a good idea. We can be specific about the version > of python 3 to be used to achieve that goal (assuming it is selected as > a goal). > > The instructions I've been putting together are based on just using > "python3" in the tox.ini file because I didn't want to have to update > that every time we update to a new version of python. Do you think we > should be more specific there, too? That's probably fine IMHO. We should just be aware that e.g. when distros start switching to 3.7 then people's local jobs will start running in 3.7. For me, at least, this has already been the case with 3.6 - tox is now python3 by default in Fedora, so e.g. pep8 jobs have been running under 3.6 for a while now. There were a *lot* of deprecation warnings at first. > Doug > >> >> cheers, >> Zane. >> >> >> (Disclaimer for the conspiracy-minded: you might assume that I'm >> cleverly concealing inside knowledge of which version of Python 3 will >> replace Python 2 in the next major release of RHEL/CentOS, but in fact >> you would be mistaken. The truth is I've been too lazy to find out, so >> I'm as much in the dark as anybody. Really. 
Anyway, this isn't about >> that, it's about testing within upstream OpenStack.) >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Tue Jun 5 21:50:05 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 5 Jun 2018 16:50:05 -0500 Subject: [openstack-dev] [all][python3][tc][infra] Python 3.6 In-Reply-To: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> References: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> Message-ID: <16368259-d45a-af0f-2092-d983646cd8d0@gmx.com> On 06/05/2018 02:55 PM, Zane Bitter wrote: > [snip] > The list of breaking changes in 3.6 is quite short (although not > zero), so I wouldn't expect too many roadblocks: > https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6 > > I think we can split the problem into two parts: > > * How can we detect any issues ASAP. > > Would it be sane to give all projects with a py35 unit tests job a > non-voting py36 job so that they can start fixing any issues right > away? Like this: https://review.openstack.org/572535 FWIW, Cinder has had py36 jobs running (and voting) for both unit tests and functional tests for just over a month now with no issues - https://review.openstack.org/#/c/564513/ > > * How can we ensure every project fixes any issues and migrates to > voting gates, including for functional test jobs? > > Would it make sense to make this part of the 'python3-first' > project-wide goal? +1 > > cheers, > Zane. From anlin.kong at gmail.com Tue Jun 5 23:13:08 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 6 Jun 2018 11:13:08 +1200 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker In-Reply-To: <09B8E7CB-BB9A-4B02-BEA6-21E4B1EF17D7@rm.ht> References: <09B8E7CB-BB9A-4B02-BEA6-21E4B1EF17D7@rm.ht> Message-ID: Hi Zane, please count me in :-) Cheers, Lingxian Kong On Wed, Jun 6, 2018 at 4:52 AM, Remo Mattei wrote: > I will be happy to check it out. > > Remo > > > On Jun 5, 2018, at 9:19 AM, Zane Bitter wrote: > > I've been doing some investigation into the Service Catalog in Kubernetes > and how we can get OpenStack resources to show up in the catalog for use by > applications running in Kubernetes. (The Big 3 public clouds already > support this.) The short answer is via an implementation of something > called the Open Service Broker API, but there are shortcuts available to > make it easier to do. > > I'm convinced that this is readily achievable and something we ought to do > as a community. > > I've put together a (long-winded) FAQ below to answer all of your > questions about it. > > Would you be interested in working on a new project to implement this > integration? Reply to this thread and let's collect a list of volunteers to > form the initial core review team. > > cheers, > Zane. > > > What is the Open Service Broker API? > ------------------------------------ > > The Open Service Broker API[1] is a standard way to expose external > resources to applications running in a PaaS. It was originally developed in > the context of CloudFoundry, but the same standard was adopted by > Kubernetes (and hence OpenShift) in the form of the Service Catalog > extension[2]. (The Service Catalog in Kubernetes is the component that > calls out to a service broker.) 
So a single implementation can cover the > most popular open-source PaaS offerings. > > In many cases, the services take the form of simply a pre-packaged > application that also runs inside the PaaS. But they don't have to be - > services can be anything. Provisioning via the service broker ensures that > the services requested are tied in to the PaaS's orchestration of the > application's lifecycle. > > (This is certainly not the be-all and end-all of integration between > OpenStack and containers - we also need ways to tie PaaS-based applications > into the OpenStack's orchestration of a larger group of resources. Some > applications may even use both. But it's an important part of the story.) > > What sorts of services would OpenStack expose? > ---------------------------------------------- > > Some example use cases might be: > > * The application needs a reliable message queue. Rather than spinning up > multiple storage-backed containers with anti-affinity policies and dealing > with the overhead of managing e.g. RabbitMQ, the application requests a > Zaqar queue from an OpenStack cloud. The overhead of running the queueing > service is amortised across all of the applications in the cloud. The queue > gets cleaned up correctly when the application is removed, since it is tied > into the application definition. > > * The application needs a database. Rather than spinning one up in a > storage-backed container and dealing with the overhead of managing it, the > application requests a Trove DB from an OpenStack cloud. > > * The application includes a service that needs to run on bare metal for > performance reasons (e.g. could also be a database). The application > requests a bare-metal server from Nova w/ Ironic for the purpose. (The same > applies to requesting a VM, but there are alternatives like KubeVirt - > which also operates through the Service Catalog - available for getting a > VM in Kubernetes. There are no non-proprietary alternatives for getting a > bare-metal server.) > > AWS[3], Azure[4], and GCP[5] all have service brokers available that > support these and many more services that they provide. I don't know of any > reason in principle not to expose every type of resource that OpenStack > provides via a service broker. > > How is this different from cloud-provider-openstack? > ---------------------------------------------------- > > The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself > to access features of the cloud to provide its service. For example, if k8s > needs persistent storage for a container then it can request that from > Cinder through cloud-provider-openstack[7]. It can also request a load > balancer from Octavia instead of having to start a container running > HAProxy to load balance between multiple instances of an application > container (thus enabling use of hardware load balancers via the cloud's > abstraction for them). > > In contrast, the Service Catalog interface allows the *application* > running on Kubernetes to access features of the cloud. > > What does a service broker look like? > ------------------------------------- > > A service broker provides an HTTP API with 5 actions: > > * List the services provided by the broker > * Create an instance of a resource > * Bind the resource into an instance of the application > * Unbind the resource from an instance of the application > * Delete the resource > > The binding step is used for things like providing a set of DB credentials > to a container. 
You can rotate credentials when replacing a container by > revoking the existing credentials on unbind and creating a new set on bind, > without replacing the entire resource. > > Is there an easier way? > ----------------------- > > Yes! Folks from OpenShift came up with a project called the Automation > Broker[8]. To add support for a service to Automation Broker you just > create a container with an Ansible playbook to handle each of the actions > (create/bind/unbind/delete). This eliminates the need to write another > implementation of the service broker API, and allows us to simply write > Ansible playbooks instead.[9] > > (Aside: Heat uses a comparable method to allow users to manage an external > resource using Mistral workflows: the OS::Mistral::ExternalResource > resource type.) > > Support for accessing AWS resources through a service broker is also > implemented using these Ansible Playbook Bundles.[3] > > Does this mean maintaining another client interface? > ---------------------------------------------------- > > Maybe not. We already have per-project Python libraries, (deprecated) > per-project CLIs, openstackclient CLIs, openstack-sdk, shade, Heat resource > plugins, and Horizon dashboards. (Mistral actions are generated > automatically from the clients.) Some consolidation is already planned, but > it would be great not to require projects to maintain yet another interface. > > One option is to implement a tool that generates a set of playbooks for > each of the resources already exposed (via shade) in the OpenStack Ansible > modules. Then in theory we'd only need to implement the common parts once, > and then every service with support in shade would get this for free. > Ideally the same broker could be used against any OpenStack cloud (so e.g. > k8s might be running in your private cloud, but you may want its service > catalog to allow you to connect to resources in one or more public clouds) > - using shade is an advantage there because it is designed to abstract the > differences between clouds. > > Another option might be to write or generate Heat templates for each > resource type we want to expose. Then we'd only need to implement a common > way of creating a Heat stack, and just have a different template for each > resource type. This is the approach taken by the AWS playbook bundles > (except with CloudFormation, obviously). An advantage is that this allows > Heat to do any checking and type conversion required on the input > parameters. Heat templates can also be made to be fairly cloud-independent, > mainly because they make it easier to be explicit about things like ports > and subnets than on the command line, where it's more tempting to allow > things to happen in a magical but cloud-specific way. > > I'd prefer to go with the pure-Ansible autogenerated way so we can have > support for everything, but looking at the GCP[5]/Azure[4]/AWS[3] brokers > they have 10, 11 and 17 services respectively, so arguably we could get a > comparable number of features exposed without investing crazy amounts of > time if we had to write templates explicitly. > > How would authentication work? > ------------------------------ > > There are two main deployment topologies we need to consider: Kubernetes > deployed by an OpenStack tenant (Magnum-style, though not necessarily using > Magnum) and accessing resources in that tenant's project in the local > cloud, or accessing resources in some remote OpenStack cloud. 
> > We also need to take into account that in the second case, the Kubernetes > cluster may 'belong' to a single cloud tenant (as in the first case) or may > be shared by applications that each need to authenticate to different > OpenStack tenants. (Kubernetes has traditionally assumed the former, but I > expect it to move in the direction of allowing the latter, and it's already > fairly common for OpenShift deployments.) > > The way e.g. the AWS broker[3] works is that you can either use the > credentials provisioned to the VM that k8s is installed on (a 'Role' in AWS > parlance - note that this is completely different to a Keystone Role), or > supply credentials to authenticate to AWS remotely. > > OpenStack doesn't yet support per-instance credentials, although we're > working on it. (One thing to keep in mind is that ideally we'll want a way > to provide different permissions to the service broker and > cloud-provider-openstack.) An option in the meantime might be to provide a > way to set up credentials as part of the k8s installation. We'd also need > to have a way to specify credentials manually. Unlike for proprietary > clouds, the credentials also need to include the Keystone auth_url. We > should try to reuse openstacksdk's clouds.yaml/secure.yaml format[10] if > possible. > > The OpenShift Ansible Broker works by starting up an Ansible container on > k8s to run a playbook from the bundle, so presumably credentials can be > passed as regular k8s secrets. > > In all cases we'll want to encourage users to authenticate using Keystone > Application Credentials[11]. > > How would network integration work? > ----------------------------------- > > Kuryr[12] allows us to connect application containers in Kubernetes to > Neutron networks in OpenStack. It would be desirable if, when the user > requests a VM or bare-metal server through the service broker, it were > possible to choose between attaching to the same network as Kubernetes > pods, or to a different network. 
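To make two of the points above concrete, here are small sketches; everything in them (cloud name, variable names, credential values) is illustrative rather than taken from any existing broker.

A provision action generated on top of the existing shade-backed OpenStack Ansible modules might look roughly like this:

    # provision.yml -- hypothetical APB-style action; "brokered-cloud" and the
    # incoming parameters are placeholders supplied by the broker
    - hosts: localhost
      gather_facts: false
      tasks:
        - name: Create the server requested through the service catalog
          os_server:
            cloud: brokered-cloud
            state: present
            name: "{{ instance_name }}"
            image: "{{ image }}"
            flavor: "{{ flavor }}"
          register: created

        - name: Keep the provider-side id for the later bind/unbind/delete actions
          set_fact:
            provisioned_id: "{{ created.server.id }}"

And the credentials such a broker consumes could be an ordinary openstacksdk clouds.yaml entry carrying a Keystone application credential, for example:

    # clouds.yaml -- placeholder values only; the secret would normally live in
    # secure.yaml or be mounted as a Kubernetes secret
    clouds:
      brokered-cloud:
        auth_type: v3applicationcredential
        auth:
          auth_url: https://keystone.example.org:5000/v3
          application_credential_id: "<id>"
          application_credential_secret: "<secret>"
        region_name: RegionOne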
> > > [1] https://www.openservicebrokerapi.org/ > [2] https://kubernetes.io/docs/concepts/service-catalog/ > [3] https://github.com/awslabs/aws-servicebroker#aws-service-broker > [4] https://github.com/Azure/open-service-broker-azure# > open-service-broker-for-azure > [5] https://github.com/GoogleCloudPlatform/gcp- > service-broker#cloud-foundry-service-broker-for-google-cloud-platform > [6] https://github.com/kubernetes/community/blob/ > master/keps/0002-controller-manager.md#remove-cloud- > provider-code-from-kubernetes-core > [7] https://github.com/kubernetes/cloud-provider- > openstack#openstack-cloud-controller-manager > [8] http://automationbroker.io/ > [9] https://docs.openshift.org/latest/apb_devel/index.html > [10] https://docs.openstack.org/openstacksdk/latest/user/ > config/configuration.html#config-files > [11] https://docs.openstack.org/keystone/latest/user/ > application_credentials.html > [12] https://docs.openstack.org/kuryr/latest/devref/goals_ > and_use_cases.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Tue Jun 5 23:28:18 2018 From: soulxu at gmail.com (Alex Xu) Date: Wed, 6 Jun 2018 07:28:18 +0800 Subject: [openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs In-Reply-To: References: <20bd7bbc15fa7b2ac319e2787c462212e3f67008.camel@redhat.com> Message-ID: 2018-06-05 22:53 GMT+08:00 Eric Fried : > Alex- > > Allocations for an instance are pulled down by the compute manager > and > passed into the virt driver's spawn method since [1]. An allocation > comprises a consumer, provider, resource class, and amount. Once we can > schedule to trees, the allocations pulled down by the compute manager > will span the tree as appropriate. So in that sense, yes, nova-compute > knows which amounts of which resource classes come from which providers. > Eric, thanks, that is the thing I missed. Initial I thought we will return the allocations from the scheduler and down to the compute manager. I see we already pull the allocations in the compute manager now. > > However, if you're asking about the situation where we have two > different allocations of the same resource class coming from two > separate providers: Yes, we can still tell which RCxAMOUNT is associated > with which provider; but No, we still have no inherent way to correlate > a specific one of those allocations with the part of the *request* it > came from. If just the provider UUID isn't enough for the virt driver > to figure out what to do, it may have to figure it out by looking at the > flavor (and/or image metadata), inspecting the traits on the providers > associated with the allocations, etc. (The theory here is that, if the > virt driver can't tell the difference at that point, then it actually > doesn't matter.) > > [1] https://review.openstack.org/#/c/511879/ > > On 06/05/2018 09:05 AM, Alex Xu wrote: > > Maybe I missed something. 
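Concretely, the structure the compute manager pulls down and hands to the virt driver is keyed by resource provider UUID, roughly along these lines (a sketch only; the UUIDs, resource classes and amounts below are invented):

    # Hedged illustration of the allocations described above; in a PF/VF case
    # each child provider shows up under its own key.
    allocations = {
        # root provider: the compute node itself
        'c7d0...': {'resources': {'VCPU': 2, 'MEMORY_MB': 2048}},
        # child provider: the PF the requested VF was taken from
        '59a1...': {'resources': {'SRIOV_NET_VF': 1}},
    }

So the provider UUID for each piece is available, even though mapping an allocation back to the part of the original request it satisfied is a separate problem.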
Is there anyway the nova-compute can know the > > resources are allocated from which child resource provider? For example, > > the host has two PFs. The request is asking one VF, then the > > nova-compute needs to know the VF is allocated from which PF (resource > > provider). As my understand, currently we only return a list of > > alternative resource provider to the nova-compute, those alternative is > > root resource provider. > > > > 2018-06-05 21:29 GMT+08:00 Jay Pipes > >: > > > > On 06/05/2018 08:50 AM, Stephen Finucane wrote: > > > > I thought nested resource providers were already supported by > > placement? To the best of my knowledge, what is /not/ supported > > is virt drivers using these to report NUMA topologies but I > > doubt that affects you. The placement guys will need to weigh in > > on this as I could be missing something but it sounds like you > > can start using this functionality right now. > > > > > > To be clear, this is what placement and nova *currently* support > > with regards to nested resource providers: > > > > 1) When creating a resource provider in placement, you can specify a > > parent_provider_uuid and thus create trees of providers. This was > > placement API microversion 1.14. Also included in this microversion > > was support for displaying the parent and root provider UUID for > > resource providers. > > > > 2) The nova "scheduler report client" (terrible name, it's mostly > > just the placement client at this point) understands how to call > > placement API 1.14 and create resource providers with a parent > provider. > > > > 3) The nova scheduler report client uses a ProviderTree object [1] > > to cache information about the hierarchy of providers that it knows > > about. For nova-compute workers managing hypervisors, that means the > > ProviderTree object contained in the report client is rooted in a > > resource provider that represents the compute node itself (the > > hypervisor). For nova-compute workers managing baremetal, that means > > the ProviderTree object contains many root providers, each > > representing an Ironic baremetal node. > > > > 4) The placement API's GET /allocation_candidates endpoint now > > understands the concept of granular request groups [2]. Granular > > request groups are only relevant when a user wants to specify that > > child providers in a provider tree should be used to satisfy part of > > an overall scheduling request. However, this support is yet > > incomplete -- see #5 below. > > > > The following parts of the nested resource providers modeling are > > *NOT* yet complete, however: > > > > 5) GET /allocation_candidates does not currently return *results* > > when granular request groups are specified. So, while the placement > > service understands the *request* for granular groups, it doesn't > > yet have the ability to constrain the returned candidates > > appropriately. Tetsuro is actively working on this functionality in > > this patch series: > > > > https://review.openstack.org/#/q/status:open+project: > openstack/nova+branch:master+topic:bp/nested-resource- > providers-allocation-candidates > > openstack/nova+branch:master+topic:bp/nested-resource- > providers-allocation-candidates> > > > > 6) The virt drivers need to implement the update_provider_tree() > > interface [3] and construct the tree of resource providers along > > with appropriate inventory records for each child provider in the > > tree. 
Both libvirt and XenAPI virt drivers have patch series up that > > begin to take advantage of the nested provider modeling. However, a > > number of concerns [4] about in-place nova-compute upgrades when > > moving from a single resource provider to a nested provider tree > > model were raised, and we have begun brainstorming how to handle the > > migration of existing data in the single-provider model to the > > nested provider model. [5] We are blocking any reviews on patch > > series that modify the local provider modeling until these migration > > concerns are fully resolved. > > > > 7) The scheduler does not currently pass granular request groups to > > placement. Once #5 and #6 are resolved, and once the > > migration/upgrade path is resolved, clearly we will need to have the > > scheduler start making requests to placement that represent the > > granular request groups and have the scheduler pass the resulting > > allocation candidates to its filters and weighers. > > > > Hope this helps highlight where we currently are and the work still > > left to do (in Rocky) on nested resource providers. > > > > Best, > > -jay > > > > > > [1] > > https://github.com/openstack/nova/blob/master/nova/compute/ > provider_tree.py > > provider_tree.py> > > > > [2] > > https://specs.openstack.org/openstack/nova-specs/specs/ > queens/approved/granular-resource-requests.html > > queens/approved/granular-resource-requests.html> > > > > [3] > > https://github.com/openstack/nova/blob/ > f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833 > > f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/virt/driver.py#L833> > > > > [4] > > http://lists.openstack.org/pipermail/openstack-dev/2018- > May/130783.html > > May/130783.html> > > > > [5] https://etherpad.openstack.org/p/placement-making-the-(up)grade > > > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pabelanger at redhat.com Wed Jun 6 00:07:15 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 5 Jun 2018 20:07:15 -0400 Subject: [openstack-dev] [all][python3][tc][infra] Python 3.6 In-Reply-To: <654866a4-0a53-7a61-e2e3-962be0d6d45d@redhat.com> References: <868858f5-91d8-dce8-682a-4d4a94b3a931@redhat.com> <1528230987-sup-1889@lrrr.local> <654866a4-0a53-7a61-e2e3-962be0d6d45d@redhat.com> Message-ID: <20180606000701.GA16606@localhost.localdomain> On Tue, Jun 05, 2018 at 04:48:00PM -0400, Zane Bitter wrote: > On 05/06/18 16:38, Doug Hellmann wrote: > > Excerpts from Zane Bitter's message of 2018-06-05 15:55:49 -0400: > > > We've talked a bit about migrating to Python 3, but (unless I missed it) > > > not a lot about which version of Python 3. Currently all projects that > > > support Python 3 are gating against 3.5. However, Ubuntu Artful and > > > Fedora 26 already ship Python 3.6 by default. (And Bionic and F28 have > > > been released since then.) The one time it did come up in a thread, we > > > decided it was blocked on the availability of 3.6 in Ubuntu to run on > > > the test nodes, so it's time to discuss it again. > > > > > > AIUI we're planning to switch the test nodes to Bionic, since it's the > > > latest LTS release, so I'd assume that means that when we talk about > > > running docs jobs, pep8 &c. with Python3 (under the python3-first > > > project-wide goal) that means 3.6. And while 3.5 jobs should continue to > > > work, it seems like we ought to start testing ASAP with the version that > > > users are going to get by default if they choose to use our Python3 > > > packages. > > > > > > The list of breaking changes in 3.6 is quite short (although not zero), > > > so I wouldn't expect too many roadblocks: > > > https://docs.python.org/3/whatsnew/3.6.html#porting-to-python-3-6 > > > > > > I think we can split the problem into two parts: > > > > > > * How can we detect any issues ASAP. > > > > > > Would it be sane to give all projects with a py35 unit tests job a > > > non-voting py36 job so that they can start fixing any issues right away? > > > Like this: https://review.openstack.org/572535 > > > > That seems like a good way to start. > > > > Maybe we want to rename that project template to openstack-python3-jobs > > to keep it version-agnostic? > > You mean the 35_36 one? Actually, let's discuss this on the review. > Yes please lets keep python35 / python36 project-templates, I've left comments in review. > > > > > > * How can we ensure every project fixes any issues and migrates to > > > voting gates, including for functional test jobs? > > > > > > Would it make sense to make this part of the 'python3-first' > > > project-wide goal? > > > > Yes, that seems like a good idea. We can be specific about the version > > of python 3 to be used to achieve that goal (assuming it is selected as > > a goal). > > > > The instructions I've been putting together are based on just using > > "python3" in the tox.ini file because I didn't want to have to update > > that every time we update to a new version of python. Do you think we > > should be more specific there, too? > > That's probably fine IMHO. We should just be aware that e.g. when distros > start switching to 3.7 then people's local jobs will start running in 3.7. > > For me, at least, this has already been the case with 3.6 - tox is now > python3 by default in Fedora, so e.g. pep8 jobs have been running under 3.6 > for a while now. There were a *lot* of deprecation warnings at first. 
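For reference, the version-agnostic form being discussed amounts to pointing the non-unit-test environments at plain python3, something like the following (a sketch; individual repos may differ):

    # tox.ini -- illustrative snippet only; the py35/py36 unit test envs keep
    # picking their interpreter from the env name as usual
    [testenv:pep8]
    basepython = python3

    [testenv:docs]
    basepython = python3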
> > > Doug > > > > > > > > cheers, > > > Zane. > > > > > > > > > (Disclaimer for the conspiracy-minded: you might assume that I'm > > > cleverly concealing inside knowledge of which version of Python 3 will > > > replace Python 2 in the next major release of RHEL/CentOS, but in fact > > > you would be mistaken. The truth is I've been too lazy to find out, so > > > I'm as much in the dark as anybody. Really. Anyway, this isn't about > > > that, it's about testing within upstream OpenStack.) > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rodrigodsousa at gmail.com Wed Jun 6 00:25:50 2018 From: rodrigodsousa at gmail.com (Rodrigo Duarte) Date: Tue, 5 Jun 2018 17:25:50 -0700 Subject: [openstack-dev] [keystone] Signing off In-Reply-To: References: Message-ID: Henry, I'm really sad to see you go, you were a terrific mentor when I first joined the community - I remember all the thorough reviews and nice discussions ranging from topics on how to model the root domain for the reseller usecase to how to improve the role assignments API. :) Thanks for everything! On Wed, May 30, 2018 at 11:22 AM, Gage Hugo wrote: > It was great working with you Henry. Hope to see you around sometime and > wishing you all the best! > > On Wed, May 30, 2018 at 3:45 AM, Henry Nash wrote: > >> Hi >> >> It is with a somewhat heavy heart that I have decided that it is time to >> hang up my keystone core status. Having been involved since the closing >> stages of Folsom, I've had a good run! When I look at how far keystone has >> come since the v2 days, it is remarkable - and we should all feel a sense >> of pride in that. >> >> Thanks to all the hard work, commitment, humour and support from all the >> keystone folks over the years - I am sure we will continue to interact and >> meet among the many other open source projects that many of us are becoming >> involved with. Ad astra! >> >> Best regards, >> >> Henry >> Twitter: @henrynash >> linkedIn: www.linkedin.com/in/henrypnash >> >> Unless stated otherwise above: >> IBM United Kingdom Limited - Registered in England and Wales with number >> 741598. >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Rodrigo http://rodrigods.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Wed Jun 6 00:41:12 2018 From: gmann at ghanshyammann.com (Ghanshyam) Date: Wed, 06 Jun 2018 09:41:12 +0900 Subject: [openstack-dev] [qa][ptg] Mark your availability for Denver PTG, 2018 Message-ID: <163d28a5bf3.10556b18712388.6079662271487630227@ghanshyammann.com> Hi All, As you all might know that next PTG is in Denver [1] and we will plan the QA space in PTG. Please let me know if you are planning to attend the QA sessions (Not necessary to be full time in QA area). This is to get the rough number of attendees in QA. I know it might be little early to ask but you can reply your tentative plan. Either reply to this ML Or ping me on IRC. It will be helpful if you can let me know by 13th June. Thanks and hope to see more attendee in PTG. [1] https://www.openstack.org/ptg/ http://lists.openstack.org/pipermail/openstack-dev/2018-April/129564.html From fungi at yuggoth.org Wed Jun 6 01:29:49 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jun 2018 01:29:49 +0000 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services In-Reply-To: <20180531130047.q2x2gmhkredaqxis@yuggoth.org> References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> <1527710294.31249.24.camel@redhat.com> <86bf4382-2bdd-02f9-5544-9bad6190263b@openstack.org> <20180531130047.q2x2gmhkredaqxis@yuggoth.org> Message-ID: <20180606012949.b5lxxvcotahkhwv6@yuggoth.org> On 2018-05-31 13:00:47 +0000 (+0000), Jeremy Stanley wrote: > On 2018-05-31 10:33:51 +0200 (+0200), Thierry Carrez wrote: > > Ade Lee wrote: > > > [...] > > > So it seems that the two blockers above have been resolved. So is it > > > time to ad a castellan compatible secret store to the base services? > > > > It's definitely time to start a discussion about it, at least :) > > > > Would you be interested in starting a ML thread about it ? If not, that's > > probably something I can do :) > > That was, in fact, the entire reason I started this subthread, > changed the subject and added the [tc] tag. ;) > > http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html > > I figured I'd let it run through the summit to garner feedback > before proposing the corresponding Gerrit change. Seeing no further objections, I give you https://review.openstack.org/572656 for the next step. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ksnhr.tech at gmail.com Wed Jun 6 01:45:35 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Wed, 6 Jun 2018 10:45:35 +0900 Subject: [openstack-dev] [horizon][plugins][heat][searchlight][murano][sahara][watcher] Use default Django test runner instead of nose In-Reply-To: References: <1528206617-sup-8376@lrrr.local> Message-ID: Thanks Ivan, will check your patch soon. Regards, Kaz(Heat) 2018-06-05 22:59 GMT+09:00 Akihiro Motoki : > This is an important step to drop nose and nosehtmloutput :) > We plan to switch the test runner and then re-enable integration tests > (with selenium) for cross project testing. > > In addition, we horizon team are trying to minimize gate breakage in > horizon plugins for recent changes (this and django 2.0). > Hopefully pending related patches will land soon. 
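For anyone following along, the core of the switch is which TEST_RUNNER the Django settings point at; a plugin that is not ready yet can pin nose back explicitly (a sketch, assuming django-nose is still installed there):

    # Django's own default test runner, which Horizon moves to:
    TEST_RUNNER = 'django.test.runner.DiscoverRunner'

    # What a not-yet-migrated plugin (e.g. the sahara/watcher dashboard patches
    # mentioned in the quoted mail below) would presumably keep for now:
    # TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'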
> > > 2018年6月5日(火) 22:52 Doug Hellmann : > >> Excerpts from Ivan Kolodyazhny's message of 2018-06-05 16:32:22 +0300: >> > Hi team, >> > >> > In Horizon, we're going to get rid of unsupported Nose and use Django >> Test >> > Runner instead of it [1]. Nose has some issues and limitations which >> blocks >> > us in our testing improvement efforts. >> > >> > Nose has different test discovery mechanism than Django does. So, there >> was >> > a chance to break some Horizon Plugins:(. Unfortunately, we haven't >> > cross-project CI yet (TBH, I'm working on it and it's one of the first >> > steps to get it done), that's why I tested this change [2] against all >> > known plugins [3]. >> > >> > Most of the projects don't need any changes. I proposed few changed to >> > plugins repositories [4] and most of them are merged already. Thanks a >> lot >> > to everybody who helped me with it. Patches for heat-dashboard [5] and >> > searchlight-ui [6] are under review. >> > >> > Additional efforts are needed for murano-dashboard, sahara-dashboard, >> and >> > watcher-dashboard projects. murano-dashboard has Nose test runner >> enabled >> > in the config, so Horizon change won't affect it. >> > >> > I proposed patches for sahara-dashboard [7] and watcher-dashboard [8] to >> > explicitly enable Nose test runner there until we'll fix all related >> > issues. I hope we'll have a good number of cross-project activities with >> > these teams. >> > >> > Once all patches above will be merged, we'll be ready to the next step >> to >> > make Horizon and plugins CI better. >> > >> > >> > [1] https://review.openstack.org/#/c/544296/ >> > [2] >> > https://docs.google.com/spreadsheets/d/17Yiso6JLeRHBSqJhAiQYkqIAvQhvN >> FM8NgTkrPxovMo/edit?usp=sharing >> > [3] https://docs.openstack.org/horizon/latest/install/plugin- >> registry.html >> > [4] >> > https://review.openstack.org/#/q/topic:bp/improve-horizon- >> testing+(status:open+OR+status:merged) >> > [5] https://review.openstack.org/572095 >> > [6] https://review.openstack.org/572124 >> > [7] https://review.openstack.org/572390 >> > [8] https://review.openstack.org/572391 >> > >> > >> > >> > Regards, >> > Ivan Kolodyazhny, >> > http://blog.e0ne.info/ >> >> Nice work! Thanks for taking the initiative on updating our tooling. >> >> Doug >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Jun 6 02:28:41 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 6 Jun 2018 12:28:41 +1000 Subject: [openstack-dev] [neutron][stable] Stepping down from core In-Reply-To: References: Message-ID: <20180606022840.GA8663@thor.bakeyournoodle.com> On Mon, Jun 04, 2018 at 01:31:11PM -0700, Ihar Hrachyshka wrote: > Hi neutrinos and all, > > As some of you've already noticed, the last several months I was > scaling down my involvement in Neutron and, more generally, OpenStack. 
> I am at a point where I feel confident my disappearance won't disturb > the project, and so I am ready to make it official. > > I am stepping down from all administrative roles I so far accumulated > in Neutron and Stable teams. I shifted my focus to another project, > and so I just removed myself from all relevant admin groups to reflect > the change. > > It was a nice 4.5 year ride for me. I am very happy with what we > achieved in all these years and a bit sad to leave. The community is > the most brilliant and compassionate and dedicated to openness group > of people I was lucky to work with, and I am reminded daily how > awesome it is. > > I am far from leaving the industry, or networking, or the promise of > open source infrastructure, so I am sure we will cross our paths once > in a while with most of you. :) I also plan to hang out in our IRC > channels and make snarky comments, be aware! Thanks for all your help and support with Stable Maintenance. Your input, and snarky comments will be missed! Best of luck with your new adventure! Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From xinni.ge1990 at gmail.com Wed Jun 6 02:54:21 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Wed, 6 Jun 2018 11:54:21 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi, akihiro and other guys, I understand why minified is considered to be non-free, but I was confused about the statement "At the very least, a non-minified version should be present next to the minified version" [1] in the documentation. Actually in existing xstatic repo, I observed several minified files in angular_fileupload, jquery-migrate, or bootstrap_scss. So, I uploaded those minified files as in the release package of angular/material. Personally I don't insist on minified files, and I will delete all minified files and re-upload the patch. Thanks a lot for the advice. [1] https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html#minified-javascript-policy ==================== Ge Xinni Email: xinni.ge1990 at gmail.com ==================== On Tue, Jun 5, 2018 at 8:59 PM, Akihiro Motoki wrote: > Hi, > > Sorry for re-using the ancient ML thread. > Looking at recent xstatic-* repo reviews, I am a bit afraid that > xstatic-cores do not have a common understanding on the principle of > xstatic packages. > I hope all xstatic-cores re-read "Packing Software" in the horizon > contributor docs [1], especially "Minified Javascript policy" [2], > carefully. > > Thanks, > Akihiro > > [1] https://docs.openstack.org/horizon/latest/contributor/ > topics/packaging.html > [2] https://docs.openstack.org/horizon/latest/ > contributor/topics/packaging.html#minified-javascript-policy > > > 2018年4月4日(水) 14:35 Xinni Ge : > >> Hi Ivan and other Horizon team member, >> >> Thanks for adding us into xstatic-core group. >> But I still need your opinion and help to release the newly-added xstatic >> packages to pypi index. >> >> Current `xstatic-core` group doesn't have the permission to PUSH SIGNED >> TAG, and I cannot release the first non-trivial version. >> >> If I (or maybe Kaz) could be added into xstatic-release group, we can >> release all the 8 packages by ourselves. >> >> Or, we are very appreciate if any member of xstatic-release could help to >> do it. 
>> >> Just for your quick access, here is the link of access permission page of >> one xstatic package. >> https://review.openstack.org/#/admin/projects/openstack/ >> xstatic-angular-material,access >> >> -- >> Best Regards, >> Xinni >> >> On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara >> wrote: >> >>> Hi Ivan, >>> >>> >>> Thank you very much. >>> I've confirmed that all of us have been added to xstatic-core. >>> >>> As discussed, we will focus on the followings what we added for >>> heat-dashboard, will not touch other xstatic repos as core. >>> >>> xstatic-angular-material >>> xstatic-angular-notify >>> xstatic-angular-uuid >>> xstatic-angular-vis >>> xstatic-filesaver >>> xstatic-js-yaml >>> xstatic-json2yaml >>> xstatic-vis >>> >>> Regards, >>> Kaz >>> >>> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny : >>> > Hi Kuz, >>> > >>> > Don't worry, we're on the same page with you. I added both you, Xinni >>> and >>> > Keichii to the xstatic-core group. Thank you for your contributions! >>> > >>> > Regards, >>> > Ivan Kolodyazhny, >>> > http://blog.e0ne.info/ >>> > >>> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara >>> wrote: >>> >> >>> >> Hi Ivan & Horizon folks >>> >> >>> >> >>> >> AFAIK, Horizon team had conclusion that you will add the specific >>> >> members to xstatic-core, correct ? >>> >> Can I ask you to add the following members ? >>> >> # All of tree are heat-dashboard core. >>> >> >>> >> Kazunori Shinohara / ksnhr.tech at gmail.com #myself >>> >> Xinni Ge / xinni.ge1990 at gmail.com >>> >> Keiichi Hikita / keiichi.hikita at gmail.com >>> >> >>> >> Please give me a shout, if we are not on same page or any concern. >>> >> >>> >> Regards, >>> >> Kaz >>> >> >>> >> >>> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara : >>> >> > Hi Ivan, Akihiro, >>> >> > >>> >> > >>> >> > Thanks for your kind arrangement. >>> >> > Looking forward to hearing your decision soon. >>> >> > >>> >> > Regards, >>> >> > Kaz >>> >> > >>> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : >>> >> >> HI Team, >>> >> >> >>> >> >> From my perspective, I'm OK both with #2 and #3 options. I agree >>> that >>> >> >> #4 >>> >> >> could be too complicated for us. Anyway, we've got this topic on >>> the >>> >> >> meeting >>> >> >> agenda [1] so we'll discuss it there too. I'll share our decision >>> after >>> >> >> the >>> >> >> meeting. >>> >> >> >>> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon >>> >> >> >>> >> >> >>> >> >> >>> >> >> Regards, >>> >> >> Ivan Kolodyazhny, >>> >> >> http://blog.e0ne.info/ >>> >> >> >>> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki < >>> amotoki at gmail.com> >>> >> >> wrote: >>> >> >>> >>> >> >>> Hi Kaz and Ivan, >>> >> >>> >>> >> >>> Yeah, it is worth discussed officially in the horizon team >>> meeting or >>> >> >>> the >>> >> >>> mailing list thread to get a consensus. >>> >> >>> Hopefully you can add this topic to the horizon meeting agenda. >>> >> >>> >>> >> >>> After sending the previous mail, I noticed anther option. I see >>> there >>> >> >>> are >>> >> >>> several options now. >>> >> >>> (1) Keep xstatic-core and horizon-core same. >>> >> >>> (2) Add specific members to xstatic-core >>> >> >>> (3) Add specific horizon-plugin core to xstatic-core >>> >> >>> (4) Split core membership into per-repo basis (perhaps too >>> >> >>> complicated!!) >>> >> >>> >>> >> >>> My current vote is (2) as xstatic-core needs to understand what is >>> >> >>> xstatic >>> >> >>> and how it is maintained. 
>>> >> >>> >>> >> >>> Thanks, >>> >> >>> Akihiro >>> >> >>> >>> >> >>> >>> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : >>> >> >>>> >>> >> >>>> Hi Akihiro, >>> >> >>>> >>> >> >>>> >>> >> >>>> Thanks for your comment. >>> >> >>>> The background of my request to add us to xstatic-core comes from >>> >> >>>> Ivan's comment in last PTG's etherpad for heat-dashboard >>> discussion. >>> >> >>>> >>> >> >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg- >>> rocky-discussion >>> >> >>>> Line135, "we can share ownership if needed - e0ne" >>> >> >>>> >>> >> >>>> Just in case, could you guys confirm unified opinion on this >>> matter >>> >> >>>> as >>> >> >>>> Horizon team ? >>> >> >>>> >>> >> >>>> Frankly speaking I'm feeling the benefit to make us xstatic-core >>> >> >>>> because it's easier & smoother to manage what we are taking for >>> >> >>>> heat-dashboard. >>> >> >>>> On the other hand, I can understand what Akihiro you are saying, >>> the >>> >> >>>> newly added repos belong to Horizon project & being managed by >>> not >>> >> >>>> Horizon core is not consistent. >>> >> >>>> Also having exception might make unexpected confusion in near >>> future. >>> >> >>>> >>> >> >>>> Eventually we will follow your opinion, let me hear Horizon >>> team's >>> >> >>>> conclusion. >>> >> >>>> >>> >> >>>> Regards, >>> >> >>>> Kaz >>> >> >>>> >>> >> >>>> >>> >> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >>> >> >>>> > Hi Kaz, >>> >> >>>> > >>> >> >>>> > These repositories are under horizon project. It looks better >>> to >>> >> >>>> > keep >>> >> >>>> > the >>> >> >>>> > current core team. >>> >> >>>> > It potentially brings some confusion if we treat some horizon >>> >> >>>> > plugin >>> >> >>>> > team >>> >> >>>> > specially. >>> >> >>>> > Reviewing xstatic repos would be a small burden, wo I think it >>> >> >>>> > would >>> >> >>>> > work >>> >> >>>> > without problem even if only horizon-core can approve xstatic >>> >> >>>> > reviews. >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara >> >: >>> >> >>>> >> >>> >> >>>> >> Hi Ivan, Horizon folks, >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been >>> >> >>>> >> landed. >>> >> >>>> >> >>> >> >>>> >> In project-config for them, I've set same acl-config as the >>> >> >>>> >> existing >>> >> >>>> >> xstatic repos. >>> >> >>>> >> It means only "xstatic-core" can manage the newly created >>> repos on >>> >> >>>> >> gerrit. >>> >> >>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" >>> >> >>>> >> like as >>> >> >>>> >> what horizon-core is doing ? >>> >> >>>> >> >>> >> >>>> >> xstatic-core >>> >> >>>> >> https://review.openstack.org/#/admin/groups/385,members >>> >> >>>> >> >>> >> >>>> >> heat-dashboard-core >>> >> >>>> >> https://review.openstack.org/#/admin/groups/1844,members >>> >> >>>> >> >>> >> >>>> >> Of course, we will surely touch only what we made, just would >>> like >>> >> >>>> >> to >>> >> >>>> >> manage them smoothly by ourselves. >>> >> >>>> >> In case we need to touch the other ones, will ask Horizon >>> team for >>> >> >>>> >> help. >>> >> >>>> >> >>> >> >>>> >> Thanks in advance. >>> >> >>>> >> >>> >> >>>> >> Regards, >>> >> >>>> >> Kaz >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : >>> >> >>>> >> > Hi Horizon Team, >>> >> >>>> >> > >>> >> >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` >>> plugin >>> >> >>>> >> > option, >>> >> >>>> >> > and submitted a patch for it. 
>>> >> >>>> >> > Could you please help to review the patch. >>> >> >>>> >> > >>> >> >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >>> >> >>>> >> > https://review.openstack.org/#/c/552259/ >>> >> >>>> >> > >>> >> >>>> >> > Thank you very much. >>> >> >>>> >> > >>> >> >>>> >> > Best Regards, >>> >> >>>> >> > Xinni >>> >> >>>> >> > >>> >> >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >>> >> >>>> >> > >>> >> >>>> >> > wrote: >>> >> >>>> >> >> >>> >> >>>> >> >> Hi Kaz, >>> >> >>>> >> >> >>> >> >>>> >> >> Thanks for cleaning this up. I put +1 on both of these >>> patches >>> >> >>>> >> >> >>> >> >>>> >> >> Regards, >>> >> >>>> >> >> Ivan Kolodyazhny, >>> >> >>>> >> >> http://blog.e0ne.info/ >>> >> >>>> >> >> >>> >> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >>> >> >>>> >> >> >>> >> >>>> >> >> wrote: >>> >> >>>> >> >>> >>> >> >>>> >> >>> Hi Ivan & Horizon folks, >>> >> >>>> >> >>> >>> >> >>>> >> >>> >>> >> >>>> >> >>> Now we are submitting a couple of patches to have the new >>> >> >>>> >> >>> xstatic >>> >> >>>> >> >>> modules. >>> >> >>>> >> >>> Let me request you to have review the following patches. >>> >> >>>> >> >>> We need Horizon PTL's +1 to move these forward. >>> >> >>>> >> >>> >>> >> >>>> >> >>> project-config >>> >> >>>> >> >>> https://review.openstack.org/#/c/551978/ >>> >> >>>> >> >>> >>> >> >>>> >> >>> governance >>> >> >>>> >> >>> https://review.openstack.org/#/c/551980/ >>> >> >>>> >> >>> >>> >> >>>> >> >>> Thanks in advance:) >>> >> >>>> >> >>> >>> >> >>>> >> >>> Regards, >>> >> >>>> >> >>> Kaz >>> >> >>>> >> >>> >>> >> >>>> >> >>> >>> >> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >>> >> >>>> >> >>> : >>> >> >>>> >> >>> > Yes, please do that. We can then discuss in the review >>> about >>> >> >>>> >> >>> > technical >>> >> >>>> >> >>> > details. >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > wrote: >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> Hi, Akihiro >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> Thanks for the quick reply. >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES >>> should >>> >> >>>> >> >>> >> not >>> >> >>>> >> >>> >> be >>> >> >>>> >> >>> >> modified. >>> >> >>>> >> >>> >> It is much better to enhance horizon plugin settings, >>> >> >>>> >> >>> >> and I think maybe there could be one option like >>> >> >>>> >> >>> >> ADD_XSTATIC_MODULES. >>> >> >>>> >> >>> >> This option adds the plugin's xstatic files in >>> >> >>>> >> >>> >> STATICFILES_DIRS. >>> >> >>>> >> >>> >> I am considering to add a bug report to describe it at >>> >> >>>> >> >>> >> first, >>> >> >>>> >> >>> >> and >>> >> >>>> >> >>> >> give >>> >> >>>> >> >>> >> a >>> >> >>>> >> >>> >> patch later maybe. >>> >> >>>> >> >>> >> Is that ok with the Horizon team? >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> Best Regards. 
>>> >> >>>> >> >>> >> Xinni >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> wrote: >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> Hi Xinni, >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge >>> >> >>>> >> >>> >>> : >>> >> >>>> >> >>> >>> > Hello Horizon Team, >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > I would like to hear about your opinions about how >>> to >>> >> >>>> >> >>> >>> > add >>> >> >>>> >> >>> >>> > new >>> >> >>>> >> >>> >>> > xstatic >>> >> >>>> >> >>> >>> > modules to horizon settings. >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party >>> files >>> >> >>>> >> >>> >>> > issue, >>> >> >>>> >> >>> >>> > thanks >>> >> >>>> >> >>> >>> > for >>> >> >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing >>> them and >>> >> >>>> >> >>> >>> > referencing as >>> >> >>>> >> >>> >>> > new >>> >> >>>> >> >>> >>> > xstatic-* libs. >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> Thanks for moving this forward. >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> > So we installed the new xstatic files (not uploaded >>> as >>> >> >>>> >> >>> >>> > openstack >>> >> >>>> >> >>> >>> > official >>> >> >>>> >> >>> >>> > repos yet) in our development environment now, but >>> >> >>>> >> >>> >>> > hesitate >>> >> >>>> >> >>> >>> > to >>> >> >>>> >> >>> >>> > decide >>> >> >>>> >> >>> >>> > how to >>> >> >>>> >> >>> >>> > add the new installed xstatic lib path to >>> >> >>>> >> >>> >>> > STATICFILES_DIRS >>> >> >>>> >> >>> >>> > in >>> >> >>>> >> >>> >>> > openstack_dashboard.settings so that the static >>> files >>> >> >>>> >> >>> >>> > could >>> >> >>>> >> >>> >>> > be >>> >> >>>> >> >>> >>> > automatically >>> >> >>>> >> >>> >>> > collected by *collectstatic* process. >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >>> >> >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the >>> relevant >>> >> >>>> >> >>> >>> > static >>> >> >>>> >> >>> >>> > fils >>> >> >>>> >> >>> >>> > are >>> >> >>>> >> >>> >>> > added >>> >> >>>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon >>> plugin >>> >> >>>> >> >>> >>> > dashboard. >>> >> >>>> >> >>> >>> > We may want new plugin setting keywords ( something >>> >> >>>> >> >>> >>> > similar >>> >> >>>> >> >>> >>> > to >>> >> >>>> >> >>> >>> > ADD_JS_FILES) >>> >> >>>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly >>> update >>> >> >>>> >> >>> >>> > STATICFILES_DIRS). >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add >>> xstatic >>> >> >>>> >> >>> >>> modules >>> >> >>>> >> >>> >>> through horizon plugin settings. I don't think it is a >>> >> >>>> >> >>> >>> good >>> >> >>>> >> >>> >>> idea >>> >> >>>> >> >>> >>> to >>> >> >>>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on >>> horizon >>> >> >>>> >> >>> >>> plugin >>> >> >>>> >> >>> >>> usages. It makes difficult to track why and where a >>> >> >>>> >> >>> >>> xstatic >>> >> >>>> >> >>> >>> module >>> >> >>>> >> >>> >>> in >>> >> >>>> >> >>> >>> BASE_XSTATIC_MODULES is used. >>> >> >>>> >> >>> >>> Multiple horizon plugins can add a same entry, so >>> horizon >>> >> >>>> >> >>> >>> code >>> >> >>>> >> >>> >>> to >>> >> >>>> >> >>> >>> handle plugin settings should merge multiple entries >>> to a >>> >> >>>> >> >>> >>> single >>> >> >>>> >> >>> >>> one >>> >> >>>> >> >>> >>> hopefully. 
>>> >> >>>> >> >>> >>> My vote is to enhance the horizon plugin settings. >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> Akihiro >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > Looking forward to hearing any suggestions from you >>> >> >>>> >> >>> >>> > guys, >>> >> >>>> >> >>> >>> > and >>> >> >>>> >> >>> >>> > Best Regards, >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > Xinni Ge >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > ______________________________ >>> ____________________________________________ >>> >> >>>> >> >>> >>> > OpenStack Development Mailing List (not for usage >>> >> >>>> >> >>> >>> > questions) >>> >> >>>> >> >>> >>> > Unsubscribe: >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> > http://lists.openstack.org/ >>> cgi-bin/mailman/listinfo/openstack-dev >>> >> >>>> >> >>> >>> > >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> ______________________________ >>> ____________________________________________ >>> >> >>>> >> >>> >>> OpenStack Development Mailing List (not for usage >>> >> >>>> >> >>> >>> questions) >>> >> >>>> >> >>> >>> Unsubscribe: >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> >>> >> >>>> >> >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> -- >>> >> >>>> >> >>> >> 葛馨霓 Xinni Ge >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> ______________________________ >>> ____________________________________________ >>> >> >>>> >> >>> >> OpenStack Development Mailing List (not for usage >>> >> >>>> >> >>> >> questions) >>> >> >>>> >> >>> >> Unsubscribe: >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>>> >> >>> >> >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > ______________________________ >>> ____________________________________________ >>> >> >>>> >> >>> > OpenStack Development Mailing List (not for usage >>> questions) >>> >> >>>> >> >>> > Unsubscribe: >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > >>> >> >>>> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>>> >> >>> > >>> >> >>>> >> >>> >>> >> >>>> >> >>> >>> >> >>>> >> >>> >>> >> >>>> >> >>> >>> >> >>>> >> >>> >>> >> >>>> >> >>> ______________________________ >>> ____________________________________________ >>> >> >>>> >> >>> OpenStack Development Mailing List (not for usage >>> questions) >>> >> >>>> >> >>> Unsubscribe: >>> >> >>>> >> >>> OpenStack-dev-request at 
lists.openstack.org?subject: >>> unsubscribe >>> >> >>>> >> >>> >>> >> >>>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>>> >> >> >>> >> >>>> >> >> >>> >> >>>> >> >> >>> >> >>>> >> >> >>> >> >>>> >> >> >>> >> >>>> >> >> >>> >> >>>> >> >> ______________________________ >>> ____________________________________________ >>> >> >>>> >> >> OpenStack Development Mailing List (not for usage >>> questions) >>> >> >>>> >> >> Unsubscribe: >>> >> >>>> >> >> OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> >> >>>> >> >> >>> >> >>>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>>> >> >> >>> >> >>>> >> > >>> >> >>>> >> > >>> >> >>>> >> > >>> >> >>>> >> > -- >>> >> >>>> >> > 葛馨霓 Xinni Ge >>> >> >>>> >> > >>> >> >>>> >> > >>> >> >>>> >> > >>> >> >>>> >> > >>> >> >>>> >> > ______________________________ >>> ____________________________________________ >>> >> >>>> >> > OpenStack Development Mailing List (not for usage questions) >>> >> >>>> >> > Unsubscribe: >>> >> >>>> >> > OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> >> >>>> >> > >>> >> >>>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>>> >> > >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> >>> >> >>>> >> ____________________________________________________________ >>> ______________ >>> >> >>>> >> OpenStack Development Mailing List (not for usage questions) >>> >> >>>> >> Unsubscribe: >>> >> >>>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > >>> >> >>>> > ____________________________________________________________ >>> ______________ >>> >> >>>> > OpenStack Development Mailing List (not for usage questions) >>> >> >>>> > Unsubscribe: >>> >> >>>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>>> > >>> >> >>>> >>> >> >>>> >>> >> >>>> >>> >> >>>> ____________________________________________________________ >>> ______________ >>> >> >>>> OpenStack Development Mailing List (not for usage questions) >>> >> >>>> Unsubscribe: >>> >> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>> openstack-dev >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> ____________________________________________________________ >>> ______________ >>> >> >>> OpenStack Development Mailing List (not for usage questions) >>> >> >>> Unsubscribe: >>> >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >>> >>> >> >> >>> >> >> >>> >> >> >>> >> >> ____________________________________________________________ >>> ______________ >>> >> >> OpenStack Development Mailing List (not for usage questions) >>> >> >> Unsubscribe: >>> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >>> >> >>> >> ____________________________________________________________ >>> ______________ >>> >> OpenStack Development Mailing List (not for usage questions) >>> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> >> 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> > >>> > >>> > ____________________________________________________________ >>> ______________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> -- >> 葛馨霓 Xinni Ge >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Wed Jun 6 03:45:59 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 6 Jun 2018 12:45:59 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: 2018年6月6日(水) 11:54 Xinni Ge : > Hi, akihiro and other guys, > > I understand why minified is considered to be non-free, but I was confused > about the statement > "At the very least, a non-minified version should be present next to the > minified version" [1] > in the documentation. > > Actually in existing xstatic repo, I observed several minified files in > angular_fileupload, jquery-migrate, or bootstrap_scss. > So, I uploaded those minified files as in the release package of > angular/material. > Good point. My interpretation is: - Basically minified files should not be included in xstatic deliverables. - Even though not suggested, if minified files are included, corresponding non-minified version must be included. Considering this, I believe we should not include minified files for new xstatic deliverables. Makes sense? > > Personally I don't insist on minified files, and I will delete all > minified files and re-upload the patch. > Thanks a lot for the advice. > Thanks for understanding and your patience. Let's land pending reviews soon :) Akihiro > > [1] > https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html#minified-javascript-policy > > ==================== > Ge Xinni > Email: xinni.ge1990 at gmail.com > ==================== > > On Tue, Jun 5, 2018 at 8:59 PM, Akihiro Motoki wrote: > >> Hi, >> >> Sorry for re-using the ancient ML thread. >> Looking at recent xstatic-* repo reviews, I am a bit afraid that >> xstatic-cores do not have a common understanding on the principle of >> xstatic packages. >> I hope all xstatic-cores re-read "Packing Software" in the horizon >> contributor docs [1], especially "Minified Javascript policy" [2], >> carefully. 
>> >> Thanks, >> Akihiro >> >> [1] >> https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html >> [2] >> https://docs.openstack.org/horizon/latest/contributor/topics/packaging.html#minified-javascript-policy >> >> >> 2018年4月4日(水) 14:35 Xinni Ge : >> >>> Hi Ivan and other Horizon team member, >>> >>> Thanks for adding us into xstatic-core group. >>> But I still need your opinion and help to release the newly-added >>> xstatic packages to pypi index. >>> >>> Current `xstatic-core` group doesn't have the permission to PUSH SIGNED >>> TAG, and I cannot release the first non-trivial version. >>> >>> If I (or maybe Kaz) could be added into xstatic-release group, we can >>> release all the 8 packages by ourselves. >>> >>> Or, we are very appreciate if any member of xstatic-release could help >>> to do it. >>> >>> Just for your quick access, here is the link of access permission page >>> of one xstatic package. >>> >>> https://review.openstack.org/#/admin/projects/openstack/xstatic-angular-material,access >>> >>> >>> -- >>> Best Regards, >>> Xinni >>> >>> On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara >>> wrote: >>> >>>> Hi Ivan, >>>> >>>> >>>> Thank you very much. >>>> I've confirmed that all of us have been added to xstatic-core. >>>> >>>> As discussed, we will focus on the followings what we added for >>>> heat-dashboard, will not touch other xstatic repos as core. >>>> >>>> xstatic-angular-material >>>> xstatic-angular-notify >>>> xstatic-angular-uuid >>>> xstatic-angular-vis >>>> xstatic-filesaver >>>> xstatic-js-yaml >>>> xstatic-json2yaml >>>> xstatic-vis >>>> >>>> Regards, >>>> Kaz >>>> >>>> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny : >>>> > Hi Kuz, >>>> > >>>> > Don't worry, we're on the same page with you. I added both you, Xinni >>>> and >>>> > Keichii to the xstatic-core group. Thank you for your contributions! >>>> > >>>> > Regards, >>>> > Ivan Kolodyazhny, >>>> > http://blog.e0ne.info/ >>>> > >>>> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara >>>> wrote: >>>> >> >>>> >> Hi Ivan & Horizon folks >>>> >> >>>> >> >>>> >> AFAIK, Horizon team had conclusion that you will add the specific >>>> >> members to xstatic-core, correct ? >>>> >> Can I ask you to add the following members ? >>>> >> # All of tree are heat-dashboard core. >>>> >> >>>> >> Kazunori Shinohara / ksnhr.tech at gmail.com #myself >>>> >> Xinni Ge / xinni.ge1990 at gmail.com >>>> >> Keiichi Hikita / keiichi.hikita at gmail.com >>>> >> >>>> >> Please give me a shout, if we are not on same page or any concern. >>>> >> >>>> >> Regards, >>>> >> Kaz >>>> >> >>>> >> >>>> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara : >>>> >> > Hi Ivan, Akihiro, >>>> >> > >>>> >> > >>>> >> > Thanks for your kind arrangement. >>>> >> > Looking forward to hearing your decision soon. >>>> >> > >>>> >> > Regards, >>>> >> > Kaz >>>> >> > >>>> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : >>>> >> >> HI Team, >>>> >> >> >>>> >> >> From my perspective, I'm OK both with #2 and #3 options. I agree >>>> that >>>> >> >> #4 >>>> >> >> could be too complicated for us. Anyway, we've got this topic on >>>> the >>>> >> >> meeting >>>> >> >> agenda [1] so we'll discuss it there too. I'll share our decision >>>> after >>>> >> >> the >>>> >> >> meeting. 
>>>> >> >> >>>> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> Regards, >>>> >> >> Ivan Kolodyazhny, >>>> >> >> http://blog.e0ne.info/ >>>> >> >> >>>> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki < >>>> amotoki at gmail.com> >>>> >> >> wrote: >>>> >> >>> >>>> >> >>> Hi Kaz and Ivan, >>>> >> >>> >>>> >> >>> Yeah, it is worth discussed officially in the horizon team >>>> meeting or >>>> >> >>> the >>>> >> >>> mailing list thread to get a consensus. >>>> >> >>> Hopefully you can add this topic to the horizon meeting agenda. >>>> >> >>> >>>> >> >>> After sending the previous mail, I noticed anther option. I see >>>> there >>>> >> >>> are >>>> >> >>> several options now. >>>> >> >>> (1) Keep xstatic-core and horizon-core same. >>>> >> >>> (2) Add specific members to xstatic-core >>>> >> >>> (3) Add specific horizon-plugin core to xstatic-core >>>> >> >>> (4) Split core membership into per-repo basis (perhaps too >>>> >> >>> complicated!!) >>>> >> >>> >>>> >> >>> My current vote is (2) as xstatic-core needs to understand what >>>> is >>>> >> >>> xstatic >>>> >> >>> and how it is maintained. >>>> >> >>> >>>> >> >>> Thanks, >>>> >> >>> Akihiro >>>> >> >>> >>>> >> >>> >>>> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : >>>> >> >>>> >>>> >> >>>> Hi Akihiro, >>>> >> >>>> >>>> >> >>>> >>>> >> >>>> Thanks for your comment. >>>> >> >>>> The background of my request to add us to xstatic-core comes >>>> from >>>> >> >>>> Ivan's comment in last PTG's etherpad for heat-dashboard >>>> discussion. >>>> >> >>>> >>>> >> >>>> >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky-discussion >>>> >> >>>> Line135, "we can share ownership if needed - e0ne" >>>> >> >>>> >>>> >> >>>> Just in case, could you guys confirm unified opinion on this >>>> matter >>>> >> >>>> as >>>> >> >>>> Horizon team ? >>>> >> >>>> >>>> >> >>>> Frankly speaking I'm feeling the benefit to make us xstatic-core >>>> >> >>>> because it's easier & smoother to manage what we are taking for >>>> >> >>>> heat-dashboard. >>>> >> >>>> On the other hand, I can understand what Akihiro you are >>>> saying, the >>>> >> >>>> newly added repos belong to Horizon project & being managed by >>>> not >>>> >> >>>> Horizon core is not consistent. >>>> >> >>>> Also having exception might make unexpected confusion in near >>>> future. >>>> >> >>>> >>>> >> >>>> Eventually we will follow your opinion, let me hear Horizon >>>> team's >>>> >> >>>> conclusion. >>>> >> >>>> >>>> >> >>>> Regards, >>>> >> >>>> Kaz >>>> >> >>>> >>>> >> >>>> >>>> >> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >>>> >> >>>> > Hi Kaz, >>>> >> >>>> > >>>> >> >>>> > These repositories are under horizon project. It looks better >>>> to >>>> >> >>>> > keep >>>> >> >>>> > the >>>> >> >>>> > current core team. >>>> >> >>>> > It potentially brings some confusion if we treat some horizon >>>> >> >>>> > plugin >>>> >> >>>> > team >>>> >> >>>> > specially. >>>> >> >>>> > Reviewing xstatic repos would be a small burden, wo I think it >>>> >> >>>> > would >>>> >> >>>> > work >>>> >> >>>> > without problem even if only horizon-core can approve xstatic >>>> >> >>>> > reviews. >>>> >> >>>> > >>>> >> >>>> > >>>> >> >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara < >>>> ksnhr.tech at gmail.com>: >>>> >> >>>> >> >>>> >> >>>> >> Hi Ivan, Horizon folks, >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been >>>> >> >>>> >> landed. 
>>>> >> >>>> >> >>>> >> >>>> >> In project-config for them, I've set same acl-config as the >>>> >> >>>> >> existing >>>> >> >>>> >> xstatic repos. >>>> >> >>>> >> It means only "xstatic-core" can manage the newly created >>>> repos on >>>> >> >>>> >> gerrit. >>>> >> >>>> >> Could you kindly add "heat-dashboard-core" into >>>> "xstatic-core" >>>> >> >>>> >> like as >>>> >> >>>> >> what horizon-core is doing ? >>>> >> >>>> >> >>>> >> >>>> >> xstatic-core >>>> >> >>>> >> https://review.openstack.org/#/admin/groups/385,members >>>> >> >>>> >> >>>> >> >>>> >> heat-dashboard-core >>>> >> >>>> >> https://review.openstack.org/#/admin/groups/1844,members >>>> >> >>>> >> >>>> >> >>>> >> Of course, we will surely touch only what we made, just >>>> would like >>>> >> >>>> >> to >>>> >> >>>> >> manage them smoothly by ourselves. >>>> >> >>>> >> In case we need to touch the other ones, will ask Horizon >>>> team for >>>> >> >>>> >> help. >>>> >> >>>> >> >>>> >> >>>> >> Thanks in advance. >>>> >> >>>> >> >>>> >> >>>> >> Regards, >>>> >> >>>> >> Kaz >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge >>> >: >>>> >> >>>> >> > Hi Horizon Team, >>>> >> >>>> >> > >>>> >> >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` >>>> plugin >>>> >> >>>> >> > option, >>>> >> >>>> >> > and submitted a patch for it. >>>> >> >>>> >> > Could you please help to review the patch. >>>> >> >>>> >> > >>>> >> >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >>>> >> >>>> >> > https://review.openstack.org/#/c/552259/ >>>> >> >>>> >> > >>>> >> >>>> >> > Thank you very much. >>>> >> >>>> >> > >>>> >> >>>> >> > Best Regards, >>>> >> >>>> >> > Xinni >>>> >> >>>> >> > >>>> >> >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >>>> >> >>>> >> > >>>> >> >>>> >> > wrote: >>>> >> >>>> >> >> >>>> >> >>>> >> >> Hi Kaz, >>>> >> >>>> >> >> >>>> >> >>>> >> >> Thanks for cleaning this up. I put +1 on both of these >>>> patches >>>> >> >>>> >> >> >>>> >> >>>> >> >> Regards, >>>> >> >>>> >> >> Ivan Kolodyazhny, >>>> >> >>>> >> >> http://blog.e0ne.info/ >>>> >> >>>> >> >> >>>> >> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >>>> >> >>>> >> >> >>>> >> >>>> >> >> wrote: >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> Hi Ivan & Horizon folks, >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> Now we are submitting a couple of patches to have the new >>>> >> >>>> >> >>> xstatic >>>> >> >>>> >> >>> modules. >>>> >> >>>> >> >>> Let me request you to have review the following patches. >>>> >> >>>> >> >>> We need Horizon PTL's +1 to move these forward. >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> project-config >>>> >> >>>> >> >>> https://review.openstack.org/#/c/551978/ >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> governance >>>> >> >>>> >> >>> https://review.openstack.org/#/c/551980/ >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> Thanks in advance:) >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> Regards, >>>> >> >>>> >> >>> Kaz >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >>>> >> >>>> >> >>> : >>>> >> >>>> >> >>> > Yes, please do that. We can then discuss in the review >>>> about >>>> >> >>>> >> >>> > technical >>>> >> >>>> >> >>> > details. >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > wrote: >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> Hi, Akihiro >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> Thanks for the quick reply. 
>>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES >>>> should >>>> >> >>>> >> >>> >> not >>>> >> >>>> >> >>> >> be >>>> >> >>>> >> >>> >> modified. >>>> >> >>>> >> >>> >> It is much better to enhance horizon plugin settings, >>>> >> >>>> >> >>> >> and I think maybe there could be one option like >>>> >> >>>> >> >>> >> ADD_XSTATIC_MODULES. >>>> >> >>>> >> >>> >> This option adds the plugin's xstatic files in >>>> >> >>>> >> >>> >> STATICFILES_DIRS. >>>> >> >>>> >> >>> >> I am considering to add a bug report to describe it at >>>> >> >>>> >> >>> >> first, >>>> >> >>>> >> >>> >> and >>>> >> >>>> >> >>> >> give >>>> >> >>>> >> >>> >> a >>>> >> >>>> >> >>> >> patch later maybe. >>>> >> >>>> >> >>> >> Is that ok with the Horizon team? >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> Best Regards. >>>> >> >>>> >> >>> >> Xinni >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> wrote: >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> Hi Xinni, >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge >>>> >> >>>> >> >>> >>> : >>>> >> >>>> >> >>> >>> > Hello Horizon Team, >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > I would like to hear about your opinions about how >>>> to >>>> >> >>>> >> >>> >>> > add >>>> >> >>>> >> >>> >>> > new >>>> >> >>>> >> >>> >>> > xstatic >>>> >> >>>> >> >>> >>> > modules to horizon settings. >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party >>>> files >>>> >> >>>> >> >>> >>> > issue, >>>> >> >>>> >> >>> >>> > thanks >>>> >> >>>> >> >>> >>> > for >>>> >> >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing >>>> them and >>>> >> >>>> >> >>> >>> > referencing as >>>> >> >>>> >> >>> >>> > new >>>> >> >>>> >> >>> >>> > xstatic-* libs. >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> Thanks for moving this forward. >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> > So we installed the new xstatic files (not >>>> uploaded as >>>> >> >>>> >> >>> >>> > openstack >>>> >> >>>> >> >>> >>> > official >>>> >> >>>> >> >>> >>> > repos yet) in our development environment now, but >>>> >> >>>> >> >>> >>> > hesitate >>>> >> >>>> >> >>> >>> > to >>>> >> >>>> >> >>> >>> > decide >>>> >> >>>> >> >>> >>> > how to >>>> >> >>>> >> >>> >>> > add the new installed xstatic lib path to >>>> >> >>>> >> >>> >>> > STATICFILES_DIRS >>>> >> >>>> >> >>> >>> > in >>>> >> >>>> >> >>> >>> > openstack_dashboard.settings so that the static >>>> files >>>> >> >>>> >> >>> >>> > could >>>> >> >>>> >> >>> >>> > be >>>> >> >>>> >> >>> >>> > automatically >>>> >> >>>> >> >>> >>> > collected by *collectstatic* process. >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >>>> >> >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the >>>> relevant >>>> >> >>>> >> >>> >>> > static >>>> >> >>>> >> >>> >>> > fils >>>> >> >>>> >> >>> >>> > are >>>> >> >>>> >> >>> >>> > added >>>> >> >>>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon >>>> plugin >>>> >> >>>> >> >>> >>> > dashboard. >>>> >> >>>> >> >>> >>> > We may want new plugin setting keywords ( something >>>> >> >>>> >> >>> >>> > similar >>>> >> >>>> >> >>> >>> > to >>>> >> >>>> >> >>> >>> > ADD_JS_FILES) >>>> >> >>>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly >>>> update >>>> >> >>>> >> >>> >>> > STATICFILES_DIRS). 
>>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add >>>> xstatic >>>> >> >>>> >> >>> >>> modules >>>> >> >>>> >> >>> >>> through horizon plugin settings. I don't think it is >>>> a >>>> >> >>>> >> >>> >>> good >>>> >> >>>> >> >>> >>> idea >>>> >> >>>> >> >>> >>> to >>>> >> >>>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on >>>> horizon >>>> >> >>>> >> >>> >>> plugin >>>> >> >>>> >> >>> >>> usages. It makes difficult to track why and where a >>>> >> >>>> >> >>> >>> xstatic >>>> >> >>>> >> >>> >>> module >>>> >> >>>> >> >>> >>> in >>>> >> >>>> >> >>> >>> BASE_XSTATIC_MODULES is used. >>>> >> >>>> >> >>> >>> Multiple horizon plugins can add a same entry, so >>>> horizon >>>> >> >>>> >> >>> >>> code >>>> >> >>>> >> >>> >>> to >>>> >> >>>> >> >>> >>> handle plugin settings should merge multiple entries >>>> to a >>>> >> >>>> >> >>> >>> single >>>> >> >>>> >> >>> >>> one >>>> >> >>>> >> >>> >>> hopefully. >>>> >> >>>> >> >>> >>> My vote is to enhance the horizon plugin settings. >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> Akihiro >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > Looking forward to hearing any suggestions from you >>>> >> >>>> >> >>> >>> > guys, >>>> >> >>>> >> >>> >>> > and >>>> >> >>>> >> >>> >>> > Best Regards, >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > Xinni Ge >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> __________________________________________________________________________ >>>> >> >>>> >> >>> >>> > OpenStack Development Mailing List (not for usage >>>> >> >>>> >> >>> >>> > questions) >>>> >> >>>> >> >>> >>> > Unsubscribe: >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> >> >>> >>> > >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> __________________________________________________________________________ >>>> >> >>>> >> >>> >>> OpenStack Development Mailing List (not for usage >>>> >> >>>> >> >>> >>> questions) >>>> >> >>>> >> >>> >>> Unsubscribe: >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> >> >>>> >> >>> >>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> -- >>>> >> >>>> >> >>> >> 葛馨霓 Xinni Ge >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> __________________________________________________________________________ >>>> >> >>>> >> >>> >> OpenStack Development Mailing List (not for usage >>>> >> >>>> >> >>> >> questions) >>>> >> >>>> >> >>> >> Unsubscribe: >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> >> >>>> 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> >> >>> >> >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> __________________________________________________________________________ >>>> >> >>>> >> >>> > OpenStack Development Mailing List (not for usage >>>> questions) >>>> >> >>>> >> >>> > Unsubscribe: >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> >> >>> > >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> >>>> __________________________________________________________________________ >>>> >> >>>> >> >>> OpenStack Development Mailing List (not for usage >>>> questions) >>>> >> >>>> >> >>> Unsubscribe: >>>> >> >>>> >> >>> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> >> >>> >>>> >> >>>> >> >>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> >> >> >>>> >> >>>> >> >> >>>> >> >>>> >> >> >>>> >> >>>> >> >> >>>> >> >>>> >> >> >>>> >> >>>> >> >> >>>> >> >>>> >> >> >>>> __________________________________________________________________________ >>>> >> >>>> >> >> OpenStack Development Mailing List (not for usage >>>> questions) >>>> >> >>>> >> >> Unsubscribe: >>>> >> >>>> >> >> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> >> >> >>>> >> >>>> >> >> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> >> >> >>>> >> >>>> >> > >>>> >> >>>> >> > >>>> >> >>>> >> > >>>> >> >>>> >> > -- >>>> >> >>>> >> > 葛馨霓 Xinni Ge >>>> >> >>>> >> > >>>> >> >>>> >> > >>>> >> >>>> >> > >>>> >> >>>> >> > >>>> >> >>>> >> > >>>> __________________________________________________________________________ >>>> >> >>>> >> > OpenStack Development Mailing List (not for usage >>>> questions) >>>> >> >>>> >> > Unsubscribe: >>>> >> >>>> >> > >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> >> > >>>> >> >>>> >> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> >> > >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> __________________________________________________________________________ >>>> >> >>>> >> OpenStack Development Mailing List (not for usage questions) >>>> >> >>>> >> Unsubscribe: >>>> >> >>>> >> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> >> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> > >>>> >> >>>> > >>>> >> >>>> > >>>> >> >>>> > >>>> >> >>>> > >>>> >> >>>> > >>>> __________________________________________________________________________ >>>> >> >>>> > OpenStack Development Mailing List (not for usage questions) >>>> >> >>>> > Unsubscribe: >>>> >> >>>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>>> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>>> > >>>> >> >>>> >>>> >> >>>> >>>> >> >>>> >>>> >> >>>> >>>> __________________________________________________________________________ >>>> >> >>>> OpenStack Development Mailing List (not for usage questions) >>>> >> >>>> Unsubscribe: >>>> >> >>>> OpenStack-dev-request at 
lists.openstack.org?subject:unsubscribe >>>> >> >>>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>> >>>> >> >>> >>>> >> >>> >>>> >> >>> >>>> >> >>> >>>> __________________________________________________________________________ >>>> >> >>> OpenStack Development Mailing List (not for usage questions) >>>> >> >>> Unsubscribe: >>>> >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >>> >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> __________________________________________________________________________ >>>> >> >> OpenStack Development Mailing List (not for usage questions) >>>> >> >> Unsubscribe: >>>> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >> >> >>>> >> >>>> >> >>>> __________________________________________________________________________ >>>> >> OpenStack Development Mailing List (not for usage questions) >>>> >> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> > >>>> > >>>> > >>>> __________________________________________________________________________ >>>> > OpenStack Development Mailing List (not for usage questions) >>>> > Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >>> >>> -- >>> 葛馨霓 Xinni Ge >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pliu at redhat.com Wed Jun 6 03:47:59 2018 From: pliu at redhat.com (Peng Liu) Date: Wed, 6 Jun 2018 11:47:59 +0800 Subject: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1 Message-ID: Hi Kuryr-kubernetes team, I'm thinking to propose a new BP to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1 [1], which was drafted by network plumbing working group of kubernetes-sig-network. I'll call it NPWG spec below. 
The purpose of NPWG spec is trying to standardize the multi-network effort around K8S by defining a CRD object 'network' which can be consumed by various CNI plugins. I know there has already been a BP VIF-Handler And Vif Drivers Design, which has designed a set of mechanism to implement the multi-network functionality. However I think it is still worthwhile to support this widely accepted NPWG spec. My proposal is to implement a new vif_driver, which can interpret the PoD annotation and CRD defined by NPWG spec, and attach pod to additional neutron subnet and port accordingly. This new driver should be mutually exclusive with the sriov and additional_subnets drivers.So the endusers can choose either way of using mult-network with kuryr-kubernetes. Please let me know your thought, any comments are welcome. [1] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_ RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd Regards, -- Peng Liu -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Wed Jun 6 04:23:48 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 6 Jun 2018 06:23:48 +0200 Subject: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud In-Reply-To: References: Message-ID: <209aefb7-c8de-1381-4127-03f360068f5e@redhat.com> On 06/05/2018 06:08 PM, Luke Hinds wrote: > > > On Tue, Jun 5, 2018 at 3:44 PM, Cédric Jeanneret > wrote: > > Hello guys! > > I'm currently working on python-tripleoclient in order to squash the > dreadful "NOPASSWD:ALL" allowed to the "stack" user. > > The start was an issue with the rights on some files being wrong (owner > by root instead of stack, in stack home). After some digging and poking, > it appears the undercloud deployment is called with a "sudo openstack > tripleo deploy" command - this, of course, creates some major issues > regarding both security and right management. > > I see a couple of ways to correct that bad situation: > - let the global "sudo" call, and play with setuid/setgid when we > actually don't need the root access (as it's mentioned in this comment¹) > > - drop that global sudo call, and replace all the necessary calls by > some "sudo" when needed. This involves the replacement of native python > code, like "os.mkdir" and the like. > > The first one isn't a solution - code maintenance will not be possible, > having to thing "darn, os.setuid() before calling that, because I don't > need root" is the current way, and it just doesn't apply. > > So I started the second one. It's, of course, longer, not really nice > and painful, but at least this will end to a good status, and not so bad > solution. > > This also meets the current work of the Security Squad about "limiting > sudo rights and accesses". > > For now I don't have a proper patch to show, but it will most probably > appear shortly, as a Work In Progress (I don't think it will be > mergeable before some time, due to all the constraints we have regarding > version portability, new sudoer integration and so on). > > I'll post the relevant review link as an answer of this thread when I > have something I can show. > > Cheers, > > C. > > > Hi Cédric, Hello Luke, > > Pleased to hear you are willing to take this on. Well, we have to ;). > > It makes sense we should co-ordinate efforts here as I have been looking > at the same item, but planned to start with heat-admin over on the > overcloud. yep, took part in some discussions already. 
> > Due to the complexity / level of coverage in the use of sudo, it makes > sense to have a spec where we can then get community consensus on the > approach selected. This is important as it looks like we will need to > have some sort of white list to maintain and make considerations around > functional test coverage in CI (in case someone writes something new > wrapped in sudo). For now, I'm trying to see how's the extend at the code level itself. This also helps me understanding the different things involved, and I also make some archaeology in order to understand the current situation. But indeed, we should push a spec/blueprint in order to get a good idea of the task and open the discussion on a clear basis. > > In regards to your suggested positions within python code such as the > client, its worth looking at oslo.privsep [1] where a decorator can be > used for when needing to setuid. hmm yep, have to understand how to use it - its doc is.. well. kind of sparse. Would be good to get examples. > > Let's discuss this also in the squad meeting tomorrow and try to > synergize approach for all tripleo nix accounts. You can ping me on #tripleo - I go there by Tengu nick. I'm CET (so yeah, already up'n'running ;)). Cheers, C. > > [1] https://github.com/openstack/oslo.privsep > > Cheers, > > Luke > > > ¹ > https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/v1/tripleo_deploy.py#L827-L829 > > > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-de > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mike.carden at gmail.com Wed Jun 6 04:37:06 2018 From: mike.carden at gmail.com (Mike Carden) Date: Wed, 6 Jun 2018 14:37:06 +1000 Subject: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud In-Reply-To: <209aefb7-c8de-1381-4127-03f360068f5e@redhat.com> References: <209aefb7-c8de-1381-4127-03f360068f5e@redhat.com> Message-ID: > > > > In regards to your suggested positions within python code such as the > > client, its worth looking at oslo.privsep [1] where a decorator can be > > used for when needing to setuid. > > hmm yep, have to understand how to use it - its doc is.. well. kind of > sparse. Would be good to get examples. Examples you say? Michael Still has been at that recently: https://www.madebymikal.com/how-to-make-a-privileged-call-with-oslo-privsep/ https://www.madebymikal.com/adding-oslo-privsep-to-a-new-project-a-worked-example/ -- MC -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cjeanner at redhat.com Wed Jun 6 04:46:58 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 6 Jun 2018 06:46:58 +0200 Subject: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud In-Reply-To: References: <209aefb7-c8de-1381-4127-03f360068f5e@redhat.com> Message-ID: <41e95b0f-8d9e-d2a9-b9ba-1317b26624d4@redhat.com> On 06/06/2018 06:37 AM, Mike Carden wrote: > > > In regards to your suggested positions within python code such as the > > client, its worth looking at oslo.privsep [1] where a decorator can be > > used for when needing to setuid. > > hmm yep, have to understand how to use it - its doc is.. well. kind of > sparse. Would be good to get examples. > > > > Examples you say? Michael Still has been at that recently: > > https://www.madebymikal.com/how-to-make-a-privileged-call-with-oslo-privsep/ > https://www.madebymikal.com/adding-oslo-privsep-to-a-new-project-a-worked-example/ \o/ - care to add the links on the doc? Would be really helpful for others I guess :). > > --  > MC > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From mike.carden at gmail.com Wed Jun 6 04:59:42 2018 From: mike.carden at gmail.com (Mike Carden) Date: Wed, 6 Jun 2018 14:59:42 +1000 Subject: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud In-Reply-To: <41e95b0f-8d9e-d2a9-b9ba-1317b26624d4@redhat.com> References: <209aefb7-c8de-1381-4127-03f360068f5e@redhat.com> <41e95b0f-8d9e-d2a9-b9ba-1317b26624d4@redhat.com> Message-ID: > > > \o/ - care to add the links on the doc? Would be really helpful for > others I guess :). > Doc? What doc? -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Wed Jun 6 05:27:57 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 6 Jun 2018 07:27:57 +0200 Subject: [openstack-dev] [tripleo][tripleoclient] No more global sudo for "stack" on the undercloud In-Reply-To: References: <209aefb7-c8de-1381-4127-03f360068f5e@redhat.com> <41e95b0f-8d9e-d2a9-b9ba-1317b26624d4@redhat.com> Message-ID: <08864618-cdfa-8e3f-1c52-53bd2874cc81@redhat.com> On 06/06/2018 06:59 AM, Mike Carden wrote: > > \o/ - care to add the links on the doc? Would be really helpful for > others I guess :). > > > Doc? What doc? This one: https://docs.openstack.org/oslo.privsep/latest/index.html I just created https://review.openstack.org/#/c/572670/ So. back to business: we need some spec and discussions in order to get a consensus and implement best practices. Using privsep will allow to drop the sudo part, as it uses rootwrap instead. This way also allows to filter out the rights, and we can ensure we actually don't let people do bad things. The mentioned blog posts also points to the test process, and shows how we can ensure we actually mock the calls. It also proposes a directory structure, and stress on the way to actually call the privileged methods. 
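To make that concrete, here is a minimal sketch of the pattern (assuming
oslo.privsep's PrivContext/entrypoint API; the context name, config section
and capability list below are illustrative only, not existing tripleoclient
code):

  import os

  from oslo_privsep import capabilities as caps
  from oslo_privsep import priv_context

  # Illustrative privileged context; the section and path values are made
  # up for this example, not real tripleoclient configuration.
  undercloud_ctx = priv_context.PrivContext(
      'tripleoclient',
      cfg_section='tripleoclient_privileged',
      pypath=__name__ + '.undercloud_ctx',
      capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE],
  )

  @undercloud_ctx.entrypoint
  def chown_to_deploy_user(path, uid, gid):
      """Runs in the privsep helper with only the declared capabilities."""
      os.chown(path, uid, gid)

The unprivileged CLI code just calls chown_to_deploy_user(); oslo.privsep
executes the decorated body in its privileged helper process with only the
declared capabilities, instead of relying on a blanket NOPASSWD sudo.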
All of that makes perfectly sense, as it has a simple logic: if you need privileges, show them without any hide-and-seek game. Those advice should be followed, and integrated in any spec/blueprint we're to write prior the implementation. Regarding the tripleoclient part: there's currently one annoying issue, as the generated files aren't owned by the deploy user (usually named "stack"). This isn't a really urgent correction, but I'm pretty sure we have to lock any change toward a "quick'n'dirty resolution". Cheers, C. > > -- > MC >   > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From zhipengh512 at gmail.com Wed Jun 6 06:58:55 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 6 Jun 2018 14:58:55 +0800 Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.06.06 Message-ID: Hi Team, let's resume the team meeting, at today's meeting we need to make decisions on all Rocky critical specs in order to meet MS2 deadline. -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From scheuran at linux.vnet.ibm.com Wed Jun 6 07:20:44 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Wed, 6 Jun 2018 09:20:44 +0200 Subject: [openstack-dev] [neutron][stable] Stepping down from core In-Reply-To: References: Message-ID: Hi Ihar, it was always a pleasure learning from and working with you. Wish you all the best for your new project! --- Andreas Scheuring (andreas_s) On 4. Jun 2018, at 22:31, Ihar Hrachyshka wrote: Hi neutrinos and all, As some of you've already noticed, the last several months I was scaling down my involvement in Neutron and, more generally, OpenStack. I am at a point where I feel confident my disappearance won't disturb the project, and so I am ready to make it official. I am stepping down from all administrative roles I so far accumulated in Neutron and Stable teams. I shifted my focus to another project, and so I just removed myself from all relevant admin groups to reflect the change. It was a nice 4.5 year ride for me. I am very happy with what we achieved in all these years and a bit sad to leave. The community is the most brilliant and compassionate and dedicated to openness group of people I was lucky to work with, and I am reminded daily how awesome it is. I am far from leaving the industry, or networking, or the promise of open source infrastructure, so I am sure we will cross our paths once in a while with most of you. :) I also plan to hang out in our IRC channels and make snarky comments, be aware! 
Thanks for the fish, Ihar __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Wed Jun 6 07:36:45 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 6 Jun 2018 16:36:45 +0900 Subject: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict Message-ID: Hi the release team, When I prepared neutron Rocky-2 deliverables, I noticed a new metadata syntax check which checks README.rst was introduced. As of now, README.rst in networking-bagpipe and networking-ovn hit this [1]. Although they can be fixed in individual projects, what is the current recommended solution? In addition, unfortunately such checks are not run in project gate, so there is no way to detect in advance. I think we need a way to check this when a change is made instead of detecting an error when a release patch is proposed. Thanks, Akihiro (amotoki) [1] http://logs.openstack.org/66/572666/1/check/openstack-tox-validate/b5dde2f/job-output.txt.gz#_2018-06-06_04_09_16_067790 -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed Jun 6 07:42:10 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 6 Jun 2018 15:42:10 +0800 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> <5B0266BE.20700@windriver.com> Message-ID: Hi Blair, Sorry for the late reply, could you elaborate more on the proxy driver idea ? On Mon, May 21, 2018 at 4:05 PM, Blair Bethwaite wrote: > (Please excuse the top-posting) > > The other possibility is that the Cyborg managed devices are plumbed in > via IP in guest network space. Then "attach" isn't so much a Nova problem > as a Neutron one - probably similar to Manila. > > Has the Cyborg team considered a RESTful-API proxy driver, i.e., something > that wraps a vendor-specific accelerator service and makes it friendly to a > multi-tenant OpenStack cloud? Quantum co-processors might be a compelling > example which fit this model. > > Cheers, > > > On Sun., 20 May 2018, 23:28 Chris Friesen, > wrote: > >> On 05/19/2018 05:58 PM, Blair Bethwaite wrote: >> > G'day Jay, >> > >> > On 20 May 2018 at 08:37, Jay Pipes wrote: >> >> If it's not the VM or baremetal machine that is using the accelerator, >> what >> >> is? >> > >> > It will be a VM or BM, but I don't think accelerators should be tied >> > to the life of a single instance if that isn't technically necessary >> > (i.e., they are hot-pluggable devices). I can see plenty of scope for >> > use-cases where Cyborg is managing devices that are accessible to >> > compute infrastructure via network/fabric (e.g. rCUDA or dedicated >> > PCIe fabric). And even in the simple pci passthrough case (vfio or >> > mdev) it isn't hard to imagine use-cases for workloads that only need >> > an accelerator sometimes. >> >> Currently nova only supports attach/detach of volumes and network >> interfaces. 
>> Is Cyborg looking to implement new Compute API operations to support hot >> attach/detach of various types of accelerators? >> >> Chris >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Wed Jun 6 07:42:43 2018 From: soulxu at gmail.com (Alex Xu) Date: Wed, 6 Jun 2018 15:42:43 +0800 Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits In-Reply-To: <37700cc2-a79c-30ea-d986-e18584cc0464@fried.cc> References: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> <37700cc2-a79c-30ea-d986-e18584cc0464@fried.cc> Message-ID: After reading the spec https://review.openstack.org/#/c/554717/14/doc/specs/rocky/cyborg-nova-sched.rst , I confuse on the CUSTOM_ACCELERATOR_FPGA meaning. Initially, I thought it means a region. But after reading the spec, it can be a device, a region or a function. Is it on purpose design? Sounds like we need to have agreement on the naming also. We already have resource class `VGPU`, so we only need to add another resource class 'FPGA'(but same as above question, I thought it should be FPGA_REGION?), is it right? I didn't see any requirement on the prefix 'ACCELERATOR'. 2018-05-31 4:18 GMT+08:00 Eric Fried : > This all sounds fully reasonable to me. One thing, though... > > >> * There is a resource class per device category e.g. > >> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. > > Let's propose standard resource classes for these ASAP. > > https://github.com/openstack/nova/blob/d741f624c81baf89fc8b6b94a2bc20 > eb5355a818/nova/rc_fields.py > > -efried > . > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschadin at sbcloud.ru Wed Jun 6 07:48:43 2018 From: aschadin at sbcloud.ru (=?koi8-r?B?/sHEyc4g4czFy9PBzsTSIPPF0sfFxdfJ3g==?=) Date: Wed, 6 Jun 2018 07:48:43 +0000 Subject: [openstack-dev] [watcher] weekly meeting Message-ID: <72B288E1-9F49-470A-9B64-2A29B95C45D8@sbcloud.ru> Hi Watcher team, We have meeting today at 8:00 UTC on #openstack-meeting-alt channel Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elod.illes at ericsson.com Wed Jun 6 11:28:07 2018 From: elod.illes at ericsson.com (=?UTF-8?B?RWzDtWQgSWxsw6lz?=) Date: Wed, 6 Jun 2018 13:28:07 +0200 Subject: [openstack-dev] [stable][networking-bgpvpn][infra] missing networking-odl repository Message-ID: <62aa4a95-bfd9-ed5d-9c49-dc6db369168c@ericsson.com> Hi, I'm trying to create a fix for the failing networking-bgpvpn stable periodic sphinx-docs job [1], but meanwhile it turned out that other "check" (and possibly "gate") jobs are failing on stable, too, on networking-bgpvpn, because of missing dependency: networking-odl repository (for pep8, py27, py35, cover and even sphinx, too). I submitted a patch a couple of days ago for the stable periodic py27 job [2] and it solved the issue there. But now it seems that every other networking-bgpvpn job needs this fix if it is run against stable branches (something like in this patch [3]). Question: Is there a better way to fix these issues? The common error message of the failing jobs: ********************************** ERROR! /home/zuul/src/git.openstack.org/openstack/networking-odl not found In Zuul v3 all repositories used need to be declared in the 'required-projects' parameter on the job. To fix this issue, add: openstack/networking-odl to 'required-projects'. While you're at it, it's worth noting that zuul-cloner itself is deprecated and this shim is only present for transition purposes. Start thinking about how to rework job content to just use the git repos that zuul will place into /home/zuul/src/git.openstack.org directly. ********************************** [1] https://review.openstack.org/#/c/572368/ [2] https://review.openstack.org/#/c/569111/ [3] https://review.openstack.org/#/c/572495/ Thanks, Előd From rico.lin.guanyu at gmail.com Wed Jun 6 11:33:01 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 6 Jun 2018 19:33:01 +0800 Subject: [openstack-dev] [Openstack-operators][heat] Heat Summit summary and project status Message-ID: Hi all Summit is over for weeks. Would like to share with team on what we got in Summit *Heat Onboarding Session* We didn't get many people shows up in Onboarding session this time, but we do get much more view in our video. Slide: https://www.slideshare.net/GuanYuLin1/openinfra-summit-2018-vancouver-heat-onboarding Video: https://www.youtube.com/watch?v=8rMkxdx5YKE (You can find videos from previous Summits in Slide) *Project Update Session* Slide: https://www.slideshare.net/GuanYuLin1/openinfra-summit-2018-vancouver-heat-project-update Video: https://www.youtube.com/watch?v=h4UXBRo948k (You can find videos from previous Summits in Slide) *User feedback Session* Etherpad: https://etherpad.openstack.org/p/2018-Vancouver-Summit-heat-ops-and-users-feedback (You can find Etherpad from the last Summit in Etherpad) Apparently, we got a lot of users which includes a lot of different domains (at least that's what I felt during summit). And according to feedbacks, I think our plans mostly match with what requirements from users.(if not, it still not too late to provide us feedbacks https://etherpad.openstack.org/p/2018-Vancouver-Summit-heat-ops-and-users-feedback ) *Project Status* Also, we're about to release Rocky-2, so would like to share current project status: We got less bug reported than the last cycle. For features, we seem got less implemented or WIP. 
We do get few WIP or under planned features: Blazar resource support(review in progress) Etcd support(work in progress) Multi-Cloud support (work in progress) Swift store for heat template (can input heat template from Swift) We do need more reviewer and people willing to help with features. For rocky release(about to release rocky-2) we got around 700 reviews Commits: 216 Filed Bugs: 56 Resolved Bugs: 34 (For reference. Here's Queens cycle number: around 1700 reviews, Commits: 417, Filed Bugs: 166, Resolved Bugs: 122 ) -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From akamyshnikova at mirantis.com Wed Jun 6 11:35:21 2018 From: akamyshnikova at mirantis.com (Anna Taraday) Date: Wed, 6 Jun 2018 15:35:21 +0400 Subject: [openstack-dev] [neutron][stable] Stepping down from core In-Reply-To: References: Message-ID: Ihar, Neutron can not be what it is without all your work! Thank you and wish you all the best! On Wed, Jun 6, 2018 at 11:22 AM Andreas Scheuring < scheuran at linux.vnet.ibm.com> wrote: > Hi Ihar, it was always a pleasure learning from and working with you. Wish > you all the best for your new project! > > --- > Andreas Scheuring (andreas_s) > > > > On 4. Jun 2018, at 22:31, Ihar Hrachyshka wrote: > > Hi neutrinos and all, > > As some of you've already noticed, the last several months I was > scaling down my involvement in Neutron and, more generally, OpenStack. > I am at a point where I feel confident my disappearance won't disturb > the project, and so I am ready to make it official. > > I am stepping down from all administrative roles I so far accumulated > in Neutron and Stable teams. I shifted my focus to another project, > and so I just removed myself from all relevant admin groups to reflect > the change. > > It was a nice 4.5 year ride for me. I am very happy with what we > achieved in all these years and a bit sad to leave. The community is > the most brilliant and compassionate and dedicated to openness group > of people I was lucky to work with, and I am reminded daily how > awesome it is. > > I am far from leaving the industry, or networking, or the promise of > open source infrastructure, so I am sure we will cross our paths once > in a while with most of you. :) I also plan to hang out in our IRC > channels and make snarky comments, be aware! > > Thanks for the fish, > Ihar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Ann Taraday -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltomasbo at redhat.com Wed Jun 6 11:44:17 2018 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Wed, 6 Jun 2018 13:44:17 +0200 Subject: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1 In-Reply-To: References: Message-ID: Hi Peng, Thanks for the proposal! 
See below On 06/06/2018 05:47 AM, Peng Liu wrote: > Hi Kuryr-kubernetes team, > > I'm thinking to propose a new BP to support  Kubernetes Network Custom > Resource Definition De-facto Standard Version 1 [1], which was drafted > by network plumbing working group of kubernetes-sig-network. I'll call > it NPWG spec below. > > The purpose of NPWG spec is trying to standardize the multi-network > effort around K8S by defining a CRD object 'network' which can be > consumed by various CNI plugins. I know there has already been a BP > VIF-Handler And Vif Drivers Design, which has designed a set of > mechanism to implement the multi-network functionality. However I think > it is still worthwhile to support this widely accepted NPWG spec. Yes, I agree > > My proposal is to implement a new vif_driver, which can interpret the > PoD annotation and CRD defined by NPWG spec, and attach pod to > additional neutron subnet and port accordingly. This new driver should > be mutually exclusive with the sriov and additional_subnets drivers.So > the endusers can choose either way of using mult-network with > kuryr-kubernetes. Perhaps we can move current kuryr annotations on pods to also use CRDs, defining a standard way (for instance, dict with 'nic-name' : kuryr-port-crd, and then the kuryr-port-crd having the vif information). Cheers, Luis > > Please let me know your thought, any comments are welcome. > > > > [1] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd > > > > Regards, > > -- > Peng Liu > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- LUIS TOMÁS BOLÍVAR SENIOR SOFTWARE ENGINEER Red Hat Madrid, Spain ltomasbo at redhat.com From mbooth at redhat.com Wed Jun 6 11:46:27 2018 From: mbooth at redhat.com (Matthew Booth) Date: Wed, 6 Jun 2018 12:46:27 +0100 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed Message-ID: TL;DR I think we need to entirely disable swap volume for multiattach volumes, and this will be an api breaking change with no immediate workaround. I was looking through tempest and came across api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach. This test does: Create 2 multiattach volumes Create 2 servers Attach volume 1 to both servers ** Swap volume 1 for volume 2 on server 1 ** Check all is attached as expected The problem with this is that swap volume is a copy operation. We don't just replace one volume with another, we copy the contents from one to the other and then do the swap. We do this with a qemu drive mirror operation, which is able to do this copy safely without needing to make the source read-only because it can also track writes to the source and ensure the target is updated again. Here's a link to the libvirt logs showing a drive mirror operation during the swap volume of an execution of the above test: http://logs.openstack.org/58/567258/5/check/nova-multiattach/d23fad8/logs/libvirt/libvirtd.txt.gz#_2018-06-04_10_57_05_201 The problem is that when the volume is attached to more than one VM, the hypervisor doing the drive mirror *doesn't* know about writes on the other attached VMs, so it can't do that copy safely, and the result is data corruption. 
Note that swap volume isn't visible to the guest OS, so this can't be
addressed by the user. This is a data corrupter, and we shouldn't allow it.
However, it is in released code and users might be doing it already, so
disabling it would be a user-visible API change with no immediate workaround.

However, I think we're attempting to do the wrong thing here anyway, and the
above tempest test is explicitly testing behaviour that we don't want. The
use case for swap volume is that a user needs to move volume data for
attached volumes, e.g. to new faster/supported/maintained hardware. With
single attach that's exactly what they get: the end user should never
notice. With multi-attach they don't get that. We're basically forking the
shared volume at a point in time, with the instance which did the swap
writing to the new location while all others continue writing to the old
location. Except that even the fork is broken, because they'll get a
corrupt, inconsistent copy rather than a point-in-time one. I can't think of
a use case for this behaviour, and it certainly doesn't meet the original
design intent.

What they really want is for the multi-attached volume to be copied from
location a to location b and for all attachments to be updated.
Unfortunately I don't think we're going to be in a position to do that any
time soon, but I also think users will be unhappy if they're no longer able
to move data at all because it's multi-attach. We can compromise, though, if
we allow a multiattach volume to be moved as long as it only has a single
attachment. This means the operator can't move this data without disruption
to users, but at least it's not fundamentally immovable. This would require
some cooperation with cinder to achieve, as we need to be able to
temporarily prevent cinder from allowing new attachments. A natural way to
achieve this would be to allow a multi-attach volume with only a single
attachment to be redesignated not multiattach, but there might be others.
The flow would then be:

  Detach volume from server 2
  Set multiattach=False on volume
  Migrate volume on server 1
  Set multiattach=True on volume
  Attach volume to server 2

Combined with a patch to nova to disallow swap_volume on any multiattach
volume, this would then be possible, if inconvenient.

Regardless of any other changes, though, I think it's urgent that we disable
the ability to swap_volume a multiattach volume, because we don't want users
to start using this relatively new, but broken, feature.
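To sketch that flow against today's clients (assuming already-authenticated
novaclient and cinderclient objects; the attach/detach/migrate calls used
here exist, while the multiattach toggle is the hypothetical piece cinder
would need to grow, so it is only shown as a comment):

  def move_multiattach_volume(nova, cinder, volume_id, server2_id, dest_host):
      # 1. Drop the second attachment so only server 1 keeps the volume.
      nova.volumes.delete_server_volume(server2_id, volume_id)

      # 2. Hypothetical: flip the volume to non-multiattach so cinder
      #    refuses new attachments while data moves (no such API today).
      # cinder.volumes.update(volume_id, multiattach=False)

      # 3. Move the data; the single remaining attachment follows the volume.
      cinder.volumes.migrate_volume(volume_id, dest_host,
                                    force_host_copy=False, lock_volume=True)

      # 4. Hypothetical: restore the multiattach property.
      # cinder.volumes.update(volume_id, multiattach=True)

      # 5. Re-attach to the second server.
      nova.volumes.create_server_volume(server2_id, volume_id)

Where exactly that toggle would live (on the volume, the volume type, or a
new lock) is precisely the cinder cooperation mentioned above.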
Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From amrith.kumar at gmail.com Wed Jun 6 12:06:26 2018 From: amrith.kumar at gmail.com (amrith.kumar at gmail.com) Date: Wed, 6 Jun 2018 08:06:26 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1528148963-sup-59@lrrr.local> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <1528148963-sup-59@lrrr.local> Message-ID: <0b6101d3fd8e$cc38bc50$64aa34f0$@gmail.com> > -----Original Message----- > From: Doug Hellmann > Sent: Monday, June 4, 2018 5:52 PM > To: openstack-dev > Subject: Re: [openstack-dev] [tc] Organizational diversity tag > > Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400: > > On 02/06/18 13:23, Doug Hellmann wrote: > > > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: > > >> On 01/06/18 12:18, Doug Hellmann wrote: > > > > > > [snip] > > Apparently enough people see it the way you described that this is > > probably not something we want to actively spread to other projects at > > the moment. > > I am still curious to know which teams have the policy. If it is more > widespread than I realized, maybe it's reasonable to extend it and use it as > the basis for a health check after all. > A while back, Trove had this policy. When Rackspace, HP, and Tesora had core reviewers, (at various times, eBay, IBM and Red Hat also had cores), the agreement was that multiple cores from any one company would not merge a change unless it was an emergency. It was not formally written down (to my knowledge). It worked well, and ensured that the operators didn't get surprised by some unexpected thing that took down their service. -amrith From irenab.dev at gmail.com Wed Jun 6 12:37:47 2018 From: irenab.dev at gmail.com (Irena Berezovsky) Date: Wed, 6 Jun 2018 15:37:47 +0300 Subject: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1 In-Reply-To: References: Message-ID: Sounds like a great initiative. Lets follow up on the proposal by the kuryr-kubernetes blueprint. BR, Irena On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu wrote: > Hi Kuryr-kubernetes team, > > I'm thinking to propose a new BP to support Kubernetes Network Custom > Resource Definition De-facto Standard Version 1 [1], which was drafted by > network plumbing working group of kubernetes-sig-network. I'll call it NPWG > spec below. > > The purpose of NPWG spec is trying to standardize the multi-network effort > around K8S by defining a CRD object 'network' which can be consumed by > various CNI plugins. I know there has already been a BP VIF-Handler And Vif > Drivers Design, which has designed a set of mechanism to implement the > multi-network functionality. However I think it is still worthwhile to > support this widely accepted NPWG spec. > > My proposal is to implement a new vif_driver, which can interpret the PoD > annotation and CRD defined by NPWG spec, and attach pod to additional > neutron subnet and port accordingly. This new driver should be mutually > exclusive with the sriov and additional_subnets drivers.So the endusers can > choose either way of using mult-network with kuryr-kubernetes. > > Please let me know your thought, any comments are welcome. 
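As a rough illustration of what "interpret the PoD annotation" means in practice: the NPWG v1 spec selects the extra networks for a pod via a single annotation, so the proposed vif_driver would essentially be parsing something like the following (sketch only; the annotation key is the one defined by the spec, the helper function is invented):

    import json

    NPWG_NETWORKS_ANNOTATION = 'k8s.v1.cni.cncf.io/networks'

    def requested_networks(pod):
        value = pod['metadata'].get('annotations', {}).get(
            NPWG_NETWORKS_ANNOTATION, '')
        if not value:
            return []
        if value.startswith('['):
            # JSON form, e.g. [{"name": "net-a"}, {"name": "net-b", "interface": "eth1"}]
            return [item['name'] for item in json.loads(value)]
        # Comma-separated short form, e.g. "net-a,net-b"
        return [name.strip() for name in value.split(',')]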
> > > > [1] https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR > 7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd > > > Regards, > > -- > Peng Liu > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Jun 6 12:55:13 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 6 Jun 2018 08:55:13 -0400 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed In-Reply-To: References: Message-ID: On 06/06/2018 07:46 AM, Matthew Booth wrote: > TL;DR I think we need to entirely disable swap volume for multiattach > volumes, and this will be an api breaking change with no immediate > workaround. > > I was looking through tempest and came across > api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach. > This test does: > > Create 2 multiattach volumes > Create 2 servers > Attach volume 1 to both servers > ** Swap volume 1 for volume 2 on server 1 ** > Check all is attached as expected > > The problem with this is that swap volume is a copy operation. Is it, though? The original blueprint and implementation seem to suggest that the swap_volume operation was nothing more than changing the mountpoint for a volume to point to a different location (in a safe manner that didn't lose any reads or writes). https://blueprints.launchpad.net/nova/+spec/volume-swap Nothing about the description of swap_volume() in the virt driver interface mentions swap_volume() being a "copy operation": https://github.com/openstack/nova/blob/76ec078d3781fb55c96d7aaca4fb73a74ce94d96/nova/virt/driver.py#L476 > We don't just replace one volume with another, we copy the contents > from one to the other and then do the swap. We do this with a qemu > drive mirror operation, which is able to do this copy safely without > needing to make the source read-only because it can also track writes > to the source and ensure the target is updated again. Here's a link > to the libvirt logs showing a drive mirror operation during the swap > volume of an execution of the above test: After checking the source code, the libvirt virt driver is the only virt driver that implements swap_volume(), so it looks to me like a public HTTP API method was added that was specific to libvirt's implementation of drive mirroring. Yay, more implementation leaking out through the API. > http://logs.openstack.org/58/567258/5/check/nova-multiattach/d23fad8/logs/libvirt/libvirtd.txt.gz#_2018-06-04_10_57_05_201 > > The problem is that when the volume is attached to more than one VM, > the hypervisor doing the drive mirror *doesn't* know about writes on > the other attached VMs, so it can't do that copy safely, and the > result is data corruption. Would it be possible to swap the volume by doing what Vish originally described in the blueprint: pause the VM, swap the volume mountpoints (potentially after migrating the underlying volume), start the VM? > Note that swap volume isn't visible to the > guest os, so this can't be addressed by the user. This is a data > corrupter, and we shouldn't allow it. 
However, it is in released code > and users might be doing it already, so disabling it would be a > user-visible api change with no immediate workaround. I'd love to know who is actually using the swap_volume() functionality, actually. I'd especially like to know who is using swap_volume() with multiattach. > However, I think we're attempting to do the wrong thing here anyway, > and the above tempest test is explicit testing behaviour that we don't > want. The use case for swap volume is that a user needs to move volume > data for attached volumes, e.g. to new faster/supported/maintained > hardware. Is that the use case? As was typical, there's no mention of a use case on the original blueprint. It just says "This feature allows a user or administrator to transparently swap out a cinder volume that connected to an instance." Which is hardly a use case since it uses the feature name in a description of the feature itself. :( The commit message (there was only a single commit for this functionality [1]) mentions overwriting data on the new volume: Adds support for transparently swapping an attached volume with another volume. Note that this overwrites all data on the new volume with data from the old volume. Yes, that is the commit message in its entirety. Of course, the commit had no documentation at all in it, so there's no ability to understand what the original use case really was here. https://review.openstack.org/#/c/28995/ If the use case was really "that a user needs to move volume data for attached volumes", why not just pause the VM, detach the volume, do a openstack volume migrate to the new destination, reattach the volume and start the VM? That would mean no libvirt/QEMU-specific implementation behaviour leaking out of the public HTTP API and allow the volume service (Cinder) to do its job properly. > With single attach that's exactly what they get: the end > user should never notice. With multi-attach they don't get that. We're > basically forking the shared volume at a point in time, with the > instance which did the swap writing to the new location while all > others continue writing to the old location. Except that even the fork > is broken, because they'll get a corrupt, inconsistent copy rather > than point in time. I can't think of a use case for this behaviour, > and it certainly doesn't meet the original design intent. > > What they really want is for the multi-attached volume to be copied > from location a to location b and for all attachments to be updated. > Unfortunately I don't think we're going to be in a position to do that > any time soon, but I also think users will be unhappy if they're no > longer able to move data at all because it's multi-attach. We can > compromise, though, if we allow a multiattach volume to be moved as > long as it only has a single attachment. This means the operator can't > move this data without disruption to users, but at least it's not > fundamentally immovable. > > This would require some cooperation with cinder to achieve, as we need > to be able to temporarily prevent cinder from allowing new > attachments. A natural way to achieve this would be to allow a > multi-attach volume with only a single attachment to be redesignated > not multiattach, but there might be others. 
The flow would then be: > > Detach volume from server 2 > Set multiattach=False on volume > Migrate volume on server 1 > Set multiattach=True on volume > Attach volume to server 2 > > Combined with a patch to nova to disallow swap_volume on any > multiattach volume, this would then be possible if inconvenient. > > Regardless of any other changes, though, I think it's urgent that we > disable the ability to swap_volume a multiattach volume because we > don't want users to start using this relatively new, but broken, > feature. Or we could deprecate the swap_volume Compute API operation and use Cinder for all of this. But sure, we could also add more cruft to the Compute API and add more conditional "it works but only when X" docs to the API reference. Just my two cents, -jay From alifshit at redhat.com Wed Jun 6 13:10:57 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Wed, 6 Jun 2018 15:10:57 +0200 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed In-Reply-To: References: Message-ID: I think regardless of how we ended up with this situation, we're still in a position where we have a public-facing API that could lead to data-corruption when used in a specific way. That should never be the case. I would think re-using the already possible 400 response code to update-volume when used with a multi-attach volume to indicate that it can't be done, without a new microversion, would be the cleaned way of getting out of this pickle. On Wed, Jun 6, 2018 at 2:55 PM, Jay Pipes wrote: > On 06/06/2018 07:46 AM, Matthew Booth wrote: >> >> TL;DR I think we need to entirely disable swap volume for multiattach >> volumes, and this will be an api breaking change with no immediate >> workaround. >> >> I was looking through tempest and came across >> >> api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach. >> This test does: >> >> Create 2 multiattach volumes >> Create 2 servers >> Attach volume 1 to both servers >> ** Swap volume 1 for volume 2 on server 1 ** >> Check all is attached as expected >> >> The problem with this is that swap volume is a copy operation. > > > Is it, though? The original blueprint and implementation seem to suggest > that the swap_volume operation was nothing more than changing the mountpoint > for a volume to point to a different location (in a safe > manner that didn't lose any reads or writes). > > https://blueprints.launchpad.net/nova/+spec/volume-swap > > Nothing about the description of swap_volume() in the virt driver interface > mentions swap_volume() being a "copy operation": > > https://github.com/openstack/nova/blob/76ec078d3781fb55c96d7aaca4fb73a74ce94d96/nova/virt/driver.py#L476 > >> We don't just replace one volume with another, we copy the contents >> from one to the other and then do the swap. We do this with a qemu >> drive mirror operation, which is able to do this copy safely without >> needing to make the source read-only because it can also track writes >> to the source and ensure the target is updated again. Here's a link >> to the libvirt logs showing a drive mirror operation during the swap >> volume of an execution of the above test: > > After checking the source code, the libvirt virt driver is the only virt > driver that implements swap_volume(), so it looks to me like a public HTTP > API method was added that was specific to libvirt's implementation of drive > mirroring. Yay, more implementation leaking out through the API. 
> >> >> http://logs.openstack.org/58/567258/5/check/nova-multiattach/d23fad8/logs/libvirt/libvirtd.txt.gz#_2018-06-04_10_57_05_201 >> >> The problem is that when the volume is attached to more than one VM, >> the hypervisor doing the drive mirror *doesn't* know about writes on >> the other attached VMs, so it can't do that copy safely, and the >> result is data corruption. > > > Would it be possible to swap the volume by doing what Vish originally > described in the blueprint: pause the VM, swap the volume mountpoints > (potentially after migrating the underlying volume), start the VM? > >> > Note that swap volume isn't visible to the >> >> guest os, so this can't be addressed by the user. This is a data >> corrupter, and we shouldn't allow it. However, it is in released code >> and users might be doing it already, so disabling it would be a >> user-visible api change with no immediate workaround. > > > I'd love to know who is actually using the swap_volume() functionality, > actually. I'd especially like to know who is using swap_volume() with > multiattach. > >> However, I think we're attempting to do the wrong thing here anyway, >> and the above tempest test is explicit testing behaviour that we don't >> want. The use case for swap volume is that a user needs to move volume >> data for attached volumes, e.g. to new faster/supported/maintained >> hardware. > > > Is that the use case? > > As was typical, there's no mention of a use case on the original blueprint. > It just says "This feature allows a user or administrator to transparently > swap out a cinder volume that connected to an instance." Which is hardly a > use case since it uses the feature name in a description of the feature > itself. :( > > The commit message (there was only a single commit for this functionality > [1]) mentions overwriting data on the new volume: > > Adds support for transparently swapping an attached volume with > another volume. Note that this overwrites all data on the new volume > with data from the old volume. > > Yes, that is the commit message in its entirety. Of course, the commit had > no documentation at all in it, so there's no ability to understand what the > original use case really was here. > > https://review.openstack.org/#/c/28995/ > > If the use case was really "that a user needs to move volume data for > attached volumes", why not just pause the VM, detach the volume, do a > openstack volume migrate to the new destination, reattach the volume and > start the VM? That would mean no libvirt/QEMU-specific implementation > behaviour leaking out of the public HTTP API and allow the volume service > (Cinder) to do its job properly. > > >> With single attach that's exactly what they get: the end >> user should never notice. With multi-attach they don't get that. We're >> basically forking the shared volume at a point in time, with the >> instance which did the swap writing to the new location while all >> others continue writing to the old location. Except that even the fork >> is broken, because they'll get a corrupt, inconsistent copy rather >> than point in time. I can't think of a use case for this behaviour, >> and it certainly doesn't meet the original design intent. >> >> What they really want is for the multi-attached volume to be copied >> from location a to location b and for all attachments to be updated. 
>> Unfortunately I don't think we're going to be in a position to do that >> any time soon, but I also think users will be unhappy if they're no >> longer able to move data at all because it's multi-attach. We can >> compromise, though, if we allow a multiattach volume to be moved as >> long as it only has a single attachment. This means the operator can't >> move this data without disruption to users, but at least it's not >> fundamentally immovable. >> >> This would require some cooperation with cinder to achieve, as we need >> to be able to temporarily prevent cinder from allowing new >> attachments. A natural way to achieve this would be to allow a >> multi-attach volume with only a single attachment to be redesignated >> not multiattach, but there might be others. The flow would then be: >> >> Detach volume from server 2 >> Set multiattach=False on volume >> Migrate volume on server 1 >> Set multiattach=True on volume >> Attach volume to server 2 >> >> Combined with a patch to nova to disallow swap_volume on any >> multiattach volume, this would then be possible if inconvenient. >> >> Regardless of any other changes, though, I think it's urgent that we >> disable the ability to swap_volume a multiattach volume because we >> don't want users to start using this relatively new, but broken, >> feature. > > > Or we could deprecate the swap_volume Compute API operation and use Cinder > for all of this. > > But sure, we could also add more cruft to the Compute API and add more > conditional "it works but only when X" docs to the API reference. > > Just my two cents, > -jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From mriedemos at gmail.com Wed Jun 6 13:13:22 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 6 Jun 2018 08:13:22 -0500 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed In-Reply-To: References: Message-ID: <6436e8f7-4177-4a9c-28e5-fdd16b328087@gmail.com> On 6/6/2018 7:55 AM, Jay Pipes wrote: > I'd love to know who is actually using the swap_volume() functionality, > actually. I'd especially like to know who is using swap_volume() with > multiattach. The swap volume API in nova only exists as a callback routine during volume live migration or retype operations. It's admin-only by default on the nova side, and shouldn't be called directly (similar to guest-assisted volume snapshots for NFS and GlusterFS volumes - totally just a callback from Cinder). So during volume retype, cinder will call swap volume in nova and then nova will call another admin-only API in Cinder to tell Cinder, yup we did it or we failed, rollback. The cinder API reference on retype mentions the restrictions about multiattach volumes: https://developer.openstack.org/api-ref/block-storage/v3/#retype-a-volume "Retyping an in-use volume from a multiattach-capable type to a non-multiattach-capable type, or vice-versa, is not supported. It is generally not recommended to retype an in-use multiattach volume if that volume has more than one active read/write attachment." There is no API reference for volume live migration, but it should generally be the same idea. 
The Tempest test for swap volume with multiattach volumes was written before we realized we needed to put restrictions in place *on the cinder side* to limit the behavior. The Tempest test just hits the compute API to verify the plumbing in nova works properly, it doesn't initiate the flow via an actual retype (or volume live migration). -- Thanks, Matt From jaypipes at gmail.com Wed Jun 6 13:24:22 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 6 Jun 2018 09:24:22 -0400 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed In-Reply-To: References: Message-ID: <63fc8b1a-31d6-6d83-c2f5-844083e6a3e8@gmail.com> On 06/06/2018 09:10 AM, Artom Lifshitz wrote: > I think regardless of how we ended up with this situation, we're still > in a position where we have a public-facing API that could lead to > data-corruption when used in a specific way. That should never be the > case. I would think re-using the already possible 400 response code to > update-volume when used with a multi-attach volume to indicate that it > can't be done, without a new microversion, would be the cleaned way of > getting out of this pickle. That's fine, yes. I just think it's worth noting that it's a pickle that we put ourselves in due to an ill-conceived feature and Compute API call. And that we should, you know, try to stop doing that. :) -jay From openstack at sheep.art.pl Wed Jun 6 13:28:18 2018 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Wed, 6 Jun 2018 15:28:18 +0200 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Some of our xstatic packages require an elaborate build process, as they use various javascript-based tools to do the build. In this case, it's more than just minification — it's macros and includes, basically a pre-processor, and it would be very hard to re-create that in Python. Thus, we did some exceptions and included the minified files in those cases. But when it's just about minifying, then the application serving the files is responsible for that, and the xstatic package shouldn't contain those files. On Wed, Jun 6, 2018 at 5:45 AM, Akihiro Motoki wrote: > 2018年6月6日(水) 11:54 Xinni Ge : > >> Hi, akihiro and other guys, >> >> I understand why minified is considered to be non-free, but I was >> confused about the statement >> "At the very least, a non-minified version should be present next to the >> minified version" [1] >> in the documentation. >> >> Actually in existing xstatic repo, I observed several minified files in >> angular_fileupload, jquery-migrate, or bootstrap_scss. >> So, I uploaded those minified files as in the release package of >> angular/material. >> > > Good point. My interpretation is: > - Basically minified files should not be included in xstatic deliverables. > - Even though not suggested, if minified files are included, corresponding > non-minified version must be included. > > Considering this, I believe we should not include minified files for new > xstatic deliverables. > Makes sense? > > >> >> Personally I don't insist on minified files, and I will delete all >> minified files and re-upload the patch. >> Thanks a lot for the advice. >> > > Thanks for understanding and your patience. 
> Let's land pending reviews soon :) > > Akihiro > > >> >> [1] https://docs.openstack.org/horizon/latest/contributor/ >> topics/packaging.html#minified-javascript-policy >> >> ==================== >> Ge Xinni >> Email: xinni.ge1990 at gmail.com >> ==================== >> >> On Tue, Jun 5, 2018 at 8:59 PM, Akihiro Motoki wrote: >> >>> Hi, >>> >>> Sorry for re-using the ancient ML thread. >>> Looking at recent xstatic-* repo reviews, I am a bit afraid that >>> xstatic-cores do not have a common understanding on the principle of >>> xstatic packages. >>> I hope all xstatic-cores re-read "Packing Software" in the horizon >>> contributor docs [1], especially "Minified Javascript policy" [2], >>> carefully. >>> >>> Thanks, >>> Akihiro >>> >>> [1] https://docs.openstack.org/horizon/latest/contributor/ >>> topics/packaging.html >>> [2] https://docs.openstack.org/horizon/latest/ >>> contributor/topics/packaging.html#minified-javascript-policy >>> >>> >>> 2018年4月4日(水) 14:35 Xinni Ge : >>> >>>> Hi Ivan and other Horizon team member, >>>> >>>> Thanks for adding us into xstatic-core group. >>>> But I still need your opinion and help to release the newly-added >>>> xstatic packages to pypi index. >>>> >>>> Current `xstatic-core` group doesn't have the permission to PUSH SIGNED >>>> TAG, and I cannot release the first non-trivial version. >>>> >>>> If I (or maybe Kaz) could be added into xstatic-release group, we can >>>> release all the 8 packages by ourselves. >>>> >>>> Or, we are very appreciate if any member of xstatic-release could help >>>> to do it. >>>> >>>> Just for your quick access, here is the link of access permission page >>>> of one xstatic package. >>>> https://review.openstack.org/#/admin/projects/openstack/ >>>> xstatic-angular-material,access >>>> >>>> -- >>>> Best Regards, >>>> Xinni >>>> >>>> On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara >>>> wrote: >>>> >>>>> Hi Ivan, >>>>> >>>>> >>>>> Thank you very much. >>>>> I've confirmed that all of us have been added to xstatic-core. >>>>> >>>>> As discussed, we will focus on the followings what we added for >>>>> heat-dashboard, will not touch other xstatic repos as core. >>>>> >>>>> xstatic-angular-material >>>>> xstatic-angular-notify >>>>> xstatic-angular-uuid >>>>> xstatic-angular-vis >>>>> xstatic-filesaver >>>>> xstatic-js-yaml >>>>> xstatic-json2yaml >>>>> xstatic-vis >>>>> >>>>> Regards, >>>>> Kaz >>>>> >>>>> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny : >>>>> > Hi Kuz, >>>>> > >>>>> > Don't worry, we're on the same page with you. I added both you, >>>>> Xinni and >>>>> > Keichii to the xstatic-core group. Thank you for your contributions! >>>>> > >>>>> > Regards, >>>>> > Ivan Kolodyazhny, >>>>> > http://blog.e0ne.info/ >>>>> > >>>>> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara >>>>> wrote: >>>>> >> >>>>> >> Hi Ivan & Horizon folks >>>>> >> >>>>> >> >>>>> >> AFAIK, Horizon team had conclusion that you will add the specific >>>>> >> members to xstatic-core, correct ? >>>>> >> Can I ask you to add the following members ? >>>>> >> # All of tree are heat-dashboard core. >>>>> >> >>>>> >> Kazunori Shinohara / ksnhr.tech at gmail.com #myself >>>>> >> Xinni Ge / xinni.ge1990 at gmail.com >>>>> >> Keiichi Hikita / keiichi.hikita at gmail.com >>>>> >> >>>>> >> Please give me a shout, if we are not on same page or any concern. >>>>> >> >>>>> >> Regards, >>>>> >> Kaz >>>>> >> >>>>> >> >>>>> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara : >>>>> >> > Hi Ivan, Akihiro, >>>>> >> > >>>>> >> > >>>>> >> > Thanks for your kind arrangement. 
>>>>> >> > Looking forward to hearing your decision soon. >>>>> >> > >>>>> >> > Regards, >>>>> >> > Kaz >>>>> >> > >>>>> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : >>>>> >> >> HI Team, >>>>> >> >> >>>>> >> >> From my perspective, I'm OK both with #2 and #3 options. I agree >>>>> that >>>>> >> >> #4 >>>>> >> >> could be too complicated for us. Anyway, we've got this topic on >>>>> the >>>>> >> >> meeting >>>>> >> >> agenda [1] so we'll discuss it there too. I'll share our >>>>> decision after >>>>> >> >> the >>>>> >> >> meeting. >>>>> >> >> >>>>> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon >>>>> >> >> >>>>> >> >> >>>>> >> >> >>>>> >> >> Regards, >>>>> >> >> Ivan Kolodyazhny, >>>>> >> >> http://blog.e0ne.info/ >>>>> >> >> >>>>> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki < >>>>> amotoki at gmail.com> >>>>> >> >> wrote: >>>>> >> >>> >>>>> >> >>> Hi Kaz and Ivan, >>>>> >> >>> >>>>> >> >>> Yeah, it is worth discussed officially in the horizon team >>>>> meeting or >>>>> >> >>> the >>>>> >> >>> mailing list thread to get a consensus. >>>>> >> >>> Hopefully you can add this topic to the horizon meeting agenda. >>>>> >> >>> >>>>> >> >>> After sending the previous mail, I noticed anther option. I see >>>>> there >>>>> >> >>> are >>>>> >> >>> several options now. >>>>> >> >>> (1) Keep xstatic-core and horizon-core same. >>>>> >> >>> (2) Add specific members to xstatic-core >>>>> >> >>> (3) Add specific horizon-plugin core to xstatic-core >>>>> >> >>> (4) Split core membership into per-repo basis (perhaps too >>>>> >> >>> complicated!!) >>>>> >> >>> >>>>> >> >>> My current vote is (2) as xstatic-core needs to understand what >>>>> is >>>>> >> >>> xstatic >>>>> >> >>> and how it is maintained. >>>>> >> >>> >>>>> >> >>> Thanks, >>>>> >> >>> Akihiro >>>>> >> >>> >>>>> >> >>> >>>>> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara >>>> >: >>>>> >> >>>> >>>>> >> >>>> Hi Akihiro, >>>>> >> >>>> >>>>> >> >>>> >>>>> >> >>>> Thanks for your comment. >>>>> >> >>>> The background of my request to add us to xstatic-core comes >>>>> from >>>>> >> >>>> Ivan's comment in last PTG's etherpad for heat-dashboard >>>>> discussion. >>>>> >> >>>> >>>>> >> >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg- >>>>> rocky-discussion >>>>> >> >>>> Line135, "we can share ownership if needed - e0ne" >>>>> >> >>>> >>>>> >> >>>> Just in case, could you guys confirm unified opinion on this >>>>> matter >>>>> >> >>>> as >>>>> >> >>>> Horizon team ? >>>>> >> >>>> >>>>> >> >>>> Frankly speaking I'm feeling the benefit to make us >>>>> xstatic-core >>>>> >> >>>> because it's easier & smoother to manage what we are taking for >>>>> >> >>>> heat-dashboard. >>>>> >> >>>> On the other hand, I can understand what Akihiro you are >>>>> saying, the >>>>> >> >>>> newly added repos belong to Horizon project & being managed by >>>>> not >>>>> >> >>>> Horizon core is not consistent. >>>>> >> >>>> Also having exception might make unexpected confusion in near >>>>> future. >>>>> >> >>>> >>>>> >> >>>> Eventually we will follow your opinion, let me hear Horizon >>>>> team's >>>>> >> >>>> conclusion. >>>>> >> >>>> >>>>> >> >>>> Regards, >>>>> >> >>>> Kaz >>>>> >> >>>> >>>>> >> >>>> >>>>> >> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >>>>> >> >>>> > Hi Kaz, >>>>> >> >>>> > >>>>> >> >>>> > These repositories are under horizon project. It looks >>>>> better to >>>>> >> >>>> > keep >>>>> >> >>>> > the >>>>> >> >>>> > current core team. 
>>>>> >> >>>> > It potentially brings some confusion if we treat some horizon >>>>> >> >>>> > plugin >>>>> >> >>>> > team >>>>> >> >>>> > specially. >>>>> >> >>>> > Reviewing xstatic repos would be a small burden, wo I think >>>>> it >>>>> >> >>>> > would >>>>> >> >>>> > work >>>>> >> >>>> > without problem even if only horizon-core can approve xstatic >>>>> >> >>>> > reviews. >>>>> >> >>>> > >>>>> >> >>>> > >>>>> >> >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara < >>>>> ksnhr.tech at gmail.com>: >>>>> >> >>>> >> >>>>> >> >>>> >> Hi Ivan, Horizon folks, >>>>> >> >>>> >> >>>>> >> >>>> >> >>>>> >> >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been >>>>> >> >>>> >> landed. >>>>> >> >>>> >> >>>>> >> >>>> >> In project-config for them, I've set same acl-config as the >>>>> >> >>>> >> existing >>>>> >> >>>> >> xstatic repos. >>>>> >> >>>> >> It means only "xstatic-core" can manage the newly created >>>>> repos on >>>>> >> >>>> >> gerrit. >>>>> >> >>>> >> Could you kindly add "heat-dashboard-core" into >>>>> "xstatic-core" >>>>> >> >>>> >> like as >>>>> >> >>>> >> what horizon-core is doing ? >>>>> >> >>>> >> >>>>> >> >>>> >> xstatic-core >>>>> >> >>>> >> https://review.openstack.org/#/admin/groups/385,members >>>>> >> >>>> >> >>>>> >> >>>> >> heat-dashboard-core >>>>> >> >>>> >> https://review.openstack.org/#/admin/groups/1844,members >>>>> >> >>>> >> >>>>> >> >>>> >> Of course, we will surely touch only what we made, just >>>>> would like >>>>> >> >>>> >> to >>>>> >> >>>> >> manage them smoothly by ourselves. >>>>> >> >>>> >> In case we need to touch the other ones, will ask Horizon >>>>> team for >>>>> >> >>>> >> help. >>>>> >> >>>> >> >>>>> >> >>>> >> Thanks in advance. >>>>> >> >>>> >> >>>>> >> >>>> >> Regards, >>>>> >> >>>> >> Kaz >>>>> >> >>>> >> >>>>> >> >>>> >> >>>>> >> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge >>>> >: >>>>> >> >>>> >> > Hi Horizon Team, >>>>> >> >>>> >> > >>>>> >> >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` >>>>> plugin >>>>> >> >>>> >> > option, >>>>> >> >>>> >> > and submitted a patch for it. >>>>> >> >>>> >> > Could you please help to review the patch. >>>>> >> >>>> >> > >>>>> >> >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >>>>> >> >>>> >> > https://review.openstack.org/#/c/552259/ >>>>> >> >>>> >> > >>>>> >> >>>> >> > Thank you very much. >>>>> >> >>>> >> > >>>>> >> >>>> >> > Best Regards, >>>>> >> >>>> >> > Xinni >>>>> >> >>>> >> > >>>>> >> >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >>>>> >> >>>> >> > >>>>> >> >>>> >> > wrote: >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> Hi Kaz, >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> Thanks for cleaning this up. I put +1 on both of these >>>>> patches >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> Regards, >>>>> >> >>>> >> >> Ivan Kolodyazhny, >>>>> >> >>>> >> >> http://blog.e0ne.info/ >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> wrote: >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> Hi Ivan & Horizon folks, >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> Now we are submitting a couple of patches to have the >>>>> new >>>>> >> >>>> >> >>> xstatic >>>>> >> >>>> >> >>> modules. >>>>> >> >>>> >> >>> Let me request you to have review the following patches. >>>>> >> >>>> >> >>> We need Horizon PTL's +1 to move these forward. 
>>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> project-config >>>>> >> >>>> >> >>> https://review.openstack.org/#/c/551978/ >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> governance >>>>> >> >>>> >> >>> https://review.openstack.org/#/c/551980/ >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> Thanks in advance:) >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> Regards, >>>>> >> >>>> >> >>> Kaz >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >>>>> >> >>>> >> >>> : >>>>> >> >>>> >> >>> > Yes, please do that. We can then discuss in the >>>>> review about >>>>> >> >>>> >> >>> > technical >>>>> >> >>>> >> >>> > details. >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > wrote: >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> Hi, Akihiro >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> Thanks for the quick reply. >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES >>>>> should >>>>> >> >>>> >> >>> >> not >>>>> >> >>>> >> >>> >> be >>>>> >> >>>> >> >>> >> modified. >>>>> >> >>>> >> >>> >> It is much better to enhance horizon plugin settings, >>>>> >> >>>> >> >>> >> and I think maybe there could be one option like >>>>> >> >>>> >> >>> >> ADD_XSTATIC_MODULES. >>>>> >> >>>> >> >>> >> This option adds the plugin's xstatic files in >>>>> >> >>>> >> >>> >> STATICFILES_DIRS. >>>>> >> >>>> >> >>> >> I am considering to add a bug report to describe it >>>>> at >>>>> >> >>>> >> >>> >> first, >>>>> >> >>>> >> >>> >> and >>>>> >> >>>> >> >>> >> give >>>>> >> >>>> >> >>> >> a >>>>> >> >>>> >> >>> >> patch later maybe. >>>>> >> >>>> >> >>> >> Is that ok with the Horizon team? >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> Best Regards. >>>>> >> >>>> >> >>> >> Xinni >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> wrote: >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> Hi Xinni, >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge >>>>> >> >>>> >> >>> >>> : >>>>> >> >>>> >> >>> >>> > Hello Horizon Team, >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > I would like to hear about your opinions about >>>>> how to >>>>> >> >>>> >> >>> >>> > add >>>>> >> >>>> >> >>> >>> > new >>>>> >> >>>> >> >>> >>> > xstatic >>>>> >> >>>> >> >>> >>> > modules to horizon settings. >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party >>>>> files >>>>> >> >>>> >> >>> >>> > issue, >>>>> >> >>>> >> >>> >>> > thanks >>>>> >> >>>> >> >>> >>> > for >>>>> >> >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing >>>>> them and >>>>> >> >>>> >> >>> >>> > referencing as >>>>> >> >>>> >> >>> >>> > new >>>>> >> >>>> >> >>> >>> > xstatic-* libs. >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> Thanks for moving this forward. 
>>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> > So we installed the new xstatic files (not >>>>> uploaded as >>>>> >> >>>> >> >>> >>> > openstack >>>>> >> >>>> >> >>> >>> > official >>>>> >> >>>> >> >>> >>> > repos yet) in our development environment now, but >>>>> >> >>>> >> >>> >>> > hesitate >>>>> >> >>>> >> >>> >>> > to >>>>> >> >>>> >> >>> >>> > decide >>>>> >> >>>> >> >>> >>> > how to >>>>> >> >>>> >> >>> >>> > add the new installed xstatic lib path to >>>>> >> >>>> >> >>> >>> > STATICFILES_DIRS >>>>> >> >>>> >> >>> >>> > in >>>>> >> >>>> >> >>> >>> > openstack_dashboard.settings so that the static >>>>> files >>>>> >> >>>> >> >>> >>> > could >>>>> >> >>>> >> >>> >>> > be >>>>> >> >>>> >> >>> >>> > automatically >>>>> >> >>>> >> >>> >>> > collected by *collectstatic* process. >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >>>>> >> >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the >>>>> relevant >>>>> >> >>>> >> >>> >>> > static >>>>> >> >>>> >> >>> >>> > fils >>>>> >> >>>> >> >>> >>> > are >>>>> >> >>>> >> >>> >>> > added >>>>> >> >>>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon >>>>> plugin >>>>> >> >>>> >> >>> >>> > dashboard. >>>>> >> >>>> >> >>> >>> > We may want new plugin setting keywords ( >>>>> something >>>>> >> >>>> >> >>> >>> > similar >>>>> >> >>>> >> >>> >>> > to >>>>> >> >>>> >> >>> >>> > ADD_JS_FILES) >>>>> >> >>>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly >>>>> update >>>>> >> >>>> >> >>> >>> > STATICFILES_DIRS). >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add >>>>> xstatic >>>>> >> >>>> >> >>> >>> modules >>>>> >> >>>> >> >>> >>> through horizon plugin settings. I don't think it >>>>> is a >>>>> >> >>>> >> >>> >>> good >>>>> >> >>>> >> >>> >>> idea >>>>> >> >>>> >> >>> >>> to >>>>> >> >>>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on >>>>> horizon >>>>> >> >>>> >> >>> >>> plugin >>>>> >> >>>> >> >>> >>> usages. It makes difficult to track why and where a >>>>> >> >>>> >> >>> >>> xstatic >>>>> >> >>>> >> >>> >>> module >>>>> >> >>>> >> >>> >>> in >>>>> >> >>>> >> >>> >>> BASE_XSTATIC_MODULES is used. >>>>> >> >>>> >> >>> >>> Multiple horizon plugins can add a same entry, so >>>>> horizon >>>>> >> >>>> >> >>> >>> code >>>>> >> >>>> >> >>> >>> to >>>>> >> >>>> >> >>> >>> handle plugin settings should merge multiple >>>>> entries to a >>>>> >> >>>> >> >>> >>> single >>>>> >> >>>> >> >>> >>> one >>>>> >> >>>> >> >>> >>> hopefully. >>>>> >> >>>> >> >>> >>> My vote is to enhance the horizon plugin settings. >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> Akihiro >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > Looking forward to hearing any suggestions from >>>>> you >>>>> >> >>>> >> >>> >>> > guys, >>>>> >> >>>> >> >>> >>> > and >>>>> >> >>>> >> >>> >>> > Best Regards, >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > Xinni Ge >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > ______________________________ >>>>> ____________________________________________ >>>>> >> >>>> >> >>> >>> > OpenStack Development Mailing List (not for usage >>>>> >> >>>> >> >>> >>> > questions) >>>>> >> >>>> >> >>> >>> > Unsubscribe: >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > OpenStack-dev-request at lists. 
>>>>> openstack.org?subject:unsubscribe >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> > http://lists.openstack.org/ >>>>> cgi-bin/mailman/listinfo/openstack-dev >>>>> >> >>>> >> >>> >>> > >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> ______________________________ >>>>> ____________________________________________ >>>>> >> >>>> >> >>> >>> OpenStack Development Mailing List (not for usage >>>>> >> >>>> >> >>> >>> questions) >>>>> >> >>>> >> >>> >>> Unsubscribe: >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> >>>>> >> >>>> >> >>> >>> http://lists.openstack.org/ >>>>> cgi-bin/mailman/listinfo/openstack-dev >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> -- >>>>> >> >>>> >> >>> >> 葛馨霓 Xinni Ge >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> ______________________________ >>>>> ____________________________________________ >>>>> >> >>>> >> >>> >> OpenStack Development Mailing List (not for usage >>>>> >> >>>> >> >>> >> questions) >>>>> >> >>>> >> >>> >> Unsubscribe: >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>>> >> >>> >> >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > ______________________________ >>>>> ____________________________________________ >>>>> >> >>>> >> >>> > OpenStack Development Mailing List (not for usage >>>>> questions) >>>>> >> >>>> >> >>> > Unsubscribe: >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>>> >> >>> > >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> ______________________________ >>>>> ____________________________________________ >>>>> >> >>>> >> >>> OpenStack Development Mailing List (not for usage >>>>> questions) >>>>> >> >>>> >> >>> Unsubscribe: >>>>> >> >>>> >> >>> OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> >>>> >> >>> >>>>> >> >>>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> ______________________________ >>>>> ____________________________________________ >>>>> >> >>>> >> >> OpenStack Development Mailing List (not for usage >>>>> questions) >>>>> >> >>>> >> >> Unsubscribe: >>>>> >> >>>> >> >> OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> >>>> >> >> >>>>> >> >>>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>>> >> >> >>>>> >> >>>> >> > >>>>> >> >>>> >> > >>>>> >> >>>> 
>> > >>>>> >> >>>> >> > -- >>>>> >> >>>> >> > 葛馨霓 Xinni Ge >>>>> >> >>>> >> > >>>>> >> >>>> >> > >>>>> >> >>>> >> > >>>>> >> >>>> >> > >>>>> >> >>>> >> > ______________________________ >>>>> ____________________________________________ >>>>> >> >>>> >> > OpenStack Development Mailing List (not for usage >>>>> questions) >>>>> >> >>>> >> > Unsubscribe: >>>>> >> >>>> >> > OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> >>>> >> > >>>>> >> >>>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>>> >> > >>>>> >> >>>> >> >>>>> >> >>>> >> >>>>> >> >>>> >> >>>>> >> >>>> >> ______________________________ >>>>> ____________________________________________ >>>>> >> >>>> >> OpenStack Development Mailing List (not for usage questions) >>>>> >> >>>> >> Unsubscribe: >>>>> >> >>>> >> OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>>> > >>>>> >> >>>> > >>>>> >> >>>> > >>>>> >> >>>> > >>>>> >> >>>> > >>>>> >> >>>> > ____________________________________________________________ >>>>> ______________ >>>>> >> >>>> > OpenStack Development Mailing List (not for usage questions) >>>>> >> >>>> > Unsubscribe: >>>>> >> >>>> > OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>>> > >>>>> >> >>>> >>>>> >> >>>> >>>>> >> >>>> >>>>> >> >>>> ____________________________________________________________ >>>>> ______________ >>>>> >> >>>> OpenStack Development Mailing List (not for usage questions) >>>>> >> >>>> Unsubscribe: >>>>> >> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> >> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>> >>>>> >> >>> >>>>> >> >>> >>>>> >> >>> >>>>> >> >>> ____________________________________________________________ >>>>> ______________ >>>>> >> >>> OpenStack Development Mailing List (not for usage questions) >>>>> >> >>> Unsubscribe: >>>>> >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >>> >>>>> >> >> >>>>> >> >> >>>>> >> >> >>>>> >> >> ____________________________________________________________ >>>>> ______________ >>>>> >> >> OpenStack Development Mailing List (not for usage questions) >>>>> >> >> Unsubscribe: >>>>> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ >>>>> openstack-dev >>>>> >> >> >>>>> >> >>>>> >> ____________________________________________________________ >>>>> ______________ >>>>> >> OpenStack Development Mailing List (not for usage questions) >>>>> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> > >>>>> > >>>>> > >>>>> > ____________________________________________________________ >>>>> ______________ >>>>> > OpenStack Development Mailing List (not for usage questions) >>>>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> > >>>>> >>>>> ____________________________________________________________ >>>>> ______________ >>>>> OpenStack Development Mailing List 
(not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>>> unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>> >>>> >>>> >>>> -- >>>> 葛馨霓 Xinni Ge >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>>> unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jun 6 13:35:59 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jun 2018 13:35:59 +0000 Subject: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict In-Reply-To: References: Message-ID: <20180606133559.gvkktieuvy3ifzo4@yuggoth.org> On 2018-06-06 16:36:45 +0900 (+0900), Akihiro Motoki wrote: [...] > In addition, unfortunately such checks are not run in project gate, > so there is no way to detect in advance. > I think we need a way to check this when a change is made > instead of detecting an error when a release patch is proposed. While I hate to suggest yet another Python PTI addition, for my personal projects I test every commit (essentially a check/gate pipeline job) with: python setup.py check --restructuredtext --strict python setup.py bdist_wheel sdist ...as proof that it hasn't broken sdist/wheel building nor regressed the description-file provided in my setup.cfg. My intent is to add other release artifact tests into the same set so that there are no surprises come release time. We sort of address this case in OpenStack projects by forcing sdist builds in our standard pep8 jobs, so maybe that would be a lower-overhead place to introduce the setup rst check? Brainstorming. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Wed Jun 6 13:43:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 06 Jun 2018 09:43:04 -0400 Subject: [openstack-dev] [release][ptl][doc] openstack-tox-validate: python setup.py check --restructuredtext --strict In-Reply-To: References: Message-ID: <1528292169-sup-6019@lrrr.local> Excerpts from Akihiro Motoki's message of 2018-06-06 16:36:45 +0900: > Hi the release team, > > When I prepared neutron Rocky-2 deliverables, I noticed a new metadata > syntax check > which checks README.rst was introduced. > > As of now, README.rst in networking-bagpipe and networking-ovn hit this [1]. > > Although they can be fixed in individual projects, what is the current > recommended solution? > > In addition, unfortunately such checks are not run in project gate, > so there is no way to detect in advance. > I think we need a way to check this when a change is made > instead of detecting an error when a release patch is proposed. > > Thanks, > Akihiro (amotoki) > > [1] > http://logs.openstack.org/66/572666/1/check/openstack-tox-validate/b5dde2f/job-output.txt.gz#_2018-06-06_04_09_16_067790 I apologize for not following through with more communication when we added this check. We started noticing uploads to PyPI fail because of validation errors in the README.rst files associated with the packages. We think this is a recent change to warehouse (the software that implements PyPI). The new check in the releases repo validation job tries to catch the errors before the upload fails, so they can be fixed. We wanted to start by putting it in the releases repo because it would only block releases, and not block projects from landing other patches. I recommend that projects update their tox.ini to modify their pep8 or linters target (whichever you are using) to add this command: python setup.py check --restructuredtext --strict For the check to run, the 'docutils' package must be installed, so you may have to add that to the test-requirements.txt list. Be forewarned that the error messages can be scant, almost to the point of useless. In some cases the exception has to do with implementation details of the parser, rather than explaining what part of the input triggered the error. Usually the problems are caused by using RST directives that are part of Sphinx but not "core" RST in the README.rst ("code-block" is a common one). If you can't figure out what's wrong, please post a link to the README.rst on the mailing list or in #openstack-docs and someone will try to help you out. Doug From mriedemos at gmail.com Wed Jun 6 14:02:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 6 Jun 2018 09:02:56 -0500 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed In-Reply-To: <63fc8b1a-31d6-6d83-c2f5-844083e6a3e8@gmail.com> References: <63fc8b1a-31d6-6d83-c2f5-844083e6a3e8@gmail.com> Message-ID: On 6/6/2018 8:24 AM, Jay Pipes wrote: > On 06/06/2018 09:10 AM, Artom Lifshitz wrote: >> I think regardless of how we ended up with this situation, we're still >> in a position where we have a public-facing API that could lead to >> data-corruption when used in a specific way. That should never be the >> case. 
I would think re-using the already possible 400 response code to >> update-volume when used with a multi-attach volume to indicate that it >> can't be done, without a new microversion, would be the cleaned way of >> getting out of this pickle. > > That's fine, yes. > > I just think it's worth noting that it's a pickle that we put ourselves > in due to an ill-conceived feature and Compute API call. And that we > should, you know, try to stop doing that. :) > > -jay If we're going to change something, I think it should probably happen on the cinder side when the retype or live migration of the volume is initiated and would do the attachment counting there. So if you're swapping from multiattach volume A to multiattach volume B and either has >1 read/write attachment, then fail with a 400 in the cinder API. We can check those things in the compute API when cinder calls the swap volume API in nova, but: 1. It's racy - cinder is the source of truth on the current state of the attachments. 2. The failure mode is going to be questionable - by the time cinder calls nova to swap the volumes on the compute host, the cinder REST API has long since 202'ed the response to the user and the best nova can do is return a 400 and then cinder has to handle that gracefully and rollback. It would be much cleaner if the volume API just fails fast. -- Thanks, Matt From mbooth at redhat.com Wed Jun 6 14:04:26 2018 From: mbooth at redhat.com (Matthew Booth) Date: Wed, 6 Jun 2018 15:04:26 +0100 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed In-Reply-To: References: Message-ID: On 6 June 2018 at 13:55, Jay Pipes wrote: > On 06/06/2018 07:46 AM, Matthew Booth wrote: >> >> TL;DR I think we need to entirely disable swap volume for multiattach >> volumes, and this will be an api breaking change with no immediate >> workaround. >> >> I was looking through tempest and came across >> >> api.compute.admin.test_volume_swap.TestMultiAttachVolumeSwap.test_volume_swap_with_multiattach. >> This test does: >> >> Create 2 multiattach volumes >> Create 2 servers >> Attach volume 1 to both servers >> ** Swap volume 1 for volume 2 on server 1 ** >> Check all is attached as expected >> >> The problem with this is that swap volume is a copy operation. > > > Is it, though? The original blueprint and implementation seem to suggest > that the swap_volume operation was nothing more than changing the mountpoint > for a volume to point to a different location (in a safe > manner that didn't lose any reads or writes). > > https://blueprints.launchpad.net/nova/+spec/volume-swap > > Nothing about the description of swap_volume() in the virt driver interface > mentions swap_volume() being a "copy operation": > > https://github.com/openstack/nova/blob/76ec078d3781fb55c96d7aaca4fb73a74ce94d96/nova/virt/driver.py#L476 > >> We don't just replace one volume with another, we copy the contents >> from one to the other and then do the swap. We do this with a qemu >> drive mirror operation, which is able to do this copy safely without >> needing to make the source read-only because it can also track writes >> to the source and ensure the target is updated again. 
Here's a link >> to the libvirt logs showing a drive mirror operation during the swap >> volume of an execution of the above test: > > After checking the source code, the libvirt virt driver is the only virt > driver that implements swap_volume(), so it looks to me like a public HTTP > API method was added that was specific to libvirt's implementation of drive > mirroring. Yay, more implementation leaking out through the API. > >> >> http://logs.openstack.org/58/567258/5/check/nova-multiattach/d23fad8/logs/libvirt/libvirtd.txt.gz#_2018-06-04_10_57_05_201 >> >> The problem is that when the volume is attached to more than one VM, >> the hypervisor doing the drive mirror *doesn't* know about writes on >> the other attached VMs, so it can't do that copy safely, and the >> result is data corruption. > > > Would it be possible to swap the volume by doing what Vish originally > described in the blueprint: pause the VM, swap the volume mountpoints > (potentially after migrating the underlying volume), start the VM? > >> > Note that swap volume isn't visible to the >> >> guest os, so this can't be addressed by the user. This is a data >> corrupter, and we shouldn't allow it. However, it is in released code >> and users might be doing it already, so disabling it would be a >> user-visible api change with no immediate workaround. > > > I'd love to know who is actually using the swap_volume() functionality, > actually. I'd especially like to know who is using swap_volume() with > multiattach. > >> However, I think we're attempting to do the wrong thing here anyway, >> and the above tempest test is explicit testing behaviour that we don't >> want. The use case for swap volume is that a user needs to move volume >> data for attached volumes, e.g. to new faster/supported/maintained >> hardware. > > > Is that the use case? > > As was typical, there's no mention of a use case on the original blueprint. > It just says "This feature allows a user or administrator to transparently > swap out a cinder volume that connected to an instance." Which is hardly a > use case since it uses the feature name in a description of the feature > itself. :( > > The commit message (there was only a single commit for this functionality > [1]) mentions overwriting data on the new volume: > > Adds support for transparently swapping an attached volume with > another volume. Note that this overwrites all data on the new volume > with data from the old volume. > > Yes, that is the commit message in its entirety. Of course, the commit had > no documentation at all in it, so there's no ability to understand what the > original use case really was here. > > https://review.openstack.org/#/c/28995/ > > If the use case was really "that a user needs to move volume data for > attached volumes", why not just pause the VM, detach the volume, do a > openstack volume migrate to the new destination, reattach the volume and > start the VM? That would mean no libvirt/QEMU-specific implementation > behaviour leaking out of the public HTTP API and allow the volume service > (Cinder) to do its job properly. I can't comment on how it was originally documented, but I'm confident in the use case. Certainly I know this is how our customers use it. It's the Nova-side implementation of a cinder retype operation. There are a bunch of potential reasons to want to do this, but a specific one that I recall from a customer was that they'd originally stood up their cloud with a single storage array. 
After a while they realised that their cloud was getting way more use than they'd anticipated, and the original array wasn't up to the job either in performance or capacity. They needed to move a bunch of data from the old array to the new one. They did this with a cinder retype. Without refreshing my memory (so please don't sweat the details!), cinder does: Create volume b with the new type (where 'new type' means 'in the new location') { if volume a is attached: nova.swap_volume(volume a, volume b) >>> because cinder can't safely copy itself while we're still writing to it else: copy it some other way presumably } >>> The result of this is that volume b contains a copy of the data of volume a Update volume b to have the same uuid as volume a Delete volume a So the copy operation is fundamental to the purpose of the call. Any other hypervisor driver which implemented it would also have to implement the copy, there's nothing libvirt-specific about it. The contract of the call is: Copy the contents of volume a to volume b and swap volume a for volume b without losing data. We could, as you say, pause the VM during this operation. However, note that this wouldn't solve the problem with multiattach because you'd need to pause *all* attached VMs, so the libvirt implementation which allows you to do it without downtime has no bearing. It would also require 'data downtime' during the copy, which may well take many hours. From discussions with customers, I don't believe this would be well received. We could change the flow in cinder to: Create new volume b for each volumea.attachment: nova.pause(attached vm) cinder does the copy Update volume b to have same uuid as volume a for each volumea.attachment: nova.just_swap_no_copy(volume a, volume b) nova.unpause(attached vm) Delete volume a However, to be safe we'd have to create a mechanism for cinder to 'lock' those pauses, otherwise another operation during the potentially very long period in which our data is unavailable could unlock it and cause the data corruption we're trying to avoid here. In short: * swap_volume only exists because Cinder can't do this * the contract of the call isn't tied to the libvirt driver * pause vs live-copy is an optimisation which doesn't impact the contract of the call * if we removed it anyway, users would be seriously unhappy with the downtime As for swap_volume with multiattach, the use case is exactly the same. The operator finds that their existing storage array is almost full, can't be expanded further, occasionally emits blue smoke, and they're fed up with that vendor anyway because the support contract increases in price by 50% annually. They need to move the data off. Some of the volumes on the array are multiattach. It needs to be possible to move this data *somehow*, even if it's less convenient than with non-multiattach. Incidentally, I believe some users do it for load balancing, so it's routine. >> With single attach that's exactly what they get: the end >> user should never notice. With multi-attach they don't get that. We're >> basically forking the shared volume at a point in time, with the >> instance which did the swap writing to the new location while all >> others continue writing to the old location. Except that even the fork >> is broken, because they'll get a corrupt, inconsistent copy rather >> than point in time. I can't think of a use case for this behaviour, >> and it certainly doesn't meet the original design intent. 
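Restated as rough pseudocode, the attached-volume retype flow described above looks something like the sketch below. Every name in it (create_volume, swap_volume and the other helpers, used as methods here) is a hypothetical illustration of the sequence, not the real Cinder or Nova interface:

    # Simplified sketch of the retype flow for an attached volume.
    # All helper names below are hypothetical, for illustration only.
    def retype_attached_volume(cinder, nova, volume_a, new_type):
        volume_b = cinder.create_volume(size=volume_a.size, volume_type=new_type)
        if volume_a.attachments:
            # Nova owns the copy because the guest may still be writing:
            # it mirrors volume_a onto volume_b and switches the attachment
            # over once the copy has converged.
            nova.swap_volume(volume_a, volume_b)
        else:
            # Nothing is writing, so Cinder can copy the data itself.
            cinder.copy_volume_data(volume_a, volume_b)
        # volume_b now holds the data; give it volume_a's identity so the
        # user keeps the same volume UUID, then drop the old volume.
        cinder.take_over_identity(volume_b, volume_a)
        cinder.delete_volume(volume_a)

The only point of the sketch is to show where the copy happens: when the volume is attached, the copy is delegated to Nova via swap_volume, and that is exactly the step that becomes unsafe once more than one instance can write to volume a.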
>> >> What they really want is for the multi-attached volume to be copied >> from location a to location b and for all attachments to be updated. >> Unfortunately I don't think we're going to be in a position to do that >> any time soon, but I also think users will be unhappy if they're no >> longer able to move data at all because it's multi-attach. We can >> compromise, though, if we allow a multiattach volume to be moved as >> long as it only has a single attachment. This means the operator can't >> move this data without disruption to users, but at least it's not >> fundamentally immovable. >> >> This would require some cooperation with cinder to achieve, as we need >> to be able to temporarily prevent cinder from allowing new >> attachments. A natural way to achieve this would be to allow a >> multi-attach volume with only a single attachment to be redesignated >> not multiattach, but there might be others. The flow would then be: >> >> Detach volume from server 2 >> Set multiattach=False on volume >> Migrate volume on server 1 >> Set multiattach=True on volume >> Attach volume to server 2 >> >> Combined with a patch to nova to disallow swap_volume on any >> multiattach volume, this would then be possible if inconvenient. >> >> Regardless of any other changes, though, I think it's urgent that we >> disable the ability to swap_volume a multiattach volume because we >> don't want users to start using this relatively new, but broken, >> feature. > > > Or we could deprecate the swap_volume Compute API operation and use Cinder > for all of this. As above, Cinder can't do this. The only way to achieve this exclusively in Cinder would be to deprecate retype of an attached volume. Then it would be down to the user to detach the volume, retype, and reattach. As noted, this would be very much user-visible, and the associated downtime would often be large. Red Hat, at least, would have some very unhappy customers. > But sure, we could also add more cruft to the Compute API and add more > conditional "it works but only when X" docs to the API reference. If it was simple and pretty I wouldn't be bothering you with it ;) Thanks, Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From jaypipes at gmail.com Wed Jun 6 14:07:12 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 6 Jun 2018 10:07:12 -0400 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed In-Reply-To: References: <63fc8b1a-31d6-6d83-c2f5-844083e6a3e8@gmail.com> Message-ID: <6aaabb34-2509-39d1-3cb0-a7bb22b6f9df@gmail.com> On 06/06/2018 10:02 AM, Matt Riedemann wrote: > On 6/6/2018 8:24 AM, Jay Pipes wrote: >> On 06/06/2018 09:10 AM, Artom Lifshitz wrote: >>> I think regardless of how we ended up with this situation, we're still >>> in a position where we have a public-facing API that could lead to >>> data-corruption when used in a specific way. That should never be the >>> case. I would think re-using the already possible 400 response code to >>> update-volume when used with a multi-attach volume to indicate that it >>> can't be done, without a new microversion, would be the cleaned way of >>> getting out of this pickle. >> >> That's fine, yes. >> >> I just think it's worth noting that it's a pickle that we put >> ourselves in due to an ill-conceived feature and Compute API call. And >> that we should, you know, try to stop doing that. 
:) >> >> -jay > > If we're going to change something, I think it should probably happen on > the cinder side when the retype or live migration of the volume is > initiated and would do the attachment counting there. > > So if you're swapping from multiattach volume A to multiattach volume B > and either has >1 read/write attachment, then fail with a 400 in the > cinder API. > > We can check those things in the compute API when cinder calls the swap > volume API in nova, but: > > 1. It's racy - cinder is the source of truth on the current state of the > attachments. > > 2. The failure mode is going to be questionable - by the time cinder > calls nova to swap the volumes on the compute host, the cinder REST API > has long since 202'ed the response to the user and the best nova can do > is return a 400 and then cinder has to handle that gracefully and > rollback. It would be much cleaner if the volume API just fails fast. +10 -jay From celebdor at gmail.com Wed Jun 6 15:08:22 2018 From: celebdor at gmail.com (Antoni Segura Puimedon) Date: Wed, 6 Jun 2018 17:08:22 +0200 Subject: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1 In-Reply-To: References: Message-ID: On Wed, Jun 6, 2018 at 2:37 PM, Irena Berezovsky wrote: > Sounds like a great initiative. > > Lets follow up on the proposal by the kuryr-kubernetes blueprint. I fully subscribe what Irena said. Let's get on this quick! > > BR, > Irena > > On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu wrote: >> >> Hi Kuryr-kubernetes team, >> >> I'm thinking to propose a new BP to support Kubernetes Network Custom >> Resource Definition De-facto Standard Version 1 [1], which was drafted by >> network plumbing working group of kubernetes-sig-network. I'll call it NPWG >> spec below. >> >> The purpose of NPWG spec is trying to standardize the multi-network effort >> around K8S by defining a CRD object 'network' which can be consumed by >> various CNI plugins. I know there has already been a BP VIF-Handler And Vif >> Drivers Design, which has designed a set of mechanism to implement the >> multi-network functionality. However I think it is still worthwhile to >> support this widely accepted NPWG spec. >> >> My proposal is to implement a new vif_driver, which can interpret the PoD >> annotation and CRD defined by NPWG spec, and attach pod to additional >> neutron subnet and port accordingly. This new driver should be mutually >> exclusive with the sriov and additional_subnets drivers.So the endusers can >> choose either way of using mult-network with kuryr-kubernetes. >> >> Please let me know your thought, any comments are welcome. 
>> >> >> >> [1] >> https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd >> >> >> Regards, >> >> -- >> Peng Liu >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Wed Jun 6 15:17:13 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 6 Jun 2018 10:17:13 -0500 Subject: [openstack-dev] [PTL] Rocky-2 Milestone Reminder Message-ID: <20180606151713.GA32201@sm-xps> Hey everyone, Just a quick reminder that tomorrow, June 7, is the Rocky-2 milestone. Any projects follow the cycle-with-milestones model should propose a patch to the openstack/releases repo before the end of the day tomorrow to have a b2 release created. Please see the releases repo README for details on how to request a release: https://github.com/openstack/releases/blob/master/README.rst#requesting-a-release Note, you can also use the new-release command rather than manually editing files. After cloning the repo, you would then run something like the following to prepare a milestone 2 release request: $ tox -e venv -- new-release rocky $PROJECT milestone If you have any questions, please stop by #openstack-release and let us know how we can help. ** Note on README files There has been a recent change when uploading to PyPi that will reject packages if their long description has RST formatting errors. To guard against this, we now have validation in place on release patches to validate this before it's too late. If you see failures with the validation job, most likely this will be the cause. The README file in each repo will need to be fixed and the release request will need to be updated to include the commit hash that includes that update. For further details, please see: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131233.html --- Sean (smcginnis) From chris at openstack.org Wed Jun 6 15:18:22 2018 From: chris at openstack.org (Chris Hoge) Date: Wed, 6 Jun 2018 08:18:22 -0700 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker In-Reply-To: References: Message-ID: <56CBB211-C149-4AB5-AC5A-24E1B472F2C8@openstack.org> Hi Zane, Do you think this effort would make sense as a subproject within the Cloud Provider OpenStack repository hosted within the Kubernetes org? We have a solid group of people working on the cloud provider, and while it’s not the same code, it’s a collection of the same expertise and test resources. Even if it's hosted as an OpenStack project, we should still make sure we have documentation and pointers from the kubernetes/cloud-provider-openstack to guide users in the right direction. While I'm not in a position to directly contribute, I'm happy to offer any support I can through the SIG-OpenStack and SIG-Cloud-Provider roles I have in the K8s community. 
-Chris > On Jun 5, 2018, at 9:19 AM, Zane Bitter wrote: > > I've been doing some investigation into the Service Catalog in Kubernetes and how we can get OpenStack resources to show up in the catalog for use by applications running in Kubernetes. (The Big 3 public clouds already support this.) The short answer is via an implementation of something called the Open Service Broker API, but there are shortcuts available to make it easier to do. > > I'm convinced that this is readily achievable and something we ought to do as a community. > > I've put together a (long-winded) FAQ below to answer all of your questions about it. > > Would you be interested in working on a new project to implement this integration? Reply to this thread and let's collect a list of volunteers to form the initial core review team. > > cheers, > Zane. > > > What is the Open Service Broker API? > ------------------------------------ > > The Open Service Broker API[1] is a standard way to expose external resources to applications running in a PaaS. It was originally developed in the context of CloudFoundry, but the same standard was adopted by Kubernetes (and hence OpenShift) in the form of the Service Catalog extension[2]. (The Service Catalog in Kubernetes is the component that calls out to a service broker.) So a single implementation can cover the most popular open-source PaaS offerings. > > In many cases, the services take the form of simply a pre-packaged application that also runs inside the PaaS. But they don't have to be - services can be anything. Provisioning via the service broker ensures that the services requested are tied in to the PaaS's orchestration of the application's lifecycle. > > (This is certainly not the be-all and end-all of integration between OpenStack and containers - we also need ways to tie PaaS-based applications into the OpenStack's orchestration of a larger group of resources. Some applications may even use both. But it's an important part of the story.) > > What sorts of services would OpenStack expose? > ---------------------------------------------- > > Some example use cases might be: > > * The application needs a reliable message queue. Rather than spinning up multiple storage-backed containers with anti-affinity policies and dealing with the overhead of managing e.g. RabbitMQ, the application requests a Zaqar queue from an OpenStack cloud. The overhead of running the queueing service is amortised across all of the applications in the cloud. The queue gets cleaned up correctly when the application is removed, since it is tied into the application definition. > > * The application needs a database. Rather than spinning one up in a storage-backed container and dealing with the overhead of managing it, the application requests a Trove DB from an OpenStack cloud. > > * The application includes a service that needs to run on bare metal for performance reasons (e.g. could also be a database). The application requests a bare-metal server from Nova w/ Ironic for the purpose. (The same applies to requesting a VM, but there are alternatives like KubeVirt - which also operates through the Service Catalog - available for getting a VM in Kubernetes. There are no non-proprietary alternatives for getting a bare-metal server.) > > AWS[3], Azure[4], and GCP[5] all have service brokers available that support these and many more services that they provide. I don't know of any reason in principle not to expose every type of resource that OpenStack provides via a service broker. 
> > How is this different from cloud-provider-openstack? > ---------------------------------------------------- > > The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself to access features of the cloud to provide its service. For example, if k8s needs persistent storage for a container then it can request that from Cinder through cloud-provider-openstack[7]. It can also request a load balancer from Octavia instead of having to start a container running HAProxy to load balance between multiple instances of an application container (thus enabling use of hardware load balancers via the cloud's abstraction for them). > > In contrast, the Service Catalog interface allows the *application* running on Kubernetes to access features of the cloud. > > What does a service broker look like? > ------------------------------------- > > A service broker provides an HTTP API with 5 actions: > > * List the services provided by the broker > * Create an instance of a resource > * Bind the resource into an instance of the application > * Unbind the resource from an instance of the application > * Delete the resource > > The binding step is used for things like providing a set of DB credentials to a container. You can rotate credentials when replacing a container by revoking the existing credentials on unbind and creating a new set on bind, without replacing the entire resource. > > Is there an easier way? > ----------------------- > > Yes! Folks from OpenShift came up with a project called the Automation Broker[8]. To add support for a service to Automation Broker you just create a container with an Ansible playbook to handle each of the actions (create/bind/unbind/delete). This eliminates the need to write another implementation of the service broker API, and allows us to simply write Ansible playbooks instead.[9] > > (Aside: Heat uses a comparable method to allow users to manage an external resource using Mistral workflows: the OS::Mistral::ExternalResource resource type.) > > Support for accessing AWS resources through a service broker is also implemented using these Ansible Playbook Bundles.[3] > > Does this mean maintaining another client interface? > ---------------------------------------------------- > > Maybe not. We already have per-project Python libraries, (deprecated) per-project CLIs, openstackclient CLIs, openstack-sdk, shade, Heat resource plugins, and Horizon dashboards. (Mistral actions are generated automatically from the clients.) Some consolidation is already planned, but it would be great not to require projects to maintain yet another interface. > > One option is to implement a tool that generates a set of playbooks for each of the resources already exposed (via shade) in the OpenStack Ansible modules. Then in theory we'd only need to implement the common parts once, and then every service with support in shade would get this for free. Ideally the same broker could be used against any OpenStack cloud (so e.g. k8s might be running in your private cloud, but you may want its service catalog to allow you to connect to resources in one or more public clouds) - using shade is an advantage there because it is designed to abstract the differences between clouds. > > Another option might be to write or generate Heat templates for each resource type we want to expose. Then we'd only need to implement a common way of creating a Heat stack, and just have a different template for each resource type. 
This is the approach taken by the AWS playbook bundles (except with CloudFormation, obviously). An advantage is that this allows Heat to do any checking and type conversion required on the input parameters. Heat templates can also be made to be fairly cloud-independent, mainly because they make it easier to be explicit about things like ports and subnets than on the command line, where it's more tempting to allow things to happen in a magical but cloud-specific way. > > I'd prefer to go with the pure-Ansible autogenerated way so we can have support for everything, but looking at the GCP[5]/Azure[4]/AWS[3] brokers they have 10, 11 and 17 services respectively, so arguably we could get a comparable number of features exposed without investing crazy amounts of time if we had to write templates explicitly. > > How would authentication work? > ------------------------------ > > There are two main deployment topologies we need to consider: Kubernetes deployed by an OpenStack tenant (Magnum-style, though not necessarily using Magnum) and accessing resources in that tenant's project in the local cloud, or accessing resources in some remote OpenStack cloud. > > We also need to take into account that in the second case, the Kubernetes cluster may 'belong' to a single cloud tenant (as in the first case) or may be shared by applications that each need to authenticate to different OpenStack tenants. (Kubernetes has traditionally assumed the former, but I expect it to move in the direction of allowing the latter, and it's already fairly common for OpenShift deployments.) > > The way e.g. the AWS broker[3] works is that you can either use the credentials provisioned to the VM that k8s is installed on (a 'Role' in AWS parlance - note that this is completely different to a Keystone Role), or supply credentials to authenticate to AWS remotely. > > OpenStack doesn't yet support per-instance credentials, although we're working on it. (One thing to keep in mind is that ideally we'll want a way to provide different permissions to the service broker and cloud-provider-openstack.) An option in the meantime might be to provide a way to set up credentials as part of the k8s installation. We'd also need to have a way to specify credentials manually. Unlike for proprietary clouds, the credentials also need to include the Keystone auth_url. We should try to reuse openstacksdk's clouds.yaml/secure.yaml format[10] if possible. > > The OpenShift Ansible Broker works by starting up an Ansible container on k8s to run a playbook from the bundle, so presumably credentials can be passed as regular k8s secrets. > > In all cases we'll want to encourage users to authenticate using Keystone Application Credentials[11]. > > How would network integration work? > ----------------------------------- > > Kuryr[12] allows us to connect application containers in Kubernetes to Neutron networks in OpenStack. It would be desirable if, when the user requests a VM or bare-metal server through the service broker, it were possible to choose between attaching to the same network as Kubernetes pods, or to a different network. 
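To make the clouds.yaml idea from the authentication section above a little more concrete, a minimal sketch of a broker cloud entry using Keystone application credentials might look like the following (the cloud name and all values are placeholders, and the exact keys should be checked against the openstacksdk configuration docs in [10]):

    clouds:
      broker-cloud:
        auth_type: v3applicationcredential
        auth:
          auth_url: https://keystone.example.com:5000/v3
          application_credential_id: REPLACE_WITH_ID
          application_credential_secret: REPLACE_WITH_SECRET
        region_name: RegionOne

That keeps the Keystone auth_url together with the credential, which is the extra piece OpenStack needs compared to the proprietary clouds.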
> > > [1] https://www.openservicebrokerapi.org/ > [2] https://kubernetes.io/docs/concepts/service-catalog/ > [3] https://github.com/awslabs/aws-servicebroker#aws-service-broker > [4] https://github.com/Azure/open-service-broker-azure#open-service-broker-for-azure > [5] https://github.com/GoogleCloudPlatform/gcp-service-broker#cloud-foundry-service-broker-for-google-cloud-platform > [6] https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md#remove-cloud-provider-code-from-kubernetes-core > [7] https://github.com/kubernetes/cloud-provider-openstack#openstack-cloud-controller-manager > [8] http://automationbroker.io/ > [9] https://docs.openshift.org/latest/apb_devel/index.html > [10] https://docs.openstack.org/openstacksdk/latest/user/config/configuration.html#config-files > [11] https://docs.openstack.org/keystone/latest/user/application_credentials.html > [12] https://docs.openstack.org/kuryr/latest/devref/goals_and_use_cases.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Wed Jun 6 15:44:06 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 6 Jun 2018 11:44:06 -0400 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker In-Reply-To: <56CBB211-C149-4AB5-AC5A-24E1B472F2C8@openstack.org> References: <56CBB211-C149-4AB5-AC5A-24E1B472F2C8@openstack.org> Message-ID: On 06/06/18 11:18, Chris Hoge wrote: > Hi Zane, > > Do you think this effort would make sense as a subproject within the Cloud > Provider OpenStack repository hosted within the Kubernetes org? We have > a solid group of people working on the cloud provider, and while it’s not > the same code, it’s a collection of the same expertise and test resources. TBH, I think it makes more sense as part of the OpenStack community. If you look at how the components interact, it goes: Kubernetes Service Catalog -> Automation Broker -> [this] -> OpenStack So the interfaces with k8s are already well-defined and owned by other teams. It's the interface with OpenStack that requires the closest co-ordination. (Particularly if we end up autogenerating the playbooks from introspection on shade.) If you look at where the other clouds host their service brokers or Ansible Playbook Bundles, they're not part of the equivalent Kubernetes Cloud Providers either. We'll definitely want testing though. Given that this is effectively another user interface to OpenStack, do you think this is an area that OpenLab could help out with? > Even if it's hosted as an OpenStack project, we should still make sure > we have documentation and pointers from the kubernetes/cloud-provider-openstack > to guide users in the right direction. Sure, that makes sense to cross-advertise it to people we know are using k8s on top of OpenStack already. (Although note that k8s does not have to be running on top of OpenStack for the service broker to be useful, unlike the cloud provider.) > While I'm not in a position to directly contribute, I'm happy to offer > any support I can through the SIG-OpenStack and SIG-Cloud-Provider > roles I have in the K8s community. Thanks! cheers, Zane. 
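For reference, the five broker actions listed in Zane's FAQ earlier in the thread correspond roughly to the following Open Service Broker REST calls (paraphrased from the OSB API spec, with request and response bodies omitted):

    GET    /v2/catalog                                        list the services offered
    PUT    /v2/service_instances/{instance_id}                provision a resource
    PUT    /v2/service_instances/{instance_id}/service_bindings/{binding_id}   bind
    DELETE /v2/service_instances/{instance_id}/service_bindings/{binding_id}   unbind
    DELETE /v2/service_instances/{instance_id}                deprovision

An OpenStack broker only needs to answer these calls; with the Automation Broker approach described in the FAQ, per-action Ansible playbooks are what sit behind the provision/bind/unbind/deprovision handlers.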
From davanum at gmail.com Wed Jun 6 15:52:31 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 6 Jun 2018 11:52:31 -0400 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker In-Reply-To: References: <56CBB211-C149-4AB5-AC5A-24E1B472F2C8@openstack.org> Message-ID: "do you think this is an area that OpenLab could help out with?" << YES!! please ping mrhillsman and RuiChen over on #askopenlab -- Dims On Wed, Jun 6, 2018 at 11:44 AM, Zane Bitter wrote: > On 06/06/18 11:18, Chris Hoge wrote: >> >> Hi Zane, >> >> Do you think this effort would make sense as a subproject within the Cloud >> Provider OpenStack repository hosted within the Kubernetes org? We have >> a solid group of people working on the cloud provider, and while it’s not >> the same code, it’s a collection of the same expertise and test resources. > > > TBH, I think it makes more sense as part of the OpenStack community. If you > look at how the components interact, it goes: > > Kubernetes Service Catalog -> Automation Broker -> [this] -> OpenStack > > So the interfaces with k8s are already well-defined and owned by other > teams. It's the interface with OpenStack that requires the closest > co-ordination. (Particularly if we end up autogenerating the playbooks from > introspection on shade.) If you look at where the other clouds host their > service brokers or Ansible Playbook Bundles, they're not part of the > equivalent Kubernetes Cloud Providers either. > > We'll definitely want testing though. Given that this is effectively another > user interface to OpenStack, do you think this is an area that OpenLab could > help out with? > >> Even if it's hosted as an OpenStack project, we should still make sure >> we have documentation and pointers from the >> kubernetes/cloud-provider-openstack >> to guide users in the right direction. > > > Sure, that makes sense to cross-advertise it to people we know are using k8s > on top of OpenStack already. (Although note that k8s does not have to be > running on top of OpenStack for the service broker to be useful, unlike the > cloud provider.) > >> While I'm not in a position to directly contribute, I'm happy to offer >> any support I can through the SIG-OpenStack and SIG-Cloud-Provider >> roles I have in the K8s community. > > > Thanks! > > cheers, > Zane. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From dtantsur at redhat.com Wed Jun 6 16:24:00 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 6 Jun 2018 18:24:00 +0200 Subject: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict In-Reply-To: <20180606133559.gvkktieuvy3ifzo4@yuggoth.org> References: <20180606133559.gvkktieuvy3ifzo4@yuggoth.org> Message-ID: <63136094-a81e-d1a9-8304-c7661a7e152c@redhat.com> In Ironic world we run doc8 on README.rst as part of the pep8 job. Maybe we should make it a common practice? On 06/06/2018 03:35 PM, Jeremy Stanley wrote: > On 2018-06-06 16:36:45 +0900 (+0900), Akihiro Motoki wrote: > [...] >> In addition, unfortunately such checks are not run in project gate, >> so there is no way to detect in advance. 
>> I think we need a way to check this when a change is made >> instead of detecting an error when a release patch is proposed. > > While I hate to suggest yet another Python PTI addition, for my > personal projects I test every commit (essentially a check/gate > pipeline job) with: > > python setup.py check --restructuredtext --strict > python setup.py bdist_wheel sdist > > ...as proof that it hasn't broken sdist/wheel building nor regressed > the description-file provided in my setup.cfg. My intent is to add > other release artifact tests into the same set so that there are no > surprises come release time. > > We sort of address this case in OpenStack projects by forcing sdist > builds in our standard pep8 jobs, so maybe that would be a > lower-overhead place to introduce the setup rst check? > Brainstorming. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Wed Jun 6 16:35:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 06 Jun 2018 12:35:06 -0400 Subject: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict In-Reply-To: <63136094-a81e-d1a9-8304-c7661a7e152c@redhat.com> References: <20180606133559.gvkktieuvy3ifzo4@yuggoth.org> <63136094-a81e-d1a9-8304-c7661a7e152c@redhat.com> Message-ID: <1528302824-sup-2108@lrrr.local> Excerpts from Dmitry Tantsur's message of 2018-06-06 18:24:00 +0200: > In Ironic world we run doc8 on README.rst as part of the pep8 job. Maybe we > should make it a common practice? That seems like it may be a good thing to add, but I don't know that it is sufficient to detect all of the problems that prevent uploading packages because of the README formatting. > > On 06/06/2018 03:35 PM, Jeremy Stanley wrote: > > On 2018-06-06 16:36:45 +0900 (+0900), Akihiro Motoki wrote: > > [...] > >> In addition, unfortunately such checks are not run in project gate, > >> so there is no way to detect in advance. > >> I think we need a way to check this when a change is made > >> instead of detecting an error when a release patch is proposed. > > > > While I hate to suggest yet another Python PTI addition, for my > > personal projects I test every commit (essentially a check/gate > > pipeline job) with: > > > > python setup.py check --restructuredtext --strict > > python setup.py bdist_wheel sdist > > > > ...as proof that it hasn't broken sdist/wheel building nor regressed > > the description-file provided in my setup.cfg. My intent is to add > > other release artifact tests into the same set so that there are no > > surprises come release time. > > > > We sort of address this case in OpenStack projects by forcing sdist > > builds in our standard pep8 jobs, so maybe that would be a > > lower-overhead place to introduce the setup rst check? > > Brainstorming. 
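For projects that want to adopt either suggestion, a sketch of the tox.ini change might look like the following (assuming a typical flake8-based pep8 environment; the existing commands and the exact target name vary per project):

    [testenv:pep8]
    commands =
      flake8 {posargs}
      doc8 README.rst
      python setup.py check --restructuredtext --strict

As Doug noted earlier in the thread, the setup.py check needs docutils to be installed, so docutils (and doc8, if used) would also have to be added to test-requirements.txt.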
> > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From fungi at yuggoth.org Wed Jun 6 16:35:27 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jun 2018 16:35:27 +0000 Subject: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict In-Reply-To: <63136094-a81e-d1a9-8304-c7661a7e152c@redhat.com> References: <20180606133559.gvkktieuvy3ifzo4@yuggoth.org> <63136094-a81e-d1a9-8304-c7661a7e152c@redhat.com> Message-ID: <20180606163526.rynmfp3hr6miznxx@yuggoth.org> On 2018-06-06 18:24:00 +0200 (+0200), Dmitry Tantsur wrote: > In Ironic world we run doc8 on README.rst as part of the pep8 job. > Maybe we should make it a common practice? [...] First, the doc8 tool should be considered generally useful for any project with Sphinx-based documentation, regardless of whether it's a Python project. Second, doc8 isn't going to necessarily turn up the same errors as `python setup.py check --restructuredtext --strict` since the latter is focused on validating that the long description (which _might_ be in a file referenced from your documentation tree, but also might not!) for your Python package is suitable for rendering on PyPI. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Wed Jun 6 17:07:41 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 6 Jun 2018 12:07:41 -0500 Subject: [openstack-dev] [release] openstack-tox-validate: python setup.py check --restructuredtext --strict In-Reply-To: <20180606163526.rynmfp3hr6miznxx@yuggoth.org> References: <20180606133559.gvkktieuvy3ifzo4@yuggoth.org> <63136094-a81e-d1a9-8304-c7661a7e152c@redhat.com> <20180606163526.rynmfp3hr6miznxx@yuggoth.org> Message-ID: On 06/06/2018 11:35 AM, Jeremy Stanley wrote: > On 2018-06-06 18:24:00 +0200 (+0200), Dmitry Tantsur wrote: >> In Ironic world we run doc8 on README.rst as part of the pep8 job. >> Maybe we should make it a common practice? > [...] > > First, the doc8 tool should be considered generally useful for any > project with Sphinx-based documentation, regardless of whether it's > a Python project. Second, doc8 isn't going to necessarily turn up > the same errors as `python setup.py check --restructuredtext > --strict` since the latter is focused on validating that the > long description (which _might_ be in a file referenced from your > documentation tree, but also might not!) for your Python package is > suitable for rendering on PyPI. This is a good point about the README not necessarily being the long description. Another option for teams that have complicated README files that would be a lot of work to make compatible would be to explicitly set the long_description value for the project to something else: https://pythonhosted.org/an_example_pypi_project/setuptools.html From assaf at redhat.com Wed Jun 6 17:17:35 2018 From: assaf at redhat.com (Assaf Muller) Date: Wed, 6 Jun 2018 13:17:35 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: On Tue, May 29, 2018 at 12:41 PM, Mathieu Gagné wrote: > Hi Julia, > > Thanks for the follow up on this topic. 
> > On Tue, May 29, 2018 at 6:55 AM, Julia Kreger > wrote: >> >> These things are not just frustrating, but also very inhibiting for >> part time contributors such as students who may also be time limited. >> Or an operator who noticed something that was clearly a bug and that >> put forth a very minor fix and doesn't have the time to revise it over >> and over. >> > > What I found frustrating is receiving *only* nitpicks, addressing them > to only receive more nitpicks (sometimes from the same reviewer) with > no substantial review on the change itself afterward. > I wouldn't mind addressing nitpicks if more substantial reviews were > made in a timely fashion. The behavior that I've tried to promote in communities I've partaken is: If your review is comprised solely of nits, either abandon it, or don't -1. > > -- > Mathieu > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Wed Jun 6 17:32:06 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 6 Jun 2018 10:32:06 -0700 Subject: [openstack-dev] [nova] proposal to postpone nova-network core functionality removal to Stein In-Reply-To: <1391ee64-90f7-9414-9168-3a4caf495555@gmail.com> References: <29873b6f-8a3c-ae6e-0756-c90d2c52a306@gmail.com> <1391ee64-90f7-9414-9168-3a4caf495555@gmail.com> Message-ID: <29096a1c-493d-2ba3-8ff4-2d0a15731916@gmail.com> On Thu, 31 May 2018 15:04:53 -0500, Matt Riedemann wrote: > On 5/31/2018 1:35 PM, melanie witt wrote: >> >> This cycle at the PTG, we had decided to start making some progress >> toward removing nova-network [1] (thanks to those who have helped!) and >> so far, we've landed some patches to extract common network utilities >> from nova-network core functionality into separate utility modules. And >> we've started proposing removal of nova-network REST APIs [2]. >> >> At the cells v2 sync with operators forum session at the summit [3], we >> learned that CERN is in the middle of migrating from nova-network to >> neutron and that holding off on removal of nova-network core >> functionality until Stein would help them out a lot to have a safety net >> as they continue progressing through the migration. >> >> If we recall correctly, they did say that removal of the nova-network >> REST APIs would not impact their migration and Surya Seetharaman is >> double-checking about that and will get back to us. If so, we were >> thinking we can go ahead and work on nova-network REST API removals this >> cycle to make some progress while holding off on removing the core >> functionality of nova-network until Stein. >> >> I wanted to send this to the ML to let everyone know what we were >> thinking about this and to receive any additional feedback folks might >> have about this plan. 
>> >> Thanks, >> -melanie >> >> [1] https://etherpad.openstack.org/p/nova-ptg-rocky L301 >> [2] https://review.openstack.org/567682 >> [3] >> https://etherpad.openstack.org/p/YVR18-cellsv2-migration-sync-with-operators >> L30 > > As a reminder, this is the etherpad I started to document the nova-net > specific compute REST APIs which are candidates for removal: > > https://etherpad.openstack.org/p/nova-network-removal-rocky Update: In the cells meeting today [4], Surya confirmed that CERN is okay with nova-network REST API pieces being removed this cycle while leaving the core functionality of nova-network intact, as they continue their migration from nova-network to neutron. We're tracking the nova-net REST API removal candidates on the aforementioned nova-network-removal etherpad. -melanie [4] http://eavesdrop.openstack.org/meetings/nova_cells/2018/nova_cells.2018-06-06-17.00.html From johnsomor at gmail.com Wed Jun 6 18:48:01 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 6 Jun 2018 11:48:01 -0700 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <0b6101d3fd8e$cc38bc50$64aa34f0$@gmail.com> References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <1528148963-sup-59@lrrr.local> <0b6101d3fd8e$cc38bc50$64aa34f0$@gmail.com> Message-ID: Octavia also has an informal rule about two cores from the same company merging patches. I support this because it makes sure we have a diverse perspective on the patches. Specifically it has worked well for us as all of the cores have different cloud designs, so it catches anything that would limit/conflict with the different OpenStack topologies. That said, we don't hard enforce this or police it, it is just an informal policy to make sure we get input from the wider team. Currently we only have one company with two cores. That said, my issue with the current diversity calculations is they tend to be skewed by the PTL role. People have a tendency to defer to the PTL to review/comment/merge patches, so if the PTL shares a company with another core the diversity numbers get skewed heavily towards that company. Michael On Wed, Jun 6, 2018 at 5:06 AM, wrote: >> -----Original Message----- >> From: Doug Hellmann >> Sent: Monday, June 4, 2018 5:52 PM >> To: openstack-dev >> Subject: Re: [openstack-dev] [tc] Organizational diversity tag >> >> Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400: >> > On 02/06/18 13:23, Doug Hellmann wrote: >> > > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: >> > >> On 01/06/18 12:18, Doug Hellmann wrote: >> > > >> > > [snip] >> > Apparently enough people see it the way you described that this is >> > probably not something we want to actively spread to other projects at >> > the moment. >> >> I am still curious to know which teams have the policy. If it is more >> widespread than I realized, maybe it's reasonable to extend it and use it as >> the basis for a health check after all. >> > > A while back, Trove had this policy. When Rackspace, HP, and Tesora had core reviewers, (at various times, eBay, IBM and Red Hat also had cores), the agreement was that multiple cores from any one company would not merge a change unless it was an emergency. It was not formally written down (to my knowledge). > > It worked well, and ensured that the operators didn't get surprised by some unexpected thing that took down their service. 
> > -amrith > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Wed Jun 6 18:52:04 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 6 Jun 2018 14:52:04 -0400 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> References: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> Message-ID: <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> On 29/05/18 13:37, Jeremy Stanley wrote: > On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote: >> We allow various open source projects that are not an official >> part of OpenStack or necessarily used by OpenStack to be hosted on >> OpenStack infrastructure - previously under the 'StackForge' >> branding, but now without separate branding. Do we document >> anywhere the terms of service under which we offer such hosting? > > We do so minimally here: > > https://docs.openstack.org/infra/system-config/unofficial_project_hosting.html > > It's linked from this section of the Project Creator’s Guide in the > Infra Manual: > > https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project > > But yes, we should probably add some clarity to that document and > see about making sure it's linked more prominently. We also maintain > some guidelines for reviewers of changes to the > openstack-infra/project-config repository, which has a bit to say > about new repository creation changes: > > https://git.openstack.org/cgit/openstack-infra/project-config/tree/REVIEWING.rst > >> It is my understanding that the infra team will enforce the >> following conditions when a repo import request is received: >> >> * The repo must be licensed under an OSI-approved open source >> license. > > That has been our custom, but we should add a statement to this > effect in the aforementioned document. > >> * If the repo is a fork of another project, there must be (public) >> evidence of an attempt to co-ordinate with the upstream first. > > I don't recall this ever being mandated, though the project-config > reviewers do often provide suggestions to project creators such as > places in the existing community with which they might consider > cooperating/collaborating. We're mandating it for StarlingX, aren't we? AIUI we haven't otherwise forked anything that was still maintained (although we've forked plenty of libraries after establishing that the upstream was moribund). >> Neither of those appears to be documented (specifically, >> https://governance.openstack.org/tc/reference/licensing.html only >> specifies licensing requirements for official projects, libraries >> imported by official projects, and software used by the Infra >> team). > > The Infrastructure team has been granted a fair amount of autonomy > to determine its operating guidelines, and future plans to separate > project hosting further from the OpenStack name (in an attempt to > make it more clear that hosting your project in the infrastructure > is not an endorsement by OpenStack and doesn't make it "part of > OpenStack") make the OpenStack TC governance site a particularly > poor choice of venue to document such things. 
So clearly in the future this will be the responsibility of the Winterscale Infrastructure Council assuming that proposal goes ahead. For now, would it be valuable for the TC to develop some guidelines that will provide the WIC with a solid base it can evolve from once it takes them over, or should we just leave it up to infra's discretion? >> In addition, I think we should require projects hosted on our >> infrastructure to agree to other policies: >> >> * Adhere to the OpenStack Foundation Code of Conduct. > > This seems like a reasonable addition to our hosting requirements. > >> * Not misrepresent their relationship to the official OpenStack >> project or the Foundation. Ideally we'd come up with language that >> they *can* use to describe their status, such as "hosted on the >> OpenStack infrastructure". > > Also a great suggestion. We sort of say that in the "what being an > unoffocial project is not" bullet list, but it could use some > fleshing out. > >> If we don't have place where this kind of thing is documented >> already, I'll submit a review adding one. Does anybody have any >> ideas about a process for ensuring that projects have read and >> agreed to the terms when we add them? > > Adding process forcing active confirmation of such rules seems like > a lot of unnecessary overhead/red tape/bureaucracy. As it stands, > we're working to get rid of active agreement to the ICLA in favor of > simply asserting the DCO in commit messages, so I'm not a fan of > adding some new agreement people have to directly acknowledge along > with associated automation and policing. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Wed Jun 6 19:16:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 06 Jun 2018 15:16:59 -0400 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> References: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> Message-ID: <1528312215-sup-6296@lrrr.local> Excerpts from Zane Bitter's message of 2018-06-06 14:52:04 -0400: > On 29/05/18 13:37, Jeremy Stanley wrote: > > On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote: > >> We allow various open source projects that are not an official > >> part of OpenStack or necessarily used by OpenStack to be hosted on > >> OpenStack infrastructure - previously under the 'StackForge' > >> branding, but now without separate branding. Do we document > >> anywhere the terms of service under which we offer such hosting? > > > > We do so minimally here: > > > > https://docs.openstack.org/infra/system-config/unofficial_project_hosting.html > > > > It's linked from this section of the Project Creator’s Guide in the > > Infra Manual: > > > > https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project > > > > But yes, we should probably add some clarity to that document and > > see about making sure it's linked more prominently. 
We also maintain > > some guidelines for reviewers of changes to the > > openstack-infra/project-config repository, which has a bit to say > > about new repository creation changes: > > > > https://git.openstack.org/cgit/openstack-infra/project-config/tree/REVIEWING.rst > > > >> It is my understanding that the infra team will enforce the > >> following conditions when a repo import request is received: > >> > >> * The repo must be licensed under an OSI-approved open source > >> license. > > > > That has been our custom, but we should add a statement to this > > effect in the aforementioned document. > > > >> * If the repo is a fork of another project, there must be (public) > >> evidence of an attempt to co-ordinate with the upstream first. > > > > I don't recall this ever being mandated, though the project-config > > reviewers do often provide suggestions to project creators such as > > places in the existing community with which they might consider > > cooperating/collaborating. > > We're mandating it for StarlingX, aren't we? We suggested that it would make importing the repositories more palatable, and Dean said he would do it. Which isn't quite the same as making it a requirement. > > AIUI we haven't otherwise forked anything that was still maintained > (although we've forked plenty of libraries after establishing that the > upstream was moribund). Kata has a fork of the kernel, but that feels less controversial because the kernel community expects forks as part of their contribution process. Kata also has a qemu fork, but that is under the kata-containers github org and not our infrastructure. I'm not sure someone outside of our community would differentiate between the two, but maybe they would. Either way, I would like to ensure that someone from Kata is communicating with qemu upstream. > > >> Neither of those appears to be documented (specifically, > >> https://governance.openstack.org/tc/reference/licensing.html only > >> specifies licensing requirements for official projects, libraries > >> imported by official projects, and software used by the Infra > >> team). > > > > The Infrastructure team has been granted a fair amount of autonomy > > to determine its operating guidelines, and future plans to separate > > project hosting further from the OpenStack name (in an attempt to > > make it more clear that hosting your project in the infrastructure > > is not an endorsement by OpenStack and doesn't make it "part of > > OpenStack") make the OpenStack TC governance site a particularly > > poor choice of venue to document such things. > > So clearly in the future this will be the responsibility of the > Winterscale Infrastructure Council assuming that proposal goes ahead. > > For now, would it be valuable for the TC to develop some guidelines that > will provide the WIC with a solid base it can evolve from once it takes > them over, or should we just leave it up to infra's discretion? > > >> In addition, I think we should require projects hosted on our > >> infrastructure to agree to other policies: > >> > >> * Adhere to the OpenStack Foundation Code of Conduct. > > > > This seems like a reasonable addition to our hosting requirements. > > > >> * Not misrepresent their relationship to the official OpenStack > >> project or the Foundation. Ideally we'd come up with language that > >> they *can* use to describe their status, such as "hosted on the > >> OpenStack infrastructure". > > > > Also a great suggestion. 
We sort of say that in the "what being an > > unoffocial project is not" bullet list, but it could use some > > fleshing out. > > > >> If we don't have place where this kind of thing is documented > >> already, I'll submit a review adding one. Does anybody have any > >> ideas about a process for ensuring that projects have read and > >> agreed to the terms when we add them? > > > > Adding process forcing active confirmation of such rules seems like > > a lot of unnecessary overhead/red tape/bureaucracy. As it stands, > > we're working to get rid of active agreement to the ICLA in favor of > > simply asserting the DCO in commit messages, so I'm not a fan of > > adding some new agreement people have to directly acknowledge along > > with associated automation and policing. From anne at openstack.org Wed Jun 6 19:28:25 2018 From: anne at openstack.org (Anne Bertucio) Date: Wed, 6 Jun 2018 12:28:25 -0700 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: <1528312215-sup-6296@lrrr.local> References: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> <1528312215-sup-6296@lrrr.local> Message-ID: <1D069D3C-FCD7-40D9-8555-C0F8DF69DFB1@openstack.org> > Either way, I would like to ensure that someone from > Kata is communicating with qemu upstream. Since probably not too many Kata folks are on the OpenStack dev list (something to tackle in another thread or OSF all-project meeting), chiming in to say yup!, we’ve got QEMU upstream folks in the Kata community, and we’re definitely committed to making sure we communicate with other communities about these things (be it QEMU or another group in the future). Anne Bertucio OpenStack Foundation anne at openstack.org | irc: annabelleB > On Jun 6, 2018, at 12:16 PM, Doug Hellmann wrote: > > Excerpts from Zane Bitter's message of 2018-06-06 14:52:04 -0400: >> On 29/05/18 13:37, Jeremy Stanley wrote: >>> On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote: >>>> We allow various open source projects that are not an official >>>> part of OpenStack or necessarily used by OpenStack to be hosted on >>>> OpenStack infrastructure - previously under the 'StackForge' >>>> branding, but now without separate branding. Do we document >>>> anywhere the terms of service under which we offer such hosting? >>> >>> We do so minimally here: >>> >>> https://docs.openstack.org/infra/system-config/unofficial_project_hosting.html >>> >>> It's linked from this section of the Project Creator’s Guide in the >>> Infra Manual: >>> >>> https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project >>> >>> But yes, we should probably add some clarity to that document and >>> see about making sure it's linked more prominently. We also maintain >>> some guidelines for reviewers of changes to the >>> openstack-infra/project-config repository, which has a bit to say >>> about new repository creation changes: >>> >>> https://git.openstack.org/cgit/openstack-infra/project-config/tree/REVIEWING.rst >>> >>>> It is my understanding that the infra team will enforce the >>>> following conditions when a repo import request is received: >>>> >>>> * The repo must be licensed under an OSI-approved open source >>>> license. >>> >>> That has been our custom, but we should add a statement to this >>> effect in the aforementioned document. >>> >>>> * If the repo is a fork of another project, there must be (public) >>>> evidence of an attempt to co-ordinate with the upstream first. 
>>> >>> I don't recall this ever being mandated, though the project-config >>> reviewers do often provide suggestions to project creators such as >>> places in the existing community with which they might consider >>> cooperating/collaborating. >> >> We're mandating it for StarlingX, aren't we? > > We suggested that it would make importing the repositories more > palatable, and Dean said he would do it. Which isn't quite the same > as making it a requirement. > >> >> AIUI we haven't otherwise forked anything that was still maintained >> (although we've forked plenty of libraries after establishing that the >> upstream was moribund). > > Kata has a fork of the kernel, but that feels less controversial > because the kernel community expects forks as part of their contribution > process. > > Kata also has a qemu fork, but that is under the kata-containers > github org and not our infrastructure. I'm not sure someone outside > of our community would differentiate between the two, but maybe > they would. Either way, I would like to ensure that someone from > Kata is communicating with qemu upstream. > >> >>>> Neither of those appears to be documented (specifically, >>>> https://governance.openstack.org/tc/reference/licensing.html only >>>> specifies licensing requirements for official projects, libraries >>>> imported by official projects, and software used by the Infra >>>> team). >>> >>> The Infrastructure team has been granted a fair amount of autonomy >>> to determine its operating guidelines, and future plans to separate >>> project hosting further from the OpenStack name (in an attempt to >>> make it more clear that hosting your project in the infrastructure >>> is not an endorsement by OpenStack and doesn't make it "part of >>> OpenStack") make the OpenStack TC governance site a particularly >>> poor choice of venue to document such things. >> >> So clearly in the future this will be the responsibility of the >> Winterscale Infrastructure Council assuming that proposal goes ahead. >> >> For now, would it be valuable for the TC to develop some guidelines that >> will provide the WIC with a solid base it can evolve from once it takes >> them over, or should we just leave it up to infra's discretion? >> >>>> In addition, I think we should require projects hosted on our >>>> infrastructure to agree to other policies: >>>> >>>> * Adhere to the OpenStack Foundation Code of Conduct. >>> >>> This seems like a reasonable addition to our hosting requirements. >>> >>>> * Not misrepresent their relationship to the official OpenStack >>>> project or the Foundation. Ideally we'd come up with language that >>>> they *can* use to describe their status, such as "hosted on the >>>> OpenStack infrastructure". >>> >>> Also a great suggestion. We sort of say that in the "what being an >>> unoffocial project is not" bullet list, but it could use some >>> fleshing out. >>> >>>> If we don't have place where this kind of thing is documented >>>> already, I'll submit a review adding one. Does anybody have any >>>> ideas about a process for ensuring that projects have read and >>>> agreed to the terms when we add them? >>> >>> Adding process forcing active confirmation of such rules seems like >>> a lot of unnecessary overhead/red tape/bureaucracy. 
As it stands, >>> we're working to get rid of active agreement to the ICLA in favor of >>> simply asserting the DCO in commit messages, so I'm not a fan of >>> adding some new agreement people have to directly acknowledge along >>> with associated automation and policing. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Wed Jun 6 19:32:19 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 6 Jun 2018 14:32:19 -0500 Subject: [openstack-dev] [neutron] Ports port_binding attribute is changing to an iterable Message-ID: Dear OpenStack Networking community of projects, As part of the implementation of multiple port bindings in the Neutron reference implementation ( https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/portbinding_information_for_nova.html), the port_binding relationship in the Port DB model is changing to be an iterable: https://review.openstack.org/#/c/414251/66/neutron/plugins/ml2/models.py at 64 and its name is being changed to port_bindings: https://review.openstack.org/#/c/571041/4/neutron/plugins/ml2/models.py at 61 Corresponding changes are being made to the Port Oslo Versioned Object: https://review.openstack.org/#/c/414251/66/neutron/objects/ports.py at 285 https://review.openstack.org/#/c/571041/4/neutron/objects/ports.py at 285 I did my best to find usages of these attributes in the Neutron Stadium projects and only found them in networking-odl: https://review.openstack.org/#/c/572212/2/networking_odl/ml2/mech_driver.py. These are the other projects that I checked: - networking-midonet - networking-ovn - networking-bagpipe - networking-bgpvpn - neutron-dynamic-routing - neutron-fwaas - neutron-vpnaas - networking-sfc I STRONGLY ENCOURAGE these projects teams to double check and see if you might be affected. I also encourage projects in the broader OpenStack Networking community of projects to check for possible impacts. We will be holding these two patches until June 14th before merging them. If you need help dealing with the change, please ping me in the Neutron channel Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Jun 6 19:58:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 6 Jun 2018 14:58:26 -0500 Subject: [openstack-dev] [nova][cinder] Update (swap) of multiattach volume should not be allowed In-Reply-To: <6aaabb34-2509-39d1-3cb0-a7bb22b6f9df@gmail.com> References: <63fc8b1a-31d6-6d83-c2f5-844083e6a3e8@gmail.com> <6aaabb34-2509-39d1-3cb0-a7bb22b6f9df@gmail.com> Message-ID: <0ed17c55-edfa-4153-3097-f1c2ab3061d4@gmail.com> Here is the nova patch for those following along: https://review.openstack.org/#/c/572790/ On 6/6/2018 9:07 AM, Jay Pipes wrote: > On 06/06/2018 10:02 AM, Matt Riedemann wrote: >> On 6/6/2018 8:24 AM, Jay Pipes wrote: >>> On 06/06/2018 09:10 AM, Artom Lifshitz wrote: >>>> I think regardless of how we ended up with this situation, we're still >>>> in a position where we have a public-facing API that could lead to >>>> data-corruption when used in a specific way. That should never be the >>>> case. 
I would think re-using the already possible 400 response code to >>>> update-volume when used with a multi-attach volume to indicate that it >>>> can't be done, without a new microversion, would be the cleaned way of >>>> getting out of this pickle. >>> >>> That's fine, yes. >>> >>> I just think it's worth noting that it's a pickle that we put >>> ourselves in due to an ill-conceived feature and Compute API call. >>> And that we should, you know, try to stop doing that. :) >>> >>> -jay >> >> If we're going to change something, I think it should probably happen >> on the cinder side when the retype or live migration of the volume is >> initiated and would do the attachment counting there. >> >> So if you're swapping from multiattach volume A to multiattach volume >> B and either has >1 read/write attachment, then fail with a 400 in the >> cinder API. >> >> We can check those things in the compute API when cinder calls the >> swap volume API in nova, but: >> >> 1. It's racy - cinder is the source of truth on the current state of >> the attachments. >> >> 2. The failure mode is going to be questionable - by the time cinder >> calls nova to swap the volumes on the compute host, the cinder REST >> API has long since 202'ed the response to the user and the best nova >> can do is return a 400 and then cinder has to handle that gracefully >> and rollback. It would be much cleaner if the volume API just fails fast. > > +10 > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Matt From doug at doughellmann.com Wed Jun 6 20:04:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 06 Jun 2018 16:04:19 -0400 Subject: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition Message-ID: <1528313781-sup-5000@lrrr.local> I have started submitting a series of patches to fix up the tox.ini settings for projects as a step towards running "python3 first" [1]. The point of doing this now is to give teams a head start on understanding the work involved as we consider whether to make this a community goal. The current patches are all mechanically generated changes to the basepython value for environments that seem to be likely candidates. They're basically the "easy" part of the transition. I've left any changes that will need more discussion alone for now. In particular, I've skipped over any tox environments with "functional" in the name, since I thought those ran functional tests. Teams will need to decide whether to change those job definitions, or duplicate them and run them under python 2 and 3. Since we are not dropping python 2 support until the U cycle, I suggest going ahead and running the jobs twice. Note that changing the tox settings won't actually change some of the jobs. For example, with our current PTI definition, the documentation and releasenotes jobs do not run under tox. That means those will need to be changed by editing the zuul configuration for the repository. I have started to make notes for tracking the work in https://etherpad.openstack.org/p/python3-first -- including some notes about taking the next step to update the zuul job definitions and common issues we've already encountered to help folks debug job failures. 
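For reference, the mechanical edit in these patches is roughly of the following shape (a minimal sketch only; the environment shown is illustrative, and the real patches touch whichever environments qualify in each repository):

  [testenv:pep8]
  basepython = python3
  # ... the environment's existing deps/commands stay unchanged ...

The only line being added is the basepython one; everything else is whatever the repository already has.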
I could use some help keeping an eye on these changes and getting them through the gate. If you are interested in helping, please leave a comment on the review you are willing to shepherd. Doug [1] https://review.openstack.org/#/q/topic:python3-first+(status:open+OR+status:merged) From fungi at yuggoth.org Wed Jun 6 20:09:36 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jun 2018 20:09:36 +0000 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> References: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> Message-ID: <20180606200936.cjlsrnuihmisvhzt@yuggoth.org> On 2018-06-06 14:52:04 -0400 (-0400), Zane Bitter wrote: > On 29/05/18 13:37, Jeremy Stanley wrote: > > On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote: [...] > > > * If the repo is a fork of another project, there must be (public) > > > evidence of an attempt to co-ordinate with the upstream first. > > > > I don't recall this ever being mandated, though the project-config > > reviewers do often provide suggestions to project creators such as > > places in the existing community with which they might consider > > cooperating/collaborating. > > We're mandating it for StarlingX, aren't we? This goes back to depending on what you mean by "we" but assuming you mean those of us who were in the community track Forum room at the end of the day on Thursday, a number of us seemed to be in support of that idea including Dean (who was going to do the work to make it happen) and Jonathan (as OSF executive director). Far from a mandate, and definitely a rare enough situation that recording a hard and fast rule is not a useful way to spend our valuable time. > AIUI we haven't otherwise forked anything that was still maintained > (although we've forked plenty of libraries after establishing that the > upstream was moribund). All the Debian packaging, when we were hosting it (before it got retired and moved back to Debian's repository hosting) was implemented as forks of our Git repositories. The Infra team also maintains a fork of Gerrit (for the purposes of backporting bug fixes from later versions until we're ready to upgrade what we're running), and has some forks of other things which are basically dead upstream (lodgeit) or where we're stuck carrying support for very old versions of stuff that upstream has since moved on from (puppet-apache). Forks are not necessarily inherently bad, and usually the story around each one is somewhat unique. > > > Neither of those appears to be documented (specifically, > > > https://governance.openstack.org/tc/reference/licensing.html only > > > specifies licensing requirements for official projects, libraries > > > imported by official projects, and software used by the Infra > > > team). > > > > The Infrastructure team has been granted a fair amount of autonomy > > to determine its operating guidelines, and future plans to separate > > project hosting further from the OpenStack name (in an attempt to > > make it more clear that hosting your project in the infrastructure > > is not an endorsement by OpenStack and doesn't make it "part of > > OpenStack") make the OpenStack TC governance site a particularly > > poor choice of venue to document such things. > > So clearly in the future this will be the responsibility of the > Winterscale Infrastructure Council assuming that proposal goes > ahead. 
> > For now, would it be valuable for the TC to develop some > guidelines that will provide the WIC with a solid base it can > evolve from once it takes them over, or should we just leave it up > to infra's discretion? [...] My opinion is that helping clarify the terms of service documentation the Infra team is already maintaining is great, but putting hosting terms of service in the TC governance repo is likely a poor choice of venue. In the past it has fallen to the Infra team to help people come to the right conclusions as to what sorts of behaviors are acceptable, but we've preferred to avoid having lots of proscriptive rules and beating people into submission with them. I think we'd all like this to remain a fun and friendly place to get things done. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Jun 6 20:11:13 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jun 2018 20:11:13 +0000 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: <1528312215-sup-6296@lrrr.local> References: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> <1528312215-sup-6296@lrrr.local> Message-ID: <20180606201112.5ldwper2qxo7x7vq@yuggoth.org> On 2018-06-06 15:16:59 -0400 (-0400), Doug Hellmann wrote: [...] > Kata also has a qemu fork, but that is under the kata-containers > github org and not our infrastructure. I'm not sure someone outside > of our community would differentiate between the two, but maybe > they would. [...] The Kata community (currently) hosts all their work in GitHub rather than our infrastructure, so I'm not sure that's an altogether useful distinction. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Wed Jun 6 20:12:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 06 Jun 2018 16:12:26 -0400 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: <1D069D3C-FCD7-40D9-8555-C0F8DF69DFB1@openstack.org> References: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> <1528312215-sup-6296@lrrr.local> <1D069D3C-FCD7-40D9-8555-C0F8DF69DFB1@openstack.org> Message-ID: <1528315932-sup-2444@lrrr.local> Excerpts from Anne Bertucio's message of 2018-06-06 12:28:25 -0700: > > Either way, I would like to ensure that someone from > > Kata is communicating with qemu upstream. > > Since probably not too many Kata folks are on the OpenStack dev list (something to tackle in another thread or OSF all-project meeting), chiming in to say yup!, we’ve got QEMU upstream folks in the Kata community, and we’re definitely committed to making sure we communicate with other communities about these things (be it QEMU or another group in the future). > > > Anne Bertucio > OpenStack Foundation > anne at openstack.org | irc: annabelleB Thanks for confirming that, Anne! 
Doug From sean.mcginnis at gmx.com Wed Jun 6 20:14:48 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 6 Jun 2018 15:14:48 -0500 Subject: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition In-Reply-To: <1528313781-sup-5000@lrrr.local> References: <1528313781-sup-5000@lrrr.local> Message-ID: <5a1f84c9-8037-71d3-2611-33909099f842@gmx.com> On 06/06/2018 03:04 PM, Doug Hellmann wrote: > I have started submitting a series of patches to fix up the tox.ini > settings for projects as a step towards running "python3 first" > [1]. The point of doing this now is to give teams a head start on > understanding the work involved as we consider whether to make this > a community goal. I would ask that you stop. While I think this is useful as a quick way of finding out which projects will require additional work here and which don't, this is just creating a lot of work and overlap. Some teams are not ready to take this on right now. So unless you are planning on actually following through with making the failing ones work, it is just adding to the set of failing patches in their review queue. Other teams are already working on this and working through the failures due to the differences between python 2 and 3. So these just end up being duplication and a distraction for limited review capacity. > > The current patches are all mechanically generated changes to the > basepython value for environments that seem to be likely candidates. > They're basically the "easy" part of the transition. I've left any > changes that will need more discussion alone for now. > > In particular, I've skipped over any tox environments with "functional" > in the name, since I thought those ran functional tests. Teams will > need to decide whether to change those job definitions, or duplicate > them and run them under python 2 and 3. Since we are not dropping > python 2 support until the U cycle, I suggest going ahead and running > the jobs twice. > > Note that changing the tox settings won't actually change some of the > jobs. For example, with our current PTI definition, the documentation > and releasenotes jobs do not run under tox. That means those will need > to be changed by editing the zuul configuration for the repository. > > I have started to make notes for tracking the work in > https://etherpad.openstack.org/p/python3-first -- including some notes > about taking the next step to update the zuul job definitions and common > issues we've already encountered to help folks debug job failures. > > I could use some help keeping an eye on these changes and getting > them through the gate. If you are interested in helping, please > leave a comment on the review you are willing to shepherd. > > Doug > > [1] https://review.openstack.org/#/q/topic:python3-first+(status:open+OR+status:merged) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From hamzy at us.ibm.com Wed Jun 6 20:15:00 2018 From: hamzy at us.ibm.com (Mark Hamzy) Date: Wed, 6 Jun 2018 15:15:00 -0500 Subject: [openstack-dev] [tripleo][heat] where does ip_netmask in network_config come from? Message-ID: When the system boots up, the IP addresses seem correct: Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno5: | True | . | . | . 
| 6c:ae:8b:25:34:ed | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno4: | True | 9.114.118.241 | 255.255.255.0 | . | 6c:ae:8b:25:34:ec | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno4: | True | . | . | d | 6c:ae:8b:25:34:ec | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | enp0s29u1u1u5: | True | . | . | . | 6e:ae:8b:25:34:e9 | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | enp0s29u1u1u5: | True | . | . | d | 6e:ae:8b:25:34:e9 | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | lo: | True | 127.0.0.1 | 255.0.0.0 | . | . | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | lo: | True | . | . | d | . | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno3: | True | 9.114.219.197 | 255.255.255.0 | . | 6c:ae:8b:25:34:eb | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno3: | True | . | . | d | 6c:ae:8b:25:34:eb | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno2: | True | 9.114.219.44 | 255.255.255.0 | . | 6c:ae:8b:25:34:ea | Jun 6 12:43:07 overcloud-controller-0 cloud-init: ci-info: | eno2: | True | . | . | d | 6c:ae:8b:25:34:ea | However, I am seeing the following when run-os-net-config.sh is run. I put in a (sudo ip route; sudo ip -o address; sudo ip route get to ${METADATA_IP}) before the ping check: default via 9.114.219.254 dev eno3 proto dhcp metric 101 9.114.219.0/24 dev br-ex proto kernel scope link src 9.114.219.193 9.114.219.0/24 dev eno2 proto kernel scope link src 9.114.219.193 9.114.219.0/24 dev eno3 proto kernel scope link src 9.114.219.197 metric 101 169.254.95.0/24 dev enp0s29u1u1u5 proto kernel scope link src 169.254.95.120 metric 103 169.254.169.254 via 9.114.219.30 dev eno2 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever 2: eno2 inet 9.114.219.193/24 brd 9.114.219.255 scope global eno2\ valid_lft forever preferred_lft forever 2: eno2 inet6 fe80::6eae:8bff:fe25:34ea/64 scope link tentative \ valid_lft forever preferred_lft forever 3: eno3 inet 9.114.219.197/24 brd 9.114.219.255 scope global noprefixroute dynamic eno3\ valid_lft 538sec preferred_lft 538sec 3: eno3 inet6 fd55:faaf:e1ab:3d9:6eae:8bff:fe25:34eb/64 scope global mngtmpaddr dynamic \ valid_lft 2591961sec preferred_lft 604761sec 3: eno3 inet6 fe80::6eae:8bff:fe25:34eb/64 scope link \ valid_lft forever preferred_lft forever 6: enp0s29u1u1u5 inet 169.254.95.120/24 brd 169.254.95.255 scope link noprefixroute dynamic enp0s29u1u1u5\ valid_lft 539sec preferred_lft 539sec 6: enp0s29u1u1u5 inet6 fe80::6cae:8bff:fe25:34e9/64 scope link \ valid_lft forever preferred_lft forever 8: br-ex inet 9.114.219.193/24 brd 9.114.219.255 scope global br-ex\ valid_lft forever preferred_lft forever 8: br-ex inet6 fe80::6eae:8bff:fe25:34ec/64 scope link \ valid_lft forever preferred_lft forever 9.114.219.30 dev br-ex src 9.114.219.193 cache Trying to ping metadata IP 9.114.219.30...FAILURE It seems like the data is coming from: [root at overcloud-controller-0 ~]# cat /etc/os-net-config/config.json {"network_config": [{"addresses": [{"ip_netmask": "9.114.219.196/24"}], "dns_servers": ["8.8.8.8", "8.8.4.4"], "name": "nic1", "routes": [{"ip_netmask": "169.254.169.254/32", "next_hop": "9.114.219.30"}], "type": "interface", "use_dhcp": false}, {"addresses": [{"ip_netmask": "9.114.219.196/24"}], "dns_servers": ["8.8.8.8", "8.8.4.4"], "members": [{"name": "nic3", "primary": true, "type": "interface"}], "name": "br-ex", 
"routes": [{"default": true, "next_hop": "9.114.118.254"}], "type": "ovs_bridge", "use_dhcp": false}]} Also in the log I see: ... Jun 6 12:45:15 overcloud-controller-0 os-collect-config: [2018/06/06 12:43:53 PM] [INFO] Active nics are ['eno2', 'eno3', 'eno4'] Jun 6 12:45:15 overcloud-controller-0 os-collect-config: [2018/06/06 12:43:53 PM] [INFO] nic1 mapped to: eno2 Jun 6 12:45:15 overcloud-controller-0 os-collect-config: [2018/06/06 12:43:53 PM] [INFO] nic2 mapped to: eno3 Jun 6 12:45:15 overcloud-controller-0 os-collect-config: [2018/06/06 12:43:53 PM] [INFO] nic3 mapped to: eno4 ... templates/nic-configs/controller.yaml has the following section: ... $network_config: network_config: - type: interface name: nic1 use_dhcp: false dns_servers: get_param: DnsServers addresses: - ip_netmask: list_join: - / - - get_param: ControlPlaneIp - get_param: ControlPlaneSubnetCidr routes: - ip_netmask: 169.254.169.254/32 next_hop: get_param: EC2MetadataIp - type: ovs_bridge name: bridge_name dns_servers: get_param: DnsServers use_dhcp: false addresses: - ip_netmask: get_param: ExternalIpSubnet routes: - default: true next_hop: get_param: ExternalInterfaceDefaultRoute members: - type: interface name: nic3 primary: true ... (undercloud) [stack at oscloud5 ~]$ grep External templates/environments/network-environment.yaml ExternalNetCidr: 9.114.118.0/24 ExternalNetworkVlanID: 10 ExternalAllocationPools: [{'start': '9.114.118.240', 'end': '9.114.118.248'}] ExternalInterfaceDefaultRoute: 9.114.118.254 (undercloud) [stack at oscloud5 ~]$ grep Control templates/environments/network-environment.yaml # Port assignments for the Controller OS::TripleO::Controller::Net::SoftwareConfig: ControlPlaneSubnetCidr: '24' ControlPlaneDefaultRoute: 9.114.219.254 Also is there a way to dump heat variables? -- Mark You must be the change you wish to see in the world. -- Mahatma Gandhi Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present. -- Marcus Aurelius -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Jun 6 20:25:03 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 06 Jun 2018 13:25:03 -0700 Subject: [openstack-dev] [all] Zuul updating Ansible in use to Ansible 2.5 Message-ID: <1528316703.2413450.1398900544.02A900E3@webmail.messagingengine.com> Zuul will be updating the version of Ansible it uses to run jobs from version 2.3 to 2.5 tomorrow, June 7, 2018. The Infra team will followup shortly after and get that update deployed. Other users have apparently checked that this works in general and we have tests that exercise some basic integration with Ansible so we don't expect major breakages. However, should you notice anything new/different/broken feel free to reach out to the Infra team. You may notice there will be new deprecation warnings from Ansible particularly around our use of the include directive. Version 2.3 doesn't have the non deprecated directives available to it so we will have to transition after the upgrade. 
Thank you for your patience, Clark (and the Infra team) From doug at doughellmann.com Wed Jun 6 20:27:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 06 Jun 2018 16:27:34 -0400 Subject: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition In-Reply-To: <5a1f84c9-8037-71d3-2611-33909099f842@gmx.com> References: <1528313781-sup-5000@lrrr.local> <5a1f84c9-8037-71d3-2611-33909099f842@gmx.com> Message-ID: <1528316669-sup-4270@lrrr.local> Excerpts from Sean McGinnis's message of 2018-06-06 15:14:48 -0500: > On 06/06/2018 03:04 PM, Doug Hellmann wrote: > > I have started submitting a series of patches to fix up the tox.ini > > settings for projects as a step towards running "python3 first" > > [1]. The point of doing this now is to give teams a head start on > > understanding the work involved as we consider whether to make this > > a community goal. > > I would ask that you stop. > > While I think this is useful as a quick way of finding out which projects > will require additional work here and which don't, this is just creating > a lot of work and overlap. > > Some teams are not ready to take this on right now. So unless you are > planning on actually following through with making the failing ones work, > it is just adding to the set of failing patches in their review queue. > > Other teams are already working on this and working through the failures > due to the differences between python 2 and 3. So these just end up being > duplication and a distraction for limited review capacity. I've already proposed all of the ones I intended to, so if folks don't want them either abandon them or let me know and I will. Otherwise, I will work with anyone who wants to use these as the first step to converting their doc and release notes jobs, or to explore what else would need to be done as part of the shift. Doug From doug at doughellmann.com Wed Jun 6 20:41:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 06 Jun 2018 16:41:17 -0400 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: <20180606200936.cjlsrnuihmisvhzt@yuggoth.org> References: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> <58882cee-9915-90cf-9fab-eaf37e6789e9@redhat.com> <20180606200936.cjlsrnuihmisvhzt@yuggoth.org> Message-ID: <1528316928-sup-8901@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-06-06 20:09:36 +0000: > On 2018-06-06 14:52:04 -0400 (-0400), Zane Bitter wrote: > > On 29/05/18 13:37, Jeremy Stanley wrote: > > > On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote: > [...] > > > > * If the repo is a fork of another project, there must be (public) > > > > evidence of an attempt to co-ordinate with the upstream first. > > > > > > I don't recall this ever being mandated, though the project-config > > > reviewers do often provide suggestions to project creators such as > > > places in the existing community with which they might consider > > > cooperating/collaborating. > > > > We're mandating it for StarlingX, aren't we? > > This goes back to depending on what you mean by "we" but assuming > you mean those of us who were in the community track Forum room at > the end of the day on Thursday, a number of us seemed to be in > support of that idea including Dean (who was going to do the work to > make it happen) and Jonathan (as OSF executive director). Far from a > mandate, and definitely a rare enough situation that recording a > hard and fast rule is not a useful way to spend our valuable time. 
> > > AIUI we haven't otherwise forked anything that was still maintained > > (although we've forked plenty of libraries after establishing that the > > upstream was moribund). > > All the Debian packaging, when we were hosting it (before it got > retired and moved back to Debian's repository hosting) was > implemented as forks of our Git repositories. The Infra team also > maintains a fork of Gerrit (for the purposes of backporting bug > fixes from later versions until we're ready to upgrade what we're > running), and has some forks of other things which are basically > dead upstream (lodgeit) or where we're stuck carrying support for > very old versions of stuff that upstream has since moved on from > (puppet-apache). Forks are not necessarily inherently bad, and > usually the story around each one is somewhat unique. Yeah, if I had realized the Debian packaging repos had changes beyond packaging I wouldn't have supported hosting them at the time. Because the gerrit fork is for the use of this community with our deployment, we do try to upstream fixes, and we don't intend to release it separately under our own distribution, I see that as reasonable. I'm trying to look at this from the perspective of the Golden Rule [1]. We not treat other projects in ways we don't want to be treated ourselves, regardless of whether we're doing it out in the open. I don't want the OpenStack community to have the reputation of forking instead of collaborating. [1] https://en.wikipedia.org/wiki/Golden_Rule > > > > Neither of those appears to be documented (specifically, > > > > https://governance.openstack.org/tc/reference/licensing.html only > > > > specifies licensing requirements for official projects, libraries > > > > imported by official projects, and software used by the Infra > > > > team). > > > > > > The Infrastructure team has been granted a fair amount of autonomy > > > to determine its operating guidelines, and future plans to separate > > > project hosting further from the OpenStack name (in an attempt to > > > make it more clear that hosting your project in the infrastructure > > > is not an endorsement by OpenStack and doesn't make it "part of > > > OpenStack") make the OpenStack TC governance site a particularly > > > poor choice of venue to document such things. > > > > So clearly in the future this will be the responsibility of the > > Winterscale Infrastructure Council assuming that proposal goes > > ahead. > > > > For now, would it be valuable for the TC to develop some > > guidelines that will provide the WIC with a solid base it can > > evolve from once it takes them over, or should we just leave it up > > to infra's discretion? > [...] > > My opinion is that helping clarify the terms of service > documentation the Infra team is already maintaining is great, but > putting hosting terms of service in the TC governance repo is likely > a poor choice of venue. In the past it has fallen to the Infra team > to help people come to the right conclusions as to what sorts of > behaviors are acceptable, but we've preferred to avoid having lots > of proscriptive rules and beating people into submission with them. > I think we'd all like this to remain a fun and friendly place to get > things done. I want it to be fun, too. One way to ensure that is to try to avoid these situations where one group angers another through some action that the broader community can generally agree is not acceptable to us by writing those policies down. 
I agree this is ultimately going to be something we rely on the infra team to deal with. I think it's reasonable for the rest of the community to try to help establish the preferences about what policies should be in place. Doug From rosmaita.fossdev at gmail.com Wed Jun 6 21:08:20 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 6 Jun 2018 17:08:20 -0400 Subject: [openstack-dev] [glance] Rocky-2 milestone release Message-ID: Status: glanceclient - released 2.11.1 today glance_store - one outstanding patch that would be worth including in the release: - https://review.openstack.org/#/c/534745/ (use only exceptions for uri validations) glance - two patches we should get in: - https://review.openstack.org/#/c/514114/ (refactor exception handling in cmd.api) (has one +2) - https://review.openstack.org/#/c/572534/ (remove deprecated 'enable_image_import' option) - note: will need to regenerate the config files before proposing a release cheers, brian From kennelson11 at gmail.com Wed Jun 6 22:00:57 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 6 Jun 2018 15:00:57 -0700 Subject: [openstack-dev] [First Contact] [SIG] [PTL] Project Liaisons Message-ID: Hello! As you hopefully are aware the First Contact SIG strives to provide a place for new contributors to come for information and advice. Part of this is helping new contributors find more established contributors in the community they can ask for help from. While the group of people involved in the FC SIG is diverse in project knowledge, we don't have all of them covered. Over the last year we have built a list of Project Liaisons to refer new contributors to when the project they are interested in isn't one we know well. Unfortunately, this list[1] isn't as filled out as we would like it to be. So! Keeping with the conventions of other liaison roles, if there isn't already a project liaison named, this role will default to the PTL unless you respond to this thread with the individual you are delegating to :) Or by adding them to the list in the wiki[1]. Essentially the duties of the liaison are just to be willing to help out newcomers when a FC SIG member introduces you to them and to keep an eye out for patches that come in to your project with the 'Welcome, new contributor' bot message. Its likely you are doing this already, but to have a defined list of people to refer to would be a huge help. Thank you! -Kendall Nelson (diablo_rojo) [1]https://wiki.openstack.org/wiki/First_Contact_SIG#Project_Liaisons -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Thu Jun 7 00:35:13 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Thu, 7 Jun 2018 10:35:13 +1000 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: The branch is now available under feature/graphql on the neutron core repository [1]. Just to summarize our initial requirements: - GraphQL endpoint to be added through a new WeBoB/WSGI stack - Add graphene library [2] - Unit tests and implementation for GraphQL schema for networks, subnets and ports Types. I think we should support Relay by making the Schema Relay compliant and support Node ID, cursor connections and . This will offer re-fetch, automated pagination and caching out of the box and not only will show the power of GraphQL but also because on the long run it would more likely what would be needed for complex API structures like we have across the board. Any thoughts? 
[1] https://git.openstack.org/cgit/openstack/neutron/log/?h=feature/graphql [2] http://graphene-python.org/ On 31/05/18 17:27, Flint WALRUS wrote: > Hi Gilles, Ed, > > I’m really glad and thrilled to read such good news! > > At this point it’s cool to see that many initiatives have the same > convergent needs regarding GraphQL as it will give us a good traction > from the beginning if our PoC manage to sufficiently convince our peers. > > Let me know as soon as the branch have been made, I’ll work on it. > > Regards, > Fl1nt. > Le jeu. 31 mai 2018 à 09:17, Gilles Dubreuil > a écrit : > > Hi Flint, > > I wish it was "my" summit ;) > In the latter case I'd make the sessions an hour and not 20 or 40 > minutes, well at least for the Forum part. And I will also make > only one summit a year instead of two (which is also a feed back I > got from the Market place). I've passed that during the user > feedback session. > > Sorry for not responding earlier, @elmiko is going to send the > minutes of the API SIG forum session we had. > > We confirmed Neutron to be the PoC. > We are going to use a feature branch, waiting for Miguel Lavalle > to confirm the request has been acknowledge by the Infra group. > The PoC goal is to show GraphQL efficiency. > So we're going to make something straightforward, use Neutron > existing server by  adding the graphQL endpoint and cover few core > items such as network, subnets and ports (for example). > > Also the idea of having a central point of access for OpenStack > APIs using GrahpQL stitching and delegation is exciting for > everyone (and I had obviously same feedback off the session) and > that's something that could happen once the PoC has convinced. > > During the meeting, Jiri Tomasek explained how GraphQL could help > TripleO UI. Effectively they struggle with APIs requests and had > to create a middle(ware) module in JS to do API work and > reconstruction before the Javascript client can use it. GraphQL > would simplify the process and allow to get rid of the module. He > also explained, after the meeting, how Horizon could benefit as > well, allowing to use only JS and avoid Django altogether! > > I've also been told that Zuul nees GraphQL. > > Well basically the question is who doesn't need it? > > Cheers, > Gilles > > > > On 31/05/18 03:34, Flint WALRUS wrote: >> Hi Gilles, I hope you enjoyed your Summit!? >> >> Did you had any interesting talk to report about our little >> initiative ? >> Le dim. 6 mai 2018 à 15:01, Gilles Dubreuil > > a écrit : >> >> >> Akihiro, thank you for your precious help! >> >> Regarding the choice of Neutron as PoC, I'm sorry for not >> providing much details when I said "because of its specific >> data model", >> effectively the original mention was  "its API exposes things >> at an individual table level, requiring the client to join >> that information to get the answers they need". >> I realize now such description probably applies to many >> OpenStack APIs. >> So I'm not sure what was the reason for choosing Neutron. >> I suppose Nova is also a good candidate because API is quite >> complex too, in a different way, and need to expose the data >> API and the control API plane as we discussed. >> >> After all Neutron is maybe not the best candidate but it >> seems good enough. >> >> And as Flint say the extension mechanism shouldn't be an issue. >> >> So if someone believe there is a better candidate for the >> PoC, please speak now. 
>> >> Thanks, >> Gilles >> >> PS: Flint,  Thank you for offering to be the advocate for >> Berlin. That's great! >> >> >> On 06/05/18 02:23, Flint WALRUS wrote: >>> Hi Akihiro, >>> >>> Thanks a lot for this insight on how neutron behave. >>> >>> We would love to get support and backing from the neutron >>> team in order to be able to get the best PoC possible. >>> >>> Someone suggested neutron as a good choice because of it >>> simple database model. As GraphQL can manage your behavior >>> of an extension declaring its own schemes I don’t think it >>> would take that much time to implement it. >>> >>> @Gilles, if I goes to the berlin summitt I could definitely >>> do the networking and relationship work needed to get >>> support on our PoC from different teams members. This would >>> help to spread the world multiple time and don’t have a long >>> time before someone come to talk about this subject as what >>> happens with the 2015 talk of the Facebook speaker. >>> >>> Le sam. 5 mai 2018 à 18:05, Akihiro Motoki >>> > a écrit : >>> >>> Hi, >>> >>> I am happy to see the effort to explore a new API mechanism. >>> I would like to see good progress and help effort as API >>> liaison from the neutron team. >>> >>> > Neutron has been selected for the PoC because of its >>> specific data model >>> >>> On the other hand, I am not sure this is the right >>> reason to choose 'neutron' only from this reason. I >>> would like to note "its specific data model" is not the >>> reason that makes the progress of API versioning slowest >>> in the OpenStack community. I believe it is worth >>> recognized as I would like not to block the effort due >>> to the neutron-specific reasons. >>> The most complicated point in the neutron API is that >>> the neutron API layer allows neutron plugins to declare >>> which features are supported. The neutron API is a >>> collection of API extensions defined in the neutron-lib >>> repo and each neutron plugin can declare which subset(s) >>> of the neutron APIs are supported. (For more detail, you >>> can check how the neutron API extension mechanism is >>> implemented). It is not defined only by the neutron API >>> layer. We need to communicate which API features are >>> supported by communicating enabled service plugins. >>> >>> I am afraid that most efforts to explore a new mechanism >>> in neutron will be spent to address the above points >>> which is not directly related to GraphQL itself. >>> Of course, it would be great if you overcome >>> long-standing complicated topics as part of GraphQL >>> effort :) >>> >>> I am happy to help the effort and understand how the >>> neutron API is defined. >>> >>> Thanks, >>> Akihiro >>> >>> >>> 2018年5月5日(土) 18:16 Gilles Dubreuil >> >: >>> >>> Hello, >>> >>> Few of us recently discussed [1] how GraphQL [2], >>> the next evolution >>> from REST, could transform OpenStack APIs for the >>> better. >>> Effectively we believe OpenStack APIs provide >>> perfect use cases for >>> GraphQL DSL approach, to bring among other >>> advantages, better >>> performance and stability, easier developments and >>> consumption, and with >>> GraphQL Schema provide automation capabilities never >>> achieved before. >>> >>> The API SIG suggested to start an API GraphQL Proof >>> of Concept (PoC) to >>> demonstrate the capabilities before eventually >>> extend GraphQL to other >>> projects. >>> Neutron has been selected for the PoC because of its >>> specific data model. >>> >>> So if you are interested, please join us. 
>>> For those who can make it, we'll also discuss this >>> during the SIG API >>> BoF at OpenStack Summit at Vancouver [3] >>> >>> To learn more about GraphQL, check-out >>> howtographql.com [4]. >>> >>> So let's get started... >>> >>> >>> [1] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-May/130054.html >>> [2] http://graphql.org/ >>> [3] >>> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session >>> [4] https://www.howtographql.com/ >>> >>> Regards, >>> Gilles >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage >>> questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pliu at redhat.com Thu Jun 7 01:52:57 2018 From: pliu at redhat.com (Peng Liu) Date: Thu, 7 Jun 2018 09:52:57 +0800 Subject: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1 In-Reply-To: References: Message-ID: Cool. I'll start to prepare a BP for this, so we can have more detailed discussion. On Wed, Jun 6, 2018 at 11:08 PM, Antoni Segura Puimedon wrote: > On Wed, Jun 6, 2018 at 2:37 PM, Irena Berezovsky > wrote: > > Sounds like a great initiative. > > > > Lets follow up on the proposal by the kuryr-kubernetes blueprint. > > I fully subscribe what Irena said. Let's get on this quick! > > > > > BR, > > Irena > > > > On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu wrote: > >> > >> Hi Kuryr-kubernetes team, > >> > >> I'm thinking to propose a new BP to support Kubernetes Network Custom > >> Resource Definition De-facto Standard Version 1 [1], which was drafted > by > >> network plumbing working group of kubernetes-sig-network. I'll call it > NPWG > >> spec below. > >> > >> The purpose of NPWG spec is trying to standardize the multi-network > effort > >> around K8S by defining a CRD object 'network' which can be consumed by > >> various CNI plugins. I know there has already been a BP VIF-Handler And > Vif > >> Drivers Design, which has designed a set of mechanism to implement the > >> multi-network functionality. However I think it is still worthwhile to > >> support this widely accepted NPWG spec. > >> > >> My proposal is to implement a new vif_driver, which can interpret the > PoD > >> annotation and CRD defined by NPWG spec, and attach pod to additional > >> neutron subnet and port accordingly. This new driver should be mutually > >> exclusive with the sriov and additional_subnets drivers.So the endusers > can > >> choose either way of using mult-network with kuryr-kubernetes. > >> > >> Please let me know your thought, any comments are welcome. 
> >> > >> > >> > >> [1] > >> https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_ > RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd > >> > >> > >> Regards, > >> > >> -- > >> Peng Liu > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Peng Liu -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Thu Jun 7 07:02:05 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Thu, 7 Jun 2018 15:02:05 +0800 Subject: [openstack-dev] [oslo][nova] anyway to specify req-id in LOG.xxx Message-ID: Due to bug [1] looks like the req-id of each request will be recorded in context and then LOG.xxx will use the context if not spcified per bug reported, the periodic task seems reuse the context of its previous action and lead to same req-id in the log quick oslo code check didn't provide too much help along with [2] seems doesn't work any hint to specify the id in log instead of using default one? thanks [1] https://bugs.launchpad.net/nova/+bug/1773102 [2] https://review.openstack.org/#/c/570707/1 Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Thu Jun 7 07:54:48 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 7 Jun 2018 09:54:48 +0200 Subject: [openstack-dev] [Openstack-operators] [openstack-ansible][releases][governance] Change in OSA roles tagging In-Reply-To: <1528209045-sup-7456@lrrr.local> References: <1528209045-sup-7456@lrrr.local> Message-ID: > Right, you can set the stable-branch-type field to 'tagless' (see > http://git.openstack.org/cgit/openstack/releases/tree/README.rst#n462) and > then set the branch location field to the SHA you want to use. Exactly what I thought. > If you would be ready to branch all of the roles at one time, you could > put all of them into 1 deliverable file. Otherwise, you will want to > split them up into their own files. Same. > > And since you have so many, I will point out that we're really into > automation over here on the release team, and if you wanted to work on > making the edit-deliverable command smart enough to determine the SHA > for you I could walk you through that code to get you started. Cool. 
From celebdor at gmail.com Thu Jun 7 08:08:00 2018 From: celebdor at gmail.com (Antoni Segura Puimedon) Date: Thu, 7 Jun 2018 10:08:00 +0200 Subject: [openstack-dev] [kuryr][kuryr-kubernetes] Propose to support Kubernetes Network Custom Resource Definition De-facto Standard Version 1 In-Reply-To: References: Message-ID: On Thu, Jun 7, 2018 at 3:52 AM, Peng Liu wrote: > Cool. > I'll start to prepare a BP for this, so we can have more detailed > discussion. Great! > > On Wed, Jun 6, 2018 at 11:08 PM, Antoni Segura Puimedon > wrote: >> >> On Wed, Jun 6, 2018 at 2:37 PM, Irena Berezovsky >> wrote: >> > Sounds like a great initiative. >> > >> > Lets follow up on the proposal by the kuryr-kubernetes blueprint. >> >> I fully subscribe what Irena said. Let's get on this quick! >> >> > >> > BR, >> > Irena >> > >> > On Wed, Jun 6, 2018 at 6:47 AM, Peng Liu wrote: >> >> >> >> Hi Kuryr-kubernetes team, >> >> >> >> I'm thinking to propose a new BP to support Kubernetes Network Custom >> >> Resource Definition De-facto Standard Version 1 [1], which was drafted >> >> by >> >> network plumbing working group of kubernetes-sig-network. I'll call it >> >> NPWG >> >> spec below. >> >> >> >> The purpose of NPWG spec is trying to standardize the multi-network >> >> effort >> >> around K8S by defining a CRD object 'network' which can be consumed by >> >> various CNI plugins. I know there has already been a BP VIF-Handler And >> >> Vif >> >> Drivers Design, which has designed a set of mechanism to implement the >> >> multi-network functionality. However I think it is still worthwhile to >> >> support this widely accepted NPWG spec. >> >> >> >> My proposal is to implement a new vif_driver, which can interpret the >> >> PoD >> >> annotation and CRD defined by NPWG spec, and attach pod to additional >> >> neutron subnet and port accordingly. This new driver should be mutually >> >> exclusive with the sriov and additional_subnets drivers.So the endusers >> >> can >> >> choose either way of using mult-network with kuryr-kubernetes. >> >> >> >> Please let me know your thought, any comments are welcome. 
>> >> >> >> >> >> >> >> [1] >> >> >> >> https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit#heading=h.hylsbqoj5fxd >> >> >> >> >> >> Regards, >> >> >> >> -- >> >> Peng Liu >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> > >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Peng Liu > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ianyrchoi at gmail.com Thu Jun 7 08:32:26 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Thu, 7 Jun 2018 17:32:26 +0900 Subject: [openstack-dev] [I18n] IRC Office hours reminder: June 7th at 13:00-14:00 UTC Message-ID: <50688545-c885-7e29-f97f-9f6d120e7e92@gmail.com> Hello all, I18n team previously had team meetings but decided to have office hours instead [1]. Please feel free to come to #openstack-i18n IRC channel and ask and/or discuss anything on translation, internationalization, and/or localization issues on projects. Although I18n PTL is now busy on attending OpenStack Days in Europe by the of June [2] (He makes himself really internationlizated by attending to many offline events in Europe!), I am available this week and can serve as a co-chair with the aggrement of I18n PTL :) With many thanks, /Ian [1] http://lists.openstack.org/pipermail/openstack-i18n/2018-April/003238.html [2] http://lists.openstack.org/pipermail/openstack-i18n/2018-May/003239.html From thomas.morin at orange.com Thu Jun 7 09:09:15 2018 From: thomas.morin at orange.com (thomas.morin at orange.com) Date: Thu, 7 Jun 2018 11:09:15 +0200 Subject: [openstack-dev] [stable][networking-bgpvpn][infra] missing networking-odl repository / tox-siblings & tox_install.sh In-Reply-To: <62aa4a95-bfd9-ed5d-9c49-dc6db369168c@ericsson.com> References: <62aa4a95-bfd9-ed5d-9c49-dc6db369168c@ericsson.com> Message-ID: <29199_1528362556_5B18F63C_29199_9_1_7026d833-e515-a462-92e2-562a55ac04e1@orange.com> Hi Előd, Thanks for looking into that. 
Summary: - we removed networking-odl from required-projects of networking-bgpvpn because of a history of networking-odl changes breaking functional tests in networking-bgpvpn (since networking-bgpvpn master declares networking-odl in requirements.txt, the tox-siblings magic would prevent us from pinning our networking-odl dep to a PyPI release) - the side effect, as you identified, is that this breaks stable/* branches of networking-bgpvpn, which still use tools/tox_install.sh (and can't stop doing so because in stable branches, the requirement checking job still prevents the addition of networking-odl in requirements.txt) I think that, if doable, it would be nice to avoid complicating the zuul job configuration to keep networking-odl. If it can't be avoided, then something like your change [3] would be the way, I guess. Andreas, Monty, any guidance? -Thomas [3] https://review.openstack.org/#/c/572495/ On 06/06/2018 13:28, Előd Illés wrote: > Hi, > > I'm trying to create a fix for the failing networking-bgpvpn stable > periodic sphinx-docs job [1], but meanwhile it turned out that other > "check" (and possibly "gate") jobs are failing on stable, too, on > networking-bgpvpn, because of missing dependency: networking-odl > repository (for pep8, py27, py35, cover and even sphinx, too). I > submitted a patch a couple of days ago for the stable periodic py27 > job [2] and it solved the issue there. But now it seems that every > other networking-bgpvpn job needs this fix if it is run against stable > branches (something like in this patch [3]). > > Question: Is there a better way to fix these issues? > > > The common error message of the failing jobs: > ********************************** > ERROR! /home/zuul/src/git.openstack.org/openstack/networking-odl not > found > In Zuul v3 all repositories used need to be declared > in the 'required-projects' parameter on the job. > To fix this issue, add: > >   openstack/networking-odl > > to 'required-projects'. > > While you're at it, it's worth noting that zuul-cloner itself > is deprecated and this shim is only present for transition > purposes. Start thinking about how to rework job content to > just use the git repos that zuul will place into > /home/zuul/src/git.openstack.org directly. > ********************************** > > > [1] https://review.openstack.org/#/c/572368/ > [2] https://review.openstack.org/#/c/569111/ > [3] https://review.openstack.org/#/c/572495/ > > > Thanks, > > Előd > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. From gergely.csatari at nokia.com Thu Jun 7 10:49:31 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Thu, 7 Jun 2018 10:49:31 +0000 Subject: [openstack-dev] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Message-ID: Hi, I did some work on the figures and realised that I have some questions related to the alternative options: Multiple backends option: * What is the API between Glance and the Glance backends? * How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? * Is it possible to have different OpenStack versions in the different cloud instances? * Can a cloud instance use the locally synchronised images in case of a network connection break? * Is it possible to implement this without storing database credentials on the edge cloud instances? Independent synchronisation service: * If I understood [1] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. Is this true? As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [2]. [1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) ; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Thu Jun 7 10:57:46 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Thu, 7 Jun 2018 12:57:46 +0200 Subject: [openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG Message-ID: <7efaf226-f610-6a2f-33ac-0d581e66ae21@citynetwork.eu> Hi folks, Time for a new meeting for the Public Cloud WG. Agenda can be found at https://etherpad.openstack.org/p/publiccloud-wg See you all at IRC 1400 UTC in #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From gr at ham.ie Thu Jun 7 11:59:56 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 7 Jun 2018 12:59:56 +0100 Subject: [openstack-dev] [designate] Meeting Times update Message-ID: <5c54753d-8b38-c8d2-9bb7-986c40af808a@ham.ie> Hi All, We had talked about moving to a bi-weekly meeting, with alternating times, and I have updated the meetings repo [0] with some suggested times. Please speak up if these do not suit!
- Graham 0 - https://review.openstack.org/#/c/573204/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From corey.bryant at canonical.com Thu Jun 7 12:00:40 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Thu, 7 Jun 2018 08:00:40 -0400 Subject: [openstack-dev] [packaging] Extended Maintenance and Distro LTS coordination Message-ID: At the summit in Vancouver there were sessions discussing Extended Maintenance. The general focus was on keeping upstream branches open in extended maintenance mode after 18 months rather than EOLing them. If at some point fixes cannot land in a given project's branch, and there is no reasonable solution, then the branch would need to be EOL'd for that project. Personally I think this is great. In Ubuntu we want all fixes to land upstream first for supported releases, but once upstream branches reach EOL, we end up carrying local patches in our packages. I assume other distros do the same, so any sharing we can do here would make shorter work for all involved. During the first EM session, David Ames mentioned that perhaps distros could coordinate on what releases will be their LTS's to enable more focus on specific EM branches and Doug Hellmann suggested an email to the list. I don't know how other distros choose their OpenStack LTS's or if anything will change from distro to distro but it seems to be worth a discussion. I'm going to give an overview of the Ubuntu OpenStack LTS releases below (current and planned). What are other distros' LTS's (current and planned) and do we line up at all? Ubuntu LTS (for background) -------------------------------------- Every 2 years in April a new Ubuntu LTS is released and supported for 5 years. For example: * Ubuntu 16.04 - released in April 2016; supported until April 2021 * Ubuntu 18.04 - released in April 2018; supported until April 2023 * Ubuntu 20.04 - released in April 2020; supported until April 2025 Ubuntu OpenStack LTS ------------------------------- Ubuntu OpenStack LTS is provided in 3 different scenarios: 1) Each Ubuntu LTS supports the most recent release of OpenStack available at the time of the release for 5 years. 2) Each N-1 Ubuntu LTS supports that same release --^ of OpenStack for 3 years. 3) Each Ubuntu LTS supports the latest release of OpenStack that is available as of April of odd years for 3 years. Examples of 1: * Ubuntu 16.04 - OpenStack Mitaka supported for 5 years (via Ubuntu 16.04 archive) * Ubuntu 18.04 - OpenStack Queens supported for 5 years (via Ubuntu 18.04 archive) * Ubuntu 20.04 - OpenStack U***** supported for 5 years (via Ubuntu 20.04 archive) [1] Examples of 2: * Ubuntu 14.04 - OpenStack Mitaka supported for 3 years (via Ubuntu Cloud Archive) * Ubuntu 16.04 - OpenStack Queens supported for 3 years (via Ubuntu Cloud Archive) * Ubuntu 18.04 - OpenStack U***** supported for 3 years (via Ubuntu Cloud Archive) [1] Examples of 3: * Ubuntu 16.04 - OpenStack Ocata supported for 3 years (via Ubuntu Cloud Archive) * Ubuntu 18.04 - OpenStack Stein supported for 3 years (via Ubuntu Cloud Archive) [1] [1] Future OpenStack release; assumes the same OpenStack release cadence continues If you'd like to see a visual of this release cadence, there's a chart at https://www.ubuntu.com/info/release-end-of-life under "Ubuntu OpenStack release end of life". Thanks, Corey -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Greg.Waines at windriver.com Thu Jun 7 12:23:44 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 7 Jun 2018 12:23:44 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Message-ID: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): One Glance with multiple backends · In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? o i.e. GLANCE is a typical shared service in a multi-region environment ? · If so, how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? Several Glances with an independent synchronization service (PUSH) · I refer to this as the PUSH model · I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images o i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs (making it independent of the particular Glance backend ... and allowing the Glance Backends at Central and Edge sites to actually be different) · I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization o i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites · Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) One Glance and multiple Glance API Servers (PULL) · I refer to this as the PULL model · This is the current model supported in StarlingX’s Distributed Cloud sub-project o We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and o We have glance-api setup for caching such that only the first access to a particular image incurs the latency of the image transfer from Central to Edge · this PULL model effectively implements the location aware synchronization you talk about below (i.e. synchronise images only to those cloud instances where they are needed)? In StarlingX Distributed Cloud, we plan on supporting both the PUSH and PULL model ... suspect there are use cases for both. Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" Date: Thursday, June 7, 2018 at 6:49 AM To: "openstack-dev at lists.openstack.org" , "edge-computing at lists.openstack.org" Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, I did some work on the figures and realised that I have some questions related to the alternative options: Multiple backends option: - What is the API between Glance and the Glance backends? - How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? - Is it possible to have different OpenStack versions in the different cloud instances? - Can a cloud instance use the locally synchronised images in case of a network connection break? - Is it possible to implement this without storing database credentials on the edge cloud instances? Independent synchronisation service: - If I understood [1] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. Is this true?
As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [2]. [1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) ; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Thu Jun 7 13:11:36 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 7 Jun 2018 09:11:36 -0400 Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits In-Reply-To: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> References: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> Message-ID: <5376e149-2820-819a-919b-31c9f97c9817@gmail.com> Sorry for delay in responding on this. Comments inline. On 05/29/2018 07:33 PM, Nadathur, Sundar wrote: > Hi all, >    The Cyborg/Nova scheduling spec [1] details what traits will be > applied to the resource providers that represent devices like GPUs. Some > of the traits referred to vendor names. I got feedback that traits must > not refer to products or specific models of devices. It's not that traits are not allowed to reference vendor names or identifiers. Just take a look at the entire module in os-traits that is designated with x86 CPU instruction set extensions: https://github.com/openstack/os-traits/blob/master/os_traits/hw/cpu/x86.py Clearly x86 references the vendor_id for Intel, as you know. The primary issue has never been having vendor identifiers in traits. The primary issue has always been the proposed (ab)use of traits as string categories -- in other words, using traits as "types". That isn't what traits are for. Traits are specifically for boolean values -- capabilities that a provider either has or doesn't have. That is why there's no key/value pairing for traits. There isn't a value. The capability is either available or not available. What you are trying to do is make a key/value pair where the key is "VGPU TYPE" and the value is the vendor's model name or moniker. And that isn't appropriate for traits. The string "M60-0Q" doesn't refer to a single capability. Instead, that string is a moniker that NVIDIA uses to represent a set of capabilities and random requirements together: * a max of 2 vGPU "display heads" * a max resolution of 2560x1600 * 512M framebuffer per vGPU * the host requires a Quadro vDWS license installed * support for the following graphics APIs: * DirectX 12 * Direct2D * DirectX Video Acceleration (DXVA) * OpenGL 4.5 * Vulkan 1.0 * support for the following parallel programming platforms: * OpenCL (<= 2.1 I think?) * CUDA (<=4.0 I think?) 
It's virtually impossible to tell what the actual capabilities of these vendor monikers are without help from the few people at NVIDIA that actually know these things, partly because the documentation from NVIDIA is so poor (or completely lacking), partly because the installation of the various host and guest drivers is an entirely manual process, partly because NVIDIA and most of the other hardware vendors are more interested in enabling their latest and greatest technology instead of documenting their "old" (read: <6 months ago released) stuff. > I agree. However, we need some reference to device types to enable > matching the VM driver with the device. Well, no, you don't need to match the device type to the VM driver. You need to match the host (or specific pGPU)'s supported CUDA driver version(s) (NVIDIA calls this "Compute Capability") with the *required minimum CUDA driver version for the guest*. The solution here is to have a big hash table of vendor product name (vGPU type) to sets of standard traits, and have the guest specify CUDA driver version requirements as one or more required=HW_GPU_API_CUDA_XXX extra specs. In other words, we need to break down this "vGPU type" (which even NVIDIA admits is nothing more than a "product profile" of a set of capabilities) into its respective set of standardized os-traits. I've recommended in Sylvain's multi-vgpu-types spec that we put this hash table in nova/virt/vgpu_capabilities.py but if Cyborg needs to use this as well, we could just as easily make it a module in os-traits. This way, when the nova-compute or Cyborg worker starts up, it can query the sysfs mdev_supported_types bucket of randomness, take the values that show up in /sys/class/mdev_bus/$device/mdev_supported_types and look up the actual capabilities that the strings like "nvidia-35" represent. > TL;DR We need some reference to device types, but we don't need product > names. I will update the spec [1] to clarify that. Rest of this email > clarifies why we need device types in traits, and what traits we propose > to include. > > In general, an accelerator device is operated by two pieces of software: > a driver in the kernel (which may discover and handle the PF for SR-IOV > devices), and a driver/library in the guest (which may handle the > assigned VF). > > The device assigned to the VM must match the driver/library packaged in > the VM. For this, the request must explicitly state what category of > devices it needs. For example, if the VM needs a GPU, it needs to say > whether it needs an AMD GPU or an Nvidia GPU, since it may have the > driver/libraries for that vendor alone. Placement's traits and resource classes are absolutely *not* intended to be the vehicle by which guest *configuration details* (like proprietary driver setup and versioning in the guest or license activation, etc) are conveyed to the guest. We already have a vehicle for that: it's called the metadata API and userdata, vendor data, device metadata. Let's limit the traits that are set as required by the guest to being an expression of which APIs the software in the VM was written against (e.g. what version of CUDA or OpenCL is needed, how many display heads are needed, what the maximum resolution needed is, etc). Handle license activation of proprietary drivers separately without traits. > It may also need to state what version of Cuda is needed, if it is a > Nvidia GPU. These aspects are necessarily vendor-specific. 
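To make that concrete, here is a rough sketch of what such a lookup module could contain. This is purely illustrative -- the module path and the trait names below are placeholders made up for the example, not merged code or final os-traits names:

# Hypothetical nova/virt/vgpu_capabilities.py (or an os-traits helper):
# translate the opaque mdev type monikers reported in sysfs into the set
# of standardized traits that the vendor profile actually represents.

VGPU_TYPE_TRAITS = {
    # NVIDIA's "M60-0Q" profile broken down into individual capabilities.
    # Trait names here are illustrative placeholders only.
    'nvidia-35': {
        'CUSTOM_GPU_MAX_DISPLAY_HEADS_2',
        'CUSTOM_GPU_MAX_RES_2560X1600',
        'CUSTOM_GPU_API_DIRECTX_12',
        'CUSTOM_GPU_API_OPENGL_4_5',
        'CUSTOM_GPU_API_VULKAN_1_0',
        'CUSTOM_GPU_API_OPENCL_2_1',
        'CUSTOM_GPU_API_CUDA_4_0',
    },
}


def traits_for_mdev_type(mdev_type):
    """Return the set of standard traits for an mdev type like 'nvidia-35'.

    Unknown types simply contribute no extra traits instead of failing.
    """
    return VGPU_TYPE_TRAITS.get(mdev_type, set())

The virt driver (or the Cyborg agent) would then union the returned sets into the provider's trait list at startup, and flavors or device profiles would express requirements against the individual traits instead of against the marketing moniker.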
Actually, no, CUDA is (mostly) standardized as an API, as is OpenCL, OpenACC, OpenGL, etc. The vendor-specific stuff you are referring to is mostly about license activation inside the guest VM. > Further, one driver/library version may handle multiple devices. Since a > new driver version may be backwards compatible, multiple driver versions > may manage the same device. The development/release of the > driver/library inside the VM should be independent of the kernel driver > for that device. Agreed. > For FPGAs, there is an additional twist as the VM may need specific > bitstream(s), and they match only specific device/region types. The > bitstream for a device from a vendor will not fit any other device from > the same vendor, let alone other vendors. IOW, the region type is > specific not just to a vendor but to a device type within the vendor. > So, it is essential to identify the device type. > > So, the proposed set of RCs and traits are as below. As we learn more > about actual usages by operators, we may need to evolve this set. > > * There is a resource class per device category e.g. > CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. > * The resource provider that represents a device has the following traits: > o Vendor/Category trait: e.g. CUSTOM_GPU_AMD, CUSTOM_FPGA_XILINX. > o Device type trait which is a refinement of vendor/category trait > e.g. CUSTOM_FPGA_XILINX_VU9P. > > NOTE: This is not a product or model, at least for FPGAs. > Multiple products may use the same FPGA chip. > NOTE: The reason for having both the vendor/category and this > one is that a flavor may ask for either, depending on the > granularity desired. IOW, if one driver can handle all devices > from a vendor (*eye roll*), the flavor can ask for the > vendor/category trait alone. If there are separate drivers for > different device families from the same vendor, the flavor must > specify the trait for the device family. > NOTE: The equivalent trait for GPUs may be like > CUSTOM_GPU_NVIDIA_P90, but I'll let others decide if that is a > product or not. > > o For FPGAs, we have additional traits: > + Functionality trait: e.g. CUSTOM_FPGA_COMPUTE, > CUSTOM_FPGA_NETWORK, CUSTOM_FPGA_STORAGE > + Region type ID.  e.g. CUSTOM_FPGA_INTEL_REGION_. > + Optionally, a function ID, indicating what function is > currently programmed in the region RP. e.g. > CUSTOM_FPGA_INTEL_FUNCTION_. Not all implementations > may provide it. The function trait may change on > reprogramming, but it is not expected to be frequent. > + Possibly, CUSTOM_PROGRAMMABLE as a separate trait I really don't believe you should be using traits for the different types of FPGA bitstreams. Use custom resource classes for all of it, IMHO. Traits are capabilities. What you are describing above is really just a consumable resource (in other words, a resource class) of a custom bitstream program. They should be just custom resource classes. Use traits to represent capabilities, not types. Best, -jay From lucioseki at gmail.com Thu Jun 7 13:33:00 2018 From: lucioseki at gmail.com (Lucio Seki) Date: Thu, 7 Jun 2018 08:33:00 -0500 Subject: [openstack-dev] [cinder] Enabling tempest test for in-use volume extending Message-ID: Hi. Since Pike release, Cinder supports in-use volume extending [1]. By default, it assumes that every storage backend is able to perform this operation. Thus, the tempest test for this feature should be enabled by default. A patch was submitted to enable it [2]. 
Please note that, after this patch being merged, the 3rd party CI maintainers may need to override this configuration, if the backend being tested does not support in-use volume extending. [1] Add ability to extend 'in-use' volume: https://review.openstack.org/#/c/454287/ [2] Enable tempest tests for attached volume extending: https://review.openstack.org/#/c/572188/ Regards, Lucio Seki -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Thu Jun 7 13:35:54 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 07 Jun 2018 14:35:54 +0100 Subject: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition In-Reply-To: <1528313781-sup-5000@lrrr.local> References: <1528313781-sup-5000@lrrr.local> Message-ID: <44ad3b379c5d1dea97aea5680b43a0dfd2346765.camel@redhat.com> On Wed, 2018-06-06 at 16:04 -0400, Doug Hellmann wrote: > I have started submitting a series of patches to fix up the tox.ini > settings for projects as a step towards running "python3 first" > [1]. The point of doing this now is to give teams a head start on > understanding the work involved as we consider whether to make this > a community goal. > > The current patches are all mechanically generated changes to the > basepython value for environments that seem to be likely candidates. > They're basically the "easy" part of the transition. I've left any > changes that will need more discussion alone for now. > > In particular, I've skipped over any tox environments with "functional" > in the name, since I thought those ran functional tests. Teams will > need to decide whether to change those job definitions, or duplicate > them and run them under python 2 and 3. Since we are not dropping > python 2 support until the U cycle, I suggest going ahead and running > the jobs twice. > > Note that changing the tox settings won't actually change some of the > jobs. For example, with our current PTI definition, the documentation > and releasenotes jobs do not run under tox. That means those will need > to be changed by editing the zuul configuration for the repository. > > I have started to make notes for tracking the work in > https://etherpad.openstack.org/p/python3-first -- including some notes > about taking the next step to update the zuul job definitions and common > issues we've already encountered to help folks debug job failures. > > I could use some help keeping an eye on these changes and getting > them through the gate. If you are interested in helping, please > leave a comment on the review you are willing to shepherd. > > Doug I banged through nearly everything I could +2/+W this morning (mostly stuff under the oslo or nova teams' remit). I saw very few issues and no failing CIs so it was mostly a case of sanity checking and approving. If anyone is seeing issues with their "docs" target, feel free to give me a shout (IRC: stephenfin) as we had to work through similar issues in nova not too long ago. 
It's amazing where monkey patching will get you :) Stephen > [1] https://review.openstack.org/#/q/topic:python3-first+(status:open+OR+status:merged) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu Jun 7 14:02:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Jun 2018 09:02:15 -0500 Subject: [openstack-dev] [nova] Need feedback on spec for handling down cells in the API Message-ID: We have a nova spec [1] which is at the point that it needs some API user (and operator) feedback on what nova API should be doing when listing servers and there are down cells (unable to reach the cell DB or it times out). tl;dr: the spec proposes to return "shell" instances which have the server uuid and created_at fields set, and maybe some other fields we can set, but otherwise a bunch of fields in the server response would be set to UNKNOWN sentinel values. This would be unversioned, and therefore could wreak havoc on existing client side code that expects fields like 'config_drive' and 'updated' to be of a certain format. There are alternatives listed in the spec so please read this over and provide feedback since this is a pretty major UX change. Oh, and no pressure, but today is the spec freeze deadline for Rocky. [1] https://review.openstack.org/#/c/557369/ -- Thanks, Matt From openstack-dev at storpool.com Thu Jun 7 14:15:08 2018 From: openstack-dev at storpool.com (Peter Penchev) Date: Thu, 7 Jun 2018 17:15:08 +0300 Subject: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ... In-Reply-To: <20180604194009.GA13935@sm-xps> References: <20180604190719.GA26100@straylight.m.ringlet.net> <20180604194009.GA13935@sm-xps> Message-ID: <20180607141508.GA22884@straylight.m.ringlet.net> On Mon, Jun 04, 2018 at 02:40:09PM -0500, Sean McGinnis wrote: > > > > Our CI has been chugging along since June 2nd (not really related to > > the timing of your e-mail, it just so happened that we fixed another > > small problem there). You can see the logs at > > > > http://logs.ci-openstack.storpool.com/ > > > > Thanks Peter. > > It looks like the reason the report run doesn't show Storpool reporting is a > due to a mismatch on name. The officially list account is "StorPool CI" > according to https://wiki.openstack.org/wiki/ThirdPartySystems/StorPool_CI > > But it appears on looking into this that the real CI account is "StorPool > distributed storage CI". Is that correct? If so, can you update the wiki with > the correct account name? Right... sorry about that. I've fixed that in the wiki - https://wiki.openstack.org/w/index.php?title=ThirdPartySystems&oldid=161664 Best regards, Peter -- Peter Pentchev roam@{ringlet.net,debian.org,FreeBSD.org} pp at storpool.com PGP key: http://people.FreeBSD.org/~roam/roam.key.asc Key fingerprint 2EE7 A7A5 17FC 124C F115 C354 651E EFB0 2527 DF13 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mriedemos at gmail.com Thu Jun 7 14:39:19 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Jun 2018 09:39:19 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <1528207912-sup-6956@lrrr.local> References: <20180604180742.GA6404@sm-xps> <1528146144-sup-2183@lrrr.local> <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> <61b4f192-3b48-83b4-9a21-e159fec67fcf@gmail.com> <20180605122627.GA75119@smcginnis-mbp.local> <1528207912-sup-6956@lrrr.local> Message-ID: On 6/5/2018 9:14 AM, Doug Hellmann wrote: > In the past when we've had questions about how broadly a goal is going > to affect projects, we did a little data collection work up front. Maybe > that's the next step for this one? Does someone want to volunteer to go > around and talk to some of the project teams that are likely candidates > for these sorts of upgrade blockers to start making lists? I figured that if the upgrade status check CLI goal was ever given serious consideration, I'd likely be the champion since I wrote the nova-status CLI and do a lot of the maintenance and updates to it. So unless others are interested in this, you can sign me up, but at the moment I'm pretty swamped with nova development stuff so I can't make any promises on making quick progress. -- Thanks, Matt From jungleboyj at gmail.com Thu Jun 7 14:53:40 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Thu, 7 Jun 2018 09:53:40 -0500 Subject: [openstack-dev] [cinder] Removing Support for Drivers with Failing CI's ... In-Reply-To: <20180607141508.GA22884@straylight.m.ringlet.net> References: <20180604190719.GA26100@straylight.m.ringlet.net> <20180604194009.GA13935@sm-xps> <20180607141508.GA22884@straylight.m.ringlet.net> Message-ID: Peter, Thanks for getting that fixed.  The associated patch has been removed so we should be good now. Jay On 6/7/2018 9:15 AM, Peter Penchev wrote: > On Mon, Jun 04, 2018 at 02:40:09PM -0500, Sean McGinnis wrote: >>> Our CI has been chugging along since June 2nd (not really related to >>> the timing of your e-mail, it just so happened that we fixed another >>> small problem there). You can see the logs at >>> >>> http://logs.ci-openstack.storpool.com/ >>> >> Thanks Peter. >> >> It looks like the reason the report run doesn't show Storpool reporting is a >> due to a mismatch on name. The officially list account is "StorPool CI" >> according to https://wiki.openstack.org/wiki/ThirdPartySystems/StorPool_CI >> >> But it appears on looking into this that the real CI account is "StorPool >> distributed storage CI". Is that correct? If so, can you update the wiki with >> the correct account name? > Right... sorry about that. I've fixed that in the wiki - > https://wiki.openstack.org/w/index.php?title=ThirdPartySystems&oldid=161664 > > Best regards, > Peter > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Thu Jun 7 14:53:40 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 7 Jun 2018 09:53:40 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: References: <20180604180742.GA6404@sm-xps> <1528146144-sup-2183@lrrr.local> <8aade74e-7eeb-7d31-8331-e2a1e6be7b64@gmx.com> <61b4f192-3b48-83b4-9a21-e159fec67fcf@gmail.com> <20180605122627.GA75119@smcginnis-mbp.local> <1528207912-sup-6956@lrrr.local> Message-ID: <6d8ce97c-ac4a-b831-cee8-d748d9f26ebd@gmx.com> On 06/07/2018 09:39 AM, Matt Riedemann wrote: > On 6/5/2018 9:14 AM, Doug Hellmann wrote: >> In the past when we've had questions about how broadly a goal is going >> to affect projects, we did a little data collection work up front. Maybe >> that's the next step for this one? Does someone want to volunteer to go >> around and talk to some of the project teams that are likely candidates >> for these sorts of upgrade blockers to start making lists? > > I figured that if the upgrade status check CLI goal was ever given > serious consideration, I'd likely be the champion since I wrote the > nova-status CLI and do a lot of the maintenance and updates to it. > > So unless others are interested in this, you can sign me up, but at > the moment I'm pretty swamped with nova development stuff so I can't > make any promises on making quick progress. > Excellent, thanks Matt. Keep in mind that being the champion for a goal does not necessarily mean you will be the one doing all the work. The goals just need someone that can help facilitate things like communication and education to the affected teams and keep an eye on the progress. From borne.mace at oracle.com Thu Jun 7 15:54:22 2018 From: borne.mace at oracle.com (Borne Mace) Date: Thu, 7 Jun 2018 08:54:22 -0700 Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com> Message-ID: <3530d89a-bf11-b12c-a4f7-bce722d2917b@oracle.com> The vote has passed, with 12 people voting, and them all being +1. Welcome to the core kolla-cli reviewer team Steve. I have made the appropriate gerrit changes so you now have "the power". -- Borne On 05/31/2018 10:02 AM, Borne Mace wrote: > Greetings all, > > I would like to propose the addition of Steve Noyes to the kolla-cli > core reviewer team. Consider this nomination as my personal +1. > > Steve has a long history with the kolla-cli and should be considered > its co-creator as probably half or more of the existing code was due > to his efforts. He has now been working diligently since it was > pushed upstream to improve the stability and testability of the cli > and has the second most commits on the project. > > The kolla core team consists of 19 people, and the kolla-cli team of > 2, for a total of 21. Steve therefore requires a minimum of 11 votes > (so just 10 more after my +1), with no veto -2 votes within a 7 day > voting window to end on June 6th. Voting will be closed immediately > on a veto or in the case of a unanimous vote. > > As I'm not sure how active all of the 19 kolla cores are, your > attention and timely vote is much appreciated. > > Thanks! 
> > -- Borne > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ashlee at openstack.org Thu Jun 7 16:13:37 2018 From: ashlee at openstack.org (Ashlee Ferguson) Date: Thu, 7 Jun 2018 11:13:37 -0500 Subject: [openstack-dev] Berlin Summit CFP Deadline July 17 Message-ID: <558EFA52-A6F3-4C7E-A509-301B9B17953E@openstack.org> Hi everyone, The Call for Presentations is open for the Berlin Summit , November 13-15. The deadline to submit your presentation has been extended to July 17. At the Vancouver Summit, we focused on open infrastructure integration as the Summit has evolved over the years to cover more than just OpenStack. We had over 30 different projects from the open infrastructure community join, including Kubernetes, Docker, Ansible, OpenShift and many more. The Tracks were organized around specific use cases and will remain the same for Berlin with the addition of Hands on Workshops as its own dedicated Track. We encourage you to submit presentations covering the open infrastructure tools you’re using, as well as the integration work needed to address these use cases. We also encourage you to invite peers from other open source communities to speak and collaborate. The Tracks are: • CI/CD • Container Infrastructure • Edge Computing • Hands on Workshops • HPC / GPU / AI • Open Source Community • Private & Hybrid Cloud • Public Cloud • Telecom & NFV Community voting, the first step in building the Summit schedule, will open in mid July. Once community voting concludes, a Programming Committee for each Track will build the schedule. Programming Committees are made up of individuals from many different open source communities working in open infrastructure, in addition to people who have participated in the past. If you’re interested in nominating yourself or someone else to be a member of the Summit Programming Committee for a specific Track, please fill out the nomination form . Nominations will close on June 28. Again, the deadline to submit proposals is July 17. Please note topic submissions for the OpenStack Forum (planning/working sessions with OpenStack devs and operators) will open at a later date. The Early Bird registration deadline will be in mid August. We’re working hard to make it the best Summit yet, and look forward to bringing together different open infrastructure communities to solve these hard problems together. Want to provide feedback on this process? Please focus discussion on the openstack-community mailing list, or contact the Summit Team directly at summit at openstack.org. See you in Berlin! Ashlee Ashlee Ferguson OpenStack Foundation ashlee at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Thu Jun 7 16:36:31 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 7 Jun 2018 12:36:31 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was brief, covering only 1 major topic. The main topic for the SIG today was the migration of bug tracking from Launchpad to StoryBoard[7]. 
No firm plans were made to migrate yet, but the group agreed that this migration would be positive from a community alignment perspective and that the number of bugs in conjunction with a low rate of bug addition would make the migration less risky. There was no opposition to the migration, but it was noted that the shift from Launchpad to StoryBoard does represent a paradigm shift and that there is not a 1:1 equivalency between the two. Expect the SIG to continue researching the migration and put forth a plan to shift at some point in the future. Although not a full discussion topic, Chris Dent reinforced the SIG's commitment to attending the forthcoming writeup about micrversion bump plans by mordred. There being no recent changes to pending guidelines nor to bugs, we ended the meeting early. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://storyboard.openstack.org/#!/page/about Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From sean.mcginnis at gmx.com Thu Jun 7 16:56:49 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 7 Jun 2018 11:56:49 -0500 Subject: [openstack-dev] [release] Release countdown for week R-11, June 11-15 Message-ID: <20180607165648.GA8825@sm-xps> Hey everyone, here is this week's countdown email. Development Focus ----------------- Teams should be focused on implementing planned work for the cycle. 
It is also a good time to review those plans and reprioritize anything if needed based on what progress has been made and what looks realistic to complete in the next few weeks. With milestone-2 here, many teams have their own project-specific spec freezes or other deadlines. Please be aware of your team's policies for this. General Information ------------------- The following libraries have unreleased changes merged that look like they may be more than just test settings and the like. openstack/automaton openstack/debtcollector openstack/glance_store openstack/heat-translator openstack/neutron-lib openstack/openstacksdk openstack/os-brick openstack/osc-placement openstack/python-brick-cinderclient-ext openstack/python-designateclient openstack/python-ironicclient openstack/python-magnumclient openstack/python-muranoclient openstack/python-neutronclient openstack/python-novaclient openstack/python-qinlingclient openstack/python-senlinclient openstack/python-solumclient openstack/python-swiftclient openstack/python-vitrageclient openstack/python-watcherclient openstack/sahara-extra openstack/taskflow openstack/tosca-parser This does not mean they must do a release immediately, but I wanted to make sure the teams are aware and thinking about when it makes sense to get those changes released. Making library changes available regularly helps to make sure there are no surprises for consumers late in the cycle. Upcoming Deadlines & Dates -------------------------- Final non-client library release deadline: July 19 Final client library release deadline: July 26 Rocky-3 Milestone: July 26 -- Sean McGinnis (smcginnis) From melwittt at gmail.com Thu Jun 7 17:56:53 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 7 Jun 2018 10:56:53 -0700 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 Message-ID: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> Hello Stackers, Recently, we've received interest about increasing the maximum number of allowed volumes to attach to a single instance > 26. The limit of 26 is because of a historical limitation in libvirt (if I remember correctly) and is no longer limited at the libvirt level in the present day. So, we're looking at providing a way to attach more than 26 volumes to a single instance and we want your feedback. We'd like to hear from operators and users about their use cases for wanting to be able to attach a large number of volumes to a single instance. If you could share your use cases, it would help us greatly in moving forward with an approach for increasing the maximum. Some ideas that have been discussed so far include: A) Selecting a new, higher maximum that still yields reasonable performance on a single compute host (64 or 128, for example). Pros: helps prevent the potential for poor performance on a compute host from attaching too many volumes. Cons: doesn't let anyone opt-in to a higher maximum if their environment can handle it. B) Creating a config option to let operators choose how many volumes are allowed to attach to a single instance. Pros: lets operators opt-in to a maximum that works in their environment. Cons: it's not discoverable for those calling the API. C) Create a configurable API limit for maximum number of volumes to attach to a single instance that is either a quota or similar to a quota. Pros: lets operators opt-in to a maximum that works in their environment. Cons: it's yet another quota? Please chime in with your use cases and/or thoughts on the different approaches.
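To make option B a little more concrete, here is a rough sketch of what such a knob could look like as an oslo.config option. The option name, group, and default are placeholders for illustration only -- nothing here has been agreed on or proposed in a spec:

from oslo_config import cfg

# Illustrative only: an operator-tunable per-instance attach cap (idea B).
max_volumes_opt = cfg.IntOpt(
    'max_volumes_per_instance',
    default=26,
    min=1,
    help='Maximum number of volumes that can be attached to a single '
         'instance on this compute host.')

cfg.CONF.register_opt(max_volumes_opt, group='compute')


def attach_allowed(current_attachment_count):
    # The compute/API layer would reject a new attachment once the
    # configured limit has been reached.
    return current_attachment_count < cfg.CONF.compute.max_volumes_per_instance

The downside called out above still applies: an API caller has no way to discover that value ahead of time.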
Thanks for your help, -melanie From mriedemos at gmail.com Thu Jun 7 18:07:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Jun 2018 13:07:48 -0500 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> Message-ID: <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> On 6/7/2018 12:56 PM, melanie witt wrote: > Recently, we've received interest about increasing the maximum number of > allowed volumes to attach to a single instance > 26. The limit of 26 is > because of a historical limitation in libvirt (if I remember correctly) > and is no longer limited at the libvirt level in the present day. So, > we're looking at providing a way to attach more than 26 volumes to a > single instance and we want your feedback. The 26 volumes thing is a libvirt driver restriction. There was a bug at one point because powervm (or powervc) was capping out at 80 volumes per instance because of restrictions in the build_requests table in the API DB: https://bugs.launchpad.net/nova/+bug/1621138 They wanted to get to 128, because that's how power rolls. > > We'd like to hear from operators and users about their use cases for > wanting to be able to attach a large number of volumes to a single > instance. If you could share your use cases, it would help us greatly in > moving forward with an approach for increasing the maximum. > > Some ideas that have been discussed so far include: > > A) Selecting a new, higher maximum that still yields reasonable > performance on a single compute host (64 or 128, for example). Pros: > helps prevent the potential for poor performance on a compute host from > attaching too many volumes. Cons: doesn't let anyone opt-in to a higher > maximum if their environment can handle it. > > B) Creating a config option to let operators choose how many volumes > allowed to attach to a single instance. Pros: lets operators opt-in to a > maximum that works in their environment. Cons: it's not discoverable for > those calling the API. I'm not a fan of a non-discoverable config option which will impact API behavior indirectly, i.e. on cloud A I can boot from volume with 64 volumes but not on cloud B. > > C) Create a configurable API limit for maximum number of volumes to > attach to a single instance that is either a quota or similar to a > quota. Pros: lets operators opt-in to a maximum that works in their > environment. Cons: it's yet another quota? This seems the most reasonable to me if we're going to do this, but I'm probably in the minority. Yes more quota limits sucks, but it's (1) discoverable by API users and therefore (2) interoperable. If we did the quota thing, I'd probably default to unlimited and let the cinder volume quota cap it for the project as it does today. Then admins can tune it as needed. 
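To illustrate the discoverability point: if the cap were exposed through the existing limits API, client code could check it up front before trying an attach. A rough sketch only -- the 'maxVolumesPerServer' key is hypothetical, nothing like it exists today:

import requests

def attach_within_limit(compute_endpoint, token, current_attachment_count):
    # GET /limits is already a discoverable, interoperable API; the
    # 'maxVolumesPerServer' field below is made up purely for illustration.
    resp = requests.get(compute_endpoint + '/limits',
                        headers={'X-Auth-Token': token})
    absolute = resp.json()['limits']['absolute']
    limit = absolute.get('maxVolumesPerServer', -1)
    return limit < 0 or current_attachment_count < limit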
-- Thanks, Matt From mriedemos at gmail.com Thu Jun 7 18:08:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Jun 2018 13:08:56 -0500 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> Message-ID: <2b0c02b2-0194-4f86-4719-058d08384e5b@gmail.com> +operators (I forgot) On 6/7/2018 1:07 PM, Matt Riedemann wrote: > On 6/7/2018 12:56 PM, melanie witt wrote: >> Recently, we've received interest about increasing the maximum number >> of allowed volumes to attach to a single instance > 26. The limit of >> 26 is because of a historical limitation in libvirt (if I remember >> correctly) and is no longer limited at the libvirt level in the >> present day. So, we're looking at providing a way to attach more than >> 26 volumes to a single instance and we want your feedback. > > The 26 volumes thing is a libvirt driver restriction. > > There was a bug at one point because powervm (or powervc) was capping > out at 80 volumes per instance because of restrictions in the > build_requests table in the API DB: > > https://bugs.launchpad.net/nova/+bug/1621138 > > They wanted to get to 128, because that's how power rolls. > >> >> We'd like to hear from operators and users about their use cases for >> wanting to be able to attach a large number of volumes to a single >> instance. If you could share your use cases, it would help us greatly >> in moving forward with an approach for increasing the maximum. >> >> Some ideas that have been discussed so far include: >> >> A) Selecting a new, higher maximum that still yields reasonable >> performance on a single compute host (64 or 128, for example). Pros: >> helps prevent the potential for poor performance on a compute host >> from attaching too many volumes. Cons: doesn't let anyone opt-in to a >> higher maximum if their environment can handle it. >> >> B) Creating a config option to let operators choose how many volumes >> allowed to attach to a single instance. Pros: lets operators opt-in to >> a maximum that works in their environment. Cons: it's not discoverable >> for those calling the API. > > I'm not a fan of a non-discoverable config option which will impact API > behavior indirectly, i.e. on cloud A I can boot from volume with 64 > volumes but not on cloud B. > >> >> C) Create a configurable API limit for maximum number of volumes to >> attach to a single instance that is either a quota or similar to a >> quota. Pros: lets operators opt-in to a maximum that works in their >> environment. Cons: it's yet another quota? > > This seems the most reasonable to me if we're going to do this, but I'm > probably in the minority. Yes more quota limits sucks, but it's (1) > discoverable by API users and therefore (2) interoperable. > > If we did the quota thing, I'd probably default to unlimited and let the > cinder volume quota cap it for the project as it does today. Then admins > can tune it as needed. > -- Thanks, Matt From rleander at redhat.com Thu Jun 7 18:24:07 2018 From: rleander at redhat.com (Rain Leander) Date: Thu, 7 Jun 2018 20:24:07 +0200 Subject: [openstack-dev] [RDO] Rocky Milestone 2 RDO Test Days June 14-15 Message-ID: Who’s up for a rematch?- Rocky Milestone 2 is here and we’re ready to rumble! 
Join RDO [0] on June 14 & 15 for an awesome time of taking down bugs and fighting errors in the most recent release [1].- We certainly won’t be pulling any punches. Want to get in on the action? We’re looking for developers, users, operators, quality engineers, writers, and, yes, YOU. If you’re reading this, we think you’re a champion and we want your help! We’ll have packages for the following platforms: * RHEL 7 * CentOS 7 You’ll want a fresh install with latest updates installed so that there’s no hard-to-reproduce interactions with other things. We’ll be collecting feedback, writing up tickets, filing bugs, and answering questions. Even if you only have a few hours to spare, we’d love your help taking this new version for a spin to work out any kinks. Not only will this help identify issues early in the development process, but you can be the one of the first to cut your teeth on the latest versions of your favorite deployment methods like TripleO, PackStack, and Kolla. Interested? We’ll be gathering on #rdo (on Freenode IRC) for any associated questions/discussion, and working through the “Does it work?” tests [2]. As Rocky said, “The world ain’t all sunshine and rainbows,” but with your help, we can keep moving forward and make the RDO world better for those around us. Hope to see you on the 14th & 15th! [0] https://www.rdoproject.org/ [1] https://www.rdoproject.org/testday/ [2] https://www.rdoproject.org/testday/tests/ -- K Rain Leander OpenStack Community Liaison Open Source and Standards Team https://www.rdoproject.org/ http://community.redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Jun 7 18:28:02 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Jun 2018 14:28:02 -0400 Subject: [openstack-dev] [tc][ptl] team, SIG, and working group liaisons Message-ID: <1528395799-sup-4077@lrrr.local> As we discussed in today's office hours, I have set up some space in the wiki for us to track which TC members are volunteering to act as liaison to the teams and other groups within the community to ensure they have the assistance and support they need from the TC. https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Liaisons For the first round, please sign up for groups you are interested in helping. We will work out some sort of assignment system for the rest so we have good coverage. The list is quite long, so I don't expect everyone to be checking in with the groups weekly. But we do need to get a handle on where things stand now, and work out a way to keep up to date over time. My hope is that by dividing the work up, we won't *all* have to be tracking all of the groups and we won't let anyone slip through the cracks. Doug From jaypipes at gmail.com Thu Jun 7 18:54:54 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 7 Jun 2018 14:54:54 -0400 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> Message-ID: <99d90de9-74b3-76d4-4320-5ce10a411234@gmail.com> On 06/07/2018 01:56 PM, melanie witt wrote: > Hello Stackers, > > Recently, we've received interest about increasing the maximum number of > allowed volumes to attach to a single instance > 26. The limit of 26 is > because of a historical limitation in libvirt (if I remember correctly) > and is no longer limited at the libvirt level in the present day. 
So, > we're looking at providing a way to attach more than 26 volumes to a > single instance and we want your feedback. > > We'd like to hear from operators and users about their use cases for > wanting to be able to attach a large number of volumes to a single > instance. If you could share your use cases, it would help us greatly in > moving forward with an approach for increasing the maximum. > > Some ideas that have been discussed so far include: > > A) Selecting a new, higher maximum that still yields reasonable > performance on a single compute host (64 or 128, for example). Pros: > helps prevent the potential for poor performance on a compute host from > attaching too many volumes. Cons: doesn't let anyone opt-in to a higher > maximum if their environment can handle it. > > B) Creating a config option to let operators choose how many volumes > allowed to attach to a single instance. Pros: lets operators opt-in to a > maximum that works in their environment. Cons: it's not discoverable for > those calling the API. > > C) Create a configurable API limit for maximum number of volumes to > attach to a single instance that is either a quota or similar to a > quota. Pros: lets operators opt-in to a maximum that works in their > environment. Cons: it's yet another quota? If Cinder tracks volume attachments as consumable resources, then this would be my preference. Best, -jay From chris.friesen at windriver.com Thu Jun 7 19:33:27 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 7 Jun 2018 13:33:27 -0600 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> Message-ID: <5B198887.6050204@windriver.com> On 06/07/2018 12:07 PM, Matt Riedemann wrote: > On 6/7/2018 12:56 PM, melanie witt wrote: >> C) Create a configurable API limit for maximum number of volumes to attach to >> a single instance that is either a quota or similar to a quota. Pros: lets >> operators opt-in to a maximum that works in their environment. Cons: it's yet >> another quota? > > This seems the most reasonable to me if we're going to do this, but I'm probably > in the minority. Yes more quota limits sucks, but it's (1) discoverable by API > users and therefore (2) interoperable. Quota seems like kind of a blunt instrument, since it might not make sense for a little single-vCPU guest to get the same number of connections as a massive guest with many dedicated vCPUs. (Since you can fit many more of the former on a given compute node.) If what we care about is the number of connections per compute node it almost feels like a resource that should be tracked...but you wouldn't want to have one instance consume all of the connections on the node so you're back to needing a per-instance limit of some sort. Chris From zigo at debian.org Thu Jun 7 20:17:11 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 7 Jun 2018 22:17:11 +0200 Subject: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition In-Reply-To: <1528313781-sup-5000@lrrr.local> References: <1528313781-sup-5000@lrrr.local> Message-ID: On 06/06/2018 10:04 PM, Doug Hellmann wrote: > I have started submitting a series of patches to fix up the tox.ini > settings for projects as a step towards running "python3 first" > [1]. 
The point of doing this now is to give teams a head start on > understanding the work involved as we consider whether to make this > a community goal. > > The current patches are all mechanically generated changes to the > basepython value for environments that seem to be likely candidates. > They're basically the "easy" part of the transition. I've left any > changes that will need more discussion alone for now. > > In particular, I've skipped over any tox environments with "functional" > in the name, since I thought those ran functional tests. Teams will > need to decide whether to change those job definitions, or duplicate > them and run them under python 2 and 3. Since we are not dropping > python 2 support until the U cycle, I suggest going ahead and running > the jobs twice. > > Note that changing the tox settings won't actually change some of the > jobs. For example, with our current PTI definition, the documentation > and releasenotes jobs do not run under tox. That means those will need > to be changed by editing the zuul configuration for the repository. > > I have started to make notes for tracking the work in > https://etherpad.openstack.org/p/python3-first -- including some notes > about taking the next step to update the zuul job definitions and common > issues we've already encountered to help folks debug job failures. > > I could use some help keeping an eye on these changes and getting > them through the gate. If you are interested in helping, please > leave a comment on the review you are willing to shepherd. > > Doug > > [1] https://review.openstack.org/#/q/topic:python3-first+(status:open+OR+status:merged) tl;dr: all projects must move their functional tests under Py3, uwsgi and SSL, otherwise there's undetected issues. Doug, Attempting to make puppet-openstack fully work for Debian (more on this soon as hopefully I'll be able to announce soon the general availability and success with testing), and having all Debian packages running in Python 3, what I have discovered is that there's a general issue when running tests: they do not take into account the fact that we cannot run with Eventlet, because Python 3 + Eventlet + SSL = handshake crash. In other words, all projects need to run their API functional tests using either mod_wsgi or uwsgi (and more likely the later), using Python 3 *AND* SSL, as this is how operators will run. Unfortunately, this is far from being the case at the moment, and I have been able to find defects in at least 3 services, and sometimes, helped, or contributed patches myself. More in details, projects after projects, just so you understand what I mean. No finger pointing here, these are just to illustrate my argumentation and show the kind of problems we're having. sahara-api looks completely broken under Python 3 (ie: error 500 on each GET requests), unless this tiny 3 liner patch is applied https://review.openstack.org/#/c/573179/ (hopefully, a better patch will happen upstream, I'm just a package maintainer not having enough time to get involved deep enough in each individual projects...) Cinder-volume fails to load with Ceph driver if using Luminous and Python 3 unless this patch is applied: https://review.openstack.org/#/c/568813/ neutron-server, under Python 3 and SSL, has to be transformed into neutron-api served over uwsgi (probably also works using mod_wsgi). In such case, you must run neutron-rpc-server to handle messages from the bus. 
But then, the rpc-server doesn't understand plugin modules unless this patch is added: https://review.openstack.org/555608 and also, firewall rules aren't received by the openvswitch-agent (and therefore, security group rules aren't being applied to iptables) unless this patch is reverted: https://review.openstack.org/434678 and then it works for openvswitch agent, though for linuxbridge, I still couldn't ping floating IPs (I'm still investigating the problem, I'm not sure yet if it's a routing problem or what...). And that's not even over, as it looks like tempest's scenario test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops is still broken with openvswitch for me: http://logs.openstack.org/28/564328/48/check/puppet-openstack-integration-4-scenario001-tempest-debian-stable/5e64694/job-output.txt.gz#_2018-06-07_12_42_50_760463 Help is desperately wanted here as I spent a really long time on this already, it's still failing, and I don't understand what's going on! All this to say, the more general issue here is that projects, while unit testing under Python 3, aren't running functional tests the same way that operators will (ie: with SSL and uwsgi/mod_wsgi), and because of that, some problems aren't being detected. Just a hint, to run something like neutron-api with ipv4, 6 and SSL, you'd run it this way: /usr/bin/uwsgi_python35 --https-socket [::]:9696,/usr/local/share/ca-certificates/puppet_openstack.crt,/etc/neutron/ssl/private/debian-stretch-ovh-gra1-0004341140.pem --ini /etc/neutron/neutron-api-uwsgi.ini --pyargv "--config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file=/etc/neutron/fwaas_driver.ini" Notice the [::] meaning ipv4 AND ipv6 at the same time, and how --pyargv needs a quoted list of arguments. Hopefully, this is slowly being addressed, though I would very much like if there was a general move to using uwsgi, preferably with SSL and ipv6 turned on and tested. It'd be also nice if projects could ship the uwsgi ini file that they use for functional tests, as it looks like it sometimes depend on the project (for example, I had to add rem-header: content-length for nova-placement-api to avoid connection reset by peer, but I'm not sure if it even is harmful for other projects). Here's an example from the Debian packaging: https://salsa.debian.org/openstack-team/services/neutron/blob/debian/queens/debian/neutron-api-uwsgi.ini See how I'm deliberately *not* setting "pyargv" there, and prefer it to be set by the init script / systemd service (so that it can be dynamic and load the configuration file needed for the activated plugin), and that http-socket / https-socket is also dynamic, ie: --https-socket is used on the command line if a pair of certificate + private key is found on the hard disk under /etc/neutron/ssl. See https://salsa.debian.org/openstack-team/services/neutron/blob/debian/queens/debian/neutron-api.init.in and https://salsa.debian.org/openstack-team/debian/openstack-pkg-tools/tree/debian/queens/init-template to understand how it's built. 
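For readers who do not want to chase the links, a minimal sketch of what such an ini file tends to contain (illustrative only -- this is not a copy of the Debian file linked above, and the wsgi-file path and worker counts are assumptions) would be along these lines:

  [uwsgi]
  master = true
  processes = 4
  threads = 2
  enable-threads = true
  lazy-apps = true
  die-on-term = true
  buffer-size = 65535
  # WSGI entry point shipped by the package (exact path is an assumption)
  wsgi-file = /usr/bin/neutron-api
  # no "pyargv" and no "http-socket"/"https-socket" here: both are passed
  # on the command line, so the init script can switch between plain HTTP
  # and --https-socket depending on whether certificates are present

The point is only the split: options that never change live in the ini file, while anything host-dependent (sockets, certificates, plugin configuration files) stays on the command line.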
Hoping that these packaging-related insights are helpful for the project at large, cheers, Thomas Goirand (zigo) From zigo at debian.org Thu Jun 7 20:25:47 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 7 Jun 2018 22:25:47 +0200 Subject: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition In-Reply-To: <5a1f84c9-8037-71d3-2611-33909099f842@gmx.com> References: <1528313781-sup-5000@lrrr.local> <5a1f84c9-8037-71d3-2611-33909099f842@gmx.com> Message-ID: On 06/06/2018 10:14 PM, Sean McGinnis wrote: > On 06/06/2018 03:04 PM, Doug Hellmann wrote: >> I have started submitting a series of patches to fix up the tox.ini >> settings for projects as a step towards running "python3 first" >> [1]. The point of doing this now is to give teams a head start on >> understanding the work involved as we consider whether to make this >> a community goal. > > I would ask that you stop. > > While I think this is useful as a quick way of finding out which projects > will require additional work here and which don't, this is just creating > a lot of work and overlap. > > Some teams are not ready to take this on right now. So unless you are > planning on actually following through with making the failing ones work, > it is just adding to the set of failing patches in their review queue. > > Other teams are already working on this and working through the failures > due to the differences between python 2 and 3. So these just end up being > duplication and a distraction for limited review capacity. Sean, Reading these words is very much disappointing to me. I very much like the coordination work that Doug is engaging into, and it'd be very frustrating if some projects were refusing to clean their technical debt regarding Python 3 support. As I wrote in my mail to Doug, the biggest issue I've seen is projects not really setting their functional tests under real world conditions. Anyone pushing in that direction should be warmly welcomed At some point, this should be a top priority so we all get rid of this transition work once and for all, without having that one annoying project that is still lagging behind (see how I'm *not* naming anyone on purpose...). If you're not happy about the way Doug is doing, just make it happen the way you prefer. As long as it's done soon, everyone will be happy. Cheers, Thomas Goirand (zigo) From kennelson11 at gmail.com Thu Jun 7 20:25:57 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 7 Jun 2018 13:25:57 -0700 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <20180604180742.GA6404@sm-xps> References: <20180604180742.GA6404@sm-xps> Message-ID: Hello :) I think that these two goals definitely fit the criteria we discussed in Vancouver during the S Release Goal Forum Session. I know Storyboard Migration was also mentioned after I had to dip out to another session so I wanted to follow up on that. I know it doesn't fit the shiny user facing docket that was discussed at the Forum, but I do think its time we make migration official in some capacity as a release goal or some other way. Having migrated Ironic and having TripleO on the schedule for migration (as requested during the last goal discussion) in addition to having migrated Heat, Barbican and several others in the last few months we have reached the point that I think migration of the rest of the projects is attainable by the end of Stein. Thoughts? 
-Kendall (diablo_rojo) On Mon, Jun 4, 2018 at 11:08 AM Sean McGinnis wrote: > Hi everyone, > > This is to continue the discussion of the goal selection for the Stein > release. > I had previously sent out a recap of our discussion at the Forum here: > > http://lists.openstack.org/pipermail/openstack-dev/2018-May/130999.html > > Now we need to actually narrow things down and pick a couple goals. > > Strawman > ======== > > Just to set a starting point for debate, I propose the following two as > goals > for Stein: > > - Cold Upgade Support > - Python 3 first > > As a reminder of other ideas, here is the link to the backlog of goal ideas > we've kept so far: > > https://etherpad.openstack.org/p/community-goals > > Feel free to add more to that list, and if you have been involved in any > of the > things that have been completed there, please remove things you don't think > should still be there. > > This is by no means an exhaustive list of what we could or should do for > goals. > > With that, I'll go over the choices that I've proposed for the strawman. > > Python 3 First > ============== > > One of the things brought up in the session was picking things that bring > excitement and are obvious benefits to deployers and users of OpenStack > services. While this one is maybe not as immediately obvious, I think this > is something that will end up helping deployers and also falls into the > tech > debt reduction category that will help us move quicker long term. > > Python 2 is going away soon, so I think we need something to help compel > folks > to work on making sure we are ready to transition. This will also be a good > point to help switch the mindset over to Python 3 being the default used > everywhere, with our Python 2 compatibility being just to continue legacy > support. > > Cold Upgrade Support > ==================== > > The other suggestion in the Forum session related to upgrades was the > addition > of "upgrade check" CLIs for each project, and I was tempted to suggest > that as > my second strawman choice. For some projects that would be a very minimal > or > NOOP check, so it would probably be easy to complete the goal. But > ultimately > what I think would bring the most value would be the work on supporting > cold > upgrade, even if it will be more of a stretch for some projects to > accomplish. > > Upgrades have been a major focus of discussion lately, especially as our > operators have been trying to get closer to the latest work upstream. This > has > been an ongoing challenge. > > There has also been a lot of talk about LTS releases. We've landed on fast > forward upgrade to get between several releases, but I think improving > upgrades > eases the way both for easier and more frequent upgrades and also getting > to > the point some day where maybe we can think about upgrading over several > releases to be able to do something like an LTS to LTS upgrade. > > Neither one of these upgrade goals really has a clearly defined plan that > projects can pick up now and start working on, but I think with those > involved > in these areas we should be able to come up with a perscriptive plan for > projects to follow. > > And it would really move our fast forward upgrade story forward. > > Next Steps > ========== > > I'm hoping with a strawman proposal we have a basis for debating the > merits of > these and getting closer to being able to officially select Stein goals. 
We > still have some time, but I would like to avoid making late-cycle > selections so > teams can start planning ahead for what will need to be done in Stein. > > Please feel free to promote other ideas for goals. That would be a good > way for > us to weigh the pro's and con's between these and whatever else you have in > mind. Then hopefully we can come to some consensus and work towards clearly > defining what needs to be done and getting things well documented for > teams to > pick up as soon as they wrap up Rocky (or sooner). > > --- > Sean (smcginnis) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu Jun 7 20:53:31 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 7 Jun 2018 22:53:31 +0200 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: References: <20180604180742.GA6404@sm-xps> Message-ID: On 06/04/2018 08:59 PM, Ivan Kolodyazhny wrote: > I hope we'll have Ubuntu 18.04 LTS on our gates for this activity soon. > It becomes > important not only for developers but for operators and vendors too. By the time the project will be gating on Python 3.6, most likely there's going to be 3.7 or even 3.8 in Debian Sid, and I'll get all the broken stuff alone again... Can't we try to get Sid in the gate, at least in non-voting mode, so we get to see problems early rather than late? As developers, we should always aim for the future, and Bionic should already be considered the past release to maintain, rather than the one to focus on. If we can't get Sid, then at least should we consider the non-LTS (always latest) Ubuntu releases? Cheers, Thomas Goirand (zigo) From mriedemos at gmail.com Thu Jun 7 21:17:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Jun 2018 16:17:06 -0500 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <99d90de9-74b3-76d4-4320-5ce10a411234@gmail.com> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> <99d90de9-74b3-76d4-4320-5ce10a411234@gmail.com> Message-ID: <41e61eee-c2f5-589f-6f1a-89e82e1eb6c6@gmail.com> On 6/7/2018 1:54 PM, Jay Pipes wrote: > > If Cinder tracks volume attachments as consumable resources, then this > would be my preference. Cinder does: https://developer.openstack.org/api-ref/block-storage/v3/#attachments However, there is no limit in Cinder on those as far as I know. -- Thanks, Matt From doug at doughellmann.com Thu Jun 7 21:19:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 07 Jun 2018 17:19:38 -0400 Subject: [openstack-dev] [tc][ptl][python3][help-wanted] starting work on "python 3 first" transition In-Reply-To: References: <1528313781-sup-5000@lrrr.local> Message-ID: <1528406225-sup-4923@lrrr.local> Excerpts from Thomas Goirand's message of 2018-06-07 22:17:11 +0200: > On 06/06/2018 10:04 PM, Doug Hellmann wrote: > > I have started submitting a series of patches to fix up the tox.ini > > settings for projects as a step towards running "python3 first" > > [1]. The point of doing this now is to give teams a head start on > > understanding the work involved as we consider whether to make this > > a community goal. 
> > > > The current patches are all mechanically generated changes to the > > basepython value for environments that seem to be likely candidates. > > They're basically the "easy" part of the transition. I've left any > > changes that will need more discussion alone for now. > > > > In particular, I've skipped over any tox environments with "functional" > > in the name, since I thought those ran functional tests. Teams will > > need to decide whether to change those job definitions, or duplicate > > them and run them under python 2 and 3. Since we are not dropping > > python 2 support until the U cycle, I suggest going ahead and running > > the jobs twice. > > > > Note that changing the tox settings won't actually change some of the > > jobs. For example, with our current PTI definition, the documentation > > and releasenotes jobs do not run under tox. That means those will need > > to be changed by editing the zuul configuration for the repository. > > > > I have started to make notes for tracking the work in > > https://etherpad.openstack.org/p/python3-first -- including some notes > > about taking the next step to update the zuul job definitions and common > > issues we've already encountered to help folks debug job failures. > > > > I could use some help keeping an eye on these changes and getting > > them through the gate. If you are interested in helping, please > > leave a comment on the review you are willing to shepherd. > > > > Doug > > > > [1] https://review.openstack.org/#/q/topic:python3-first+(status:open+OR+status:merged) > > tl;dr: all projects must move their functional tests under Py3, uwsgi > and SSL, otherwise there's undetected issues. > > Doug, > > Attempting to make puppet-openstack fully work for Debian (more on this > soon as hopefully I'll be able to announce soon the general availability > and success with testing), and having all Debian packages running in > Python 3, what I have discovered is that there's a general issue when > running tests: they do not take into account the fact that we cannot run > with Eventlet, because Python 3 + Eventlet + SSL = handshake crash. > > In other words, all projects need to run their API functional tests > using either mod_wsgi or uwsgi (and more likely the later), using Python > 3 *AND* SSL, as this is how operators will run. Unfortunately, this is > far from being the case at the moment, and I have been able to find > defects in at least 3 services, and sometimes, helped, or contributed > patches myself. > > More in details, projects after projects, just so you understand what I > mean. No finger pointing here, these are just to illustrate my > argumentation and show the kind of problems we're having. > > sahara-api looks completely broken under Python 3 (ie: error 500 on each > GET requests), unless this tiny 3 liner patch is applied > https://review.openstack.org/#/c/573179/ (hopefully, a better patch will > happen upstream, I'm just a package maintainer not having enough time to > get involved deep enough in each individual projects...) > > Cinder-volume fails to load with Ceph driver if using Luminous and > Python 3 unless this patch is applied: > https://review.openstack.org/#/c/568813/ > > neutron-server, under Python 3 and SSL, has to be transformed into > neutron-api served over uwsgi (probably also works using mod_wsgi). In > such case, you must run neutron-rpc-server to handle messages from the > bus. 
But then, the rpc-server doesn't understand plugin modules unless > this patch is added: https://review.openstack.org/555608 and also, > firewall rules aren't received by the openvswitch-agent (and therefore, > security group rules aren't being applied to iptables) unless this patch > is reverted: https://review.openstack.org/434678 and then it works for > openvswitch agent, though for linuxbridge, I still couldn't ping > floating IPs (I'm still investigating the problem, I'm not sure yet if > it's a routing problem or what...). And that's not even over, as it > looks like tempest's scenario > test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops is > still broken with openvswitch for me: > http://logs.openstack.org/28/564328/48/check/puppet-openstack-integration-4-scenario001-tempest-debian-stable/5e64694/job-output.txt.gz#_2018-06-07_12_42_50_760463 > Help is desperately wanted here as I spent a really long time on this > already, it's still failing, and I don't understand what's going on! > > All this to say, the more general issue here is that projects, while > unit testing under Python 3, aren't running functional tests the same > way that operators will (ie: with SSL and uwsgi/mod_wsgi), and because > of that, some problems aren't being detected. Just a hint, to run > something like neutron-api with ipv4, 6 and SSL, you'd run it this way: > > /usr/bin/uwsgi_python35 --https-socket > [::]:9696,/usr/local/share/ca-certificates/puppet_openstack.crt,/etc/neutron/ssl/private/debian-stretch-ovh-gra1-0004341140.pem > --ini /etc/neutron/neutron-api-uwsgi.ini --pyargv > "--config-file=/etc/neutron/neutron.conf > --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini > --config-file=/etc/neutron/fwaas_driver.ini" > > Notice the [::] meaning ipv4 AND ipv6 at the same time, and how --pyargv > needs a quoted list of arguments. > > Hopefully, this is slowly being addressed, though I would very much like > if there was a general move to using uwsgi, preferably with SSL and ipv6 > turned on and tested. > > It'd be also nice if projects could ship the uwsgi ini file that they > use for functional tests, as it looks like it sometimes depend on the > project (for example, I had to add rem-header: content-length for > nova-placement-api to avoid connection reset by peer, but I'm not sure > if it even is harmful for other projects). Here's an example from the > Debian packaging: > > https://salsa.debian.org/openstack-team/services/neutron/blob/debian/queens/debian/neutron-api-uwsgi.ini > > See how I'm deliberately *not* setting "pyargv" there, and prefer it to > be set by the init script / systemd service (so that it can be dynamic > and load the configuration file needed for the activated plugin), and > that http-socket / https-socket is also dynamic, ie: --https-socket is > used on the command line if a pair of certificate + private key is found > on the hard disk under /etc/neutron/ssl. See > https://salsa.debian.org/openstack-team/services/neutron/blob/debian/queens/debian/neutron-api.init.in > and > https://salsa.debian.org/openstack-team/debian/openstack-pkg-tools/tree/debian/queens/init-template > to understand how it's built. > > Hoping that these packaging-related insights are helpful for the project > at large, cheers, > > Thomas Goirand (zigo) > Thanks for all of these details, Thomas, I will work on summarizing these notes in https://etherpad.openstack.org/p/python3-first so project teams can consider which parts relate to their services. 
Doug From cboylan at sapwetik.org Thu Jun 7 21:19:41 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 07 Jun 2018 14:19:41 -0700 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: References: <20180604180742.GA6404@sm-xps> Message-ID: <1528406381.3803675.1400278592.4E27ED17@webmail.messagingengine.com> On Thu, Jun 7, 2018, at 1:53 PM, Thomas Goirand wrote: > On 06/04/2018 08:59 PM, Ivan Kolodyazhny wrote: > > I hope we'll have Ubuntu 18.04 LTS on our gates for this activity soon. > > It becomes > > important not only for developers but for operators and vendors too. > > By the time the project will be gating on Python 3.6, most likely > there's going to be 3.7 or even 3.8 in Debian Sid, and I'll get all the > broken stuff alone again... Can't we try to get Sid in the gate, at > least in non-voting mode, so we get to see problems early rather than > late? As developers, we should always aim for the future, and Bionic > should already be considered the past release to maintain, rather than > the one to focus on. If we can't get Sid, then at least should we > consider the non-LTS (always latest) Ubuntu releases? We stopped following latest Ubuntu when they dropped non LTS support to 9 months. What we do have are suse tumbleweed images which should get us brand new everything in a rolling fashion. If people are interested in this type of work I'd probably start there. Clark From fungi at yuggoth.org Thu Jun 7 21:33:15 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 7 Jun 2018 21:33:15 +0000 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <1528406381.3803675.1400278592.4E27ED17@webmail.messagingengine.com> References: <20180604180742.GA6404@sm-xps> <1528406381.3803675.1400278592.4E27ED17@webmail.messagingengine.com> Message-ID: <20180607213314.6yyhy3lzgu35y4r6@yuggoth.org> On 2018-06-07 14:19:41 -0700 (-0700), Clark Boylan wrote: [...] > We stopped following latest Ubuntu when they dropped non LTS > support to 9 months. What we do have are suse tumbleweed images > which should get us brand new everything in a rolling fashion. If > people are interested in this type of work I'd probably start > there. Though following up on the work already started to get debian-sid images wouldn't hurt. I know people think Sid is constantly broken, but I run it on my workstation and all my portable devices (and have for a couple decades) so I'm fairly confident the minimal install image build isn't likely to actually break with any significant frequency once we get it working. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Thu Jun 7 22:31:11 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 07 Jun 2018 15:31:11 -0700 Subject: [openstack-dev] [infra][all] Upcoming Zuul behavior change for files and irrelevant-files Message-ID: <874lie1b68.fsf@meyer.lemoncheese.net> Hi, Earlier[1][2], we discussed proposals to make files and irrelevant-files easier to use -- particularly ways to make them overridable. We settled on an approach, and it is now implemented. We plan on upgrading OpenStack's Zuul to the new behavior on Monday, June 11, 2018. To summarize the change: Files and irrelevant-files are treated as overwriteable attributes and evaluated after branch-matching variants are combined. 
* Files and irrelevant-files are overwritten, so the last value encountered when combining all the matching variants (looking only at branches) wins. * It's possible to both reduce and expand the scope of jobs, but the user may need to manually copy values from a parent or other variant in order to do so. * It will no longer be possible to alter a job attribute by adding a variant with only a files matcher -- in all cases files and irrelevant-files are used solely to determine whether the job is run, not to determine whether to apply a variant. This is a behavior change to Zuul that is not possible[3] to support in a backwards compatible way. That means that on Monday, there may be sudden alterations to the set of jobs which run on changes. Considering that many of us can barely predict what happens at all when multiple irrelevant-files stanzas enter the picture, it's not possible[4] to say in advance exactly what the changes will be. Suffice it to say that, on Monday, if some jobs you were expecting to run on a change don't, or some jobs you were not expecting to run do, then you will need to alter the files or irrelevant-files matchers on those jobs. Hopefully the new approach is sufficiently intuitive that corrective changes will be simple to make. Jobs which have no more than one files or irrelevant-files attribute involved in their construction (likely the bulk of the jobs out there) are unlikely to need any immediate changes. Please let us know in #openstack-infra if you encounter any problems and we'll be happy to help. Hopefully after we cross this speedbump we'll find the files and irrelevant-files matchers much more useful. -Jim [1] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130074.html [2] http://lists.zuul-ci.org/pipermail/zuul-discuss/2018-May/000397.html [3] At least, not possible with a reasonable amount of effort. [4] Of course it's possible but only with an unhealthy amount of beer. From mriedemos at gmail.com Thu Jun 7 22:48:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Jun 2018 17:48:50 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: References: <20180604180742.GA6404@sm-xps> Message-ID: <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> On 6/7/2018 3:25 PM, Kendall Nelson wrote: > I know it doesn't fit the shiny user facing docket that was discussed at > the Forum, but I do think its time we make migration official in some > capacity as a release goal or some other way. Having migrated Ironic and > having TripleO on the schedule for migration (as requested during the > last goal discussion) in addition to having migrated Heat, Barbican and > several others in the last few months we have reached the point that I > think migration of the rest of the projects is attainable by the end of > Stein. > > Thoughts? I haven't used it much, but it would be really nice if someone could record a modern 'how to storyboard' video for just basic usage/flows since most people are used to launchpad by now so dealing with an entirely new task tracker is not trivial (or at least, not something I want to spend a lot of time figuring out). I found: https://www.youtube.com/watch?v=b2vJ9G5pNb4 https://www.youtube.com/watch?v=n_PaKuN4Skk But those are a bit old. 
-- Thanks, Matt From jsbryant at electronicjungle.net Fri Jun 8 00:25:02 2018 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Thu, 7 Jun 2018 19:25:02 -0500 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <41e61eee-c2f5-589f-6f1a-89e82e1eb6c6@gmail.com> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> <99d90de9-74b3-76d4-4320-5ce10a411234@gmail.com> <41e61eee-c2f5-589f-6f1a-89e82e1eb6c6@gmail.com> Message-ID: On Thu, Jun 7, 2018, 4:17 PM Matt Riedemann wrote: > On 6/7/2018 1:54 PM, Jay Pipes wrote: > > > > If Cinder tracks volume attachments as consumable resources, then this > > would be my preference. > > Cinder does: > > https://developer.openstack.org/api-ref/block-storage/v3/#attachments > > However, there is no limit in Cinder on those as far as I know. > > There is no limit as we have no idea what to limit at. > There is no limit as we don't know what to limit at. Could depend on the host, the protocol or the backend. Also that is counting attachments for a volume. I don't think that helps us determine how many attachments a host had without additional work. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Jun 8 01:30:15 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 7 Jun 2018 21:30:15 -0400 Subject: [openstack-dev] [glance] bug smash today Message-ID: Today (Friday, 8 June UTC time) is our Rocky-2 bug smash. Hang out in #openstack-glance and keep the etherpad updated: https://etherpad.openstack.org/p/glance-rocky-bug-smash cheers, brian From mriedemos at gmail.com Fri Jun 8 01:35:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 7 Jun 2018 20:35:23 -0500 Subject: [openstack-dev] [nova][osc] Documenting compute API microversion gaps in OSC Message-ID: I've started an etherpad [1] to identify the compute API microversion gaps in python-openstackclient. It's a small start right now so I would appreciate some help on this, even just a few people looking at a couple of these per day would get it done quickly. Not all compute API microversions will require explicit changes to OSC, for example 2.3 [2] just adds some more fields to some API responses which might automatically get dumped in "show" commands. We just need to verify that the fields that come back in the response are actually shown by the CLI and then mark it in the etherpad. Once we identify the gaps, we can start talking about actually closing those gaps and deprecating the nova CLI, which could be part of a community wide goal - but there are other things going on in OSC right now (major refactor to use the SDK, core reviewer needs) so we'll have to figure out when the time is right. 
[1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc [2] https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-kilo -- Thanks, Matt From rico.lin.guanyu at gmail.com Fri Jun 8 06:40:36 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 8 Jun 2018 14:40:36 +0800 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker In-Reply-To: References: Message-ID: Thanks, Zane for putting this up. It's a great service to expose infrastructure to application, and a potential cross-community works as well. > > Would you be interested in working on a new project to implement this > integration? Reply to this thread and let's collect a list of volunteers > to form the initial core review team. > Glad to help > I'd prefer to go with the pure-Ansible autogenerated way so we can have > support for everything, but looking at the GCP[5]/Azure[4]/AWS[3] > brokers they have 10, 11 and 17 services respectively, so arguably we > could get a comparable number of features exposed without investing > crazy amounts of time if we had to write templates explicitly. > If we going to generate another project to provide this service, I believe to use pure-Ansible will be a better option indeed. Once service gets stable, it's actually quite easy(at first glance) for Heat to adopt this (just create a service broker with our new service while creating a resource I believe?). Sounds like the use case of service broker might be when application request for a single resource exposed with Broker. And the resource dependency will be relatively simple. And we should just keep it simple and don't start thinking about who and how that application was created and keep the application out of dependency (I mean if the user likes to manage the total dependency, they can consider using heat with service broker once we integrated). -- May The Force of OpenStack Be With You, Rico Lin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergely.csatari at nokia.com Fri Jun 8 07:39:25 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Fri, 8 Jun 2018 07:39:25 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> References: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> Message-ID: Hi, Going inline. From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Thursday, June 7, 2018 2:24 PM I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): One Glance with multiple backends * In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? * i.e. GLANCE is a typical shared service in a multi-region environment ? [G0]: In my understanding yes. * If so, how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? [G0]: This is an open question for me also. Several Glances with an independent synchronization service (PUSH) * I refer to this as the PUSH model * I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images * i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs (making it independent of the particular Glance backend ... 
and allowing the Glance Backends at Central and Edge sites to actually be different)
[G0]: Okay, I can update the wiki to reflect this. Should we keep the “synchronization by the backend” option as another alternative?
* I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization
* i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites
[G0]: Yes, the question is how to define these synchronization policies.
* Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below)
[G0]: Yes, this is more or less my understanding. I removed the mixmatch reference from this chapter.
One Glance and multiple Glance API Servers (PULL)
* I refer to this as the PULL model
* This is the current model supported in StarlingX’s Distributed Cloud sub-project
* We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and
* We have glance-api set up for caching such that only the first access to a particular image incurs the latency of the image transfer from Central to Edge
[G0]: Do you do image caching in Glance API or do you rely on the image cache in Nova? In the Forum session there were some discussions about this and I think the conclusion was that using the image cache of Nova is enough.
* this PULL model effectively implements the location-aware synchronization you talk about below (i.e. synchronise images only to those cloud instances where they are needed)?
In StarlingX Distributed Cloud, we plan on supporting both the PUSH and PULL model ... suspect there are use cases for both.
[G0]: This means that you need an architecture supporting both. Just out of curiosity, what is the use case for the pull model once you have the push model in place?
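As background for the caching question just above: turning on the cache in glance-api itself is mostly a configuration matter. A minimal sketch (illustrative values, not taken from StarlingX or any particular deployment) looks roughly like this in glance-api.conf:

  [DEFAULT]
  image_cache_dir = /var/lib/glance/image-cache
  image_cache_max_size = 10737418240

  [paste_deploy]
  # the "cachemanagement" flavor adds the caching middleware (and the
  # cache-manage API) to the glance-api pipeline
  flavor = keystone+cachemanagement

With that in place, the first GET of an image at an edge site populates the local cache and later requests are served locally; note that image_cache_max_size is only enforced when the glance-cache-pruner job runs.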
[1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Fri Jun 8 07:40:28 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 8 Jun 2018 15:40:28 +0800 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: References: <20180604180742.GA6404@sm-xps> Message-ID: Kendall Nelson 於 2018年6月8日 週五 上午4:26寫道: > > I think that these two goals definitely fit the criteria we discussed in Vancouver during the S Release Goal Forum Session. I know Storyboard Migration was also mentioned after I had to dip out to another session so I wanted to follow up on that. > +1. To migrate to StoryBoard do seems like a good way to go. Heat just moved to StoryBoard, so there is no much long-term running experiences to share about, but it does look like a good way to target the piece which we been missing of. A workflow to connect users, ops, and developers (within Launchpad, we only care about bugs, and what generate that bug? well...we don't care). With Story + Task-oriented things can change (To me this is shiny). For migrate experience, the migration is quick, so if there is no project really really only can survive with Launchpad, I think there is no blocker for this goal. Also, it's quite convenient to target your story with your old bug, since your story id is your bug id. Since it might be difficult for all project directly migrated to it, IMO we should at least have a potential goal for T release (or a long-term goal for Stein?). Or we can directly set this as a Stein goal as well. Why? Because of the very first Story ID actually started from 2000000(and as I mentioned, after migrating, your story id is exactly your bug id ). So once we generate bug with ID 2000000, things will become interesting (and hard to migrate). Current is 1775759, so one or two years I guess? To interpreted `might be difficult` above, The overall experience is great, some small things should get improve: - I can't tell if current story is already reported or not. There is no way to filter stories and checking conflict if there is. - Things going slow if we try to use Board in StoryBoard to filter out a great number of stories (like when I need to see all `High Priority` tagged stories) - Needs better documentation, In Heat we create an Etherpad to describe and collect Q&A on how people can better adopt StoryBoard. It will be great if teams can directly get this information. Overall, I think this is a nice goal, and it's actually painless to migrate. -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rico.lin.guanyu at gmail.com Fri Jun 8 07:44:10 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 8 Jun 2018 15:44:10 +0800 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> References: <20180604180742.GA6404@sm-xps> <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> Message-ID: Matt Riedemann 於 2018年6月8日 週五 上午6:49寫道: > I haven't used it much, but it would be really nice if someone could > record a modern 'how to storyboard' video for just basic usage/flows > since most people are used to launchpad by now so dealing with an > entirely new task tracker is not trivial (or at least, not something I > want to spend a lot of time figuring out). > > I found: > > https://www.youtube.com/watch?v=b2vJ9G5pNb4 > > https://www.youtube.com/watch?v=n_PaKuN4Skk > > But those are a bit old. > I create an Etherpad to collect Q&A on Migrate from Launchpad to StoryBoard for Heat (most information were general). Hope this helps https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Fri Jun 8 08:13:07 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 8 Jun 2018 16:13:07 +0800 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <20180604180742.GA6404@sm-xps> References: <20180604180742.GA6404@sm-xps> Message-ID: Sean McGinnis 於 2018年6月5日 週二 上午2:07寫道: > > Python 3 First > ============== > > One of the things brought up in the session was picking things that bring > excitement and are obvious benefits to deployers and users of OpenStack > services. While this one is maybe not as immediately obvious, I think this > is something that will end up helping deployers and also falls into the tech > debt reduction category that will help us move quicker long term. > > Python 2 is going away soon, so I think we need something to help compel folks > to work on making sure we are ready to transition. This will also be a good > point to help switch the mindset over to Python 3 being the default used > everywhere, with our Python 2 compatibility being just to continue legacy > support. > +1 on Python3 first goal I think it's great if we can start investigating how projects been doing with py3.5+. And to have a check job for py3.6 will be a nice start for this goal. If it's possible, our goal should match with what most users are facing. Mention 3.6 because of Ubuntu Artful and Fedora 26 use it by default (see [1] for more info). [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131193.html -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Fri Jun 8 08:16:01 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 8 Jun 2018 10:16:01 +0200 Subject: [openstack-dev] [nova][osc] Documenting compute API microversion gaps in OSC In-Reply-To: References: Message-ID: On Fri, Jun 8, 2018 at 3:35 AM, Matt Riedemann wrote: > I've started an etherpad [1] to identify the compute API microversion gaps > in python-openstackclient. > > It's a small start right now so I would appreciate some help on this, even > just a few people looking at a couple of these per day would get it done > quickly. > > Not all compute API microversions will require explicit changes to OSC, > for example 2.3 [2] just adds some more fields to some API responses which > might automatically get dumped in "show" commands. We just need to verify > that the fields that come back in the response are actually shown by the > CLI and then mark it in the etherpad. > > Once we identify the gaps, we can start talking about actually closing > those gaps and deprecating the nova CLI, which could be part of a community > wide goal - but there are other things going on in OSC right now (major > refactor to use the SDK, core reviewer needs) so we'll have to figure out > when the time is right. > > [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc > [2] https://docs.openstack.org/nova/latest/reference/api-microve > rsion-history.html#maximum-in-kilo > > Good idea, Matt. I think we could maybe discuss with the First Contact SIG because it looks to me some developers could help us for that, while it doesn't need to be a Nova expert. I'll also try to see how I can help on this. -Sylvain -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Fri Jun 8 09:35:45 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 8 Jun 2018 11:35:45 +0200 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> Message-ID: <20180608093545.GE11695@paraplu> On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote: > On 6/7/2018 12:56 PM, melanie witt wrote: > > Recently, we've received interest about increasing the maximum number of > > allowed volumes to attach to a single instance > 26. The limit of 26 is > > because of a historical limitation in libvirt (if I remember correctly) > > and is no longer limited at the libvirt level in the present day. So, > > we're looking at providing a way to attach more than 26 volumes to a > > single instance and we want your feedback. > > The 26 volumes thing is a libvirt driver restriction. The original limitation of 26 disks was because at that time there was no 'virtio-scsi'. (With 'virtio-scsi', each of its controller allows upto 256 targets, and each target can use any LUN (Logical Unit Number) from 0 to 16383 (inclusive). Therefore, the maxium allowable disks on a single 'virtio-scsi' controller is 256 * 16384 == 4194304.) Source[1]. [...] 
> > Some ideas that have been discussed so far include: > > > > A) Selecting a new, higher maximum that still yields reasonable > > performance on a single compute host (64 or 128, for example). Pros: > > helps prevent the potential for poor performance on a compute host from > > attaching too many volumes. Cons: doesn't let anyone opt-in to a higher > > maximum if their environment can handle it. Option (A) can still be considered: We can limit it to 256 disks. Why? FWIW, I did some digging here: The upstream libguestfs project after some thorough testing, arrived at a limit of 256 disks, and suggest the same for Nova. And if anyone wants to increase that limit, the proposer should come up with a fully worked through test plan. :-) (Try doing any meaningful I/O to so many disks at once, and see how well that works out.) What more, the libguestfs upstream tests 256 disks, and even _that_ fails sometimes: https://bugzilla.redhat.com/show_bug.cgi?id=1478201 -- "kernel runs out of memory with 256 virtio-scsi disks" The above bug is fixed now in kernel-4.17.0-0.rc3.git1.2. (And also required a corresponding fix in QEMU[2], which is available from version v2.11.0 onwards.) [...] [1] https://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg02823.html -- virtio-scsi limits [2] https://git.qemu.org/?p=qemu.git;a=commit;h=5c0919d -- /kashyap From Greg.Waines at windriver.com Fri Jun 8 11:45:55 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Fri, 8 Jun 2018 11:45:55 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: References: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> Message-ID: Responses in-lined below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" Date: Friday, June 8, 2018 at 3:39 AM To: Greg Waines , "openstack-dev at lists.openstack.org" , "edge-computing at lists.openstack.org" Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Going inline. From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Thursday, June 7, 2018 2:24 PM I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): One Glance with multiple backends * In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? * i.e. GLANCE is a typical shared service in a multi-region environment ? [G0]: In my understanding yes. * If so, how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? [G0]: This is an open question for me also. Several Glances with an independent synchronization service (PUSH) * I refer to this as the PUSH model * I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images * i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs (making it independent of the particular Glance backend ... and allowing the Glance Backends at Central and Edge sites to actually be different) [G0]: Okay, I can update the wiki to reflect this. Should we keep the “synchronization by the backend” option as an other alternative? [Greg] Yeah we should keep it as an alternative. * I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization * i.e. 
you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites [G0]: Yes, the question is how to define these synchronization policies. [Greg] Agreed ... we’ve had some very high-level discussions with end users, but haven’t put together a proposal yet. * Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) [G0]: Yes, this is more or less my understanding. I remove the mixmatch reference from this chapter. One Glance and multiple Glance API Servers (PULL) * I refer to this as the PULL model * This is the current model supported in StarlingX’s Distributed Cloud sub-project * We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and * We have glance-api setup for caching such that only the first access to an particular image incurs the latency of the image transfer from Central to Edge [G0]: Do you do image caching in Glance API or do you rely in the image cache in Nova? In the Forum session there were some discussions about this and I think the conclusion was that using the image cache of Nova is enough. [Greg] We enabled image caching in the Glance API. I believe that Nova Image Caching caches at the compute node ... this would work ok for all-in-one edge clouds or small edge clouds. But glance-api caching caches at the edge cloud level, so works better for large edge clouds with lots of compute nodes. * * this PULL model affectively implements the location aware synchronization you talk about below, (i.e. synchronise images only to those cloud instances where they are needed)? In StarlingX Distributed Cloud, We plan on supporting both the PUSH and PULL model ... suspect there are use cases for both. [G0]: This means that you need an architecture supporting both. Just for my curiosity what is the use case for the pull model once you have the push model in place? [Greg] The PULL model certainly results in the most efficient distribution of images ... basically images are distributed ONLY to edge clouds that explicitly use the image. Also if the use case is NOT concerned about incurring the latency of the image transfer from Central to Edge on the FIRST use of image then the PULL model could be preferred ... TBD. Here is the updated wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [Greg] Looks good. Greg. Thanks, Gerg0 From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 7, 2018 at 6:49 AM To: "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, I did some work ont he figures and realised, that I have some questions related to the alternative options: Multiple backends option: - What is the API between Glance and the Glance backends? - How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? - Is it possible to have different OpenStack versions in the different cloud instances? - Can a cloud instance use the locally synchronised images in case of a network connection break? - Is it possible to implement this without storing database credentials ont he edge cloud instances? Independent synchronisation service: - If I understood [1] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. is this true? 
As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [2]. [1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Fri Jun 8 12:31:56 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Fri, 8 Jun 2018 20:31:56 +0800 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: <1527869418-sup-3208@lrrr.local> <1527960022-sup-7990@lrrr.local> <1528148963-sup-59@lrrr.local> <0b6101d3fd8e$cc38bc50$64aa34f0$@gmail.com> Message-ID: IMO, the goal is that we try to encourage the good, not to get in the way to those who can't reach that goal. A tag is a good way to encourage, but it also not a fair way for those projects who barely got enough core member to review (Think about those projects got less than four active cores). Wondering if anyone got ideas on how we can reach that goal (tag can be a way, just IMO need to provide a fair condition to all). How about we set policy and document to encourage people to join core reviewer (this can join forces with the Enterprise guideline we plan in Forum) if they wish to provide diversity to project. On the second idea, I think TC (or people who powered by TC) should provide (or guidance project to provide) a health check report for projects. TCs have been looking for Liaisons with projects ([1]). This definitely is a good report as a feedback from projects to TC. (also a good way to understand what each project been doing and is that project need any help). So to provide a guideline for projects to understand how they can do better. Guideline means both -1 and +1 (for who running projects for long enough to be a core/PTL, should at least understand that -1 only means since this project is under TC's guidance, we just try to help.). Therefore a -1 is important. As an alternative, we can also try to target problem when it occurred, but personally wonder who as a single core reviewer in team dear to speak out in this case. I think this is a hard issue to do, but we have to pick one action from all actions and try to run and see. And it's better than keep things the way they are and ignore things. [1] https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Liaisons Michael Johnson 於 2018年6月7日 週四 上午2:48寫道: > Octavia also has an informal rule about two cores from the same > company merging patches. I support this because it makes sure we have > a diverse perspective on the patches. 
Specifically it has worked well > for us as all of the cores have different cloud designs, so it catches > anything that would limit/conflict with the different OpenStack > topologies. > > That said, we don't hard enforce this or police it, it is just an > informal policy to make sure we get input from the wider team. > Currently we only have one company with two cores. > > That said, my issue with the current diversity calculations is they > tend to be skewed by the PTL role. People have a tendency to defer to > the PTL to review/comment/merge patches, so if the PTL shares a > company with another core the diversity numbers get skewed heavily > towards that company. > > Michael > > On Wed, Jun 6, 2018 at 5:06 AM, wrote: > >> -----Original Message----- > >> From: Doug Hellmann > >> Sent: Monday, June 4, 2018 5:52 PM > >> To: openstack-dev > >> Subject: Re: [openstack-dev] [tc] Organizational diversity tag > >> > >> Excerpts from Zane Bitter's message of 2018-06-04 17:41:10 -0400: > >> > On 02/06/18 13:23, Doug Hellmann wrote: > >> > > Excerpts from Zane Bitter's message of 2018-06-01 15:19:46 -0400: > >> > >> On 01/06/18 12:18, Doug Hellmann wrote: > >> > > > >> > > [snip] > >> > Apparently enough people see it the way you described that this is > >> > probably not something we want to actively spread to other projects at > >> > the moment. > >> > >> I am still curious to know which teams have the policy. If it is more > >> widespread than I realized, maybe it's reasonable to extend it and use > it as > >> the basis for a health check after all. > >> > > > > A while back, Trove had this policy. When Rackspace, HP, and Tesora had > core reviewers, (at various times, eBay, IBM and Red Hat also had cores), > the agreement was that multiple cores from any one company would not merge > a change unless it was an emergency. It was not formally written down (to > my knowledge). > > > > It worked well, and ensured that the operators didn't get surprised by > some unexpected thing that took down their service. > > > > -amrith > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Fri Jun 8 13:17:46 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Fri, 8 Jun 2018 14:17:46 +0100 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: References: Message-ID: Hi, Answering inline. Best, Erno "jokke" Kuvaja On Thu, Jun 7, 2018 at 11:49 AM, Csatari, Gergely (Nokia - HU/Budapest) wrote: > Hi, > > > > I did some work ont he figures and realised, that I have some questions > related to the alternative options: > > > > Multiple backends option: > > What is the API between Glance and the Glance backends? 
glance_store library > How is it possible to implement location aware synchronisation (synchronise > images only to those cloud instances where they are needed)? This needs bit of hooking. We need to update the locations into Glance once the replication has happened. > Is it possible to have different OpenStack versions in the different cloud > instances? In my understanding it's not supported to mix versions within OpenStack cloud apart from during upgrade. > Can a cloud instance use the locally synchronised images in case of a > network connection break? That depends a lot of the implementation. If there is local glance node with replicated db and store, yes. > Is it possible to implement this without storing database credentials ont he > edge cloud instances? Again depending of the deployment. You definitely cannot have both, access during network outage and access without db credentials. if one needs to have local access of images without db credentials, there is always possibility for the local Ceph back-end with remote glance-api node. In this case Nova can talk directly to the local Ceph back-end and communicate with centralized glance-api that has the credentials to the db. The problem with loosing the network in this scenario is that Nova will have no idea if the user has rights to use the image or not and it will not know the path to that image's data. > > > > Independent synchronisation service: > > If I understood [1] correctly mixmatch can help Nova to attach a remote > volume, but it will not help in synchronizing the images. is this true? > > > > > > As I promised in the Edge Compute Group call I plan to organize an IRC > review meeting to check the wiki. Please indicate your availability in [2]. > > > > [1]: https://mixmatch.readthedocs.io/en/latest/ > > [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 > > > > Br, > > Gerg0 > > > > From: Csatari, Gergely (Nokia - HU/Budapest) > Sent: Wednesday, May 23, 2018 8:59 PM > To: OpenStack Development Mailing List (not for usage questions) > ; edge-computing at lists.openstack.org > Subject: [edge][glance]: Wiki of the possible architectures for image > synchronisation > > > > Hi, > > > > Here I send the wiki page [1] where I summarize what I understood from the > Forum session about image synchronisation in edge environment [2], [3]. > > > > Please check and correct/comment. > > > > Thanks, > > Gerg0 > > > > > > [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment > > [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images > > [3]: > https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure > > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing > From dms at danplanet.com Fri Jun 8 13:46:01 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 08 Jun 2018 06:46:01 -0700 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> (melanie witt's message of "Thu, 7 Jun 2018 10:56:53 -0700") References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> Message-ID: > Some ideas that have been discussed so far include: FYI, these are already in my order of preference. > A) Selecting a new, higher maximum that still yields reasonable > performance on a single compute host (64 or 128, for example). 
Pros: > helps prevent the potential for poor performance on a compute host > from attaching too many volumes. Cons: doesn't let anyone opt-in to a > higher maximum if their environment can handle it. I prefer this because I think it can be done per virt driver, for whatever actually makes sense there. If powervm can handle 500 volumes in a meaningful way on one instance, then that's cool. I think libvirt's limit should likely be 64ish. > B) Creating a config option to let operators choose how many volumes > allowed to attach to a single instance. Pros: lets operators opt-in to > a maximum that works in their environment. Cons: it's not discoverable > for those calling the API. This is a fine compromise, IMHO, as it lets operators tune it per compute node based on the virt driver and the hardware. If one compute is using nothing but iSCSI over a single 10g link, then they may need to clamp that down to something more sane. Like the per virt driver restriction above, it's not discoverable via the API, but if it varies based on compute node and other factors in a single deployment, then making it discoverable isn't going to be very easy anyway. > C) Create a configurable API limit for maximum number of volumes to > attach to a single instance that is either a quota or similar to a > quota. Pros: lets operators opt-in to a maximum that works in their > environment. Cons: it's yet another quota? Do we have any other quota limits that are per-instance like this would be? If not, then this would likely be weird, but if so, then this would also be an option, IMHO. However, it's too much work for what is really not a hugely important problem, IMHO, and both of the above are lighter-weight ways to solve this and move on. --Dan From jungleboyj at gmail.com Fri Jun 8 15:04:18 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 8 Jun 2018 10:04:18 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> References: <20180604180742.GA6404@sm-xps> <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> Message-ID: <29b1b85a-67b0-b1b1-e001-e80dc8a05e48@gmail.com> On 6/7/2018 5:48 PM, Matt Riedemann wrote: > On 6/7/2018 3:25 PM, Kendall Nelson wrote: >> I know it doesn't fit the shiny user facing docket that was discussed >> at the Forum, but I do think its time we make migration official in >> some capacity as a release goal or some other way. Having migrated >> Ironic and having TripleO on the schedule for migration (as requested >> during the last goal discussion) in addition to having migrated Heat, >> Barbican and several others in the last few months we have reached >> the point that I think migration of the rest of the projects is >> attainable by the end of Stein. >> >> Thoughts? > > I haven't used it much, but it would be really nice if someone could > record a modern 'how to storyboard' video for just basic usage/flows > since most people are used to launchpad by now so dealing with an > entirely new task tracker is not trivial (or at least, not something I > want to spend a lot of time figuring out). > > I found: > > https://www.youtube.com/watch?v=b2vJ9G5pNb4 > > https://www.youtube.com/watch?v=n_PaKuN4Skk > > But those are a bit old. > Matt, The following presentation was done in Boston.  If I remember correctly they covered some of the basics on how to use Storyboard. [1] I feel your pain on the migration.  I have used it a bit and and it is kind of like a combination of Launchpad and Trello.  
I think once we start using it the learning curve won't be so bad. Jay From gfm at us.ibm.com Fri Jun 8 15:31:18 2018 From: gfm at us.ibm.com (Gerald McBrearty) Date: Fri, 8 Jun 2018 10:31:18 -0500 Subject: Re: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> Message-ID: Dan Smith wrote on 06/08/2018 08:46:01 AM: > From: Dan Smith > To: melanie witt > Cc: "OpenStack Development Mailing List \(not for usage questions\)" > , openstack-operators at lists.openstack.org > Date: 06/08/2018 08:48 AM > Subject: Re: [openstack-dev] [nova] increasing the number of allowed > volumes attached per instance > 26 > > > Some ideas that have been discussed so far include: > > FYI, these are already in my order of preference. > > > A) Selecting a new, higher maximum that still yields reasonable > > performance on a single compute host (64 or 128, for example). Pros: > > helps prevent the potential for poor performance on a compute host > > from attaching too many volumes. Cons: doesn't let anyone opt-in to a > > higher maximum if their environment can handle it. > > I prefer this because I think it can be done per virt driver, for > whatever actually makes sense there. If powervm can handle 500 volumes > in a meaningful way on one instance, then that's cool. I think libvirt's > limit should likely be 64ish. > As long as this can be done on a per virt driver basis as Dan says, I think I would also prefer this option. Actually the meaningful number is much higher than 500 for powervm. I'm thinking the powervm limit could likely be 4096ish. On powervm we have an OS where the meaningful limit is 4096 volumes but routinely most operators would have between 1000-2000. -Gerald > > B) Creating a config option to let operators choose how many volumes > > allowed to attach to a single instance. Pros: lets operators opt-in to > > a maximum that works in their environment. Cons: it's not discoverable > > for those calling the API. > > This is a fine compromise, IMHO, as it lets operators tune it per > compute node based on the virt driver and the hardware. If one compute > is using nothing but iSCSI over a single 10g link, then they may need to > clamp that down to something more sane. > > Like the per virt driver restriction above, it's not discoverable via > the API, but if it varies based on compute node and other factors in a > single deployment, then making it discoverable isn't going to be very > easy anyway. > > > C) Create a configurable API limit for maximum number of volumes to > > attach to a single instance that is either a quota or similar to a > > quota. Pros: lets operators opt-in to a maximum that works in their > > environment. Cons: it's yet another quota? > > Do we have any other quota limits that are per-instance like this would > be? If not, then this would likely be weird, but if so, then this would > also be an option, IMHO. However, it's too much work for what is really > not a hugely important problem, IMHO, and both of the above are > lighter-weight ways to solve this and move on.
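(For illustration only: a per-compute-node cap of the kind described in option (B) could end up looking something like the snippet below in nova.conf. The option name is invented for this example -- nothing like it exists in Nova today, and the real name and default would be decided in review.)

# Hypothetical nova.conf on a compute node whose storage fabric is the
# bottleneck (e.g. iSCSI over a single 10g link) -- option name is made up:
[compute]
max_volumes_per_instance = 16

# ...while a compute node with a faster fabric could advertise a higher cap:
# max_volumes_per_instance = 128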
> > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > INVALID URI REMOVED > u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx- > siA1ZOg&r=i0r4x6W1L_PMd5Bym8J36w&m=Vg5MEvB0VELjModDoJF8PGcmUinnq- > kfFxavTqfnYYw&s=xe_2YmabBZEJJmtBK-4LZPh68rG3UI6dVqoZq6zKlIA&e= > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Fri Jun 8 16:09:03 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 8 Jun 2018 11:09:03 -0500 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <3442ae9b-9b77-7a6a-8ff9-3a159fd5999f@fried.cc> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> <4a867428-1203-63b7-9b74-86fda468047c@fried.cc> <46c5cb94-61ba-4f3b-fa13-0456463fb485@gmail.com> <3442ae9b-9b77-7a6a-8ff9-3a159fd5999f@fried.cc> Message-ID: <1bb13a60-590a-44d8-f667-103e3cb24e14@fried.cc> There is now a blueprint [1] and draft spec [2]. Reviews welcomed. [1] https://blueprints.launchpad.net/nova/+spec/reshape-provider-tree [2] https://review.openstack.org/#/c/572583/ On 06/04/2018 06:00 PM, Eric Fried wrote: > There has been much discussion. We've gotten to a point of an initial > proposal and are ready for more (hopefully smaller, hopefully > conclusive) discussion. > > To that end, there will be a HANGOUT tomorrow (TUESDAY, JUNE 5TH) at > 1500 UTC. Be in #openstack-placement to get the link to join. > > The strawpeople outlined below and discussed in the referenced etherpad > have been consolidated/distilled into a new etherpad [1] around which > the hangout discussion will be centered. > > [1] https://etherpad.openstack.org/p/placement-making-the-(up)grade > > Thanks, > efried > > On 06/01/2018 01:12 PM, Jay Pipes wrote: >> On 05/31/2018 02:26 PM, Eric Fried wrote: >>>> 1. Make everything perform the pivot on compute node start (which can be >>>>     re-used by a CLI tool for the offline case) >>>> 2. Make everything default to non-nested inventory at first, and provide >>>>     a way to migrate a compute node and its instances one at a time (in >>>>     place) to roll through. >>> >>> I agree that it sure would be nice to do ^ rather than requiring the >>> "slide puzzle" thing. >>> >>> But how would this be accomplished, in light of the current "separation >>> of responsibilities" drawn at the virt driver interface, whereby the >>> virt driver isn't supposed to talk to placement directly, or know >>> anything about allocations? >> FWIW, I don't have a problem with the virt driver "knowing about >> allocations". What I have a problem with is the virt driver *claiming >> resources for an instance*. >> >> That's what the whole placement claims resources things was all about, >> and I'm not interested in stepping back to the days of long racy claim >> operations by having the compute nodes be responsible for claiming >> resources. >> >> That said, once the consumer generation microversion lands [1], it >> should be possible to *safely* modify an allocation set for a consumer >> (instance) and move allocation records for an instance from one provider >> to another. 
>> >> [1] https://review.openstack.org/#/c/565604/ >> >>> Here's a first pass: >>> >>> The virt driver, via the return value from update_provider_tree, tells >>> the resource tracker that "inventory of resource class A on provider B >>> have moved to provider C" for all applicable AxBxC.  E.g. >>> >>> [ { 'from_resource_provider': , >>>      'moved_resources': [VGPU: 4], >>>      'to_resource_provider': >>>    }, >>>    { 'from_resource_provider': , >>>      'moved_resources': [VGPU: 4], >>>      'to_resource_provider': >>>    }, >>>    { 'from_resource_provider': , >>>      'moved_resources': [ >>>          SRIOV_NET_VF: 2, >>>          NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000, >>>          NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000, >>>      ], >>>      'to_resource_provider': >>>    } >>> ] >>> >>> As today, the resource tracker takes the updated provider tree and >>> invokes [1] the report client method update_from_provider_tree [2] to >>> flush the changes to placement.  But now update_from_provider_tree also >>> accepts the return value from update_provider_tree and, for each "move": >>> >>> - Creates provider C (as described in the provider_tree) if it doesn't >>> already exist. >>> - Creates/updates provider C's inventory as described in the >>> provider_tree (without yet updating provider B's inventory).  This ought >>> to create the inventory of resource class A on provider C. >> >> Unfortunately, right here you'll introduce a race condition. As soon as >> this operation completes, the scheduler will have the ability to throw >> new instances on provider C and consume the inventory from it that you >> intend to give to the existing instance that is consuming from provider B. >> >>> - Discovers allocations of rc A on rp B and POSTs to move them to rp C*. >> >> For each consumer of resources on rp B, right? >> >>> - Updates provider B's inventory. >> >> Again, this is problematic because the scheduler will have already begun >> to place new instances on B's inventory, which could very well result in >> incorrect resource accounting on the node. >> >> We basically need to have one giant new REST API call that accepts the >> list of "move instructions" and performs all of the instructions in a >> single transaction. :( >> >>> (*There's a hole here: if we're splitting a glommed-together inventory >>> across multiple new child providers, as the VGPUs in the example, we >>> don't know which allocations to put where.  The virt driver should know >>> which instances own which specific inventory units, and would be able to >>> report that info within the data structure.  That's getting kinda close >>> to the virt driver mucking with allocations, but maybe it fits well >>> enough into this model to be acceptable?) >> >> Well, it's not really the virt driver *itself* mucking with the >> allocations. It's more that the virt driver is telling something *else* >> the move instructions that it feels are needed... >> >>> Note that the return value from update_provider_tree is optional, and >>> only used when the virt driver is indicating a "move" of this ilk.  If >>> it's None/[] then the RT/update_from_provider_tree flow is the same as >>> it is today. >>> >>> If we can do it this way, we don't need a migration tool.  In fact, we >>> don't even need to restrict provider tree "reshaping" to release >>> boundaries.  
As long as the virt driver understands its own data model >>> migrations and reports them properly via update_provider_tree, it can >>> shuffle its tree around whenever it wants. >> >> Due to the many race conditions we would have in trying to fudge >> inventory amounts (the reserved/total thing) and allocation movement for >>> 1 consumer at a time, I'm pretty sure the only safe thing to do is have >> a single new HTTP endpoint that would take this list of move operations >> and perform them atomically (on the placement server side of course). >> >> Here's a strawman for how that HTTP endpoint might look like: >> >> https://etherpad.openstack.org/p/placement-migrate-operations >> >> feel free to markup and destroy. >> >> Best, >> -jay >> >>> Thoughts? >>> >>> -efried >>> >>> [1] >>> https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/compute/resource_tracker.py#L890 >>> >>> [2] >>> https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/scheduler/client/report.py#L1341 >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jungleboyj at gmail.com Fri Jun 8 18:03:46 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Fri, 8 Jun 2018 13:03:46 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> References: <20180604180742.GA6404@sm-xps> <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> Message-ID: <4225a615-1faa-d07c-e0df-42e65e2f995e@gmail.com> On 6/7/2018 5:48 PM, Matt Riedemann wrote: > On 6/7/2018 3:25 PM, Kendall Nelson wrote: >> I know it doesn't fit the shiny user facing docket that was discussed >> at the Forum, but I do think its time we make migration official in >> some capacity as a release goal or some other way. Having migrated >> Ironic and having TripleO on the schedule for migration (as requested >> during the last goal discussion) in addition to having migrated Heat, >> Barbican and several others in the last few months we have reached >> the point that I think migration of the rest of the projects is >> attainable by the end of Stein. >> >> Thoughts? > > I haven't used it much, but it would be really nice if someone could > record a modern 'how to storyboard' video for just basic usage/flows > since most people are used to launchpad by now so dealing with an > entirely new task tracker is not trivial (or at least, not something I > want to spend a lot of time figuring out). > > I found: > > https://www.youtube.com/watch?v=b2vJ9G5pNb4 > > https://www.youtube.com/watch?v=n_PaKuN4Skk > > But those are a bit old. 
> Helps if I include the link to the video: https://www.openstack.org/videos/boston-2017/storyboard-101-survival-guide-to-the-great-migration Hope that helps. Jay From mriedemos at gmail.com Fri Jun 8 19:05:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 8 Jun 2018 14:05:41 -0500 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: <4225a615-1faa-d07c-e0df-42e65e2f995e@gmail.com> References: <20180604180742.GA6404@sm-xps> <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> <4225a615-1faa-d07c-e0df-42e65e2f995e@gmail.com> Message-ID: On 6/8/2018 1:03 PM, Jay S Bryant wrote: > Helps if I include the link to the video: > https://www.openstack.org/videos/boston-2017/storyboard-101-survival-guide-to-the-great-migration > > > Hope that helps. Yeah I linked to that in my original email - that's about the migration, not usage. I want to see stuff like what's the normal workflow for a new user to storyboard, reporting bugs, linking those to gerrit changes, multiple tasks, how to search for stuff (search was failing me big time yesterday), etc. Also, I can't even add a 2nd task to an existing story without getting a 500 transaction error from the DB, so that seems like a major scaling issue if we're all going to be migrating. -- Thanks, Matt From mrhillsman at gmail.com Fri Jun 8 19:42:55 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Fri, 8 Jun 2018 14:42:55 -0500 Subject: [openstack-dev] Reminder: UC Meeting Monday 1400UTC Message-ID: Hey everyone, Please see https://wiki.openstack.org/wiki/Governance/ Foundation/UserCommittee for UC meeting info and add additional agenda items if needed. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Jun 8 21:11:11 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 8 Jun 2018 14:11:11 -0700 Subject: [openstack-dev] [TC] Stein Goal Selection In-Reply-To: References: <20180604180742.GA6404@sm-xps> <5ac1a7b4-51a7-e9c0-d6cc-2670561f3424@gmail.com> <4225a615-1faa-d07c-e0df-42e65e2f995e@gmail.com> Message-ID: I've been slowly working on building out and adding to the documentation about how to do things, but I can make that my top priority so that you all have a little more guidance. I'll try to get some patches out in the next week or so. Storyboard seems complicated but I think most of the mental hoops are just that you have the flexibility to manage and organize work however you'd like. You being both an individual and you being the project. Also, each project is so different (some use bps and specs, some ignore bps entirely, others use milestones, some just care about what bugs are new...) we didn't want to force a lot of required fields and whatnot on users. I can see how the flexibility can be a bit daunting though. Hopefully the docs I write will help clarify things. Also, that video does talk a little bit about usage at the end actually. Adam goes into different ways of using worklists and boards to organize things. Keep an eye out for my patches though :) -Kendall (diablo_rojo) On Fri, Jun 8, 2018 at 12:06 PM Matt Riedemann wrote: > On 6/8/2018 1:03 PM, Jay S Bryant wrote: > > Helps if I include the link to the video: > > > https://www.openstack.org/videos/boston-2017/storyboard-101-survival-guide-to-the-great-migration > > > > > > Hope that helps. > > Yeah I linked to that in my original email - that's about the migration, > not usage. 
I want to see stuff like what's the normal workflow for a new > user to storyboard, reporting bugs, linking those to gerrit changes, > multiple tasks, how to search for stuff (search was failing me big time > yesterday), etc. > > Also, I can't even add a 2nd task to an existing story without getting a > 500 transaction error from the DB, so that seems like a major scaling > issue if we're all going to be migrating. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Fri Jun 8 21:20:08 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Fri, 08 Jun 2018 14:20:08 -0700 Subject: [openstack-dev] [vitrage] update_timestamp precision Message-ID: Hi I'm building integration with Vitrage webhook and looking for some clarification on the timestamp precision to expect. In the sample webhook payload found in doc the resource and the alarm shows different time stamp precisions: https://docs.openstack.org/vitrage/latest/contributor/notifier-webhook-plug in.html Thank you! From ekcs.openstack at gmail.com Fri Jun 8 22:39:35 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Fri, 8 Jun 2018 15:39:35 -0700 Subject: [openstack-dev] [vitrage] matching webhook vs alarm list Message-ID: Hi I'm building integration with Vitrage webhook and looking for some clarification on what ID to use for matching a webhook notification to the specific alarm from the alarm list. In the sample alarm list response, there is an 'id' field and a 'vitrage_id' field [1], where as in the sample webhook notification payload, there is a 'vitrage_id' field [2]. I'd assume we can match by the 'vitrage_id', but the samples have very different formats for 'vitrage_id', so I just want to confirm. Thank you! [1] https://docs.openstack.org/vitrage/latest/contributor/vitrage-api.html#id22 [2] { "notification": "vitrage.alarm.activate", "payload": { "vitrage_id": "2def31e9-6d9f-4c16-b007-893caa806cd4", "resource": { "vitrage_id": "437f1f4c-ccce-40a4-ac62-1c2f1fd9f6ac", "name": "app-1-server-1-jz6qvznkmnif", "update_timestamp": "2018-01-22 10:00:34.327142+00:00", "vitrage_category": "RESOURCE", "vitrage_operational_state": "OK", "vitrage_type": "nova.instance", "project_id": "8f007e5ba0944e84baa6f2a4f2b5d03a", "id": "9b7d93b9-94ec-41e1-9cec-f28d4f8d702c" }, "update_timestamp": "2018-01-22T10:00:34Z", "vitrage_category": "ALARM", "state": "Active", "vitrage_type": "vitrage", "vitrage_operational_severity": "WARNING", "name": "Instance memory performance degraded" } } https://docs.openstack.org/vitrage/latest/contributor/notifier-webhook-plugin.html From zbitter at redhat.com Sat Jun 9 01:19:59 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 8 Jun 2018 21:19:59 -0400 Subject: [openstack-dev] [all][sdk][heat] Integrating OpenStack and k8s with a service broker In-Reply-To: References: Message-ID: <9a2e5b4a-2ac7-dee3-5a3e-5d985244a952@redhat.com> On 08/06/18 02:40, Rico Lin wrote: > Thanks, Zane for putting this up. > It's a great service to expose infrastructure to application, and a > potential cross-community works as well. > > > > Would you be interested in working on a new project to implement this > > integration? 
Reply to this thread and let's collect a list of volunteers > > to form the initial core review team. > > > Glad to help > > > I'd prefer to go with the pure-Ansible autogenerated way so we can have > > support for everything, but looking at the GCP[5]/Azure[4]/AWS[3] > > brokers they have 10, 11 and 17 services respectively, so arguably we > > could get a comparable number of features exposed without investing > > crazy amounts of time if we had to write templates explicitly. > > > If we going to generate another project to provide this service, I > believe to use pure-Ansible will be a better option indeed. TBH I don't think we can know for sure until we've tried building a few playbooks by hand and figured out whether they're similar enough that we can autogenerate them all, or if they need so much hand-tuning that it isn't feasible. But I'm a big fan of autogeneration if it works. > Once service gets stable, it's actually quite easy(at first glance) for > Heat to adopt this (just create a service broker with our new service > while creating a resource I believe?). IIUC you're talking about a Heat resource that calls out to a service broker using the Open Service Broker API? (Basically acting like the Kubernetes Service Catalog.) That would be cool, as it would allow us to orchestrate services written for Kubernetes/CloudFoundry using Heat. Although probably not as easy as it sounds at first glance ;) It wouldn't rely on _this_ set of playbook bundles though, because this one is only going to expose OpenStack resources, which are already exposed in Heat. (Unless you're suggesting we replace all of the current resource plugins in Heat with Ansible playbooks via the service broker? In which case... that's not gonna happen ;) So Heat could adopt this at any time to add support for resources exposed by _other_ service brokers, such as the AWS/Azure/GCE service brokers or other playbooks exposed through Automation Broker. > Sounds like the use case of service broker might be when application > request for a single resource exposed with Broker. And the resource > dependency will be relatively simple. And we should just keep it simple > and don't start thinking about who and how that application was created > and keep the application out of dependency (I mean if the user likes to > manage the total dependency, they can consider using heat with service > broker once we integrated). > > -- > May The Force of OpenStack Be With You, > > Rico Lin > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From rico.lin.guanyu at gmail.com Sat Jun 9 02:28:50 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sat, 9 Jun 2018 10:28:50 +0800 Subject: [openstack-dev] [all][sdk][heat] Integrating OpenStack and k8s with a service broker In-Reply-To: <9a2e5b4a-2ac7-dee3-5a3e-5d985244a952@redhat.com> References: <9a2e5b4a-2ac7-dee3-5a3e-5d985244a952@redhat.com> Message-ID: Zane Bitter 於 2018年6月9日 週六 上午9:20寫道: > > IIUC you're talking about a Heat resource that calls out to a service > broker using the Open Service Broker API? (Basically acting like the > Kubernetes Service Catalog.) That would be cool, as it would allow us to > orchestrate services written for Kubernetes/CloudFoundry using Heat. 
> Although probably not as easy as it sounds at first glance ;) In my previous glance, I thought our new service would also wrap the API up with Ansible playbooks: a playbook to create a resource, and another playbook to drive the Service Broker API. So we could use that playbook directly instead of calling the Service Broker APIs. No? :) I think we can start trying to build playbooks before we start planning on crazy ideas. :) > > It wouldn't rely on _this_ set of playbook bundles though, because this > one is only going to expose OpenStack resources, which are already > exposed in Heat. (Unless you're suggesting we replace all of the current > resource plugins in Heat with Ansible playbooks via the service broker? > In which case... that's not gonna happen ;) Right, we should use OS::Heat::Stack to expose resources from other OpenStack clouds, not this. > > So Heat could adopt this at any time to add support for resources > exposed by _other_ service brokers, such as the AWS/Azure/GCE service > brokers or other playbooks exposed through Automation Broker. > I like the idea of adding support for resources exposed by other service brokers -- May The Force of OpenStack Be With You, Rico Lin 林冠宇 -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Sun Jun 10 21:48:51 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Sun, 10 Jun 2018 16:48:51 -0500 Subject: [openstack-dev] [requirements][daisycloud][freezer][fuel][solum][tatu][trove] pycrypto is dead and insecure, you should migrate part 2 In-Reply-To: <20180604190624.tjki5sydsoj45sgo@gentoo.org> References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org> <20180604190624.tjki5sydsoj45sgo@gentoo.org> Message-ID: <20180610214851.ggluegjgpd6mtkdz@gentoo.org> On 18-06-04 14:06:24, Matthew Thode wrote: > On 18-05-13 12:22:06, Matthew Thode wrote: > > This is a reminder to the projects called out that they are using old, > > unmaintained and probably insecure libraries (it's been dead since > > 2014). Please migrate off to use the cryptography library. We'd like > > to drop pycrypto from requirements for rocky. > > > > See also, the bug, which has most of you cc'd already.
> > > > https://bugs.launchpad.net/openstack-requirements/+bug/1749574 > > > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > | Repository | Filename | Line | Text | > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > | daisycloud-core | code/daisy/requirements.txt | 17 | pycrypto>=2.6 # Public Domain | > | freezer | requirements.txt | 21 | pycrypto>=2.6 # Public Domain | > | fuel-dev-tools | contrib/fuel-setup/requirements.txt | 5 | pycrypto==2.6.1 | > | fuel-web | nailgun/requirements.txt | 24 | pycrypto>=2.6.1 | > | solum | requirements.txt | 24 | pycrypto # Public Domain | > | tatu | requirements.txt | 7 | pycrypto>=2.6.1 | > | tatu | test-requirements.txt | 7 | pycrypto>=2.6.1 | > | trove | integration/scripts/files/requirements/fedora-requirements.txt | 30 | pycrypto>=2.6 # Public Domain | > | trove | integration/scripts/files/requirements/ubuntu-requirements.txt | 29 | pycrypto>=2.6 # Public Domain | > | trove | requirements.txt | 47 | pycrypto>=2.6 # Public Domain | > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > > In order by name, notes follow. > > daisycloud-core - looks like AES / random functions are used > freezer - looks like AES / random functions are used > solum - looks like AES / RSA functions are used > trove - has a review!!! https://review.openstack.org/#/c/560292/ > > The following projects are not tracked so we won't wait on them. > fuel-dev-tools, fuel-web, tatu > > so it looks like progress is being made, so we have that going for us, > which is nice. What can I do to help move this forward? > It does not look like the projects (other than trove) are moving forward on this. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Mon Jun 11 00:15:48 2018 From: gmann at ghanshyammann.com (Ghanshyam) Date: Mon, 11 Jun 2018 09:15:48 +0900 Subject: [openstack-dev] [nova][osc] Documenting compute API microversion gaps in OSC In-Reply-To: References: Message-ID: <163ec32e590.127c1beab82721.8573493454235239292@ghanshyammann.com> ---- On Fri, 08 Jun 2018 17:16:01 +0900 Sylvain Bauza wrote ---- > > > On Fri, Jun 8, 2018 at 3:35 AM, Matt Riedemann wrote: > I've started an etherpad [1] to identify the compute API microversion gaps in python-openstackclient. > > It's a small start right now so I would appreciate some help on this, even just a few people looking at a couple of these per day would get it done quickly. > > Not all compute API microversions will require explicit changes to OSC, for example 2.3 [2] just adds some more fields to some API responses which might automatically get dumped in "show" commands. We just need to verify that the fields that come back in the response are actually shown by the CLI and then mark it in the etherpad. 
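(For the AES/random usage noted above, the pycrypto-to-cryptography move is usually mechanical. A rough sketch of the before/after, not taken from any of the listed repositories:)

# Illustrative only -- shows the typical replacement, not project code.
#
# Before (pycrypto, unmaintained since 2014):
#   from Crypto.Cipher import AES
#   cipher = AES.new(key, AES.MODE_CFB, iv)
#   ciphertext = cipher.encrypt(plaintext)
#
# After (cryptography):
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # os.urandom also covers pycrypto's random-bytes usage
iv = os.urandom(16)

encryptor = Cipher(algorithms.AES(key), modes.CFB(iv),
                   backend=default_backend()).encryptor()
ciphertext = encryptor.update(b"some plaintext") + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CFB(iv),
                   backend=default_backend()).decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == b"some plaintext"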
> > Once we identify the gaps, we can start talking about actually closing those gaps and deprecating the nova CLI, which could be part of a community wide goal - but there are other things going on in OSC right now (major refactor to use the SDK, core reviewer needs) so we'll have to figure out when the time is right. > > [1] https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc > [2] https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#maximum-in-kilo > > > Good idea, Matt. I think we could maybe discuss with the First Contact SIG because it looks to me some developers could help us for that, while it doesn't need to be a Nova expert. +1. We have FirstContact SIG meeting on Wed[1] and i will put this as one of the contribution item for new developers. As of now, there is no new contributor who contacted FirstContact sig but we keep tracking the same and will help this item as soon as we find any new developers. [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda > > I'll also try to see how I can help on this. > -Sylvain > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Mon Jun 11 02:48:44 2018 From: gmann at ghanshyammann.com (Ghanshyam) Date: Mon, 11 Jun 2018 11:48:44 +0900 Subject: [openstack-dev] [nova] nova API meeting schedule Message-ID: <163ecbee776.119fa707b82923.1660286070948100787@ghanshyammann.com> Hi All, As you might know, we used to have Nova API subteam meeting on Wed [1] but we did not continue that this year due to unavailability of members. As per discussion with melanie , we would like to continue the API meeting either on meeting channel (openstack-meeting-4) or as office hour on Nova channel. We have 2 options for that: 1. If there are members from USA/Europe TZ would like to join API meeting regularly then, we will continue the meeting on meeting channel with more suitable time considering Asia TZ also. I will initiate the doodle vote to select the time suitable for all interested members. 2. If no member from USA/Europe TZ then, myself and Alex will conduct the API meeting as office hour on Nova channel during our day time (something between UTC+1 to UTC + 9). There is not much activity on Nova channel during our TZ so it will be ok to use Nova channel. In this case, we will release the current occupied meeting channel. Please let us know who all would like to join API meeting so that we can pursue accordingly. [1] https://wiki.openstack.org/wiki/Meetings/NovaAPI -Nova API Subteam From tobias.urdin at crystone.com Mon Jun 11 03:20:47 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Mon, 11 Jun 2018 03:20:47 +0000 Subject: [openstack-dev] [ovs] [neutron] openvswitch flows firewall driver Message-ID: <72e1c6c5254c43638f8a67cb8fa10f0e@mb01.staff.ognet.se> Hello everybody, I'm cross-posting this with operators list. 
The openvswitch flows-based stateful firewall driver which uses the conntrack support in Linux kernel >= 4.3 (iirc) has been marked as experimental for several releases now, is there any information about flaws in this and why it should not be used in production? It's still marked as experimental or missing documentation in the networking guide [1]. And to operators; is anybody running the OVS stateful firewall in production? (firewall_driver = openvswitch) Appreciate any feedback :) Best regards [1] https://docs.openstack.org/neutron/queens/admin/config-ovsfwdriver.html From shake.chen at gmail.com Mon Jun 11 05:25:19 2018 From: shake.chen at gmail.com (Shake Chen) Date: Mon, 11 Jun 2018 13:25:19 +0800 Subject: [openstack-dev] [requirements][daisycloud][freezer][fuel][solum][tatu][trove] pycrypto is dead and insecure, you should migrate part 2 In-Reply-To: <20180610214851.ggluegjgpd6mtkdz@gentoo.org> References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org> <20180604190624.tjki5sydsoj45sgo@gentoo.org> <20180610214851.ggluegjgpd6mtkdz@gentoo.org> Message-ID: These project seem dies. On Mon, Jun 11, 2018 at 5:48 AM, Matthew Thode wrote: > On 18-06-04 14:06:24, Matthew Thode wrote: > > On 18-05-13 12:22:06, Matthew Thode wrote: > > > This is a reminder to the projects called out that they are using old, > > > unmaintained and probably insecure libraries (it's been dead since > > > 2014). Please migrate off to use the cryptography library. We'd like > > > to drop pycrypto from requirements for rocky. > > > > > > See also, the bug, which has most of you cc'd already. > > > > > > https://bugs.launchpad.net/openstack-requirements/+bug/1749574 > > > > > > > +----------------------------------------+------------------ > ---------------------------------------------------+------+- > --------------------------------------------------+ > > | Repository | Filename > | Line | Text > | > > +----------------------------------------+------------------ > ---------------------------------------------------+------+- > --------------------------------------------------+ > > | daisycloud-core | code/daisy/requirements.txt > | 17 | pycrypto>=2.6 # Public > Domain | > > | freezer | requirements.txt > | 21 | pycrypto>=2.6 # Public > Domain | > > | fuel-dev-tools | contrib/fuel-setup/requirements.txt > | 5 | pycrypto==2.6.1 > | > > | fuel-web | nailgun/requirements.txt > | 24 | pycrypto>=2.6.1 > | > > | solum | requirements.txt > | 24 | pycrypto # Public Domain > | > > | tatu | requirements.txt > | 7 | pycrypto>=2.6.1 > | > > | tatu | test-requirements.txt > | 7 | pycrypto>=2.6.1 > | > > | trove | integration/scripts/files/ > requirements/fedora-requirements.txt | 30 | pycrypto>=2.6 # > Public Domain | > > | trove | integration/scripts/files/ > requirements/ubuntu-requirements.txt | 29 | pycrypto>=2.6 # > Public Domain | > > | trove | requirements.txt > | 47 | pycrypto>=2.6 # Public > Domain | > > +----------------------------------------+------------------ > ---------------------------------------------------+------+- > --------------------------------------------------+ > > > > In order by name, notes follow. > > > > daisycloud-core - looks like AES / random functions are used > > freezer - looks like AES / random functions are used > > solum - looks like AES / RSA functions are used > > trove - has a review!!! https://review.openstack.org/# > /c/560292/ > > > > The following projects are not tracked so we won't wait on them. 
> > fuel-dev-tools, fuel-web, tatu > > > > so it looks like progress is being made, so we have that going for us, > > which is nice. What can I do to help move this forward? > > > > It does not look like the projects (other than trove) are moving forward > on this. > > -- > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Shake Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhengzhenyulixi at gmail.com Mon Jun 11 06:20:42 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Mon, 11 Jun 2018 14:20:42 +0800 Subject: [openstack-dev] [nova] nova API meeting schedule In-Reply-To: <163ecbee776.119fa707b82923.1660286070948100787@ghanshyammann.com> References: <163ecbee776.119fa707b82923.1660286070948100787@ghanshyammann.com> Message-ID: Glad to hear that the API meeting is happening again, I would also love to join. On Mon, Jun 11, 2018 at 10:49 AM Ghanshyam wrote: > Hi All, > > As you might know, we used to have Nova API subteam meeting on Wed [1] but > we did not continue that this year due to unavailability of members. > > As per discussion with melanie , we would like to continue the API meeting > either on meeting channel (openstack-meeting-4) or as office hour on Nova > channel. We have 2 options for that: > > 1. If there are members from USA/Europe TZ would like to join API meeting > regularly then, we will continue the meeting on meeting channel with more > suitable time considering Asia TZ also. I will initiate the doodle vote to > select the time suitable for all interested members. > > 2. If no member from USA/Europe TZ then, myself and Alex will conduct the > API meeting as office hour on Nova channel during our day time (something > between UTC+1 to UTC + 9). There is not much activity on Nova channel > during our TZ so it will be ok to use Nova channel. In this case, we will > release the current occupied meeting channel. > > Please let us know who all would like to join API meeting so that we can > pursue accordingly. > > [1] https://wiki.openstack.org/wiki/Meetings/NovaAPI > > -Nova API Subteam > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Mon Jun 11 08:34:17 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Mon, 11 Jun 2018 08:34:17 +0000 Subject: [openstack-dev] [vitrage] matching webhook vs alarm list In-Reply-To: References: Message-ID: <6C8EF4E4-0046-4BE9-98F5-F8C6EDF30510@nokia.com> Hi Eric, The format of the vitrage_id was changed to UUID in Pike release. It appears that the API documentation [1] is outdated. I’ll fix it. The vitrage_id that you get in the webhook notification should match the one coming from ‘vitrage alarm list’. The ‘id’ field is determined by the external monitor, so it might be different. 
Best Regards, Ifat ---------- Forwarded message --------- From: Eric K > Date: Sat, 9 Jun 2018 at 01:40 Subject: [openstack-dev] [vitrage] matching webhook vs alarm list To: OpenStack Development Mailing List (not for usage questions) > Hi I'm building integration with Vitrage webhook and looking for some clarification on what ID to use for matching a webhook notification to the specific alarm from the alarm list. In the sample alarm list response, there is an 'id' field and a 'vitrage_id' field [1], where as in the sample webhook notification payload, there is a 'vitrage_id' field [2]. I'd assume we can match by the 'vitrage_id', but the samples have very different formats for 'vitrage_id', so I just want to confirm. Thank you! [1] https://docs.openstack.org/vitrage/latest/contributor/vitrage-api.html#id22 [2] { "notification": "vitrage.alarm.activate", "payload": { "vitrage_id": "2def31e9-6d9f-4c16-b007-893caa806cd4", "resource": { "vitrage_id": "437f1f4c-ccce-40a4-ac62-1c2f1fd9f6ac", "name": "app-1-server-1-jz6qvznkmnif", "update_timestamp": "2018-01-22 10:00:34.327142+00:00", "vitrage_category": "RESOURCE", "vitrage_operational_state": "OK", "vitrage_type": "nova.instance", "project_id": "8f007e5ba0944e84baa6f2a4f2b5d03a", "id": "9b7d93b9-94ec-41e1-9cec-f28d4f8d702c" }, "update_timestamp": "2018-01-22T10:00:34Z", "vitrage_category": "ALARM", "state": "Active", "vitrage_type": "vitrage", "vitrage_operational_severity": "WARNING", "name": "Instance memory performance degraded" } } https://docs.openstack.org/vitrage/latest/contributor/notifier-webhook-plugin.html __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From hyangii at gmail.com Mon Jun 11 08:46:20 2018 From: hyangii at gmail.com (Jae Sang Lee) Date: Mon, 11 Jun 2018 17:46:20 +0900 Subject: [openstack-dev] [kolla] cinder-api image doesn't run as a cinder user. Message-ID: Hi, stackers. We are distributing OpenStack to kubernetes using the docker image generated by kolla. I recently upgraded from ocata to pike and found that the cinder-api container does not run as a cinder user. So it does not pass our unit test. This seems to have been fixed in the following code, https://review.openstack.org/#/c/463535/2/docker/cinder/cinder-api/Dockerfile.j2,unified Is there a reason why it should not be run as a cinder user? Other services except cinder-api (cinder-scheduler, cinder-volume, cinder-backup) are all running as cinder user. If this is a simple bug, try to fix it. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dabarren at gmail.com Mon Jun 11 09:02:07 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 11 Jun 2018 11:02:07 +0200 Subject: [openstack-dev] [kolla] cinder-api image doesn't run as a cinder user. In-Reply-To: References: Message-ID: Hi, >From Pike cinder-api only runs as a wsgi process and container has been migrated into an apache process, currenty we run apache as root user and not as service's user. Regards 2018-06-11 10:46 GMT+02:00 Jae Sang Lee : > Hi, stackers. > > We are distributing OpenStack to kubernetes using the docker image > generated by kolla. 
I recently upgraded from ocata to pike and found that > the cinder-api container does not run as a cinder user. > So it does not pass our unit test. > > This seems to have been fixed in the following code, > https://review.openstack.org/#/c/463535/2/docker/cinder/ > cinder-api/Dockerfile.j2,unified > > Is there a reason why it should not be run as a cinder user? Other services > except cinder-api (cinder-scheduler, cinder-volume, cinder-backup) are > all running as cinder user. If this is a simple bug, try to fix it. > > Thanks. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Mon Jun 11 09:12:38 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Mon, 11 Jun 2018 09:12:38 +0000 Subject: [openstack-dev] [vitrage] update_timestamp precision In-Reply-To: References: Message-ID: <9EA3E23E-BAFB-4752-8DA7-B07012FE54FA@nokia.com> Hi Eric, Apparently we have inconsistent behavior between the different datasources. The format of the timestamp should be '%Y-%m-%dT%H:%M:%SZ' as defined in [1]. We need to go over the code and make sure all datasources are aligned. I created a bug for it [2]. [1] https://github.com/openstack/vitrage/blob/master/vitrage/datasources/transformer_base.py [2] https://bugs.launchpad.net/vitrage/+bug/1776181 Best regards, Ifat ---------- Forwarded message --------- From: Eric K > Date: Sat, 9 Jun 2018 at 00:20 Subject: [openstack-dev] [vitrage] update_timestamp precision To: OpenStack Development Mailing List (not for usage questions) > Hi I'm building integration with Vitrage webhook and looking for some clarification on the timestamp precision to expect. In the sample webhook payload found in doc the resource and the alarm shows different time stamp precisions: https://docs.openstack.org/vitrage/latest/contributor/notifier-webhook-plug in.html Thank you! __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Jun 11 09:53:52 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Jun 2018 11:53:52 +0200 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases Message-ID: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> Hi everyone, As some of the OpenStack deliverables get more mature, we need to adjust our release policies to best handle the case of deliverables that do not need to be updated that much. This discussion started with how to handle those "stable" libraries, but is actually also relevant for "stable" services. Our current models include cycle-tied models (with-intermediary, with-milestones, trailing) and a purely cycle-independent model. Main OpenStack deliverables (the service components that you can deploy to build an OpenStack cloud) are all "released" on a cycle. Libraries are typically maintained per-cycle as well. What happens if no change is pushed to a service or library during a full cycle ? What should we do then ? Options include: 1/ Force artificial releases, even if there are no changes This is the current state. It allows to reuse the exact same process, but creates useless churn and version number confusion. 
2/ Do not force releases, but still create branches from latest releases In this variant we would not force an artificial re-release, but we would still create a branch from the last available release, in order to be able to land future patches and do bugfix or security releases as needed. 2bis/ Like 2, but only create the branch when needed Same as the previous one, except that rather than proactively create the stable branch around release time, we'd wait until the branch is actually needed to create it. 3/ Do not force releases, and reuse stable branches from cycle to cycle In this model, if there is no change in a library in Rocky, stable/rocky would never be created, and stable/queens would be used instead. Only one branch would get maintained for the 2 cycles. While this reduces the churn, it's a bit complex to wrap your head around the consequences, and measure how confusing this could be in practice... 4/ Stop worrying about stable branches at all for those "stable" things The idea here would be to stop doing stable branches for those things that do not release that much anymore. This could be done by switching them to the "independent" release model, or to a newly-created model. While good for "finished" deliverables, this option could create issues for things that are inactive for a couple cycles and then pick up activity again -- switching back to being cycle-tied would likely be confusing. My current preference is option 2. It's a good trade-off which reduces churn while keeping a compatibility with the system used for more active components. Compared to 2bis, it's a bit more work (although done in one patch during the release process), but creating the branches in advance means they are ready to be used when someone wants to backport something there, likely reducing process pain. One caveat with this model is that we need to be careful with version numbers. Imagine a library that did a 1.18.0 release for queens (which stable/queens is created from). Nothing happens in Rocky, so we create stable/rocky from the same 1.18.0 release. Same in Stein, so we create stable/stein from the same 1.18.0 release. During the Telluride[1] cycle some patches land and we want to release that. In order to leave room for rocky and stein point releases, we need to skip 1.18.1 and 1.19.0, and go directly to 1.20.0. I think we can build release checks to ensure that, but that's something to keep in mind. Thoughts ? [1] It's never too early to campaign for your favorite T name -- Thierry Carrez (ttx) From sferdjao at redhat.com Mon Jun 11 09:55:29 2018 From: sferdjao at redhat.com (Sahid Orentino Ferdjaoui) Date: Mon, 11 Jun 2018 11:55:29 +0200 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <20180608093545.GE11695@paraplu> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> <20180608093545.GE11695@paraplu> Message-ID: <20180611095529.GA3344@redhat> On Fri, Jun 08, 2018 at 11:35:45AM +0200, Kashyap Chamarthy wrote: > On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote: > > On 6/7/2018 12:56 PM, melanie witt wrote: > > > Recently, we've received interest about increasing the maximum number of > > > allowed volumes to attach to a single instance > 26. The limit of 26 is > > > because of a historical limitation in libvirt (if I remember correctly) > > > and is no longer limited at the libvirt level in the present day. 
So, > > > we're looking at providing a way to attach more than 26 volumes to a > > > single instance and we want your feedback. > > > > The 26 volumes thing is a libvirt driver restriction. > > The original limitation of 26 disks was because at that time there was > no 'virtio-scsi'. > > (With 'virtio-scsi', each of its controller allows upto 256 targets, and > each target can use any LUN (Logical Unit Number) from 0 to 16383 > (inclusive). Therefore, the maxium allowable disks on a single > 'virtio-scsi' controller is 256 * 16384 == 4194304.) Source[1]. Not totally true for Nova. Nova handles one virtio-scsi controller per guest and plug all the volumes in one target so in theory that would be 16384 LUN (only). But you made a good point the 26 volumes thing is not a libvirt driver restriction. For example the QEMU SCSI native implementation handles 256 disks. About the virtio-blk limitation I made the same finding but Tsuyoshi Nagata shared an interesting point saying that virtio-blk is not longer limited by the number of PCI slot available. That in recent kernel and QEMU version [0]. I could join what you are suggesting at the bottom and fix the limit to 256 disks. [0] https://review.openstack.org/#/c/567472/16/nova/virt/libvirt/blockinfo.py at 162 > [...] > > > > Some ideas that have been discussed so far include: > > > > > > A) Selecting a new, higher maximum that still yields reasonable > > > performance on a single compute host (64 or 128, for example). Pros: > > > helps prevent the potential for poor performance on a compute host from > > > attaching too many volumes. Cons: doesn't let anyone opt-in to a higher > > > maximum if their environment can handle it. > > Option (A) can still be considered: We can limit it to 256 disks. Why? > > FWIW, I did some digging here: > > The upstream libguestfs project after some thorough testing, arrived at > a limit of 256 disks, and suggest the same for Nova. And if anyone > wants to increase that limit, the proposer should come up with a fully > worked through test plan. :-) (Try doing any meaningful I/O to so many > disks at once, and see how well that works out.) > > What more, the libguestfs upstream tests 256 disks, and even _that_ > fails sometimes: > > https://bugzilla.redhat.com/show_bug.cgi?id=1478201 -- "kernel runs > out of memory with 256 virtio-scsi disks" > > The above bug is fixed now in kernel-4.17.0-0.rc3.git1.2. (And also > required a corresponding fix in QEMU[2], which is available from version > v2.11.0 onwards.) > > [...] > > > [1] https://lists.nongnu.org/archive/html/qemu-devel/2017-04/msg02823.html > -- virtio-scsi limits > [2] https://git.qemu.org/?p=qemu.git;a=commit;h=5c0919d > > -- > /kashyap > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gergely.csatari at nokia.com Mon Jun 11 10:44:20 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Mon, 11 Jun 2018 10:44:20 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: References: Message-ID: Hi, Going inline. -----Original Message----- From: Erno Kuvaja [mailto:ekuvaja at redhat.com] Sent: Friday, June 8, 2018 3:18 PM Hi, Answering inline. 
Best, Erno "jokke" Kuvaja On Thu, Jun 7, 2018 at 11:49 AM, Csatari, Gergely (Nokia - HU/Budapest) wrote: > Hi, > > > > I did some work ont he figures and realised, that I have some > questions related to the alternative options: > > > > Multiple backends option: > > What is the API between Glance and the Glance backends? glance_store library > How is it possible to implement location aware synchronisation > (synchronise images only to those cloud instances where they are needed)? This needs bit of hooking. We need to update the locations into Glance once the replication has happened. [G0]: Okay, but how to avoid the replication to sites where the image is not needed? > Is it possible to have different OpenStack versions in the different > cloud instances? In my understanding it's not supported to mix versions within OpenStack cloud apart from during upgrade. [G0]: Understood. This might be a problem ont he long run. With lots of edge cloud instance it can not be guaranteed, that all of them are upgraded in one go. > Can a cloud instance use the locally synchronised images in case of a > network connection break? That depends a lot of the implementation. If there is local glance node with replicated db and store, yes. [G0]: So we need a replicated Glance DB, a store and a backend in every edge cloud instance for this? How the database would be syncronised in this case? > Is it possible to implement this without storing database credentials > ont he edge cloud instances? Again depending of the deployment. You definitely cannot have both, access during network outage and access without db credentials. if one needs to have local access of images without db credentials, there is always possibility for the local Ceph back-end with remote glance-api node. In this case Nova can talk directly to the local Ceph back-end and communicate with centralized glance-api that has the credentials to the db. The problem with loosing the network in this scenario is that Nova will have no idea if the user has rights to use the image or not and it will not know the path to that image's data. [G0]: Okay > Independent synchronisation service: > > If I understood [1] correctly mixmatch can help Nova to attach a > remote volume, but it will not help in synchronizing the images. is this true? > > As I promised in the Edge Compute Group call I plan to organize an IRC > review meeting to check the wiki. Please indicate your availability in [2]. > > [1]: https://mixmatch.readthedocs.io/en/latest/ > > [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 [G0]: Please add your availability here. Thanks, Gerg0 > > > > Br, > > Gerg0 > > > > From: Csatari, Gergely (Nokia - HU/Budapest) > Sent: Wednesday, May 23, 2018 8:59 PM > To: OpenStack Development Mailing List (not for usage questions) > ; > edge-computing at lists.openstack.org > Subject: [edge][glance]: Wiki of the possible architectures for image > synchronisation > > > > Hi, > > > > Here I send the wiki page [1] where I summarize what I understood from > the Forum session about image synchronisation in edge environment [2], [3]. > > > > Please check and correct/comment. 
> > > > Thanks, > > Gerg0 > > > > > > [1]: > https://wiki.openstack.org/wiki/Image_handling_in_edge_environment > > [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images > > [3]: > https://www.openstack.org/summit/vancouver-2018/summit-schedule/events > /21768/image-handling-in-an-edge-cloud-infrastructure > > > _______________________________________________ > Edge-computing mailing list > Edge-computing at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing > From sombrafam at gmail.com Mon Jun 11 12:36:14 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Mon, 11 Jun 2018 09:36:14 -0300 Subject: [openstack-dev] [kolla] cinder-api image doesn't run as a cinder user. In-Reply-To: References: Message-ID: Hi Eduardo, Just for curiosity, do you know the motivation why many OS services were moved to run on apache instead of just listen? Any reference about that? Erlon Em seg, 11 de jun de 2018 às 06:02, Eduardo Gonzalez escreveu: > Hi, > > From Pike cinder-api only runs as a wsgi process and container has been > migrated into an apache process, currenty we run apache as root user and > not as service's user. > > Regards > > > > 2018-06-11 10:46 GMT+02:00 Jae Sang Lee : > >> Hi, stackers. >> >> We are distributing OpenStack to kubernetes using the docker image >> generated by kolla. I recently upgraded from ocata to pike and found >> that the cinder-api container does not run as a cinder user. >> So it does not pass our unit test. >> >> This seems to have been fixed in the following code, >> >> https://review.openstack.org/#/c/463535/2/docker/cinder/cinder-api/Dockerfile.j2,unified >> >> Is there a reason why it should not be run as a cinder user? Other >> services except cinder-api (cinder-scheduler, cinder-volume, >> cinder-backup) are all running as cinder user. If this is a simple bug, try >> to fix it. >> >> Thanks. >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Mon Jun 11 12:37:34 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 11 Jun 2018 07:37:34 -0500 Subject: [openstack-dev] Reminder: UC Meeting Today 1400UTC / 0900CST Message-ID: Hey everyone, Please see https://wiki.openstack.org/wiki/Governance/Foundation/ UserCommittee for UC meeting info and add additional agenda items if needed. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Mon Jun 11 12:40:09 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 11 Jun 2018 13:40:09 +0100 (BST) Subject: [openstack-dev] [nova] nova API meeting schedule In-Reply-To: <163ecbee776.119fa707b82923.1660286070948100787@ghanshyammann.com> References: <163ecbee776.119fa707b82923.1660286070948100787@ghanshyammann.com> Message-ID: On Mon, 11 Jun 2018, Ghanshyam wrote: > 2. If no member from USA/Europe TZ then, myself and Alex will > conduct the API meeting as office hour on Nova channel during our > day time (something between UTC+1 to UTC + 9). There is not much > activity on Nova channel during our TZ so it will be ok to use > Nova channel. 
In this case, we will release the current occupied > meeting channel. I think this is the better option since it works well for the people who are already actively interested. If that situation changes, you can always do something different. And if you do some kind of summary of anything important at the meeting (whenever the time) then people who can't attend can be in the loop. I was trying to attend the API meeting for a while (back when it was happening) but had to cut it out as its impossible to pay attention to everything and something had to give. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From cdent+os at anticdent.org Mon Jun 11 12:47:30 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 11 Jun 2018 13:47:30 +0100 (BST) Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> Message-ID: On Mon, 11 Jun 2018, Thierry Carrez wrote: > 2/ Do not force releases, but still create branches from latest releases > > In this variant we would not force an artificial re-release, but we would > still create a branch from the last available release, in order to be able to > land future patches and do bugfix or security releases as needed. This one seems best because of: > creating > the branches in advance means they are ready to be used when someone wants to > backport something there, likely reducing process pain. Really glad to see this happening. We need to make sure that we don't accidentally make low-activity because mature and stable look the same as low-activity because dead. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From bodenvmw at gmail.com Mon Jun 11 13:09:06 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Mon, 11 Jun 2018 07:09:06 -0600 Subject: [openstack-dev] [neutron] neutron-lib 1.15.0 syntax error in LOG.debug Message-ID: The 1.15.0 release of neutron-lib contains a syntax error in a LOG debug call that was fixed with [1]. We are working to release the fix with 1.16.0 [2]. If you are running into this error [3], it maybe necessary to exclude neutron-lib 1.15.0 and pickup 1.16.0 once we get it released. Feel free to reach out on #openstack-neutron if you have questions. Thanks [1] https://review.openstack.org/#/c/574068/ [2] https://review.openstack.org/#/c/573826/ [3] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22from%20file%20runtime.py%2C%20line%2062%5C%22 From skaplons at redhat.com Mon Jun 11 13:23:02 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 11 Jun 2018 15:23:02 +0200 Subject: [openstack-dev] [ovs] [neutron] openvswitch flows firewall driver In-Reply-To: <72e1c6c5254c43638f8a67cb8fa10f0e@mb01.staff.ognet.se> References: <72e1c6c5254c43638f8a67cb8fa10f0e@mb01.staff.ognet.se> Message-ID: <2CCDDC5F-BD4C-4722-BAA0-80A97A016E37@redhat.com> Hi, I’m not sure about Queens but recently with [1] we switched default security group driver in devstack to „openvswitch”. Since at least month we have scenario gate job with this SG driver running as voting and gating. Currently, after switch devstack default driver to openvswitch it’s tested in many jobs in Neutron. [1] https://review.openstack.org/#/c/568297/ > Wiadomość napisana przez Tobias Urdin w dniu 11.06.2018, o godz. 05:20: > > Hello everybody, > I'm cross-posting this with operators list. 
> > The openvswitch flows-based stateful firewall driver which uses the > conntrack support in Linux kernel >= 4.3 (iirc) has been > marked as experimental for several releases now, is there any > information about flaws in this and why it should not be used in production? > > It's still marked as experimental or missing documentation in the > networking guide [1]. > > And to operators; is anybody running the OVS stateful firewall in > production? (firewall_driver = openvswitch) > > Appreciate any feedback :) > Best regards > > [1] https://docs.openstack.org/neutron/queens/admin/config-ovsfwdriver.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From opensrloo at gmail.com Mon Jun 11 13:53:24 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 11 Jun 2018 09:53:24 -0400 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) Message-ID: Hi, I don't want to hijack the initial thread, but am now feeling somewhat guilty about not being vocal wrt Storyboard. Yes, ironic migrated to Storyboard in the beginning of this cycle. To date, I have not been pleased with replacing Launchpad with Storyboard. I believe that Storyboard is somewhat still-in-progress, and that there were/are some features (stories) that are outstanding that would make its use better. >From my point of view (as a developer and core, not a project manager or PTL) using Storyboard has made my day-to-day work worse. Granted, any migration is without headaches. But some of the main things, like searching for our RFEs (that we had tagged in Launchpad) wasn't possible. I haven't yet figured out how to limit a search to only the 'ironic' project using that 'search' like GUI, so I have been frustrated trying to find particular bugs that I *knew* existed but had not memorized the bug number. I haven't been as involved upstream this cycle, so perhaps I have missed other emails that have mentioned how to get around or do things with Storyboard. I would caution folks that are thinking about migrating; I wish we had delayed it until there was better support/features/stories implemented with Storyboard. At the time, I was also negligent about actually trying out Storyboard before we pushed the button (because I assumed it would be ok, others were using it, why wouldn't it suffice?) Perhaps Storyboard can address most of my issues now? Maybe updated documentation would help? (I believe the last time I tried to use Storyboard was 2 weeks ago, when I was 'search'ing for an old bug in Storyboard. I gave up.) I apologize for not writing a detailed email with specific examples of what is lacking (for me) and in hindsight should have sent out email at the time I encountered issues/had questions. I guess I am hoping that others can fill-in-the-blanks and ask for things that would make Storyboard (more) usable. No, I didn't watch any videos about using Storyboard, just like I've never watched any video about using Launchpad, Trello, jira, . I did try looking for documentation at some point though and I don't recall finding what I was looking for. 
--ruby On Thu, Jun 7, 2018 at 4:25 PM, Kendall Nelson wrote: > Hello :) > > I think that these two goals definitely fit the criteria we discussed in > Vancouver during the S Release Goal Forum Session. I know Storyboard > Migration was also mentioned after I had to dip out to another session so I > wanted to follow up on that. > > I know it doesn't fit the shiny user facing docket that was discussed at > the Forum, but I do think its time we make migration official in some > capacity as a release goal or some other way. Having migrated Ironic and > having TripleO on the schedule for migration (as requested during the last > goal discussion) in addition to having migrated Heat, Barbican and several > others in the last few months we have reached the point that I think > migration of the rest of the projects is attainable by the end of Stein. > > Thoughts? > > -Kendall (diablo_rojo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Jun 11 14:00:41 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 11 Jun 2018 16:00:41 +0200 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: References: Message-ID: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> Hi, On 06/11/2018 03:53 PM, Ruby Loo wrote: > Hi, > > I don't want to hijack the initial thread, but am now feeling somewhat guilty > about not being vocal wrt Storyboard. Yes, ironic migrated to Storyboard in the > beginning of this cycle. To date, I have not been pleased with replacing > Launchpad with Storyboard. I believe that Storyboard is somewhat > still-in-progress, and that there were/are some features (stories) that are > outstanding that would make its use better. > > From my point of view (as a developer and core, not a project manager or PTL) > using Storyboard has made my day-to-day work worse. Granted, any migration is > without headaches. But some of the main things, like searching for our RFEs > (that we had tagged in Launchpad) wasn't possible. I haven't yet figured out how > to limit a search to only the 'ironic' project using that 'search' like GUI, so > I have been frustrated trying to find particular bugs that I *knew* existed but > had not memorized the bug number. Yeah, I cannot fully understand the search. I would expect something explicit like Launchpad or better something command-based like "project:openstack/ironic pxe". This does not seem to work, so I also wonder how to filter all stories affecting a project. Bonus point for giving stories names. They don't even have to be unique, but I have something like https://storyboard.openstack.org/#!/story/100500-some-readable-slug/ (where 100500 is an actual story ID) it would help my browser locate them in my history. > > I haven't been as involved upstream this cycle, so perhaps I have missed other > emails that have mentioned how to get around or do things with Storyboard. I > would caution folks that are thinking about migrating; I wish we had delayed it > until there was better support/features/stories implemented with Storyboard. At > the time, I was also negligent about actually trying out Storyboard before we > pushed the button (because I assumed it would be ok, others were using it, why > wouldn't it suffice?) Perhaps Storyboard can address most of my issues now? > Maybe updated documentation would help? (I believe the last time I tried to use > Storyboard was 2 weeks ago, when I was 'search'ing for an old bug in Storyboard. > I gave up.) 
> > I apologize for not writing a detailed email with specific examples of what is > lacking (for me) and in hindsight should have sent out email at the time I > encountered issues/had questions. I guess I am hoping that others can > fill-in-the-blanks and ask for things that would make Storyboard (more) usable. > > No, I didn't watch any videos about using Storyboard, just like I've never > watched any video about using Launchpad, Trello, jira, . I did try > looking for documentation at some point though and I don't recall finding what I > was looking for. > > --ruby > > > On Thu, Jun 7, 2018 at 4:25 PM, Kendall Nelson > wrote: > > Hello :) > > I think that these two goals definitely fit the criteria we discussed in > Vancouver during the S Release Goal Forum Session. I know Storyboard > Migration was also mentioned after I had to dip out to another session so I > wanted to follow up on that. > > I know it doesn't fit the shiny user facing docket that was discussed at the > Forum, but I do think its time we make migration official in some capacity > as a release goal or some other way. Having migrated Ironic and having > TripleO on the schedule for migration (as requested during the last goal > discussion) in addition to having migrated Heat, Barbican and several others > in the last few months we have reached the point that I think migration of > the rest of the projects is attainable by the end of Stein. > > Thoughts? > > -Kendall (diablo_rojo) > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dabarren at gmail.com Mon Jun 11 14:06:52 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Mon, 11 Jun 2018 16:06:52 +0200 Subject: [openstack-dev] [kolla] cinder-api image doesn't run as a cinder user. In-Reply-To: References: Message-ID: Hi, See a global openstack goal to move into wsgi https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html And spec for cinder https://review.openstack.org/#/c/192683/6/specs/liberty/non-eventlet-wsgi-app.rst Regards 2018-06-11 14:36 GMT+02:00 Erlon Cruz : > Hi Eduardo, > > Just for curiosity, do you know the motivation why many OS services were > moved to run on apache instead of just listen? Any reference about that? > > Erlon > > Em seg, 11 de jun de 2018 às 06:02, Eduardo Gonzalez > escreveu: > >> Hi, >> >> From Pike cinder-api only runs as a wsgi process and container has been >> migrated into an apache process, currenty we run apache as root user and >> not as service's user. >> >> Regards >> >> >> >> 2018-06-11 10:46 GMT+02:00 Jae Sang Lee : >> >>> Hi, stackers. >>> >>> We are distributing OpenStack to kubernetes using the docker image >>> generated by kolla. I recently upgraded from ocata to pike and found >>> that the cinder-api container does not run as a cinder user. >>> So it does not pass our unit test. >>> >>> This seems to have been fixed in the following code, >>> https://review.openstack.org/#/c/463535/2/docker/cinder/ >>> cinder-api/Dockerfile.j2,unified >>> >>> Is there a reason why it should not be run as a cinder user? Other >>> services except cinder-api (cinder-scheduler, cinder-volume, >>> cinder-backup) are all running as cinder user. If this is a simple bug, try >>> to fix it. >>> >>> Thanks. 
>>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sombrafam at gmail.com Mon Jun 11 14:08:05 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Mon, 11 Jun 2018 11:08:05 -0300 Subject: [openstack-dev] [kolla] cinder-api image doesn't run as a cinder user. In-Reply-To: References: Message-ID: Thanks! Em seg, 11 de jun de 2018 às 11:07, Eduardo Gonzalez escreveu: > Hi, > > See a global openstack goal to move into wsgi > https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html > And spec for cinder > > > https://review.openstack.org/#/c/192683/6/specs/liberty/non-eventlet-wsgi-app.rst > > Regards > > 2018-06-11 14:36 GMT+02:00 Erlon Cruz : > >> Hi Eduardo, >> >> Just for curiosity, do you know the motivation why many OS services were >> moved to run on apache instead of just listen? Any reference about that? >> >> Erlon >> >> Em seg, 11 de jun de 2018 às 06:02, Eduardo Gonzalez >> escreveu: >> >>> Hi, >>> >>> From Pike cinder-api only runs as a wsgi process and container has been >>> migrated into an apache process, currenty we run apache as root user and >>> not as service's user. >>> >>> Regards >>> >>> >>> >>> 2018-06-11 10:46 GMT+02:00 Jae Sang Lee : >>> >>>> Hi, stackers. >>>> >>>> We are distributing OpenStack to kubernetes using the docker image >>>> generated by kolla. I recently upgraded from ocata to pike and found >>>> that the cinder-api container does not run as a cinder user. >>>> So it does not pass our unit test. >>>> >>>> This seems to have been fixed in the following code, >>>> >>>> https://review.openstack.org/#/c/463535/2/docker/cinder/cinder-api/Dockerfile.j2,unified >>>> >>>> Is there a reason why it should not be run as a cinder user? Other >>>> services except cinder-api (cinder-scheduler, cinder-volume, >>>> cinder-backup) are all running as cinder user. If this is a simple bug, try >>>> to fix it. >>>> >>>> Thanks. >>>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lbragstad at gmail.com Mon Jun 11 14:17:09 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 11 Jun 2018 09:17:09 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 4 June 2018 Message-ID: # Keystone Team Update - Week of 4 June 2018 ## News Sorry this didn't make it out last week. This week we were busy wrapping up specification discussion before spec freeze. Most of which revolved around unified limits [0]. We're also starting to see implementations for MFA receipts [1] and application credentials capability lists [2]. [0] https://review.openstack.org/#/c/540803/ [1] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:spec/auth_receipts [2] https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic:bp/whitelist-extension-for-app-creds ## Open Specs Search query: https://bit.ly/2G8Ai5q With the last few bits for hierarchical limits addressed and the specification merged, we don't expect to accept any more specifications for the Rocky release. ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 28 changes last week. Most of which were to move keystone off its homegrown WSGI implementation. Converting to Flask is a pretty big move for keystone and the team, but it reduces technical dept and will help with maintenance costs in the future since it's one less wheel we have to look after. ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 50 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Please take it look if you have time to do a review or two. ## Bugs This week we opened 7 new bugs, closed 5, and fixed 5. Bugs opened (7) Bug #1775094 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1775094 Bug #1774654 (keystone:Undecided) opened by Wyllys Ingersoll https://bugs.launchpad.net/keystone/+bug/1774654 Bug #1774688 (keystone:Undecided) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1774688                                                                                                       Bug #1775140 (keystone:Undecided) opened by Andras Kovi https://bugs.launchpad.net/keystone/+bug/1775140                                                                                                          Bug #1775207 (keystone:Undecided) opened by Pavlo Shchelokovskyy https://bugs.launchpad.net/keystone/+bug/1775207                                                                                                 Bug #1775295 (keystone:Undecided) opened by johnpham https://bugs.launchpad.net/keystone/+bug/1775295                                                                                                             Bug #1774722 (oslo.config:Low) opened by Kent Wu https://bugs.launchpad.net/oslo.config/+bug/1774722                                                                                                                                                                                                                                                                                                                                Bugs closed (5)                                                                                                                                                                                                   Bug #1578466 (keystone:Medium) https://bugs.launchpad.net/keystone/+bug/1578466                     
                                                                                                              Bug #1578401 (keystone:Low) https://bugs.launchpad.net/keystone/+bug/1578401                                                                                                                                      Bug #1775140 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1775140                                                                                                                                Bug #1775295 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1775295                                                                                                                                Bug #1774722 (oslo.config:Low) https://bugs.launchpad.net/oslo.config/+bug/1774722                                                                                                                                                                                                                                                                                                                                                  Bugs fixed (5)                                                                                                                                                                                                    Bug #1728907 (keystone:Low) fixed by Gage Hugo https://bugs.launchpad.net/keystone/+bug/1728907                                                                                                                   Bug #1673859 (oslo.policy:Undecided) fixed by ChangBo Guo(gcb) https://bugs.launchpad.net/oslo.policy/+bug/1673859 Bug #1741073 (oslo.policy:Undecided) fixed by Lance Bragstad https://bugs.launchpad.net/oslo.policy/+bug/1741073 Bug #1771442 (oslo.policy:Undecided) fixed by Lance Bragstad https://bugs.launchpad.net/oslo.policy/+bug/1771442 Bug #1773473 (oslo.policy:Undecided) fixed by Lance Bragstad https://bugs.launchpad.net/oslo.policy/+bug/1773473 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html Specification freeze and rocky-2 were last Friday. If you're working on a feature for this release (strict two level enforcement models [0], MFA receipts [1], capability lists [2], or basic default roles [3]), feature freeze is a month away. Reminder that we've bumped feature freeze up two weeks ahead of rocky-3 this release due to the issues we had last release with the rush before freeze. If you need help, please socialize it somewhere (ML or IRC). If you are available for reviews, some people implementing those features have asked for early feedback.  [0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/strict-two-level-enforcement-model.html [1] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/mfa-auth-receipt.html [2] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/capabilities-app-creds.html [3] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html ## Shout-outs Thanks to Morgan for all the work he did this week to get keystone converted to Flask [0]! This is going to help us a bunch in the future with things we've been talking about for a while (e.g. improved granularity for scope checks in keystone's API). 
[0] https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:flaskification ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From doug at doughellmann.com Mon Jun 11 14:23:22 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 11 Jun 2018 10:23:22 -0400 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> Message-ID: <1528726300-sup-9083@lrrr.local> Excerpts from Dmitry Tantsur's message of 2018-06-11 16:00:41 +0200: > Hi, > > On 06/11/2018 03:53 PM, Ruby Loo wrote: > > Hi, > > > > I don't want to hijack the initial thread, but am now feeling somewhat guilty > > about not being vocal wrt Storyboard. Yes, ironic migrated to Storyboard in the > > beginning of this cycle. To date, I have not been pleased with replacing > > Launchpad with Storyboard. I believe that Storyboard is somewhat > > still-in-progress, and that there were/are some features (stories) that are > > outstanding that would make its use better. > > > > From my point of view (as a developer and core, not a project manager or PTL) > > using Storyboard has made my day-to-day work worse. Granted, any migration is > > without headaches. But some of the main things, like searching for our RFEs > > (that we had tagged in Launchpad) wasn't possible. I haven't yet figured out how > > to limit a search to only the 'ironic' project using that 'search' like GUI, so > > I have been frustrated trying to find particular bugs that I *knew* existed but > > had not memorized the bug number. > > Yeah, I cannot fully understand the search. I would expect something explicit > like Launchpad or better something command-based like "project:openstack/ironic > pxe". This does not seem to work, so I also wonder how to filter all stories > affecting a project. > Searching tripped me up for the first couple of weeks, too. Storyboard's search field is a lot "smarter" than expected. Or maybe you'd call it "magic". Either way, it was confusing, but you don't have to use any special syntax in the UI. To search for a project, type the name of the project in the search field and then *wait* for the list of drop-down options to appear. The first item in the list will be a "raw" search for the term. The others will have little icons indicating their type. The project icon looks like a little cube, for example. If I go to https://storyboard.openstack.org/#!/search and type "openstack/ironic" I get a list that includes openstack/ironic, openstack/ironic-inspector, etc. Select the project you want from the list and hit enter, and you'll get a list of all of the stories with tasks attached to the project. To search based on words in the title or body of the story or task, just type those and then select the item with the magnifying glass icon for the "raw" search. It's not necessary to use search to get a list of open items, though. You can also navigate directly to a project or group of projects. 
For example, by clicking the "Project Groups" icon on the left you end up at https://storyboard.openstack.org/#!/project_group/list and by entering "ironic" in the search field there you'll see that there are 23 projects in the ironic group (wow!). Clicking the name of the project group will take you to a view showing the current open items. I strongly encourage teams to set up worklists or dashboards with saved searches or manually curated lists of stories or tasks. For example, the release team uses https://storyboard.openstack.org/#!/board/64 to keep track of our work within the cycle. Doug From gergely.csatari at nokia.com Mon Jun 11 14:28:54 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Mon, 11 Jun 2018 14:28:54 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: References: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> Message-ID: Hi, Thanks for the comments. I’ve updated the wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend Br, Gerg0 From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Friday, June 8, 2018 1:46 PM To: Csatari, Gergely (Nokia - HU/Budapest) ; OpenStack Development Mailing List (not for usage questions) ; edge-computing at lists.openstack.org Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Responses in-lined below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Friday, June 8, 2018 at 3:39 AM To: Greg Waines >, "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Going inline. From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Thursday, June 7, 2018 2:24 PM I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): One Glance with multiple backends * In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? * i.e. GLANCE is a typical shared service in a multi-region environment ? [G0]: In my understanding yes. * If so, how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? [G0]: This is an open question for me also. Several Glances with an independent synchronization service (PUSH) * I refer to this as the PUSH model * I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images * i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs (making it independent of the particular Glance backend ... and allowing the Glance Backends at Central and Edge sites to actually be different) [G0]: Okay, I can update the wiki to reflect this. Should we keep the “synchronization by the backend” option as an other alternative? [Greg] Yeah we should keep it as an alternative. * I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization * i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites [G0]: Yes, the question is how to define these synchronization policies. [Greg] Agreed ... 
we’ve had some very high-level discussions with end users, but haven’t put together a proposal yet. * Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) [G0]: Yes, this is more or less my understanding. I remove the mixmatch reference from this chapter. One Glance and multiple Glance API Servers (PULL) * I refer to this as the PULL model * This is the current model supported in StarlingX’s Distributed Cloud sub-project * We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and * We have glance-api setup for caching such that only the first access to an particular image incurs the latency of the image transfer from Central to Edge [G0]: Do you do image caching in Glance API or do you rely in the image cache in Nova? In the Forum session there were some discussions about this and I think the conclusion was that using the image cache of Nova is enough. [Greg] We enabled image caching in the Glance API. I believe that Nova Image Caching caches at the compute node ... this would work ok for all-in-one edge clouds or small edge clouds. But glance-api caching caches at the edge cloud level, so works better for large edge clouds with lots of compute nodes. * * this PULL model affectively implements the location aware synchronization you talk about below, (i.e. synchronise images only to those cloud instances where they are needed)? In StarlingX Distributed Cloud, We plan on supporting both the PUSH and PULL model ... suspect there are use cases for both. [G0]: This means that you need an architecture supporting both. Just for my curiosity what is the use case for the pull model once you have the push model in place? [Greg] The PULL model certainly results in the most efficient distribution of images ... basically images are distributed ONLY to edge clouds that explicitly use the image. Also if the use case is NOT concerned about incurring the latency of the image transfer from Central to Edge on the FIRST use of image then the PULL model could be preferred ... TBD. Here is the updated wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [Greg] Looks good. Greg. Thanks, Gerg0 From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 7, 2018 at 6:49 AM To: "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, I did some work ont he figures and realised, that I have some questions related to the alternative options: Multiple backends option: * What is the API between Glance and the Glance backends? * How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? * Is it possible to have different OpenStack versions in the different cloud instances? * Can a cloud instance use the locally synchronised images in case of a network connection break? * Is it possible to implement this without storing database credentials ont he edge cloud instances? Independent synchronisation service: * If I understood [1] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. is this true? As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [2]. 
[1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Mon Jun 11 14:48:14 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 11 Jun 2018 16:48:14 +0200 Subject: [openstack-dev] [nova] Notification update week 24 Message-ID: <1528728494.13902.1@smtp.office365.com> Hi, Here is the latest notification subteam update. Bugs ---- [Medium] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields This is also visible in tssurya's environment. I'm wondering if we can implement a nova-manage heal-instance-flavor command for these environment as I'm not sure I will be able to find the root cause why the disable field is missing from these flavors. No update on other bugs and we have no new bugs tagged with notifications. Features -------- Sending full traceback in versioned notifications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications We are iterating with Kevin on the implementation and sample test in https://review.openstack.org/#/c/564092/ . Add notification support for trusted_certs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is part of the bp nova-validate-certificates implementation series to extend some of the instance notifications. I'm +2 on the notification impact in https://review.openstack.org/#/c/563269 waiting for the rest of the series to merge. Introduce Pending VM state ~~~~~~~~~~~~~~~~~~~~~~~~~~ The spec https://review.openstack.org/#/c/554212 still not exactly define what will be in the select_destination notification payload. Add the user id and project id of the user initiated the instance action to the notification -------------------------------------------------------------------------------------------- https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications We are iterating on the implementation in https://review.openstack.org/#/c/536243 No progress: ~~~~~~~~~~~~ * Versioned notification transformation https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open * Introduce instance.lock and instance.unlock notifications https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances Blocked: ~~~~~~~~ * Add versioned notifications for removing a member from a server group https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications Weekly meeting -------------- We skip the meeting this week (week 24). 
The next meeting will be held on 19th of June on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180619T170000 Cheers, gibi From doug at doughellmann.com Mon Jun 11 14:49:57 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 11 Jun 2018 10:49:57 -0400 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> Message-ID: <1528728119-sup-6948@lrrr.local> Excerpts from Thierry Carrez's message of 2018-06-11 11:53:52 +0200: > Hi everyone, > > As some of the OpenStack deliverables get more mature, we need to adjust > our release policies to best handle the case of deliverables that do not > need to be updated that much. This discussion started with how to handle > those "stable" libraries, but is actually also relevant for "stable" > services. > > Our current models include cycle-tied models (with-intermediary, > with-milestones, trailing) and a purely cycle-independent model. Main > OpenStack deliverables (the service components that you can deploy to > build an OpenStack cloud) are all "released" on a cycle. Libraries are > typically maintained per-cycle as well. What happens if no change is > pushed to a service or library during a full cycle ? What should we do > then ? > > Options include: > > 1/ Force artificial releases, even if there are no changes > > This is the current state. It allows to reuse the exact same process, > but creates useless churn and version number confusion. > > 2/ Do not force releases, but still create branches from latest releases > > In this variant we would not force an artificial re-release, but we > would still create a branch from the last available release, in order to > be able to land future patches and do bugfix or security releases as needed. > > 2bis/ Like 2, but only create the branch when needed > > Same as the previous one, except that rather than proactively create the > stable branch around release time, we'd wait until the branch is > actually needed to create it. > > 3/ Do not force releases, and reuse stable branches from cycle to cycle > > In this model, if there is no change in a library in Rocky, stable/rocky > would never be created, and stable/queens would be used instead. Only > one branch would get maintained for the 2 cycles. While this reduces the > churn, it's a bit complex to wrap your head around the consequences, and > measure how confusing this could be in practice... > > 4/ Stop worrying about stable branches at all for those "stable" things > > The idea here would be to stop doing stable branches for those things > that do not release that much anymore. This could be done by switching > them to the "independent" release model, or to a newly-created model. > While good for "finished" deliverables, this option could create issues > for things that are inactive for a couple cycles and then pick up > activity again -- switching back to being cycle-tied would likely be > confusing. > > > My current preference is option 2. > > It's a good trade-off which reduces churn while keeping a compatibility > with the system used for more active components. Compared to 2bis, it's > a bit more work (although done in one patch during the release process), > but creating the branches in advance means they are ready to be used > when someone wants to backport something there, likely reducing process > pain. 
>
> One caveat with this model is that we need to be careful with version
> numbers. Imagine a library that did a 1.18.0 release for queens (which
> stable/queens is created from). Nothing happens in Rocky, so we create
> stable/rocky from the same 1.18.0 release. Same in Stein, so we create
> stable/stein from the same 1.18.0 release. During the Telluride[1] cycle
> some patches land and we want to release that. In order to leave room
> for rocky and stein point releases, we need to skip 1.18.1 and 1.19.0,
> and go directly to 1.20.0. I think we can build release checks to ensure
> that, but that's something to keep in mind.
>
> Thoughts ?
>
> [1] It's never too early to campaign for your favorite T name

Although I originally considered it separate, reviewing your summary I
suspect option 2bis is most likely to turn into option 3, in practice.
I think having the choice between option 2 and switching to an
independent release model (maybe only for libraries) is going to be
best, at least to start out. Stein will be the first series where we
have to actually deal with this, so we can see how it goes and discuss
alternatives if we run into issues.

Doug

From doug at doughellmann.com  Mon Jun 11 14:52:05 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 11 Jun 2018 10:52:05 -0400
Subject: [openstack-dev] [tc] technical committee status for 11 June
Message-ID: <1528728615-sup-9258@lrrr.local>

This is the weekly summary of work being done by the Technical
Committee members. The full list of active items is managed in the
wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Project updates:

* PowerVMStackers following stable policy https://review.openstack.org/#/c/562591
* Add openstackclient to Chef OpenStack https://review.openstack.org/#/c/571504

Other approved changes:

* provide more detail about the expectations we place on goal champions
  https://review.openstack.org/564060

Office hour logs from last week:

* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-05-09.00.html
* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-06-01.00.html
* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-07-15.00.html

I posted my summary of the joint leadership meeting held between the
Foundation Board, User Committee, and TC at the summit:
http://lists.openstack.org/pipermail/openstack-dev/2018-June/131115.html

== Ongoing Discussions ==

The TC has started planning for more active tracking of the work of
project teams and SIGs, to identify inter-group issues earlier and to
help work through them as needed. The first step in the process is to
have TC members attached as liaisons to all of the groups. When all TC
members have had a chance to sign up as liaisons for teams where they
are already active, I will make assignments to fill out the roster so
that all teams are covered.

* http://lists.openstack.org/pipermail/openstack-dev/2018-June/131293.html
* https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Liaisons

The resolution laying out the Python 2 deprecation timeline and
deadline for supporting Python 3 has enough votes to be approved, but
we had several TC members traveling last week so I am going to hold it
open until 12 June in case anyone else has comments.

* https://review.openstack.org/#/c/571011/

The governance change to add a "Castellan-compatible key store" to the
base services list is up for review.
* https://review.openstack.org/572656 The first batch of git repositories for StarlingX, containing only projects that are not forks of anything else, have been imported. * https://review.openstack.org/#/c/569562/ == TC member actions/focus/discussions for the coming week(s) == I have proposed a small change to the house-rules clarifying how typo fixes proposed by the chair should be handled. * https://review.openstack.org/572811 TC members, please sign up as a liaison to project teams and SIGs (see above). == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From doug at doughellmann.com Mon Jun 11 15:02:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 11 Jun 2018 11:02:41 -0400 Subject: [openstack-dev] [tc] proposing changes to the project-team-guide-core review team In-Reply-To: <1528214965-sup-1044@lrrr.local> References: <1528214965-sup-1044@lrrr.local> Message-ID: <1528729332-sup-699@lrrr.local> Excerpts from Doug Hellmann's message of 2018-06-05 12:14:19 -0400: > The review team [1] for the project-team-guide repository (managed > by the TC) hasn't been updated for a while. I would like to propose > removing a few reviewers who are no longer active, and adding one > new reviewer. > > My understanding is that Kyle Mestery and Nikhil Komawar have both > moved on from OpenStack to other projects, so I propose that we > remove them from the core team. > > Chris Dent has been active with reviews lately and has expressed > willingness to help manage the guide. I propose that we add him to > the team. > > Please let me know what you think, > Doug > > [1] https://review.openstack.org/#/admin/groups/953,members After a week with no objections, I have added Chris to the review team for the project-team-guide repository. Thank you again, Chris, for offering to help! 
Doug

From kchamart at redhat.com  Mon Jun 11 15:06:04 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Mon, 11 Jun 2018 17:06:04 +0200
Subject: [openstack-dev] [nova] increasing the number of allowed volumes
 attached per instance > 26
In-Reply-To: <20180611095529.GA3344@redhat>
References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com>
 <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com>
 <20180608093545.GE11695@paraplu> <20180611095529.GA3344@redhat>
Message-ID: <20180611150604.GF11695@paraplu>

On Mon, Jun 11, 2018 at 11:55:29AM +0200, Sahid Orentino Ferdjaoui wrote:
> On Fri, Jun 08, 2018 at 11:35:45AM +0200, Kashyap Chamarthy wrote:
> > On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote:

[...]

> > > The 26 volumes thing is a libvirt driver restriction.
> >
> > The original limitation of 26 disks was because at that time there was
> > no 'virtio-scsi'.
> >
> > (With 'virtio-scsi', each of its controller allows upto 256 targets, and
> > each target can use any LUN (Logical Unit Number) from 0 to 16383
> > (inclusive). Therefore, the maxium allowable disks on a single
> > 'virtio-scsi' controller is 256 * 16384 == 4194304.) Source[1].
>
> Not totally true for Nova. Nova handles one virtio-scsi controller per
> guest and plug all the volumes in one target so in theory that would
> be 16384 LUN (only).

Yeah, I could've been clearer that I was only talking about the maximum
allowable disks regardless of how Nova handles it.

> But you made a good point the 26 volumes thing is not a libvirt driver
> restriction. For example the QEMU SCSI native implementation handles
> 256 disks.
>
> About the virtio-blk limitation I made the same finding but Tsuyoshi
> Nagata shared an interesting point saying that virtio-blk is not longer
> limited by the number of PCI slot available. That in recent kernel and
> QEMU version [0].
>
> I could join what you are suggesting at the bottom and fix the limit
> to 256 disks.

Right, that's for KVM-based hypervisors.

Eric Fried on IRC said the other day that for IBM POWER hypervisor they
have tested (not with OpenStack) up to 4000 disks. But I have yet to see
any more concrete details from POWER hypervisor users on this thread.

If people can't seem to reach an agreement on the limits, we may have to
settle for conditionals:

    if kvm|qemu:
        return 256
    elif POWER:
        return 4000
    elif:
        ...

Before that we need concrete data that it is a _reasonable_ limit for
POWER hypervisor (and possibly others).

> [0] https://review.openstack.org/#/c/567472/16/nova/virt/libvirt/blockinfo.py at 162

[...]

--
/kashyap

From openstack at fried.cc  Mon Jun 11 15:14:33 2018
From: openstack at fried.cc (Eric Fried)
Date: Mon, 11 Jun 2018 10:14:33 -0500
Subject: [openstack-dev] [nova] increasing the number of allowed volumes
 attached per instance > 26
In-Reply-To: <20180611150604.GF11695@paraplu>
References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com>
 <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com>
 <20180608093545.GE11695@paraplu> <20180611095529.GA3344@redhat>
 <20180611150604.GF11695@paraplu>
Message-ID: <4bd5a90a-46f4-492f-4f13-201872d43919@fried.cc>

I thought we were leaning toward the option where nova itself doesn't
impose a limit, but lets the virt driver decide.

I would really like NOT to see logic like this in any nova code:

> if kvm|qemu:
>     return 256
> elif POWER:
>     return 4000
> elif:
>     ...
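To make the contrast concrete, here is a minimal sketch of the
"let the virt driver decide" approach. The get_max_attachable_disks()
hook and the numbers in it are hypothetical -- this is not an existing
Nova interface -- but it shows the limit being owned by each driver
rather than by hypervisor checks in common code:

    # Hypothetical sketch only: the method name and the constants are
    # made up for illustration and do not come from the Nova driver API.

    class ComputeDriver(object):
        def get_max_attachable_disks(self):
            """Return the per-instance disk attach limit for this driver."""
            raise NotImplementedError()

    class LibvirtDriver(ComputeDriver):
        def get_max_attachable_disks(self):
            # A KVM/QEMU-backed driver might advertise the limit its
            # disk bus (e.g. virtio-scsi) is known to handle comfortably.
            return 256

    class PowerVMDriver(ComputeDriver):
        def get_max_attachable_disks(self):
            # A POWER-based driver could advertise a much higher tested limit.
            return 4000

    def can_attach_another_disk(driver, current_disk_count):
        # Common code only asks the driver; it never branches on the
        # hypervisor type itself.
        return current_disk_count < driver.get_max_attachable_disks()

With something like that, supporting a new hypervisor means implementing
one method in its driver instead of growing another elif branch in
shared code.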
On 06/11/2018 10:06 AM, Kashyap Chamarthy wrote: > On Mon, Jun 11, 2018 at 11:55:29AM +0200, Sahid Orentino Ferdjaoui wrote: >> On Fri, Jun 08, 2018 at 11:35:45AM +0200, Kashyap Chamarthy wrote: >>> On Thu, Jun 07, 2018 at 01:07:48PM -0500, Matt Riedemann wrote: > > [...] > >>>> The 26 volumes thing is a libvirt driver restriction. >>> >>> The original limitation of 26 disks was because at that time there was >>> no 'virtio-scsi'. >>> >>> (With 'virtio-scsi', each of its controller allows upto 256 targets, and >>> each target can use any LUN (Logical Unit Number) from 0 to 16383 >>> (inclusive). Therefore, the maxium allowable disks on a single >>> 'virtio-scsi' controller is 256 * 16384 == 4194304.) Source[1]. >> >> Not totally true for Nova. Nova handles one virtio-scsi controller per >> guest and plug all the volumes in one target so in theory that would >> be 16384 LUN (only). > > Yeah, I could've been clearer that I was only talking the maximum > allowable disks regardless of how Nova handles it. > >> But you made a good point the 26 volumes thing is not a libvirt driver >> restriction. For example the QEMU SCSI native implementation handles >> 256 disks. >> >> About the virtio-blk limitation I made the same finding but Tsuyoshi >> Nagata shared an interesting point saying that virtio-blk is not longer >> limited by the number of PCI slot available. That in recent kernel and >> QEMU version [0]. >> >> I could join what you are suggesting at the bottom and fix the limit >> to 256 disks. > > Right, that's for KVM-based hypervisors. > > Eric Fried on IRC said the other day that for IBM POWER hypervisor they > have tested (not with OpenStack) upto 4000 disks. But I am yet to see > any more concrete details from POWER hypervisor users on this thread. > > If people can't seem to reach an agreement on the limits, we may have to > settle with conditionals: > > if kvm|qemu: > return 256 > elif POWER: > return 4000 > elif: > ... > > Before that we need concrete data that it is a _reasonble_ limit for > POWER hypervisor (and possibly others). > >> [0] https://review.openstack.org/#/c/567472/16/nova/virt/libvirt/blockinfo.py at 162 > > [...] > From pc2929 at att.com Mon Jun 11 16:17:11 2018 From: pc2929 at att.com (CARVER, PAUL) Date: Mon, 11 Jun 2018 16:17:11 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <1528726300-sup-9083@lrrr.local> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> Message-ID: Jumping into the general Storyboard topic, but distinct from the previous questions about searching, is there any equivalent in Storyboard to the Launchpad series and milestones diagrams? e.g.: https://launchpad.net/nova/+series https://launchpad.net/neutron/+series https://launchpad.net/cinder/+series https://launchpad.net/networking-sfc/+series https://launchpad.net/bgpvpn/+series As I understand from what I've read and seen on summit talk recordings, anyone can create any view of the data they please and they can share their personalized view with whomever they want, but that is basically the complete opposite of standardization. Does Storyboard have any plans to provide any standard views that are consistent across projects? Or is it focused solely on the "in club" who know what dashboard views are custom to each project? For anyone trying to follow multiple projects at a strategic level (i.e. 
not down in the weeds day to day, but checking in weekly or monthly) to see what's planned, what's deferred, and what's completed for either upcoming milestones or looking back to see if something did or did not get finished, a consistent cross-project UI of some kind is essential. For example, with virtually no insider involvement with Nova, I was able to locate this view of what's going on for the Rocky series: https://launchpad.net/nova/rocky How would I locate that same information for a project in Storyboard without constructing my own custom worklist or finding an insider to share their worklist with me? -- Paul Carver VoIP: 732-545-7377 Cell: 908-803-1656 E: pcarver at att.com Q Instant Message If you look closely enough at the present you can find loose bits of the future just lying around. From jungleboyj at gmail.com Mon Jun 11 16:32:09 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 11 Jun 2018 11:32:09 -0500 Subject: [openstack-dev] [cinder] Recordings from the Vancouver Summit Uploaded Message-ID: All, I have gotten the Vancouver Summit Forum session recordings uploaded to YouTube.  You can see the links in our wiki here: https://wiki.openstack.org/wiki/VancouverSummit2018Summary Let me know if you have any questions. Thanks! Jay From zbitter at redhat.com Mon Jun 11 16:39:36 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 11 Jun 2018 12:39:36 -0400 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <1528726300-sup-9083@lrrr.local> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> Message-ID: On 11/06/18 10:23, Doug Hellmann wrote: > Excerpts from Dmitry Tantsur's message of 2018-06-11 16:00:41 +0200: >> Hi, >> >> On 06/11/2018 03:53 PM, Ruby Loo wrote: >>> Hi, >>> >>> I don't want to hijack the initial thread, but am now feeling somewhat guilty >>> about not being vocal wrt Storyboard. Yes, ironic migrated to Storyboard in the >>> beginning of this cycle. To date, I have not been pleased with replacing >>> Launchpad with Storyboard. I believe that Storyboard is somewhat >>> still-in-progress, and that there were/are some features (stories) that are >>> outstanding that would make its use better. >>> >>> From my point of view (as a developer and core, not a project manager or PTL) >>> using Storyboard has made my day-to-day work worse. Granted, any migration is >>> without headaches. But some of the main things, like searching for our RFEs >>> (that we had tagged in Launchpad) wasn't possible. I haven't yet figured out how >>> to limit a search to only the 'ironic' project using that 'search' like GUI, so >>> I have been frustrated trying to find particular bugs that I *knew* existed but >>> had not memorized the bug number. >> >> Yeah, I cannot fully understand the search. I would expect something explicit >> like Launchpad or better something command-based like "project:openstack/ironic >> pxe". This does not seem to work, so I also wonder how to filter all stories >> affecting a project. >> > > Searching tripped me up for the first couple of weeks, too. > Storyboard's search field is a lot "smarter" than expected. Or maybe > you'd call it "magic". Either way, it was confusing, but you don't have > to use any special syntax in the UI. > > To search for a project, type the name of the project in the search > field and then *wait* for the list of drop-down options to appear. > The first item in the list will be a "raw" search for the term. 
The > others will have little icons indicating their type. The project > icon looks like a little cube, for example. If I go to > https://storyboard.openstack.org/#!/search and type "openstack/ironic" > I get a list that includes openstack/ironic, openstack/ironic-inspector, > etc. > > Select the project you want from the list and hit enter, and you'll > get a list of all of the stories with tasks attached to the project. Yeah, it's actually pretty powerful, but the UX is a pain. For a workflow as common as searching within a project, there should never be a step that involves *waiting*. This could be easily fixed: if the user is on the page for a project (or project group) and clicks on the search tab, the search field should be autopopulated with the project so they only have to opt out when they want to search something else, rather than opt in every time by typing the project's name again... waiting... clicking on one of the inscrutable icons. (Prepopulating in this way would also help teach people how the search field works and what the little icons mean, so that it wouldn't take weeks to figure out how to search within a project even when you have to start from scratch.) There are a lot of rough edges like this. An issue tracker is an incredibly complicated class of application, and projects like Launchpad have literally millions of issues tracked, so basically everything that could come up has. Storyboard is not at that stage yet. Some other bugbears: * There's no help link anywhere. (This appears to be because there's nothing to link to.) * There's no way to mark a story as a duplicate of another. * Numeric IDs in URLs instead of project names are a serious barrier to usability. * Default query size of 10 unless you (a) are logged in, and (b) increased it to something sane in your Profile (pro tip: do this now!) makes it really painful to use, especially since the full text search is not very accurate, the prev/next arrows appear to be part of a competition to make UI elements as tiny as possible(4 pixels wide, and even the click target is only 16), and moving between pages is kinda slow. Also I changed the setting in my profile the other day, and when I logged in again today it had been reset to 10. * Actually, I just tried scrolling through the project list after setting the query size back to 100, and the ranges I got were: - 1 to 100 of 344 ok so far - 101 to 200 of 344 good good - 100101 to 344 of 344 wat * Actually, *is* there any full-text search? The search page says only that you can search for "Keyword in a story or task title". That would explain why it's impossible to find most things you're looking for. * You can't even use Google to search it, I suspect because only issues that are linked to from other sites are visible to the crawler due to there being no sitemap.xml. * Showing project groups in reverse chronological order of their creation instead of alphabetical order is bizarre. * Launchpad fields always display in fixed-width fonts with linebreaks preserved. Storyboard uses Markdown. The migration process makes no attempt to preserve the formatting, so a lot of the descriptions/comments containing code/logs/heat templates is unreadable. * No upper limit on text box width makes for super long lines of text that are difficult to read. Proportionally-spaced text should generally be limited to a maximum width of ~33em http://webtypography.net/2.1.2 (in fact the typography in general is wanting, starting with all of the text being too small). 
* References in comments to other bugs, which were links in Launchpad, are not linked to either the bugs in Launchpad or the stories in Storyboard after the migration. The good news is these are just rough edges, and they all seem fixable. (Except for the lack of full-text search, if that actually is the case.) The bad news is that we're at the stage where it's not ready for primetime, but it won't get ready without major projects starting to use it, but while it's getting ready those projects are in for a lot of pain and are awkwardly disconnected from all of the projects still on Launchpad (you can't simply move a ticket from one project to another any more if they're on different systems). cheers, Zane. > To search based on words in the title or body of the story or task, > just type those and then select the item with the magnifying glass > icon for the "raw" search. > > It's not necessary to use search to get a list of open items, though. > You can also navigate directly to a project or group of projects. > For example, by clicking the "Project Groups" icon on the left you > end up at https://storyboard.openstack.org/#!/project_group/list > and by entering "ironic" in the search field there you'll see that > there are 23 projects in the ironic group (wow!). Clicking the name > of the project group will take you to a view showing the current > open items. > > I strongly encourage teams to set up worklists or dashboards with > saved searches or manually curated lists of stories or tasks. For > example, the release team uses > https://storyboard.openstack.org/#!/board/64 to keep track of our work > within the cycle. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jungleboyj at gmail.com Mon Jun 11 16:42:52 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 11 Jun 2018 11:42:52 -0500 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> Message-ID: <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> On 6/11/2018 11:17 AM, CARVER, PAUL wrote: > Jumping into the general Storyboard topic, but distinct from the previous questions about searching, is there any equivalent in Storyboard to the Launchpad series and milestones diagrams? e.g.: > > https://launchpad.net/nova/+series > https://launchpad.net/neutron/+series > https://launchpad.net/cinder/+series > https://launchpad.net/networking-sfc/+series > https://launchpad.net/bgpvpn/+series > > As I understand from what I've read and seen on summit talk recordings, anyone can create any view of the data they please and they can share their personalized view with whomever they want, but that is basically the complete opposite of standardization. Does Storyboard have any plans to provide any standard views that are consistent across projects? Or is it focused solely on the "in club" who know what dashboard views are custom to each project? Paul, this is actually one of the big concerns I have with the move to Storyboard is the fact that there is no longer standardization across projects.  When I asked about this it was noted that it would be important for Cinder to document how we use Storyboard so people can refer to the documentation and know how to use it.  
This, however, seems needlessly complicated. Would have expected how to use Storyboard was going to be used to be documented/recommended before hand. > For anyone trying to follow multiple projects at a strategic level (i.e. not down in the weeds day to day, but checking in weekly or monthly) to see what's planned, what's deferred, and what's completed for either upcoming milestones or looking back to see if something did or did not get finished, a consistent cross-project UI of some kind is essential. Agreed.  Wonder if at the next midcycle it would be worth having a cross project discussion to try and create some consistency.  That, however, would require buy-in from at least the core projects. > For example, with virtually no insider involvement with Nova, I was able to locate this view of what's going on for the Rocky series: https://launchpad.net/nova/rocky > How would I locate that same information for a project in Storyboard without constructing my own custom worklist or finding an insider to share their worklist with me? > From doug at doughellmann.com Mon Jun 11 16:54:15 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 11 Jun 2018 12:54:15 -0400 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> Message-ID: <1528735833-sup-4611@lrrr.local> Excerpts from Jay S Bryant's message of 2018-06-11 11:42:52 -0500: > > On 6/11/2018 11:17 AM, CARVER, PAUL wrote: > > Jumping into the general Storyboard topic, but distinct from the previous questions about searching, is there any equivalent in Storyboard to the Launchpad series and milestones diagrams? e.g.: > > > > https://launchpad.net/nova/+series > > https://launchpad.net/neutron/+series > > https://launchpad.net/cinder/+series > > https://launchpad.net/networking-sfc/+series > > https://launchpad.net/bgpvpn/+series > > > > As I understand from what I've read and seen on summit talk recordings, anyone can create any view of the data they please and they can share their personalized view with whomever they want, but that is basically the complete opposite of standardization. Does Storyboard have any plans to provide any standard views that are consistent across projects? Or is it focused solely on the "in club" who know what dashboard views are custom to each project? > Paul, this is actually one of the big concerns I have with the move to > Storyboard is the fact that there is no longer standardization across > projects.  When I asked about this it was noted that it would be > important for Cinder to document how we use Storyboard so people can > refer to the documentation and know how to use it.  This, however, seems > needlessly complicated. Would have expected how to use Storyboard was > going to be used to be documented/recommended before hand. I'm not sure what sort of project-specific documentation we think we need. Each project team can set up its own board or worklist for a given series. The "documentation" just needs to point to that thing, right? Each team may also decide to use a set of tags, and those would need to be documented, but that's no different from launchpad. > > For anyone trying to follow multiple projects at a strategic level (i.e. 
not down in the weeds day to day, but checking in weekly or monthly) to see what's planned, what's deferred, and what's completed for either upcoming milestones or looking back to see if something did or did not get finished, a consistent cross-project UI of some kind is essential. > Agreed.  Wonder if at the next midcycle it would be worth having a cross > project discussion to try and create some consistency.  That, however, > would require buy-in from at least the core projects. Why? If we have a large number of projects who agree to use the tool a certain way, that seems good, regardless of whether any specific teams are included in the group. Let the outliers document their processes, if they end up being significantly different. > > For example, with virtually no insider involvement with Nova, I was able to locate this view of what's going on for the Rocky series: https://launchpad.net/nova/rocky > > How would I locate that same information for a project in Storyboard without constructing my own custom worklist or finding an insider to share their worklist with me? > > > From skaplons at redhat.com Mon Jun 11 17:01:37 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 11 Jun 2018 19:01:37 +0200 Subject: [openstack-dev] [neutron] Neutron CI meeting 12.06.2018 cancelled Message-ID: Hi, I will be traveling tomorrow so I can’t chair CI meeting on 12.06.2018 so it is cancelled. Next meeting will be normally at 19.06.2018. — Slawek Kaplonski Senior software engineer Red Hat From pc2929 at att.com Mon Jun 11 17:17:23 2018 From: pc2929 at att.com (CARVER, PAUL) Date: Mon, 11 Jun 2018 17:17:23 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <1528735833-sup-4611@lrrr.local> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> Message-ID: Doug Hellmann wrote: >I'm not sure what sort of project-specific documentation we think we need. Perhaps none if there is a standard, but is there a standard? Can you give me examples in Storyboard of "standard" views that present information even vaguely similar to https://launchpad.net/nova/+series and https://launchpad.net/nova/rocky ? Or is every project on their own to invent the views that they will use, independent of what any other project is doing? >Each project team can set up its own board or worklist for a given series. The "documentation" just needs to point to that thing, right? If we're relying on each team to set up its own board or worklist then it sounds like there is not a standard. In which case, we're back to the need for each project to document (at a minimum) where to find its view and perhaps also how to interpret it. >Each team may also decide to use a set of tags, and those would need to be documented, but that's no different from launchpad. I agree, use of tags is likely to be team specific, but where can someone find those tags without mind-melding with an experienced member of the project? E.g. If I navigate to the fairly obvious URL: https://bugs.launchpad.net/neutron I can see a list of tags on the right side of the page, sorted in descending order by frequency of use. 
On the other hand, if I follow the intuitive process of going to https://storyboard.openstack.org and clicking on "Project Groups" and then clicking "heat" and then clicking "openstack/heat" I reach the somewhat less obvious URL https://storyboard.openstack.org/#!/project/989 and no indication at all of what tags might be useful in this project. From ekcs.openstack at gmail.com Mon Jun 11 17:49:40 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 11 Jun 2018 10:49:40 -0700 Subject: [openstack-dev] [vitrage] update_timestamp precision In-Reply-To: <9EA3E23E-BAFB-4752-8DA7-B07012FE54FA@nokia.com> References: <9EA3E23E-BAFB-4752-8DA7-B07012FE54FA@nokia.com> Message-ID: Thank you, Ifat! From: "Afek, Ifat (Nokia - IL/Kfar Sava)" Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, June 11, 2018 at 2:12 AM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [vitrage] update_timestamp precision > Hi Eric, > > Apparently we have inconsistent behavior between the different datasources. > The format of the timestamp should be '%Y-%m-%dT%H:%M:%SZ' as defined in [1]. > We need to go over the code and make sure all datasources are aligned. I > created a bug for it [2]. > > [1] > https://github.com/openstack/vitrage/blob/master/vitrage/datasources/transform > er_base.py > > [2] https://bugs.launchpad.net/vitrage/+bug/1776181 > Best regards, > Ifat > > > ---------- Forwarded message --------- > From: Eric K > Date: Sat, 9 Jun 2018 at 00:20 > Subject: [openstack-dev] [vitrage] update_timestamp precision > To: OpenStack Development Mailing List (not for usage questions) > > > > Hi I'm building integration with Vitrage webhook and looking for some > clarification on the timestamp precision to expect. > > In the sample webhook payload found in doc the resource and the alarm > shows different time stamp precisions: > https://docs.openstack.org/vitrage/latest/contributor/notifier-webhook-plug > in.html > .html> > > > Thank you! > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Mon Jun 11 17:51:12 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Mon, 11 Jun 2018 10:51:12 -0700 Subject: [openstack-dev] [vitrage] matching webhook vs alarm list In-Reply-To: <6C8EF4E4-0046-4BE9-98F5-F8C6EDF30510@nokia.com> References: <6C8EF4E4-0046-4BE9-98F5-F8C6EDF30510@nokia.com> Message-ID: Thank you for the explanation! Eric From: "Afek, Ifat (Nokia - IL/Kfar Sava)" Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, June 11, 2018 at 1:34 AM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [vitrage] matching webhook vs alarm list > Hi Eric, > > The format of the vitrage_id was changed to UUID in Pike release. It appears > that the API documentation [1] is outdated. I¹ll fix it. 
> The vitrage_id that you get in the webhook notification should match
> the one coming from 'vitrage alarm list'. The 'id' field is determined
> by the external monitor, so it might be different.
>
> Best Regards,
> Ifat
>
>
> ---------- Forwarded message ---------
> From: Eric K
> Date: Sat, 9 Jun 2018 at 01:40
> Subject: [openstack-dev] [vitrage] matching webhook vs alarm list
> To: OpenStack Development Mailing List (not for usage questions)
>
>
>
> Hi I'm building integration with Vitrage webhook and looking for some
> clarification on what ID to use for matching a webhook notification to
> the specific alarm from the alarm list. In the sample alarm list
> response, there is an 'id' field and a 'vitrage_id' field [1], where
> as in the sample webhook notification payload, there is a 'vitrage_id'
> field [2]. I'd assume we can match by the 'vitrage_id', but the
> samples have very different formats for 'vitrage_id', so I just want
> to confirm. Thank you!
>
> [1]
> https://docs.openstack.org/vitrage/latest/contributor/vitrage-api.html#id22
> [2]
> {
>     "notification": "vitrage.alarm.activate",
>     "payload": {
>         "vitrage_id": "2def31e9-6d9f-4c16-b007-893caa806cd4",
>         "resource": {
>             "vitrage_id": "437f1f4c-ccce-40a4-ac62-1c2f1fd9f6ac",
>             "name": "app-1-server-1-jz6qvznkmnif",
>             "update_timestamp": "2018-01-22 10:00:34.327142+00:00",
>             "vitrage_category": "RESOURCE",
>             "vitrage_operational_state": "OK",
>             "vitrage_type": "nova.instance",
>             "project_id": "8f007e5ba0944e84baa6f2a4f2b5d03a",
>             "id": "9b7d93b9-94ec-41e1-9cec-f28d4f8d702c"
>         },
>         "update_timestamp": "2018-01-22T10:00:34Z",
>         "vitrage_category": "ALARM",
>         "state": "Active",
>         "vitrage_type": "vitrage",
>         "vitrage_operational_severity": "WARNING",
>         "name": "Instance memory performance degraded"
>     }
> }
> https://docs.openstack.org/vitrage/latest/contributor/notifier-webhook-plugin.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From manjeet.s.bhatia at intel.com  Mon Jun 11 18:31:31 2018
From: manjeet.s.bhatia at intel.com (Bhatia, Manjeet S)
Date: Mon, 11 Jun 2018 18:31:31 +0000
Subject: [openstack-dev] [neutron] Bug deputy report June 5 - June 10
Message-ID:

Hi,

There were a total of 15 new bugs reported; I have categorized them below
depending on whether they are related to CI, neutron-client, or RFEs.
Only one was critical, bug 1775183, for which a fix has been released.
There is one high-priority bug, 1775220, which is confirmed. Bugs marked
invalid or incomplete are also listed at the bottom. Some bugs are still
not marked confirmed and need further confirmation from the relevant
members of the Neutron community.

Bugs !

1. https://bugs.launchpad.net/neutron/+bug/1775146
   found some flow tables of br-int will be missing After I restarted
   neutron-openvswitch-agent.

2. https://bugs.launchpad.net/neutron/+bug/1775183
   Fullstack test neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost
   fails often

3.
https://bugs.launchpad.net/neutron/+bug/1775382 neutron-openvswitch-agent cannot start on Windows 4. https://bugs.launchpad.net/neutron/+bug/1775496 agentschedulers: concurrent port delete on unscheduling may cause unscheduling to fail. 5. https://bugs.launchpad.net/neutron/+bug/1775644 Neutron fwaas v2 group port binding failed 6. https://bugs.launchpad.net/bugs/1775797 The mac table size of neutron bridges (br-tun, br-int, br-*) is too small by default and eventually makes openvswitch explode 7. https://bugs.launchpad.net/neutron/+bug/1776160 'burst' does not take effect for neutron egress qos bindlimit by ovs RFE reported as Bug 8. https://bugs.launchpad.net/neutron/+bug/1775250 Implement DVR-aware announcement of fixed IP's in neutron-dynamic-routing Neutron CI related bugs ! 9. https://bugs.launchpad.net/neutron/+bug/1775947 tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest failing. 10. https://bugs.launchpad.net/neutron/+bug/1775220 Unit test neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase. test_get_objects_queries_constant fails often. Neutron-client 11. https://bugs.launchpad.net/python-neutronclient/+bug/1775922 neutron net-list with pagination fails on too many subnets Bugs marked Invalid or Incomplete 12. https://bugs.launchpad.net/neutron/+bug/1775310 Unused namespace is appeared. 13. https://bugs.launchpad.net/neutron/+bug/1775415 pagination for list operations behaves inconsistently 14. https://bugs.launchpad.net/bugs/1775580 Networking Option 2: Self-service networks in Neutron 15. https://bugs.launchpad.net/neutron/+bug/1775758 Deprecated auth_url entries in Neutron Queen's install guide Regards ! Manjeet Singh Bhatia -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jun 11 18:57:30 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 Jun 2018 18:57:30 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> Message-ID: <20180611185729.mtxgluskq3rztdtp@yuggoth.org> On 2018-06-11 17:17:23 +0000 (+0000), CARVER, PAUL wrote: [...] > Perhaps none if there is a standard, but is there a standard? Can > you give me examples in Storyboard of "standard" views that > present information even vaguely similar to > https://launchpad.net/nova/+series and > https://launchpad.net/nova/rocky ? Or is every project on their > own to invent the views that they will use, independent of what > any other project is doing? [...] I'm just going to come out and call bullshit on this one. How many of the >800 official OpenStack deliverable repos have a view like that with any actual relevant detail? If it's "standard" then certainly more than half, right? 
Picking through just the set of 40 "services" listed at https://www.openstack.org/software/project-navigator I see that: * swift seems to stop in mitaka * murano in pike * freezer in queens * solum in liberty * aodh in newton * senlin in pike * ironic in queens (but not really as the series just exist with no releases after newton) * octavia in pike * karbor in queens (but not really any detail even then) * searchlight in pike * barbican in queens (but not really since they stopped showing releases in pike) * panko never used it at all * ceilometer stopped in mitaka * monasca never used it * rally has nonstandard series names but stopped sometime in 2017 * congress stopped in queens * watcher in queens * vitrage in queens (but actually stopped milestoning in ocata) * cloudkitty has nonstandard series names ending in 2017 * tricircle stopped using milestones in pike * openstack-ansible ended in newton * kuryr never used it So... of the _minority_ of projects (from that arbitrarily-chosen but presumably "important" sample) who are actually using that feature, how many of them are simply doing it because they thought the release team was still making use of that information? (Hint: they stopped as of mitaka.) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Jun 11 19:23:40 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 Jun 2018 19:23:40 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> Message-ID: <20180611192340.ksym2bwoe3m6dzxu@yuggoth.org> Others likely have more detailed answers to some of your other points, but as for the ones I happen to know off the top of my head... On 2018-06-11 12:39:36 -0400 (-0400), Zane Bitter wrote: [...] > * There's no help link anywhere. (This appears to be because there's nothing > to link to.) It's https://storyboard.openstack.org/#!/page/about (linked as "About" in the left sidebar of every page on the site) but maybe it should be renamed and/or given an icon? It's also the default page view if you go to the site root while not authenticated. It should probably also link to https://docs.openstack.org/infra/storyboard/ I guess. > * There's no way to mark a story as a duplicate of another. In a twist of irony, this is being tracked in: https://storyboard.openstack.org/#!/story/2000552 https://storyboard.openstack.org/#!/story/2002136 > * Numeric IDs in URLs instead of project names are a serious barrier to > usability. [...] The change series ending in https://review.openstack.org/548343 implements this, and is basically ready to merge any time now. > * You can't even use Google to search it, I suspect because only issues that > are linked to from other sites are visible to the crawler due to there being > no sitemap.xml. [...] There's in-progress work to update storyboard-webclient to newer AngularJS and as a result the situation related to Web search engines will get significantly better. Adding a sitemap or similar index for the benefit of search engines should likely come with or shortly after that. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zbitter at redhat.com Mon Jun 11 19:30:17 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 11 Jun 2018 15:30:17 -0400 Subject: [openstack-dev] [all][sdk][heat] Integrating OpenStack and k8s with a service broker In-Reply-To: References: <9a2e5b4a-2ac7-dee3-5a3e-5d985244a952@redhat.com> Message-ID: On 08/06/18 22:28, Rico Lin wrote: > > > Zane Bitter > 於 2018年6 > 月9日 週六 上午9:20寫道: > > > > IIUC you're talking about a Heat resource that calls out to a service > > broker using the Open Service Broker API? (Basically acting like the > > Kubernetes Service Catalog.) That would be cool, as it would allow us to > > orchestrate services written for Kubernetes/CloudFoundry using Heat. > > Although probably not as easy as it sounds at first glance ;) > In my previous glance, I was thought about our new service will also > wrap up API with Ansible playbooks. A playbook to create a resource, and > another playbook to control Service Broker API. So we can directly > use that playbook instead of calling Service broker APIs. No?:) Oh, call the playbooks directly from Heat? That would work for anything else that uses Automation Broker (e.g. the AWS playbook bundles), but not for stuff that has its own service broker implementation (e.g. Azure). That said, it would also be interesting for other reasons if Heat had a way to run Ansible playbooks either directly or via AWX, but now we're getting even further off-topic ;) > I think we can start trying to build playbooks before we start planning > on crazy ideas:) > > > > It wouldn't rely on _this_ set of playbook bundles though, because this > > one is only going to expose OpenStack resources, which are already > > exposed in Heat. (Unless you're suggesting we replace all of the current > > resource plugins in Heat with Ansible playbooks via the service broker? > > In which case... that's not gonna happen ;) > Right, we should use OS::Heat::Stackto expose resources from other > OpenStack, notwith this. +1 > > So Heat could adopt this at any time to add support for resources > > exposed by _other_ service brokers, such as the AWS/Azure/GCE service > > brokers or other playbooks exposed through Automation Broker. > > > > I like the idea to add support for resources exposed by other service > borkers I can already see that I'm going to make this same typo at least 3 times a week. https://www.youtube.com/watch?v=sY_Yf4zz-yo - ZB From whayutin at redhat.com Mon Jun 11 19:34:23 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 11 Jun 2018 13:34:23 -0600 Subject: [openstack-dev] [tripleo] scenario000-multinode-oooq-container-upgrades Message-ID: Greetings, I wanted to let everyone know that we have a keystone only deployment and upgrade job in check non-voting. I'm asking everyone in TripleO to be mindful of this job and to help make sure it continues to pass as we move it from non-voting check to check and eventually gating. Upgrade jobs are particularly difficult to keep running successfully because of the complex workflow itself, job run times and other factors. Your help to ensure we don't merge w/o a pass on this job will go a long way in helping the tripleo upgrades team. There is still work to be done here, however it's much easier to do it with the check non-voting job in place. Thanks all -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Mon Jun 11 19:51:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 11 Jun 2018 14:51:14 -0500 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <20180611185729.mtxgluskq3rztdtp@yuggoth.org> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> Message-ID: <634bf96f-5660-642a-3cec-b406d97ffc2f@gmail.com> On 6/11/2018 1:57 PM, Jeremy Stanley wrote: > So... of the_minority_ of projects (from that arbitrarily-chosen > but presumably "important" sample) who are actually using that > feature, how many of them are simply doing it because they thought > the release team was still making use of that information? (Hint: > they stopped as of mitaka.) I never used the series/release/milestone stuff for tracking nova blueprint deliverables because of the release team, I did it purely for project management reasons while I was PTL since it was a clean and easy way to track that information, and allowed me to easily gather stats for blueprint levels across nova releases. Otherwise what you said is correct about that totally being per-project and dependent on the level of OCD of the release liaison / PTL of that project. -- Thanks, Matt From kgiusti at gmail.com Mon Jun 11 19:53:11 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 11 Jun 2018 15:53:11 -0400 Subject: [openstack-dev] [heat][ci][infra][aodh][telemetry] telemetry test broken on oslo.messaging stable/queens Message-ID: Updated subject to include [aodh] and [telemetry] On Tue, Jun 5, 2018 at 11:41 AM, Doug Hellmann wrote: > Excerpts from Ken Giusti's message of 2018-06-05 10:47:17 -0400: >> Hi, >> >> The telemetry integration test for oslo.messaging has started failing >> on the stable/queens branch [0]. >> >> A quick review of the logs points to a change in heat-tempest-plugin >> that is incompatible with the version of gabbi from queens upper >> constraints (1.40.0) [1][2]. >> >> The job definition [3] includes required-projects that do not have >> stable/queens branches - including heat-tempest-plugin. >> >> My question - how do I prevent this job from breaking when these >> unbranched projects introduce changes that are incompatible with >> upper-constrants for a particular branch? > > Aren't those projects co-gating on the oslo.messaging test job? > > How are the tests working for heat's stable/queens branch? Or telemetry? > (whichever project is pulling in that tempest repo) > I've run the stable/queens branches of both Aodh[1] and Heat[2] - both failed. Though the heat failure is different from what we're seeing on oslo.messaging [3], the same warning about gabbi versions is there [4]. However the Aodh failure is exactly the same as the oslo.messaging one [5] - this makes sense since the oslo.messaging test is basically running the same telemetry-tempest-plugin test. So this isn't something unique to oslo.messaging - the telemetry integration test is busted in stable/queens. I'm going to mark these tests as non-voting on oslo.messaging's queens branch for now so we can land some pending patches. 
[1] https://review.openstack.org/#/c/574306/ [2] https://review.openstack.org/#/c/574311/ [3] http://logs.openstack.org/11/574311/1/check/heat-functional-orig-mysql-lbaasv2/21cce1d/job-output.txt.gz#_2018-06-11_17_30_51_106223 [4] http://logs.openstack.org/11/574311/1/check/heat-functional-orig-mysql-lbaasv2/21cce1d/logs/devstacklog.txt.gz#_2018-06-11_17_09_39_691 [5] http://logs.openstack.org/06/574306/1/check/telemetry-dsvm-integration/0a9620a/job-output.txt.gz#_2018-06-11_16_53_33_982143 >> >> I've tried to use override-checkout in the job definition, but that >> seems a bit hacky in this case since the tagged versions don't appear >> to work and I've resorted to a hardcoded ref [4]. >> >> Advice appreciated, thanks! >> >> [0] https://review.openstack.org/#/c/567124/ >> [1] http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624 >> [2] http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332 >> [3] https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250 >> [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Ken Giusti (kgiusti at gmail.com) From pc2929 at att.com Mon Jun 11 19:53:47 2018 From: pc2929 at att.com (CARVER, PAUL) Date: Mon, 11 Jun 2018 19:53:47 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <20180611185729.mtxgluskq3rztdtp@yuggoth.org> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> Message-ID: Jeremy Stanley wrote: >I'm just going to come out and call bullshit on this one. How many of the >800 official OpenStack deliverable repos have a view like that with any actual relevant detail? If it's "standard" then certainly more than half, right? Well, that's a bit rude, so I'm not going to get in a swearing contest over whether Nova, Neutron and Cinder are more "important" than 800+ other projects. I picked a handful of projects that I'm most interested in and which also happened to have really clear, accessible and easy to understand information on what they have delivered in the past and are planning to deliver in the future. If I slighted your favorite projects I apologize. So, are you saying the information shown in the examples I gave is not useful? Or just that I've been lucky in the past that the projects I'm most interested in do a better than typical job of managing releases but the future is all downhill? If you're saying it's not useful info and we're better off without it then I'll just have to disagree. If you're saying that it has been replaced with something better, please share the URLs. I'm all for improvements, but saying "only a few people were doing something useful so we should throw it out and nobody do it" isn't a path to improvement. How about we discuss alternate (e.g. better/easier/whatever) ways of making the information available. 
From jungleboyj at gmail.com Mon Jun 11 20:09:55 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 11 Jun 2018 15:09:55 -0500 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <1528735833-sup-4611@lrrr.local> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> Message-ID: On 6/11/2018 11:54 AM, Doug Hellmann wrote: > Excerpts from Jay S Bryant's message of 2018-06-11 11:42:52 -0500: >> On 6/11/2018 11:17 AM, CARVER, PAUL wrote: >>> Jumping into the general Storyboard topic, but distinct from the previous questions about searching, is there any equivalent in Storyboard to the Launchpad series and milestones diagrams? e.g.: >>> >>> https://launchpad.net/nova/+series >>> https://launchpad.net/neutron/+series >>> https://launchpad.net/cinder/+series >>> https://launchpad.net/networking-sfc/+series >>> https://launchpad.net/bgpvpn/+series >>> >>> As I understand from what I've read and seen on summit talk recordings, anyone can create any view of the data they please and they can share their personalized view with whomever they want, but that is basically the complete opposite of standardization. Does Storyboard have any plans to provide any standard views that are consistent across projects? Or is it focused solely on the "in club" who know what dashboard views are custom to each project? >> Paul, this is actually one of the big concerns I have with the move to >> Storyboard is the fact that there is no longer standardization across >> projects.  When I asked about this it was noted that it would be >> important for Cinder to document how we use Storyboard so people can >> refer to the documentation and know how to use it.  This, however, seems >> needlessly complicated. Would have expected how to use Storyboard was >> going to be used to be documented/recommended before hand. > I'm not sure what sort of project-specific documentation we think we > need. > > Each project team can set up its own board or worklist for a given > series. The "documentation" just needs to point to that thing, right? > > Each team may also decide to use a set of tags, and those would need to > be documented, but that's no different from launchpad. As I understood it, there is no required way to use Storyboard for tracking what used to be 'Blueprints'.  So each time will need to document how they use Storyboard to record such things.  If I am incorrect about this I apologize. >>> For anyone trying to follow multiple projects at a strategic level (i.e. not down in the weeds day to day, but checking in weekly or monthly) to see what's planned, what's deferred, and what's completed for either upcoming milestones or looking back to see if something did or did not get finished, a consistent cross-project UI of some kind is essential. >> Agreed.  Wonder if at the next midcycle it would be worth having a cross >> project discussion to try and create some consistency.  That, however, >> would require buy-in from at least the core projects. > Why? If we have a large number of projects who agree to use the tool a > certain way, that seems good, regardless of whether any specific teams > are included in the group. Let the outliers document their processes, if > they end up being significantly different. Fair enough.  
It would help if we had buy-in from all of the projects but you are right that nothing prevents us from achieving consensus from those are are willing to participate. >>> For example, with virtually no insider involvement with Nova, I was able to locate this view of what's going on for the Rocky series: https://launchpad.net/nova/rocky >>> How would I locate that same information for a project in Storyboard without constructing my own custom worklist or finding an insider to share their worklist with me? >>> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Mon Jun 11 20:23:12 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 Jun 2018 20:23:12 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> Message-ID: <20180611202311.4btsydpvfaxslk5j@yuggoth.org> On 2018-06-11 19:53:47 +0000 (+0000), CARVER, PAUL wrote: [...] > Well, that's a bit rude [...] Apologies for the strong language; I did not intend any offense, and it was indeed unnecessary for purposes of my point. > So, are you saying the information shown in the examples I gave is > not useful? I'm saying as far as OpenStack is concerned, it's not a "standard" (which was your original claim). A minority of the 40 official services (so named by the project navigator anyway) are relying on it and I'd wager far fewer still of the >800 deliverable repositories maintained by official OpenStack project teams are either. > Or just that I've been lucky in the past that the projects I'm > most interested in do a better than typical job of managing > releases but the future is all downhill? I think you likely care proportionally more about projects which have been in the OpenStack ecosystem for longer (this is unsurprising) and of those quite a few are tracking series/milestone info in LP because it was integrated with release management once upon a time (up until a couple years ago) so there was a lot of pressure, perhaps even a requirement, to do so and old habits die hard. Matt R. notes in his reply that as PTL he found using it for tracking cycle work independent of whether the Release Management team was still expecting/relying on it, so I don't doubt the usefulness of having some means of continuing to do that (and with StoryBoard there are a few ways you could do it but we didn't want to be proscriptive). Some other teams have found that they prefer a kanban style tool for this sort of effort instead, but have unfortunately turned to proprietary services like Trello as a result. I also don't think that lack of using the series/milestone tracker in LP is an indication that a project is doing a worse job of managing releases. We have a lot more useful automation now around release notes, highlights, release processes scraping references from commit logs, and so on. > If you're saying it's not useful info and we're better off without > it then I'll just have to disagree. If you're saying that it has > been replaced with something better, please share the URLs. 
https://docs.openstack.org/infra/storyboard/gui/theory.html#worklists-and-boards As I said, we didn't want to start out telling teams how they should be doing their release tracking so that we could see what patterns emerged. If you recall the "specs" experiment years ago, a few teams tried mildly different solutions for moving from LP blueprints with random wiki page links to tracking specifications in Git repositories, and over time they learned successful patterns from each other and mostly converged on similar solutions. There were similar cries back then about "how will users/operators find out what is being planned?" but I think the end result was far better than what it replaced. > I'm all for improvements, but saying "only a few people were doing > something useful so we should throw it out and nobody do it" isn't > a path to improvement. How about we discuss alternate (e.g. > better/easier/whatever) ways of making the information available. I'm not saying anything should be thrown out. I personally don't even feel that teams should be forced to use StoryBoard (or any particular tool for that matter), but just want to focus on making sure we provide useful, free and open tools through which and on which we can collectively collaborate. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Mon Jun 11 20:31:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 11 Jun 2018 16:31:40 -0400 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> Message-ID: <1528747807-sup-187@lrrr.local> Excerpts from CARVER, PAUL's message of 2018-06-11 19:53:47 +0000: > Jeremy Stanley wrote: > > >I'm just going to come out and call bullshit on this one. How many of the >800 official OpenStack deliverable repos have a view like that with any actual relevant detail? If it's "standard" then certainly more than half, right? > > Well, that's a bit rude, so I'm not going to get in a swearing contest over whether Nova, Neutron and Cinder are more "important" than 800+ other projects. I picked a handful of projects that I'm most interested in and which also happened to have really clear, accessible and easy to understand information on what they have delivered in the past and are planning to deliver in the future. If I slighted your favorite projects I apologize. > > So, are you saying the information shown in the examples I gave is not useful? > > Or just that I've been lucky in the past that the projects I'm most interested in do a better than typical job of managing releases but the future is all downhill? > > If you're saying it's not useful info and we're better off without it then I'll just have to disagree. If you're saying that it has been replaced with something better, please share the URLs. > > I'm all for improvements, but saying "only a few people were doing something useful so we should throw it out and nobody do it" isn't a path to improvement. How about we discuss alternate (e.g. better/easier/whatever) ways of making the information available. > This thread isn't going in a very productive direction. Please consider your tone as you reply. 
The release team used to (help) manage the launchpad series data. We stopped doing that a long time ago, as Jeremy pointed out, because it was not useful to *the release team* in the way we were managing the releases. We stopped tracking blueprints and bug fixes to try to predict which release they would land in and built tools to make it easier for teams to declare what they had completed through release notes instead. OpenStack does not have a bunch of project managers signed up to help with this kind of information, so it was left up to each project team to track any planning information *they decided was useful* to do their work. If that tracking information happens to be useful to anyone other than contributors, I consider that a bonus. As we shift teams over to Storyboard, we have another opportunity to review the processes and to decide how to use the new tool. Some teams with lightweight processes will be able to move directly with little impact. Other teams who are doing more tracking and planning will need to think about how to do that. The new tool provides some flexibility, and as with any other big change in our community, we're likely to see a bit of divergence before we collectively discover what works and teams converge back to a more consistent approach. That's normal, expected, and desirable. I recommend that people spend a little time experimenting on their own before passing judgement or trying to set standards. Start by looking at the features of the tool itself. Set up a work list and add some stories to it. Set up a board and see how the automatic work lists help keep it up to date as the story or task states change. Do the same with a manually managed board. If you need a project to assign a task to because yours hasn't migrated yet, use openstack/release-test. Then think about the workflows you actually use -- not just the ones you've been doing because that's the way the project has always been managed. Think about how those workflows might translate over to the new tool, based on its features. If you're not sure, ask and we can see what other teams are doing or what people more familiar with the tool suggest trying. Doug From fungi at yuggoth.org Mon Jun 11 20:35:41 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 Jun 2018 20:35:41 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> Message-ID: <20180611203541.er7m6jcdabdlzffy@yuggoth.org> On 2018-06-11 15:09:55 -0500 (-0500), Jay S Bryant wrote: [...] > Fair enough. It would help if we had buy-in from all of the > projects but you are right that nothing prevents us from achieving > consensus from those are are willing to participate. [...] Yes, we started out thinking that was going to be the way forward and eventually learned from that mistake. It's debilitatingly depressing to work on a project whose intended users keep saying they want it to be identical to the also-questionably-designed thing they're currently using because any change in process is some amount of effort they can avoid by deferring. Refusing to let anyone use SB year after year because it wasn't quite ready enough for everybody (even if it was plenty useful for somebody) resulted in the people who had been assigned to work on it rage-quitting from the endless negativity being heaped on them.
It was only when it found other potential users outside the OpenStack ecosystem entirely that new life was breathed into the project, because it was getting used by somebody who couldn't be told they weren't allowed. We resolved soon after that to discard our prior fear of different projects relying on different tools, realizing that no progress would ever be made if we required everyone to agree to use it first. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kristi at nikolla.me Mon Jun 11 20:38:55 2018 From: kristi at nikolla.me (Kristi Nikolla) Date: Mon, 11 Jun 2018 16:38:55 -0400 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: References: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> Message-ID: Answering here questions related to mixmatch, as I'm the main developer. - If I understood [[1](https://mixmatch.readthedocs.io/en/latest/)] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. is this true? Correct, it will not synchronize images. What it does is forward API requests across OpenStack clouds, allowing Nova to fetch a remote image. It could be enhanced to "cache" the image, in which case it would be the PULL model. In this architecture, you would have mixmatch acting as a proxy to one or multiple Glance API servers (some remote), and you could even potentially forego having a Glance at all in the edge cloud. - Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) - [G0]: Yes, this is more or less my understanding. I remove the mixmatch reference from this chapter. Rather than remove it completely you could move it to the correct section. :) ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On June 11, 2018 10:28 AM, Csatari, Gergely (Nokia - HU/Budapest) wrote: > Hi, > > Thanks for the comments. > > I’ve updated the wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend > > Br, > > Gerg0 > > [ ] > > From: Waines, Greg [mailto:Greg.Waines at windriver.com] > Sent: Friday, June 8, 2018 1:46 PM > To: Csatari, Gergely (Nokia - HU/Budapest) ; OpenStack Development Mailing List (not for usage questions) ; edge-computing at lists.openstack.org > Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation > > Responses in-lined below, > > Greg. > > From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Friday, June 8, 2018 at 3:39 AM > To: Greg Waines , "openstack-dev at lists.openstack.org" , "edge-computing at lists.openstack.org" > Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation > > Hi, > > Going inline. > > From: Waines, Greg [mailto:Greg.Waines at windriver.com] > Sent: Thursday, June 7, 2018 2:24 PM > > I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): > > One Glance with multiple backends > > - In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? > > - i.e. GLANCE is a typical shared service in a multi-region environment ? 
> > [G0]: In my understanding yes. > > - If so, > how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? > > [G0]: This is an open question for me also. > > Several Glances with an independent synchronization service (PUSH) > > - I refer to this as the PUSH model > > - I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images > > - i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs > (making it independent of the particular Glance backend ... and allowing the Glance Backends at Central and Edge sites to actually be different) > > [G0]: Okay, I can update the wiki to reflect this. Should we keep the “synchronization by the backend” option as an other alternative? > [Greg] Yeah we should keep it as an alternative. > > - I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization > > - i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites > > [G0]: Yes, the question is how to define these synchronization policies. > [Greg] Agreed ... we’ve had some very high-level discussions with end users, but haven’t put together a proposal yet. > > - Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) > > [G0]: Yes, this is more or less my understanding. I remove the mixmatch reference from this chapter. > > One Glance and multiple Glance API Servers (PULL) > > - I refer to this as the PULL model > > - This is the current model supported in StarlingX’s Distributed Cloud sub-project > > - We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and > > - We have glance-api setup for caching such that only the first access to an particular image incurs the latency of the image transfer from Central to Edge > > [G0]: Do you do image caching in Glance API or do you rely in the image cache in Nova? In the Forum session there were some discussions about this and I think the conclusion was that using the image cache of Nova is enough. > [Greg] We enabled image caching in the Glance API. > I believe that Nova Image Caching caches at the compute node ... this would work ok for all-in-one edge clouds or small edge clouds. > But glance-api caching caches at the edge cloud level, so works better for large edge clouds with lots of compute nodes. > > - > > - this PULL model affectively implements the location aware synchronization you talk about below, (i.e. synchronise images only to those cloud instances where they are needed)? > > In StarlingX Distributed Cloud, > > We plan on supporting both the PUSH and PULL model ... suspect there are use cases for both. > > [G0]: This means that you need an architecture supporting both. > > Just for my curiosity what is the use case for the pull model once you have the push model in place? > > [Greg] The PULL model certainly results in the most efficient distribution of images ... basically images are distributed ONLY to edge clouds that explicitly use the image. > > Also if the use case is NOT concerned about incurring the latency of the image transfer from Central to Edge on the FIRST use of image then the PULL model could be preferred ... TBD. > > Here is the updated wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment > > [Greg] Looks good. > > Greg. 
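A minimal sketch of the glance-api.conf settings behind that edge-side image cache (the directory and size values here are illustrative assumptions, not taken from the StarlingX deployment):

    [DEFAULT]
    # cache images fetched from the central Glance on local disk at the edge site
    image_cache_dir = /var/lib/glance/image-cache
    # prune target used by the cache cleaner; 10 GiB is only an example
    image_cache_max_size = 10737418240

    [paste_deploy]
    # add the cache management middleware to the glance-api pipeline
    flavor = keystone+cachemanagement

With something like that in place, only the first request for a given image incurs the transfer from the central site; later boots at that edge cloud are served from the local cache, which matches the behaviour described above.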
> > Thanks, > > Gerg0 > > From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 7, 2018 at 6:49 AM > To: "openstack-dev at lists.openstack.org" , "edge-computing at lists.openstack.org" > Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation > > Hi, > > I did some work ont he figures and realised, that I have some questions related to the alternative options: > > Multiple backends option: > > - What is the API between Glance and the Glance backends? > > - How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? > > - Is it possible to have different OpenStack versions in the different cloud instances? > > - Can a cloud instance use the locally synchronised images in case of a network connection break? > > - Is it possible to implement this without storing database credentials ont he edge cloud instances? > > Independent synchronisation service: > > - If I understood [[1](https://mixmatch.readthedocs.io/en/latest/)] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. is this true? > > As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [[2](https://doodle.com/poll/bddg65vyh4qwxpk5)]. > > [1]: https://mixmatch.readthedocs.io/en/latest/ > > [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 > > Br, > > Gerg0 > > From: Csatari, Gergely (Nokia - HU/Budapest) > Sent: Wednesday, May 23, 2018 8:59 PM > To: OpenStack Development Mailing List (not for usage questions) ; edge-computing at lists.openstack.org > Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation > > Hi, > > Here I send the wiki page [[1](https://wiki.openstack.org/wiki/Image_handling_in_edge_environment)] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. > > Please check and correct/comment. > > Thanks, > > Gerg0 > > [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment > > [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images > > [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Jun 11 21:14:05 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 11 Jun 2018 14:14:05 -0700 Subject: [openstack-dev] [nova] review runway status Message-ID: <5784db28-1dc9-e85b-102c-2ae8a7e5d60a@gmail.com> Hi everybody, This is just a brief status about the blueprints currently occupying review runways [0] and an ask for the nova-core team to give these reviews priority for their code review focus. 
* Certificate Validation - https://blueprints.launchpad.net/nova/+spec/nova-validate-certificates (bpoulos) [END DATE: 2018-06-15] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nova-validate-certificates * Neutron new port binding API for live migration: https://blueprints.launchpad.net/nova/+spec/neutron-new-port-binding-api (mriedem) [END DATE: 2018-06-20] Starts here: https://review.openstack.org/#/c/558001/ * XenAPI: improve the image handler configure:https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement (naichuans) [END DATE: 2018-06-20] Starts here: https://review.openstack.org/#/c/486475/ Best, -melanie [0] https://etherpad.openstack.org/p/nova-runways-rocky From emilien at redhat.com Mon Jun 11 22:02:49 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 11 Jun 2018 15:02:49 -0700 Subject: [openstack-dev] [tc] StarlingX project status update In-Reply-To: <1528214025-sup-920@lrrr.local> References: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie> <1528214025-sup-920@lrrr.local> Message-ID: On Tue, Jun 5, 2018 at 9:08 AM, Doug Hellmann wrote: [snip] > When all of this is done, a viable project with real users will be > open source instead of closed source. Those contributors, and users, > will be a part of our community instead of looking in from the > outside. The path is ugly, long, and clearly not ideal. But, I > consider the result a win, overall. While I agree with Doug that we assume good faith and hope for the best, I personally think we should help them (what we're doing now) but also make sure we DO NOT set a precedent. We could probably learn from this situation and document in our governance what the TC expects when companies have a fork and need to contribute back at some point. We all know StarlingX isn't alone and I'm pretty sure there are a lot of deployments out there who are in the same situation. I guess my point is, yes for helping StarlingX now but no for incubating future forks if that happens. Like Graham, I think these methods shouldn't be what we encourage in our position. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Jun 11 22:20:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 11 Jun 2018 17:20:45 -0500 Subject: [openstack-dev] [nova] review runway status In-Reply-To: <5784db28-1dc9-e85b-102c-2ae8a7e5d60a@gmail.com> References: <5784db28-1dc9-e85b-102c-2ae8a7e5d60a@gmail.com> Message-ID: <12a83856-f941-b116-93d3-f846f7e9f62e@gmail.com> A few small status updates inline. On 6/11/2018 4:14 PM, melanie witt wrote: > Hi everybody, > > This is just a brief status about the blueprints currently occupying > review runways [0] and an ask for the nova-core team to give these > reviews priority for their code review focus. > > * Certificate Validation - > https://blueprints.launchpad.net/nova/+spec/nova-validate-certificates > (bpoulos) [END DATE: 2018-06-15] > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nova-validate-certificates > I know I need to go back through this series, hopefully can get that done tomorrow. I might just start addressing anything I -1 at this point to keep it going. 
> > * Neutron new port binding API for live migration: > https://blueprints.launchpad.net/nova/+spec/neutron-new-port-binding-api > (mriedem) [END DATE: 2018-06-20] Starts here: > https://review.openstack.org/#/c/558001/ This already had quite a bit of core review from Eric before it got into the runway slot, but since then it hasn't had any reviews since it got into the slot. I'd ask Eric and Dan specifically to review the bottom change since (1) Eric has been through it already, albeit awhile ago and (2) Dan is familiar with the vif plugging callback event code in that change. > > * XenAPI: improve the image handler > configure:https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement > (naichuans) [END DATE: 2018-06-20] Starts here: > https://review.openstack.org/#/c/486475/ I'm waiting on the xenserver third party CI to pass on this: https://review.openstack.org/#/c/574318/ And then I'll approve the series. > > Best, > -melanie > > [0] https://etherpad.openstack.org/p/nova-runways-rocky > I also reminded people in the nova-scheduler meeting this morning to put stuff in runways since I know there is a lot of placement blueprint work that's going on outside of the runways slots, so it would be nice if that was also at least formally queued up. Just to remind people that we can still review stuff that's not in a slot, but just seeing it in the queue is at least an indication / reminder for me personally that those changes are ready for review when I'm looking. I've also just started adding things into the queue if I know it's ready even if it's not my series. -- Thanks, Matt From zbitter at redhat.com Mon Jun 11 22:49:59 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 11 Jun 2018 18:49:59 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> Message-ID: <38313d98-14e0-205f-e432-afb24eaffc50@redhat.com> On 04/06/18 10:13, Zane Bitter wrote: > On 31/05/18 14:35, Julia Kreger wrote: >> Back to the topic of nitpicking! >> >> I virtually sat down with Doug today and we hammered out the positive >> aspects that we feel like are the things that we as a community want >> to see as part of reviews coming out of this effort. The principles >> change[1] in governance has been updated as a result. >> >> I think we are at a point where we have to state high level >> principles, and then also update guidelines or other context providing >> documentation to re-enforce some of items covered in this >> discussion... not just to educate new contributors, but to serve as a >> checkpoint for existing reviewers when making the decision as to how >> to vote change set. The question then becomes where would such >> guidelines or documentation best fit? > > I think the contributor guide is the logical place for it. Kendall > pointed out this existing section: > > https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes > > > It could go in there, or perhaps we separate out the parts about when to > use which review scores into a separate page from the mechanics of how > to use Gerrit. > >> Should we explicitly detail the >> cause/effect that occurs? Should we convey contributor perceptions, or >> maybe even just link to this thread as there has been a massive amount >> of feedback raising valid cases, points, and frustrations. 
>> >> Personally, I'd lean towards a blended approach, but the question of >> where is one I'm unsure of. Thoughts? > > Let's crowdsource a set of heuristics that reviewers and contributors > should keep in mind when they're reviewing or having their changes > reviewed. I made a start on collecting ideas from this and past threads, > as well as my own reviewing experience, into a document that I've > presumptuously titled "How to Review Changes the OpenStack Way" (but > might be more accurately called "The Frank Sinatra Guide to Code Review" > at the moment): > > https://etherpad.openstack.org/p/review-the-openstack-way > > It's in an etherpad to make it easier for everyone to add their > suggestions and comments (folks in #openstack-tc have made some tweaks > already). After a suitable interval has passed to collect feedback, I'll > turn this into a contributor guide change. It's had a week to percolate (and I've seen quite a few people viewing the etherpad), so here is the review: https://review.openstack.org/574479 - ZB > Have at it! > > cheers, > Zane. > >> -Julia >> >> [1]: https://review.openstack.org/#/c/570940/ From mriedemos at gmail.com Mon Jun 11 22:55:24 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 11 Jun 2018 17:55:24 -0500 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <20180611202311.4btsydpvfaxslk5j@yuggoth.org> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <20180611202311.4btsydpvfaxslk5j@yuggoth.org> Message-ID: <97908b85-4ac5-cec8-02cf-467d86cd6186@gmail.com> On 6/11/2018 3:23 PM, Jeremy Stanley wrote: > If you recall the "specs" experiment years > ago, a few teams tried mildly different solutions for moving from LP > blueprints with random wiki page links to tracking specifications in > Git repositories, and over time they learned successful patterns > from each other and mostly converged on similar solutions. There > were similar cries back then about "how will users/operators find > out what is being planned?" but I think the end result was far > better than what it replaced. The specs thing was mentioned last week in IRC when talking about blueprints in launchpad and I just want to reiterate the specs are more about high level designs and reviewing those designs in Gerrit which was / is a major drawback in the 'whiteboard' in launchpad for working on blueprints - old blueprints that had a design (if they had a design at all) were usually linked from a wiki page. Anyway, specs are design documents per release. Blueprints in launchpad, at least for nova, are the project management tracking tool for that release. Not all blueprints require a spec, but all specs require a blueprint since specs are generally for API changes or other major design changes or features. Just FYI. 
-- Thanks, Matt From mriedemos at gmail.com Mon Jun 11 23:00:12 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 11 Jun 2018 18:00:12 -0500 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <1528747807-sup-187@lrrr.local> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <1528747807-sup-187@lrrr.local> Message-ID: <13ee87bb-fb18-21e4-62ff-d29d9e61eb7a@gmail.com> On 6/11/2018 3:31 PM, Doug Hellmann wrote: > As we shift teams over to Storyboard, we have another opportunity > to review the processes and to decide how to use the new tool. Some > teams with lightweight processes will be able to move directly with > little impact. Other teams who are doing more tracking and planning > will need to think about how to do that. The new tool provides some > flexibility, and as with any other big change in our community, > we're likely to see a bit of divergence before we collectively > discover what works and teams converge back to a more consistent > approach. That's normal, expected, and desirable. > > I recommend that people spend a little time experimenting on their > own before passing judgement or trying to set standards. > > Start by looking at the features of the tool itself. Set up a work > list and add some stories to it. Set up a board and see how the > automatic work lists help keep it up to date as the story or task > states change. Do the same with a manually managed board. If you > need a project to assign a task to because yours hasn't migrated > yet, use openstack/release-test. > > Then think about the workflows you actually use -- not just the > ones you've been doing because that's the way the project has always > been managed. Think about how those workflows might translate over > to the new tool, based on its features. If you're not sure, ask and > we can see what other teams are doing or what people more familiar > with the tool suggest trying. I'm reminded of something we talked about in IRC last week wrt tracking blueprint-type changes over a given series / release in storyboard. It was mentioned that storyboard has a not-yet-implemented epics feature which is really how we'd probably do this (nested stories is another way of thinking about this). So nova could, for example, have an epic for Stein and then track a story for each blueprint, with the old launchpad blueprint 'work items' (which we don't use, but we do have a list of work items in our specs template) tracked as tasks - which would also be nice since you can track tasks like documentation, CLIs (nova and OSC) and tempest testing (if required). One thing people always commit to in their spec is adding support for the feature in client libraries, tempest and docs, but once the nova server side change is merged those commitments end up getting dropped (not always, but more often than I'd like). 
-- Thanks, Matt From pc2929 at att.com Tue Jun 12 00:04:56 2018 From: pc2929 at att.com (CARVER, PAUL) Date: Tue, 12 Jun 2018 00:04:56 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <97908b85-4ac5-cec8-02cf-467d86cd6186@gmail.com> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <20180611202311.4btsydpvfaxslk5j@yuggoth.org> <97908b85-4ac5-cec8-02cf-467d86cd6186@gmail.com> Message-ID: Matt Riedemann wrote: >The specs thing was mentioned last week in IRC when talking about blueprints in launchpad and I just want to reiterate the specs are >more about high level designs and reviewing those designs in Gerrit which was / is a major drawback in the 'whiteboard' in launchpad for >working on blueprints - old blueprints that had a design (if they had a design at all) were usually linked from a wiki page. >Anyway, specs are design documents per release. Blueprints in launchpad, at least for nova, are the project management tracking tool for >that release. Not all blueprints require a spec, but all specs require a blueprint since specs are generally for API changes or other major >design changes or features. Just FYI. Matt is saying exactly what I've been saying in OpenContrail/Tungsten Fabric TSC meetings for a year. Launchpad Blueprints are very valuable for identifying what's likely to be in a given release, unambiguously indicating when the team has determined that something is going to miss a release (and therefore get bumped out to the future) and capturing the history of what was in a release. But they're lousy for reviewing and collaborating on technical details of what the thing actually is and how it is planned to work. On the other hand, spec documents in Gerrit are pretty good for iteratively refining a design document and ultimately agreeing to a finalized version, but not really all that good at reflecting status and progress to people who are not down in the weeds of discussing the implementation details of the feature. If Storyboard can find a way to improve on one or both of these activities, that's great. But abandoning Launchpad series and milestones functionality without a good replacement isn't a good idea for projects that are using them effectively. And for projects that aren't using them, I have to ask whether it's because they have a better way of communicating release plans to their user/operator communities or if it's because they simply aren't communicating release plans. Generally somebody somewhere is paying for almost all development, so most likely somebody wants to know if and when it is/will-be/was done. The simpler and more consistent the tooling for communicating that, the less time everyone has to spend answering questions from the people who just want to know if whatever thing they're waiting on is in progress, on the backlog, or already complete. From david.ames at canonical.com Tue Jun 12 03:24:09 2018 From: david.ames at canonical.com (David Ames) Date: Mon, 11 Jun 2018 20:24:09 -0700 Subject: [openstack-dev] [charms] 18.05 OpenStack Charms release Message-ID: Announcing the 18.05 release of the OpenStack Charms. The 18.05 charms have full support for the Bionic Ubuntu series. Encryption at rest has been implemented in the storage charms. In addition, the vault and neutron-dynamic-routing charms have been introduced. 
72 bugs have been fixed and released across the OpenStack charms. For full details of the release, please refer to the release notes: https://docs.openstack.org/charm-guide/latest/18.05.html Thanks go to the following contributors for this release: Alex Kavanagh Andrew McLeod Ante Karamatić Billy Olsen Chris MacNaughton Chris Sanders Corey Bryant Craige McWhirter David Ames Drew Freiberger Edward Hope-Morley Felipe Reyes Frode Nordahl Fulvio Galeazzi Jakub Rohovsky James Hebden James Page Liam Young Mario Splivalo Michael Skalka Peter Sabaini Ryan Beisner Sean Feole Seyeong Kim Tamas Erdei Tytus Kurek Xav Paice viswesuwara nathan From sileht at sileht.net Tue Jun 12 05:03:20 2018 From: sileht at sileht.net (Mehdi Abaakouk) Date: Tue, 12 Jun 2018 07:03:20 +0200 Subject: [openstack-dev] [heat][ci][infra][aodh][telemetry] telemetry test broken on oslo.messaging stable/queens In-Reply-To: References: Message-ID: <5b3ee158fbf3f213f5622d7b52903dc7@sileht.net> Hi, The tempest plugin error reminds me of something we hit in the telemetry gate a while back. We fixed the telemetry tempest plugin with https://github.com/openstack/telemetry-tempest-plugin/commit/11277a8bee2b0ee0688ed32cc0e836872c24ee4b So I propose the same fix for the heat tempest plugin: https://review.openstack.org/574550 Hope that helps, sileht On 2018-06-11 21:53, Ken Giusti wrote: > Updated subject to include [aodh] and [telemetry] > > On Tue, Jun 5, 2018 at 11:41 AM, Doug Hellmann > wrote: >> Excerpts from Ken Giusti's message of 2018-06-05 10:47:17 -0400: >>> Hi, >>> >>> The telemetry integration test for oslo.messaging has started failing >>> on the stable/queens branch [0]. >>> >>> A quick review of the logs points to a change in heat-tempest-plugin >>> that is incompatible with the version of gabbi from queens upper >>> constraints (1.40.0) [1][2]. >>> >>> The job definition [3] includes required-projects that do not have >>> stable/queens branches - including heat-tempest-plugin.
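One way to keep a branched job from tracking such an unbranched plugin's master (it comes up again further down the thread) is to pin it to a fixed ref with override-checkout in the job's required-projects. A rough sketch, with a placeholder ref rather than the one actually used:

    - job:
        name: oslo.messaging-telemetry-dsvm-integration-rabbit
        required-projects:
          - openstack/telemetry-tempest-plugin
          - name: openstack/heat-tempest-plugin
            # no stable/queens branch exists, so pin to a ref known to work
            # with the queens upper-constraints (gabbi===1.40.0)
            override-checkout: 0123abc  # placeholder, not the real ref

It works, but as noted below it feels hacky: every incompatible change on the plugin's master means hunting for a new ref to hard-code.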
> > > [1] https://review.openstack.org/#/c/574306/ > [2] https://review.openstack.org/#/c/574311/ > [3] > http://logs.openstack.org/11/574311/1/check/heat-functional-orig-mysql-lbaasv2/21cce1d/job-output.txt.gz#_2018-06-11_17_30_51_106223 > [4] > http://logs.openstack.org/11/574311/1/check/heat-functional-orig-mysql-lbaasv2/21cce1d/logs/devstacklog.txt.gz#_2018-06-11_17_09_39_691 > [5] > http://logs.openstack.org/06/574306/1/check/telemetry-dsvm-integration/0a9620a/job-output.txt.gz#_2018-06-11_16_53_33_982143 > > > >>> >>> I've tried to use override-checkout in the job definition, but that >>> seems a bit hacky in this case since the tagged versions don't appear >>> to work and I've resorted to a hardcoded ref [4]. >>> >>> Advice appreciated, thanks! >>> >>> [0] https://review.openstack.org/#/c/567124/ >>> [1] >>> http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate-post_test_hook.txt.gz#_2018-05-16_05_20_05_624 >>> [2] >>> http://logs.openstack.org/24/567124/1/check/oslo.messaging-telemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog.txt.gz#_2018-05-16_05_19_06_332 >>> [3] >>> https://git.openstack.org/cgit/openstack/oslo.messaging/tree/.zuul.yaml?h=stable/queens#n250 >>> [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mehdi Abaakouk mail: sileht at sileht.net irc: sileht From ramishra at redhat.com Tue Jun 12 05:41:45 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 12 Jun 2018 11:11:45 +0530 Subject: [openstack-dev] [heat][ci][infra][aodh][telemetry] telemetry test broken on oslo.messaging stable/queens In-Reply-To: <5b3ee158fbf3f213f5622d7b52903dc7@sileht.net> References: <5b3ee158fbf3f213f5622d7b52903dc7@sileht.net> Message-ID: On Tue, Jun 12, 2018 at 10:33 AM, Mehdi Abaakouk wrote: > > Hi, > > The tempest plugin error remember me something we got in telemetry gate a > while back. > > We fix the telemetry tempest plugin with https://github.com/openstack/t > elemetry-tempest-plugin/commit/11277a8bee2b0ee0688ed32cc0e836872c24ee4b > > So I propose the same for heat tempest plugin: > https://review.openstack.org/574550 > > After https://github.com/cdent/gabbi/pull/243/commits/01993966c179791186977e27c64b9e525a566408 (gabbi === 1.42.0) it just checks for host is not None and we pass empty string here. So it should not fail. However, I think the issue is that quuens upper-constraints have gabbi===1.40.0. Unless we can bump that we've to go with this workaround. > Hope that helps, > sileht > > > Le 2018-06-11 21:53, Ken Giusti a écrit : > >> Updated subject to include [aodh] and [telemetry] >> >> On Tue, Jun 5, 2018 at 11:41 AM, Doug Hellmann >> wrote: >> >>> Excerpts from Ken Giusti's message of 2018-06-05 10:47:17 -0400: >>> >>>> Hi, >>>> >>>> The telemetry integration test for oslo.messaging has started failing >>>> on the stable/queens branch [0]. >>>> >>>> A quick review of the logs points to a change in heat-tempest-plugin >>>> that is incompatible with the version of gabbi from queens upper >>>> constraints (1.40.0) [1][2]. >>>> >>>> The job definition [3] includes required-projects that do not have >>>> stable/queens branches - including heat-tempest-plugin. 
>>>> >>>> My question - how do I prevent this job from breaking when these >>>> unbranched projects introduce changes that are incompatible with >>>> upper-constrants for a particular branch? >>>> >>> >>> Aren't those projects co-gating on the oslo.messaging test job? >>> >>> How are the tests working for heat's stable/queens branch? Or telemetry? >>> (whichever project is pulling in that tempest repo) >>> >>> >> I've run the stable/queens branches of both Aodh[1] and Heat[2] - both >> failed. >> >> Though the heat failure is different from what we're seeing on >> oslo.messaging [3], >> the same warning about gabbi versions is there [4]. >> >> However the Aodh failure is exactly the same as the oslo.messaging one >> [5] - this makes sense since the oslo.messaging test is basically >> running the same telemetry-tempest-plugin test. >> >> So this isn't something unique to oslo.messaging - the telemetry >> integration test is busted in stable/queens. >> >> I'm going to mark these tests as non-voting on oslo.messaging's queens >> branch for now so we can land some pending patches. >> >> >> [1] https://review.openstack.org/#/c/574306/ >> [2] https://review.openstack.org/#/c/574311/ >> [3] >> http://logs.openstack.org/11/574311/1/check/heat-functional- >> orig-mysql-lbaasv2/21cce1d/job-output.txt.gz#_2018-06-11_17_30_51_106223 >> [4] >> http://logs.openstack.org/11/574311/1/check/heat-functional- >> orig-mysql-lbaasv2/21cce1d/logs/devstacklog.txt.gz#_2018- >> 06-11_17_09_39_691 >> [5] >> http://logs.openstack.org/06/574306/1/check/telemetry-dsvm-i >> ntegration/0a9620a/job-output.txt.gz#_2018-06-11_16_53_33_982143 >> >> >> >> >>>> I've tried to use override-checkout in the job definition, but that >>>> seems a bit hacky in this case since the tagged versions don't appear >>>> to work and I've resorted to a hardcoded ref [4]. >>>> >>>> Advice appreciated, thanks! >>>> >>>> [0] https://review.openstack.org/#/c/567124/ >>>> [1] http://logs.openstack.org/24/567124/1/check/oslo.messaging-t >>>> elemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstack-gate- >>>> post_test_hook.txt.gz#_2018-05-16_05_20_05_624 >>>> [2] http://logs.openstack.org/24/567124/1/check/oslo.messaging-t >>>> elemetry-dsvm-integration-rabbit/e7fdc7d/logs/devstacklog. >>>> txt.gz#_2018-05-16_05_19_06_332 >>>> [3] https://git.openstack.org/cgit/openstack/oslo.messaging/tree >>>> /.zuul.yaml?h=stable/queens#n250 >>>> [4] https://review.openstack.org/#/c/572193/2/.zuul.yaml >>>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> > -- > Mehdi Abaakouk > mail: sileht at sileht.net > irc: sileht > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Tue Jun 12 08:00:16 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 12 Jun 2018 10:00:16 +0200 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <1528747807-sup-187@lrrr.local> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <1528747807-sup-187@lrrr.local> Message-ID: <0989a538-cc78-fb49-2fe9-612af429738e@openstack.org> Doug Hellmann wrote: > [...] > The release team used to (help) manage the launchpad series data. > We stopped doing that a long time ago, as Jeremy pointed out, because > it was not useful to *the release team* in the way we were managing > the releases. We stopped tracking blueprints and bug fixes to try > to predict which release they would land in and built tools to make > it easier for teams to declare what they had completed through > release notes instead. > [...] A bit more historical context around that. Launchpad has a design flaw in how it uses milestones and series. Those are used both for pre-milestone planning (what you planned to do) and post-milestone reporting (what actually landed). Since what you plan to do never ends up being what you actually do, using the same fields to track both creates subtle issues. Trust me, I spent my early OpenStack years fighting that discrepancy and trying to provide a "release manager" view of OpenStack with it. As OpenStack grew, the amount of work needed went up and the quality of the result went down. The solution is to use separate tools. Git history and reno are the only accurate way to track what landed. The task tracker should only do pre-milestone planning. Then, what's the best way to track progress toward a milestone ? Launchpad was clearly not the best tool, otherwise we would not have random etherpads with lists of Launchpad links around release candidate time, or people tracking progress in external Trellos. A lot of people wanted more than just binary indicators like tags and milestone targeting. Storyboard is designed to let you use tags, or lists, or boards, whatever the team finds convenient to organize the work. Don't get me wrong, it's not perfect, and it still has much more rough edges than I'd like. But at least it has the potential to become what we need. It doesn't try to do more than it should. It's also worth repeating it is a task tracker, not a product management tool. So yes, you are missing the consistent views of "progress" and "what's landed" across all of OpenStack. But as Jeremy and Doug mentioned, the reality is that we bailed on providing that view through Launchpad a long time ago. -- Thierry Carrez (ttx) From kennelson11 at gmail.com Tue Jun 12 08:38:53 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 12 Jun 2018 10:38:53 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <38313d98-14e0-205f-e432-afb24eaffc50@redhat.com> References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> <38313d98-14e0-205f-e432-afb24eaffc50@redhat.com> Message-ID: Thanks for the patch Zane :) -Kendall (diablo_rojo) On Mon, Jun 11, 2018 at 3:50 PM Zane Bitter wrote: > On 04/06/18 10:13, Zane Bitter wrote: > > On 31/05/18 14:35, Julia Kreger wrote: > >> Back to the topic of nitpicking! 
> >> > >> I virtually sat down with Doug today and we hammered out the positive > >> aspects that we feel like are the things that we as a community want > >> to see as part of reviews coming out of this effort. The principles > >> change[1] in governance has been updated as a result. > >> > >> I think we are at a point where we have to state high level > >> principles, and then also update guidelines or other context providing > >> documentation to re-enforce some of items covered in this > >> discussion... not just to educate new contributors, but to serve as a > >> checkpoint for existing reviewers when making the decision as to how > >> to vote change set. The question then becomes where would such > >> guidelines or documentation best fit? > > > > I think the contributor guide is the logical place for it. Kendall > > pointed out this existing section: > > > > > https://docs.openstack.org/contributors/code-and-documentation/using-gerrit.html#reviewing-changes > > > > > > It could go in there, or perhaps we separate out the parts about when to > > use which review scores into a separate page from the mechanics of how > > to use Gerrit. > > > >> Should we explicitly detail the > >> cause/effect that occurs? Should we convey contributor perceptions, or > >> maybe even just link to this thread as there has been a massive amount > >> of feedback raising valid cases, points, and frustrations. > >> > >> Personally, I'd lean towards a blended approach, but the question of > >> where is one I'm unsure of. Thoughts? > > > > Let's crowdsource a set of heuristics that reviewers and contributors > > should keep in mind when they're reviewing or having their changes > > reviewed. I made a start on collecting ideas from this and past threads, > > as well as my own reviewing experience, into a document that I've > > presumptuously titled "How to Review Changes the OpenStack Way" (but > > might be more accurately called "The Frank Sinatra Guide to Code Review" > > at the moment): > > > > https://etherpad.openstack.org/p/review-the-openstack-way > > > > It's in an etherpad to make it easier for everyone to add their > > suggestions and comments (folks in #openstack-tc have made some tweaks > > already). After a suitable interval has passed to collect feedback, I'll > > turn this into a contributor guide change. > > It's had a week to percolate (and I've seen quite a few people viewing > the etherpad), so here is the review: > > https://review.openstack.org/574479 > > - ZB > > > Have at it! > > > > cheers, > > Zane. > > > >> -Julia > >> > >> [1]: https://review.openstack.org/#/c/570940/ > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongle.li at gmail.com Tue Jun 12 08:39:50 2018 From: yongle.li at gmail.com (Fred Li) Date: Tue, 12 Jun 2018 16:39:50 +0800 Subject: [openstack-dev] OpenStack Hackathon for Rocky release Message-ID: Hi all OpenStackers, Since April 2015, there have been 7 OpenStack Bug Smash events in China. If you are interested, please visit [1]. Now, the 8th OpenStack bug smash is coming. Intel, Huawei, Tecent and CESI will host this event. Just to highlight, this event is changed to Open Source Hackathon as more open source communities are joining. 
You can find Kata Containers, Ceph, Kubernetes, Cloud Foundry besides OpenStack. This event[2] will be on Jun 19 and 20 in Beijing, just prior to OpenInfra Days China 2018 [3]. To all the projects team leaders, you can discuss with your team in the project meeting and mark the bugs[4] you expect the attendees to work on. If you can arrange core reviewers to take care of the patches during the 2 days, that will be more efficient. [1] https://www.openstack.org/videos/sydney-2017/what-china-developers-brought-to-community-after-6-openstack-bug-smash-events [2] https://etherpad.openstack.org/p/OpenSource-Hackathon-8-beijing [3] http://china.openinfradays.org/ [4] https://etherpad.openstack.org/p/OpenSource-Hackathon-Rocky-Beijing-Bugs-List -- Regards Fred Li (李永乐) From kennelson11 at gmail.com Tue Jun 12 08:57:38 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 12 Jun 2018 10:57:38 +0200 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <1528747807-sup-187@lrrr.local> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <1528747807-sup-187@lrrr.local> Message-ID: Another option for playing around things- I am happy to do a test migration and populate our storyboard-dev instance with your real data from lp. The last half a dozen teams we have migrated have been handled this way. Playing around with StoryBoard ahead of time is a really good idea because it does work differently from lp. I don't think its more complicated, it just takes some getting used to. It forces a lot less on its users in terms of constructs and gives users a lot more flexibility to use it in a way that is most effective for them. For a lot of people this involves a mental re-frame of task management and organization of work but its not a herculean effort. -Kendall (diablo_rojo) On Mon, Jun 11, 2018 at 1:31 PM Doug Hellmann wrote: > Excerpts from CARVER, PAUL's message of 2018-06-11 19:53:47 +0000: > > Jeremy Stanley wrote: > > > > >I'm just going to come out and call bullshit on this one. How many of > the >800 official OpenStack deliverable repos have a view like that with > any actual relevant detail? If it's "standard" then certainly more than > half, right? > > > > Well, that's a bit rude, so I'm not going to get in a swearing contest > over whether Nova, Neutron and Cinder are more "important" than 800+ other > projects. I picked a handful of projects that I'm most interested in and > which also happened to have really clear, accessible and easy to understand > information on what they have delivered in the past and are planning to > deliver in the future. If I slighted your favorite projects I apologize. > > > > So, are you saying the information shown in the examples I gave is not > useful? > > > > Or just that I've been lucky in the past that the projects I'm most > interested in do a better than typical job of managing releases but the > future is all downhill? > > > > If you're saying it's not useful info and we're better off without it > then I'll just have to disagree. If you're saying that it has been replaced > with something better, please share the URLs. > > > > I'm all for improvements, but saying "only a few people were doing > something useful so we should throw it out and nobody do it" isn't a path > to improvement. How about we discuss alternate (e.g. 
> better/easier/whatever) ways of making the information available. > > > > This thread isn't going in a very productive direction. Please > consider your tone as you reply. > > The release team used to (help) manage the launchpad series data. > We stopped doing that a long time ago, as Jeremy pointed out, because > it was not useful to *the release team* in the way we were managing > the releases. We stopped tracking blueprints and bug fixes to try > to predict which release they would land in and built tools to make > it easier for teams to declare what they had completed through > release notes instead. > > OpenStack does not have a bunch of project managers signed up to > help this kind of information, so it was left up to each project > team to track any planning information *they decided was useful* > to do their work. If that tracking information happens to be useful > to anyone other than contributors, I consider that a bonus. > > As we shift teams over to Storyboard, we have another opportunity > to review the processes and to decide how to use the new tool. Some > teams with lightweight processes will be able to move directly with > little impact. Other teams who are doing more tracking and planning > will need to think about how to do that. The new tool provides some > flexibility, and as with any other big change in our community, > we're likely to see a bit of divergence before we collectively > discover what works and teams converge back to a more consistent > approach. That's normal, expected, and desirable. > > I recommend that people spend a little time experimenting on their > own before passing judgement or trying to set standards. > > Start by looking at the features of the tool itself. Set up a work > list and add some stories to it. Set up a board and see how the > automatic work lists help keep it up to date as the story or task > states change. Do the same with a manually managed board. If you > need a project to assign a task to because yours hasn't migrated > yet, use openstack/release-test. > > Then think about the workflows you actually use -- not just the > ones you've been doing because that's the way the project has always > been managed. Think about how those workflows might translate over > to the new tool, based on its features. If you're not sure, ask and > we can see what other teams are doing or what people more familiar > with the tool suggest trying. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Jun 12 09:44:31 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 12 Jun 2018 11:44:31 +0200 Subject: [openstack-dev] [tc] [summary] Organizational diversity tag Message-ID: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> Hi! We had a decently-sized thread on how to better track organizational diversity, which I think would benefit from a summary. The issue is that the current method (which uses a formula to apply single-vendor and diverse-affiliation tags) is not working so well anymore, with lots of low-activity projects quickly flapping between states. Suggestions included: - Drop tags, write a regular report instead that can account for the subtlety of each situation (ttx). 
One issue here is that it's obviously a lot more work than the current situation. - Creating a "low-activity" tag that would clearly exempt some teams from diversity tagging (mnaser). One issue is that this tag may drive contributors away from those teams. - Drop existing tags, and replace them by voluntary tagging on how organizationally-diverse core reviewing is in the team (zaneb). This suggestion triggered a sort of side thread on whether this is actually a current practice. It appears that vertical, vendor-sensitive teams are more likely to adopt such (generally unwritten) rule than horizontal teams where hats are much more invisible. One important thing to remember is that the diversity tags are supposed to inform deployers, so that they can make informed choices on which component they are comfortable to deploy. So whatever we come up with, it needs to be useful information for deployers, not just a badge of honor for developers, or a statement of team internal policy. Thoughts on those suggestions? Other suggestions? -- Thierry Carrez (ttx) From rico.lin.guanyu at gmail.com Tue Jun 12 10:00:46 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 12 Jun 2018 18:00:46 +0800 Subject: [openstack-dev] [heat] Not available for meeting this week Message-ID: Hi team As I'm not available to host our meeting this week, would like to ask if there Are any issues we should target or discuss this week? If there is none I suggest we skip meeting this week. -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue Jun 12 10:24:46 2018 From: neil at tigera.io (Neil Jerram) Date: Tue, 12 Jun 2018 11:24:46 +0100 Subject: [openstack-dev] [tc] [summary] Organizational diversity tag In-Reply-To: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> References: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> Message-ID: FWIW, as an outside observer of this conversation: On Tue, Jun 12, 2018 at 10:46 AM Thierry Carrez wrote: > > The issue is that the current method (which uses a formula to apply > single-vendor and diverse-affiliation tags) is not working so well > anymore, with lots of low-activity projects quickly flapping between > states. > I think you need to explore and state much more explicitly why this is a problem, before you will be able to evaluate possible changes. > One important thing to remember is that the diversity tags are supposed > to inform deployers, so that they can make informed choices on which > component they are comfortable to deploy. So whatever we come up with, > it needs to be useful information for deployers, not just a badge of > honor for developers, or a statement of team internal policy. > It sounds like this might be part of that 'why'. How sure are you about it? Regards, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Jun 12 12:05:31 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 12 Jun 2018 14:05:31 +0200 Subject: [openstack-dev] [tc] [summary] Organizational diversity tag In-Reply-To: References: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> Message-ID: <02d58d67-4191-7f07-54a5-1293194371c3@openstack.org> Neil Jerram wrote: >> The issue is that the current method (which uses a formula to apply >> single-vendor and diverse-affiliation tags) is not working so well >> anymore, with lots of low-activity projects quickly flapping between >> states. 
> > I think you need to explore and state much more explicitly why this is a > problem, before you will be able to evaluate possible changes. Right, this was just a summary, we discussed it in more details in the thread and during that Forum session. For example, a single-vendor project would suddenly lose the tag due to a combination of low activity and infra people pushing boilerplate change to their test jobs. Yet the project is still very much single-vendor (and not seeing much activity). The way we ended up working around that in past cycles is by doing a human pass on the calculated results and assess whether the change is more of a data artifact due to low activity, or a real trends. If it was deemed an artifact, we'd not commit that change. But lately most of the changes had to be filtered by a human, which basically makes the calculation useless. >> One important thing to remember is that the diversity tags are supposed >> to inform deployers, so that they can make informed choices on which >> component they are comfortable to deploy. So whatever we come up with, >> it needs to be useful information for deployers, not just a badge of >> honor for developers, or a statement of team internal policy. > > It sounds like this might be part of that 'why'.  How sure are you about it? How sure am I about... what? That tags are meant to be useful to deployers and the rest of our downstream consumers ? That is part of the original definition[1] of a tag. The template to define tags even includes a "rationale" section that is meant to justify how the ecosystem benefits from having this tag defined. [1] https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html#provide-a-precise-taxonomy-to-help-navigating-the-ecosystem -- Thierry Carrez (ttx) From fungi at yuggoth.org Tue Jun 12 12:23:42 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 12 Jun 2018 12:23:42 +0000 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <97908b85-4ac5-cec8-02cf-467d86cd6186@gmail.com> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <20180611202311.4btsydpvfaxslk5j@yuggoth.org> <97908b85-4ac5-cec8-02cf-467d86cd6186@gmail.com> Message-ID: <20180612122342.asz7urq2upw2l5e6@yuggoth.org> On 2018-06-11 17:55:24 -0500 (-0500), Matt Riedemann wrote: > On 6/11/2018 3:23 PM, Jeremy Stanley wrote: > > If you recall the "specs" experiment years > > ago, a few teams tried mildly different solutions for moving from LP > > blueprints with random wiki page links to tracking specifications in > > Git repositories, and over time they learned successful patterns > > from each other and mostly converged on similar solutions. There > > were similar cries back then about "how will users/operators find > > out what is being planned?" but I think the end result was far > > better than what it replaced. > > The specs thing was mentioned last week in IRC when talking about blueprints > in launchpad and I just want to reiterate the specs are more about high > level designs and reviewing those designs in Gerrit which was / is a major > drawback in the 'whiteboard' in launchpad for working on blueprints - old > blueprints that had a design (if they had a design at all) were usually > linked from a wiki page. > > Anyway, specs are design documents per release. 
Blueprints in launchpad, at > least for nova, are the project management tracking tool for that release. > Not all blueprints require a spec, but all specs require a blueprint since > specs are generally for API changes or other major design changes or > features. Just FYI. I agree, that highlights what I see as one of the four predominant patterns we have going on at present: * teams using a combination of blueprints with specs * teams just using blueprints * teams just using specs * teams using neither blueprints nor specs Some teams who dropped use of blueprints, or who never used them to begin with, may end up having a bug report or story they use like a blueprint to track tasks associated with their specs. Case in point: the Infra team has a story for each spec, and the idea is that you can populate that story with tasks corresponding to the implementation steps outlined in the spec itself, then reference those tasks in your commit messages so you get automatic tracking of progress for the spec, though we tended to not be very diligent with that because we're sort of laid back about our particular specs process. In our case, those spec stories can go in automatic lanes in a board so that you can see which are active vs merged and, if we followed release cycles, could have those lanes further specified by story tag for the targeted release. There's a lot of flexibility here and, much like the specs experiments of a few years ago, I expect to see teams coming up with effective patterns and then sharing those with each other allowing us to converge as a community toward some loose consistency. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tpb at dyncloud.net Tue Jun 12 12:52:46 2018 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 12 Jun 2018 08:52:46 -0400 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <1528747807-sup-187@lrrr.local> Message-ID: <20180612125246.7dkisnjri6v7spw4@barron.net> On 12/06/18 10:57 +0200, Kendall Nelson wrote: >Another option for playing around things- I am happy to do a test migration >and populate our storyboard-dev instance with your real data from lp. The >last half a dozen teams we have migrated have been handled this way. > Can we do this for manila? I believe you did a test migration already but not to a sandbox that we could play with? Or maybe you did the sandbox as well but I missed that and didn't play with it? Before we cutover I want to: * add some new sample bugs and blueprints * set up worklists for our release milestones * set up some useful worklists and boards and search queries for stuff that we track only ad-hoc today * figure a place to document these publicly -- Tom >Playing around with StoryBoard ahead of time is a really good idea >because >it does work differently from lp. I don't think its more complicated, it >just takes some getting used to. It forces a lot less on its users in terms >of constructs and gives users a lot more flexibility to use it in a way >that is most effective for them. For a lot of people this involves a mental >re-frame of task management and organization of work but its not a >herculean effort. 
> >-Kendall (diablo_rojo) > >On Mon, Jun 11, 2018 at 1:31 PM Doug Hellmann wrote: > >> Excerpts from CARVER, PAUL's message of 2018-06-11 19:53:47 +0000: >> > Jeremy Stanley wrote: >> > >> > >I'm just going to come out and call bullshit on this one. How many of >> the >800 official OpenStack deliverable repos have a view like that with >> any actual relevant detail? If it's "standard" then certainly more than >> half, right? >> > >> > Well, that's a bit rude, so I'm not going to get in a swearing contest >> over whether Nova, Neutron and Cinder are more "important" than 800+ other >> projects. I picked a handful of projects that I'm most interested in and >> which also happened to have really clear, accessible and easy to understand >> information on what they have delivered in the past and are planning to >> deliver in the future. If I slighted your favorite projects I apologize. >> > >> > So, are you saying the information shown in the examples I gave is not >> useful? >> > >> > Or just that I've been lucky in the past that the projects I'm most >> interested in do a better than typical job of managing releases but the >> future is all downhill? >> > >> > If you're saying it's not useful info and we're better off without it >> then I'll just have to disagree. If you're saying that it has been replaced >> with something better, please share the URLs. >> > >> > I'm all for improvements, but saying "only a few people were doing >> something useful so we should throw it out and nobody do it" isn't a path >> to improvement. How about we discuss alternate (e.g. >> better/easier/whatever) ways of making the information available. >> > >> >> This thread isn't going in a very productive direction. Please >> consider your tone as you reply. >> >> The release team used to (help) manage the launchpad series data. >> We stopped doing that a long time ago, as Jeremy pointed out, because >> it was not useful to *the release team* in the way we were managing >> the releases. We stopped tracking blueprints and bug fixes to try >> to predict which release they would land in and built tools to make >> it easier for teams to declare what they had completed through >> release notes instead. >> >> OpenStack does not have a bunch of project managers signed up to >> help this kind of information, so it was left up to each project >> team to track any planning information *they decided was useful* >> to do their work. If that tracking information happens to be useful >> to anyone other than contributors, I consider that a bonus. >> >> As we shift teams over to Storyboard, we have another opportunity >> to review the processes and to decide how to use the new tool. Some >> teams with lightweight processes will be able to move directly with >> little impact. Other teams who are doing more tracking and planning >> will need to think about how to do that. The new tool provides some >> flexibility, and as with any other big change in our community, >> we're likely to see a bit of divergence before we collectively >> discover what works and teams converge back to a more consistent >> approach. That's normal, expected, and desirable. >> >> I recommend that people spend a little time experimenting on their >> own before passing judgement or trying to set standards. >> >> Start by looking at the features of the tool itself. Set up a work >> list and add some stories to it. Set up a board and see how the >> automatic work lists help keep it up to date as the story or task >> states change. 
Do the same with a manually managed board. If you >> need a project to assign a task to because yours hasn't migrated >> yet, use openstack/release-test. >> >> Then think about the workflows you actually use -- not just the >> ones you've been doing because that's the way the project has always >> been managed. Think about how those workflows might translate over >> to the new tool, based on its features. If you're not sure, ask and >> we can see what other teams are doing or what people more familiar >> with the tool suggest trying. >> >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tpb at dyncloud.net Tue Jun 12 13:04:49 2018 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 12 Jun 2018 09:04:49 -0400 Subject: [openstack-dev] [tc] [summary] Organizational diversity tag In-Reply-To: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> References: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> Message-ID: <20180612130449.g5sh34pipquuamcs@barron.net> On 12/06/18 11:44 +0200, Thierry Carrez wrote: >Hi! > >We had a decently-sized thread on how to better track organizational >diversity, which I think would benefit from a summary. > >The issue is that the current method (which uses a formula to apply >single-vendor and diverse-affiliation tags) is not working so well >anymore, with lots of low-activity projects quickly flapping between >states. I wonder if there's a succint way to present the history rather than just the most recent tag value. As a deployer I can then tell the difference between a project that consistently lacks diverse-affiliation and a project that occasionally or only recently lacks diverse-affiliation. > >Suggestions included: > >- Drop tags, write a regular report instead that can account for the >subtlety of each situation (ttx). One issue here is that it's >obviously a lot more work than the current situation. > >- Creating a "low-activity" tag that would clearly exempt some teams >from diversity tagging (mnaser). One issue is that this tag may drive >contributors away from those teams. > >- Drop existing tags, and replace them by voluntary tagging on how >organizationally-diverse core reviewing is in the team (zaneb). This >suggestion triggered a sort of side thread on whether this is actually >a current practice. It appears that vertical, vendor-sensitive teams >are more likely to adopt such (generally unwritten) rule than >horizontal teams where hats are much more invisible. > >One important thing to remember is that the diversity tags are >supposed to inform deployers, so that they can make informed choices >on which component they are comfortable to deploy. So whatever we come >up with, it needs to be useful information for deployers, not just a >badge of honor for developers, or a statement of team internal policy. > >Thoughts on those suggestions? Other suggestions? 
> >-- >Thierry Carrez (ttx) > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From james.slagle at gmail.com Tue Jun 12 13:06:47 2018 From: james.slagle at gmail.com (James Slagle) Date: Tue, 12 Jun 2018 09:06:47 -0400 Subject: [openstack-dev] [tripleo] scenario000-multinode-oooq-container-upgrades In-Reply-To: References: Message-ID: On Mon, Jun 11, 2018 at 3:34 PM, Wesley Hayutin wrote: > Greetings, > > I wanted to let everyone know that we have a keystone only deployment and > upgrade job in check non-voting. I'm asking everyone in TripleO to be > mindful of this job and to help make sure it continues to pass as we move it > from non-voting check to check and eventually gating. +1, nice work! > Upgrade jobs are particularly difficult to keep running successfully because > of the complex workflow itself, job run times and other factors. Your help > to ensure we don't merge w/o a pass on this job will go a long way in > helping the tripleo upgrades team. > > There is still work to be done here, however it's much easier to do it with > the check non-voting job in place. The job doesn't appear to be passing at all on stable/queens. I see this same failure on several patches: http://logs.openstack.org/59/571459/1/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/8bbd827/logs/undercloud/home/zuul/overcloud_upgrade_run_Controller.log.txt.gz Is this a known issue? -- -- James Slagle -- From kennelson11 at gmail.com Tue Jun 12 13:25:45 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 12 Jun 2018 15:25:45 +0200 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <20180612125246.7dkisnjri6v7spw4@barron.net> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <1528747807-sup-187@lrrr.local> <20180612125246.7dkisnjri6v7spw4@barron.net> Message-ID: Yes! I can definitely set Manila up in storyboard-dev. I'll get the imports done before the end of the week :) -Kendall (diablo_rojo) On Tue, 12 Jun 2018, 2:53 pm Tom Barron, wrote: > On 12/06/18 10:57 +0200, Kendall Nelson wrote: > >Another option for playing around things- I am happy to do a test > migration > >and populate our storyboard-dev instance with your real data from lp. The > >last half a dozen teams we have migrated have been handled this way. > > > > Can we do this for manila? I believe you did a test migration already > but not to a sandbox that we could play with? Or maybe you did the > sandbox as well but I missed that and didn't play with it? > > Before we cutover I want to: > * add some new sample bugs and blueprints > * set up worklists for our release milestones > * set up some useful worklists and boards and search queries for > stuff that we track only ad-hoc today > * figure a place to document these publicly > > -- Tom > > >Playing around with StoryBoard ahead of time is a really good idea > >because > >it does work differently from lp. I don't think its more complicated, it > >just takes some getting used to. 
It forces a lot less on its users in > terms > >of constructs and gives users a lot more flexibility to use it in a way > >that is most effective for them. For a lot of people this involves a > mental > >re-frame of task management and organization of work but its not a > >herculean effort. > > > >-Kendall (diablo_rojo) > > > >On Mon, Jun 11, 2018 at 1:31 PM Doug Hellmann > wrote: > > > >> Excerpts from CARVER, PAUL's message of 2018-06-11 19:53:47 +0000: > >> > Jeremy Stanley wrote: > >> > > >> > >I'm just going to come out and call bullshit on this one. How many of > >> the >800 official OpenStack deliverable repos have a view like that with > >> any actual relevant detail? If it's "standard" then certainly more than > >> half, right? > >> > > >> > Well, that's a bit rude, so I'm not going to get in a swearing contest > >> over whether Nova, Neutron and Cinder are more "important" than 800+ > other > >> projects. I picked a handful of projects that I'm most interested in and > >> which also happened to have really clear, accessible and easy to > understand > >> information on what they have delivered in the past and are planning to > >> deliver in the future. If I slighted your favorite projects I apologize. > >> > > >> > So, are you saying the information shown in the examples I gave is not > >> useful? > >> > > >> > Or just that I've been lucky in the past that the projects I'm most > >> interested in do a better than typical job of managing releases but the > >> future is all downhill? > >> > > >> > If you're saying it's not useful info and we're better off without it > >> then I'll just have to disagree. If you're saying that it has been > replaced > >> with something better, please share the URLs. > >> > > >> > I'm all for improvements, but saying "only a few people were doing > >> something useful so we should throw it out and nobody do it" isn't a > path > >> to improvement. How about we discuss alternate (e.g. > >> better/easier/whatever) ways of making the information available. > >> > > >> > >> This thread isn't going in a very productive direction. Please > >> consider your tone as you reply. > >> > >> The release team used to (help) manage the launchpad series data. > >> We stopped doing that a long time ago, as Jeremy pointed out, because > >> it was not useful to *the release team* in the way we were managing > >> the releases. We stopped tracking blueprints and bug fixes to try > >> to predict which release they would land in and built tools to make > >> it easier for teams to declare what they had completed through > >> release notes instead. > >> > >> OpenStack does not have a bunch of project managers signed up to > >> help this kind of information, so it was left up to each project > >> team to track any planning information *they decided was useful* > >> to do their work. If that tracking information happens to be useful > >> to anyone other than contributors, I consider that a bonus. > >> > >> As we shift teams over to Storyboard, we have another opportunity > >> to review the processes and to decide how to use the new tool. Some > >> teams with lightweight processes will be able to move directly with > >> little impact. Other teams who are doing more tracking and planning > >> will need to think about how to do that. The new tool provides some > >> flexibility, and as with any other big change in our community, > >> we're likely to see a bit of divergence before we collectively > >> discover what works and teams converge back to a more consistent > >> approach. 
That's normal, expected, and desirable. > >> > >> I recommend that people spend a little time experimenting on their > >> own before passing judgement or trying to set standards. > >> > >> Start by looking at the features of the tool itself. Set up a work > >> list and add some stories to it. Set up a board and see how the > >> automatic work lists help keep it up to date as the story or task > >> states change. Do the same with a manually managed board. If you > >> need a project to assign a task to because yours hasn't migrated > >> yet, use openstack/release-test. > >> > >> Then think about the workflows you actually use -- not just the > >> ones you've been doing because that's the way the project has always > >> been managed. Think about how those workflows might translate over > >> to the new tool, based on its features. If you're not sure, ask and > >> we can see what other teams are doing or what people more familiar > >> with the tool suggest trying. > >> > >> Doug > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >__________________________________________________________________________ > >OpenStack Development Mailing List (not for usage questions) > >Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue Jun 12 13:37:34 2018 From: neil at tigera.io (Neil Jerram) Date: Tue, 12 Jun 2018 14:37:34 +0100 Subject: [openstack-dev] [tc] [summary] Organizational diversity tag In-Reply-To: <02d58d67-4191-7f07-54a5-1293194371c3@openstack.org> References: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> <02d58d67-4191-7f07-54a5-1293194371c3@openstack.org> Message-ID: On Tue, Jun 12, 2018 at 1:07 PM Thierry Carrez wrote: > Neil Jerram wrote: > >> The issue is that the current method (which uses a formula to apply > >> single-vendor and diverse-affiliation tags) is not working so well > >> anymore, with lots of low-activity projects quickly flapping between > >> states. > > > > I think you need to explore and state much more explicitly why this is a > > problem, before you will be able to evaluate possible changes. > > Right, this was just a summary, we discussed it in more details in the > thread and during that Forum session. > Honestly - FWIW - I do not recall much (any?) discussion of what the actual problem is, in the recent message thread. (But I wasn't at the Forum, and I may well have missed or forgotten some of the discussion, of course.) > For example, a single-vendor project would suddenly lose the tag due to > a combination of low activity and infra people pushing boilerplate > change to their test jobs. Yet the project is still very much > single-vendor (and not seeing much activity). > That is just a restatement of the presumed-problematic observation. 
It doesn't take us any further in understanding whether it's an actual
problem for anyone.

>
> The way we ended up working around that in past cycles is by doing a
> human pass on the calculated results and assess whether the change is
> more of a data artifact due to low activity, or a real trends. If it was
> deemed an artifact, we'd not commit that change. But lately most of the
> changes had to be filtered by a human, which basically makes the
> calculation useless.
>
> >> One important thing to remember is that the diversity tags are
> supposed
> >> to inform deployers, so that they can make informed choices on which
> >> component they are comfortable to deploy. So whatever we come up
> with,
> >> it needs to be useful information for deployers, not just a badge of
> >> honor for developers, or a statement of team internal policy.
> >
> > It sounds like this might be part of that 'why'.  How sure are you about
> it?
>
> How sure am I about... what? That tags are meant to be useful to
> deployers and the rest of our downstream consumers ? That is part of the
> original definition[1] of a tag. The template to define tags even
> includes a "rationale" section that is meant to justify how the
> ecosystem benefits from having this tag defined.
>

I meant: how sure are you that your intended audience (deployers)
substantially cares about this organizational diversity tag? Enough to
justify all the cycles that OpenStack's core brains are spending on this
topic.

I'm sorry that I'm probably sounding so negative; please consider that I'm
taking a devil's advocate position here in an attempt to clarify the real
problem.

> [1]
> >
> https://governance.openstack.org/tc/resolutions/20141202-project-structure-reform-spec.html#provide-a-precise-taxonomy-to-help-navigating-the-ecosystem
>

Please note: that governance text on its own is not sufficient to mandate
concern about this particular organizational diversity tag, if I am reading
it correctly. Or to put it another way, it looks like it would be entirely
consistent with that text if you said "we're not sure if anyone actually
cares much about this particular tag; let's stop generating it and see if
any of our deployers complain."

Regards,
    Neil
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tpb at dyncloud.net  Tue Jun 12 13:44:34 2018
From: tpb at dyncloud.net (Tom Barron)
Date: Tue, 12 Jun 2018 09:44:34 -0400
Subject: Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)
In-Reply-To: 
References: <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com>
 <1528735833-sup-4611@lrrr.local>
 <20180611185729.mtxgluskq3rztdtp@yuggoth.org>
 <1528747807-sup-187@lrrr.local>
 <20180612125246.7dkisnjri6v7spw4@barron.net>
Message-ID: <20180612134434.pob67awuf36nk6xf@barron.net>

On 12/06/18 15:25 +0200, Kendall Nelson wrote:
>Yes! I can definitely set Manila up in storyboard-dev. I'll get the imports
>done before the end of the week :)
>
>-Kendall (diablo_rojo)

Thanks much!

>
>On Tue, 12 Jun 2018, 2:53 pm Tom Barron, wrote:
>
>> On 12/06/18 10:57 +0200, Kendall Nelson wrote:
>> >Another option for playing around things- I am happy to do a test
>> migration
>> >and populate our storyboard-dev instance with your real data from lp. The
>> >last half a dozen teams we have migrated have been handled this way.
>> >
>>
>> Can we do this for manila? I believe you did a test migration already
>> but not to a sandbox that we could play with? Or maybe you did the
>> sandbox as well but I missed that and didn't play with it? 
>> >> Before we cutover I want to: >> * add some new sample bugs and blueprints >> * set up worklists for our release milestones >> * set up some useful worklists and boards and search queries for >> stuff that we track only ad-hoc today >> * figure a place to document these publicly >> >> -- Tom >> >> >Playing around with StoryBoard ahead of time is a really good idea >> >because >> >it does work differently from lp. I don't think its more complicated, it >> >just takes some getting used to. It forces a lot less on its users in >> terms >> >of constructs and gives users a lot more flexibility to use it in a way >> >that is most effective for them. For a lot of people this involves a >> mental >> >re-frame of task management and organization of work but its not a >> >herculean effort. >> > >> >-Kendall (diablo_rojo) >> > >> >On Mon, Jun 11, 2018 at 1:31 PM Doug Hellmann >> wrote: >> > >> >> Excerpts from CARVER, PAUL's message of 2018-06-11 19:53:47 +0000: >> >> > Jeremy Stanley wrote: >> >> > >> >> > >I'm just going to come out and call bullshit on this one. How many of >> >> the >800 official OpenStack deliverable repos have a view like that with >> >> any actual relevant detail? If it's "standard" then certainly more than >> >> half, right? >> >> > >> >> > Well, that's a bit rude, so I'm not going to get in a swearing contest >> >> over whether Nova, Neutron and Cinder are more "important" than 800+ >> other >> >> projects. I picked a handful of projects that I'm most interested in and >> >> which also happened to have really clear, accessible and easy to >> understand >> >> information on what they have delivered in the past and are planning to >> >> deliver in the future. If I slighted your favorite projects I apologize. >> >> > >> >> > So, are you saying the information shown in the examples I gave is not >> >> useful? >> >> > >> >> > Or just that I've been lucky in the past that the projects I'm most >> >> interested in do a better than typical job of managing releases but the >> >> future is all downhill? >> >> > >> >> > If you're saying it's not useful info and we're better off without it >> >> then I'll just have to disagree. If you're saying that it has been >> replaced >> >> with something better, please share the URLs. >> >> > >> >> > I'm all for improvements, but saying "only a few people were doing >> >> something useful so we should throw it out and nobody do it" isn't a >> path >> >> to improvement. How about we discuss alternate (e.g. >> >> better/easier/whatever) ways of making the information available. >> >> > >> >> >> >> This thread isn't going in a very productive direction. Please >> >> consider your tone as you reply. >> >> >> >> The release team used to (help) manage the launchpad series data. >> >> We stopped doing that a long time ago, as Jeremy pointed out, because >> >> it was not useful to *the release team* in the way we were managing >> >> the releases. We stopped tracking blueprints and bug fixes to try >> >> to predict which release they would land in and built tools to make >> >> it easier for teams to declare what they had completed through >> >> release notes instead. >> >> >> >> OpenStack does not have a bunch of project managers signed up to >> >> help this kind of information, so it was left up to each project >> >> team to track any planning information *they decided was useful* >> >> to do their work. If that tracking information happens to be useful >> >> to anyone other than contributors, I consider that a bonus. 
>> >> >> >> As we shift teams over to Storyboard, we have another opportunity >> >> to review the processes and to decide how to use the new tool. Some >> >> teams with lightweight processes will be able to move directly with >> >> little impact. Other teams who are doing more tracking and planning >> >> will need to think about how to do that. The new tool provides some >> >> flexibility, and as with any other big change in our community, >> >> we're likely to see a bit of divergence before we collectively >> >> discover what works and teams converge back to a more consistent >> >> approach. That's normal, expected, and desirable. >> >> >> >> I recommend that people spend a little time experimenting on their >> >> own before passing judgement or trying to set standards. >> >> >> >> Start by looking at the features of the tool itself. Set up a work >> >> list and add some stories to it. Set up a board and see how the >> >> automatic work lists help keep it up to date as the story or task >> >> states change. Do the same with a manually managed board. If you >> >> need a project to assign a task to because yours hasn't migrated >> >> yet, use openstack/release-test. >> >> >> >> Then think about the workflows you actually use -- not just the >> >> ones you've been doing because that's the way the project has always >> >> been managed. Think about how those workflows might translate over >> >> to the new tool, based on its features. If you're not sure, ask and >> >> we can see what other teams are doing or what people more familiar >> >> with the tool suggest trying. >> >> >> >> Doug >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >__________________________________________________________________________ >> >OpenStack Development Mailing List (not for usage questions) >> >Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Tue Jun 12 14:00:47 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 12 Jun 2018 09:00:47 -0500 Subject: [openstack-dev] [nova] Confusion over how enable_certificate_validation is meant to be used Message-ID: <79e80f0a-88cf-461f-c9ee-52f47966dc90@gmail.com> Sylvain and I were reviewing https://review.openstack.org/#/c/479949/ today and I'm at least a bit confused over how the enable_certificate_validation config option is meant to be used. The current logic during driver.spawn() on the compute is going to be: if the user supplied trusted certs or verify_glance_signatures: ... 
    if user supplied trusted certs:
        # validate the user supplied trusted certs
    elif enable_certificate_validation:
        raise error because the user did not provide certs
    else:
        noop

I realize from the API change later in the series that if the user does not
provide trusted certs when creating or rebuilding a server, and
verify_glance_signatures=True, enable_certificate_validation=True and
default_trusted_certificate_ids=[something], we use the configured
default_trusted_certificate_ids so once we get down to the compute to
verify the image signature it will look like the user supplied trusted
certs (even if we are using the default_trusted_certificate_ids from
config).

Is the point that, as an operator, I can say:

verify_glance_signatures=True - yes verify image signatures (at least on a
subset of compute hosts)

enable_certificate_validation - yes verify certs (at least on a subset of
compute hosts)

default_trusted_certificate_ids=[] - but I don't want to provide default
trusted cert IDs, which forces my users to provide their own certs (at
least on a subset of compute hosts)

In this scenario, if the user fails to provide trusted certs when creating
or rebuilding a server, it's going to result in a build abort exception
(NoValidHost) from the compute. Is that the intention?

Also, the enable_certificate_validation option is deprecated and meant for
"transition" but what transition is that? And when will we drop the
enable_certificate_validation option?

I'm trying to understand some of the upgrade flow here.

--

Thanks,

Matt

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jungleboyj at gmail.com  Tue Jun 12 14:05:43 2018
From: jungleboyj at gmail.com (Jay S Bryant)
Date: Tue, 12 Jun 2018 09:05:43 -0500
Subject: Re: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection)
In-Reply-To: <13ee87bb-fb18-21e4-62ff-d29d9e61eb7a@gmail.com>
References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com>
 <1528726300-sup-9083@lrrr.local>
 <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com>
 <1528735833-sup-4611@lrrr.local>
 <20180611185729.mtxgluskq3rztdtp@yuggoth.org>
 <1528747807-sup-187@lrrr.local>
 <13ee87bb-fb18-21e4-62ff-d29d9e61eb7a@gmail.com>
Message-ID: <61372de2-6752-b6fb-edc4-f5c5c3363246@gmail.com>

On 6/11/2018 6:00 PM, Matt Riedemann wrote:
> On 6/11/2018 3:31 PM, Doug Hellmann wrote:
>> As we shift teams over to Storyboard, we have another opportunity
>> to review the processes and to decide how to use the new tool. Some
>> teams with lightweight processes will be able to move directly with
>> little impact. Other teams who are doing more tracking and planning
>> will need to think about how to do that. The new tool provides some
>> flexibility, and as with any other big change in our community,
>> we're likely to see a bit of divergence before we collectively
>> discover what works and teams converge back to a more consistent
>> approach.  That's normal, expected, and desirable.
>>
>> I recommend that people spend a little time experimenting on their
>> own before passing judgement or trying to set standards.
>>
>> Start by looking at the features of the tool itself.  Set up a work
>> list and add some stories to it. Set up a board and see how the
>> automatic work lists help keep it up to date as the story or task
>> states change. Do the same with a manually managed board. If you
>> need a project to assign a task to because yours hasn't migrated
>> yet, use openstack/release-test.
>>
>> Then think about the workflows you actually use -- not just the
>> ones you've been doing because that's the way the project has always
>> been managed. 
Think about how those workflows might translate over >> to the new tool, based on its features. If you're not sure, ask and >> we can see what other teams are doing or what people more familiar >> with the tool suggest trying. > > I'm reminded of something we talked about in IRC last week wrt > tracking blueprint-type changes over a given series / release in > storyboard. It was mentioned that storyboard has a not-yet-implemented > epics feature which is really how we'd probably do this (nested > stories is another way of thinking about this). So nova could, for > example, have an epic for Stein and then track a story for each > blueprint, with the old launchpad blueprint 'work items' (which we > don't use, but we do have a list of work items in our specs template) > tracked as tasks - which would also be nice since you can track tasks > like documentation, CLIs (nova and OSC) and tempest testing (if > required). One thing people always commit to in their spec is adding > support for the feature in client libraries, tempest and docs, but > once the nova server side change is merged those commitments end up > getting dropped (not always, but more often than I'd like). > Being able to use epics to organize the stories would be very helpful.  Maybe it is something that can be done with tags but if Storyboard has something more purposefully created to track epics like Jira has that would help make Storyboard a bit more useful. From fungi at yuggoth.org Tue Jun 12 14:11:58 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 12 Jun 2018 14:11:58 +0000 Subject: [openstack-dev] [nova] Confusion over how enable_certificate_validation is meant to be used In-Reply-To: <79e80f0a-88cf-461f-c9ee-52f47966dc90@gmail.com> References: <79e80f0a-88cf-461f-c9ee-52f47966dc90@gmail.com> Message-ID: <20180612141157.relmoqbkmgcae5uu@yuggoth.org> On 2018-06-12 09:00:47 -0500 (-0500), Matt Riedemann wrote: [...] > In this scenario, if the user fails to provide trusted certs when > creating or rebuilding a server, it's going to result in a build > abort exception (NoValidHost) from the compute. Is that the > intention? [...] It's at least one way to keep a user from accidentally booting an untrusted/unsigned image, assuming that's a goal for this feature. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Tue Jun 12 14:16:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Jun 2018 10:16:26 -0400 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <13ee87bb-fb18-21e4-62ff-d29d9e61eb7a@gmail.com> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <1528747807-sup-187@lrrr.local> <13ee87bb-fb18-21e4-62ff-d29d9e61eb7a@gmail.com> Message-ID: <1528812949-sup-2713@lrrr.local> Excerpts from Matt Riedemann's message of 2018-06-11 18:00:12 -0500: > On 6/11/2018 3:31 PM, Doug Hellmann wrote: > > As we shift teams over to Storyboard, we have another opportunity > > to review the processes and to decide how to use the new tool. Some > > teams with lightweight processes will be able to move directly with > > little impact. Other teams who are doing more tracking and planning > > will need to think about how to do that. 
The new tool provides some > > flexibility, and as with any other big change in our community, > > we're likely to see a bit of divergence before we collectively > > discover what works and teams converge back to a more consistent > > approach. That's normal, expected, and desirable. > > > > I recommend that people spend a little time experimenting on their > > own before passing judgement or trying to set standards. > > > > Start by looking at the features of the tool itself. Set up a work > > list and add some stories to it. Set up a board and see how the > > automatic work lists help keep it up to date as the story or task > > states change. Do the same with a manually managed board. If you > > need a project to assign a task to because yours hasn't migrated > > yet, use openstack/release-test. > > > > Then think about the workflows you actually use -- not just the > > ones you've been doing because that's the way the project has always > > been managed. Think about how those workflows might translate over > > to the new tool, based on its features. If you're not sure, ask and > > we can see what other teams are doing or what people more familiar > > with the tool suggest trying. > > I'm reminded of something we talked about in IRC last week wrt tracking > blueprint-type changes over a given series / release in storyboard. It > was mentioned that storyboard has a not-yet-implemented epics feature > which is really how we'd probably do this (nested stories is another way > of thinking about this). So nova could, for example, have an epic for > Stein and then track a story for each blueprint, with the old launchpad > blueprint 'work items' (which we don't use, but we do have a list of > work items in our specs template) tracked as tasks - which would also be > nice since you can track tasks like documentation, CLIs (nova and OSC) > and tempest testing (if required). One thing people always commit to in > their spec is adding support for the feature in client libraries, > tempest and docs, but once the nova server side change is merged those > commitments end up getting dropped (not always, but more often than I'd > like). > If an epic is just a list of stories, doesn't that make it a worklist? Doug From dtroyer at gmail.com Tue Jun 12 14:28:48 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 12 Jun 2018 09:28:48 -0500 Subject: [openstack-dev] [tc] StarlingX project status update In-Reply-To: References: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie> <1528214025-sup-920@lrrr.local> Message-ID: On Mon, Jun 11, 2018 at 5:02 PM, Emilien Macchi wrote: > While I agree with Doug that we assume good faith and hope for the best, I > personally think we should help them (what we're doing now) but also make > sure we DO NOT set a precedent. We could probably learn from this situation > and document in our governance what the TC expects when companies have a > fork and need to contribute back at some point. We all know StarlingX isn't > alone and I'm pretty sure there are a lot of deployments out there who are > in the same situation. /me pus on ex-TC hat for a minute Emilien, I totally agree with you here but would word it differently: we should absolutely set a precedent, but one that exhibits how we want to handle what ttx calls 'convergent' forks. These already exist, like it or not. What I hope can be established is some guidelines and boundaries on how to deal with these rather than just reject them out-of-hand. 
> I guess my point is, yes for helping StarlingX now but no for incubating
> future forks if that happens. Like Graham, I think these methods shouldn't
> be what we encourage in our position.

Again, I agree, we have said that sort of thing all along: "don't
fork". Many have had to learn that lesson the hard way. This is
another opportunity to show _why_ it can be a bad idea.

dt

--
Dean Troyer
dtroyer at gmail.com

From doug at doughellmann.com  Tue Jun 12 14:46:27 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 12 Jun 2018 10:46:27 -0400
Subject: Re: [openstack-dev] [tc] StarlingX project status update
In-Reply-To: 
References: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie>
 <1528214025-sup-920@lrrr.local>
Message-ID: <1528814742-sup-5981@lrrr.local>

Excerpts from Dean Troyer's message of 2018-06-12 09:28:48 -0500:
> On Mon, Jun 11, 2018 at 5:02 PM, Emilien Macchi wrote:
> > While I agree with Doug that we assume good faith and hope for the best, I
> > personally think we should help them (what we're doing now) but also make
> > sure we DO NOT set a precedent. We could probably learn from this situation
> > and document in our governance what the TC expects when companies have a
> > fork and need to contribute back at some point. We all know StarlingX isn't
> > alone and I'm pretty sure there are a lot of deployments out there who are
> > in the same situation.
>
> /me pus on ex-TC hat for a minute
>
> Emilien, I totally agree with you here but would word it differently:
> we should absolutely set a precedent, but one that exhibits how we
> want to handle what ttx calls 'convergent' forks. These already
> exist, like it or not. What I hope can be established is some
> guidelines and boundaries on how to deal with these rather than just
> reject them out-of-hand.

Yes, well said.

> > I guess my point is, yes for helping StarlingX now but no for incubating
> > future forks if that happens. Like Graham, I think these methods shouldn't
> > be what we encourage in our position.
>
> Again, I agree, we have said that sort of thing all along: "don't
> fork". Many have had to learn that lesson the hard way. This is
> another opportunity to show _why_ it can be a bad idea.
>
> dt
>

From quickconvey at gmail.com  Tue Jun 12 14:49:00 2018
From: quickconvey at gmail.com (Quick Convey)
Date: Tue, 12 Jun 2018 20:19:00 +0530
Subject: [openstack-dev] [neutron][ovs] How to Backup and Restore OVSDB
Message-ID: 

Hi,

I am using Open vSwitch 2.6.1. I would like to know how we can back up and
restore the OVSDB on the controller and compute nodes. I think the OVSDB
is not in cluster mode, so we have to take a backup from every controller
and compute node. Please let me know if you have any ideas.

$ ovsdb-server -V
ovsdb-server (Open vSwitch) 2.6.1

I also tried with the following files, but I don't know how to back them
up and restore them:

/etc/openvswitch/conf.db
/usr/share/openvswitch/vswitch.ovsschema

If I copy the above files back to the same location and try to start OVS,
then it is not listening on port 6640. I also observed that the IDs of the
Ports and Interfaces in the OVSDB are changing when I do these steps.

Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Brianna.Poulos at jhuapl.edu  Tue Jun 12 14:55:03 2018
From: Brianna.Poulos at jhuapl.edu (Poulos, Brianna L.)
Date: Tue, 12 Jun 2018 14:55:03 +0000 Subject: [openstack-dev] [nova] Confusion over how enable_certificate_validation is meant to be used In-Reply-To: <79e80f0a-88cf-461f-c9ee-52f47966dc90@gmail.com> References: <79e80f0a-88cf-461f-c9ee-52f47966dc90@gmail.com> Message-ID: Matt, The end goal is that certificate validation will always occur alongside signature validation, but we wanted there to be an upgrade path that would allow signature validation to continue to work until certificate validation was set up. See the first paragraph of the proposed change in the spec [1]. We also wanted the user to be able to use certificate validation even if the operator has not enabled it across a deployment. This is why the user supplying trusted certs overrides the enable_certificate_validation (and the verify_glance_signatures) option. We wanted to avoid a "silent fail" scenario where a user provided trusted certs but certificate validation (and/or signature verification) wasn't enabled in the configuration, which would lead to the user thinking that certificate validation was performed when it wasn't. We provided additional clarification about the options and how they interact in the documentation for the feature [2]. [...] In this scenario, if the user fails to provide trusted certs when creating or rebuilding a server, it's going to result in a build abort exception (NoValidHost) from the compute. Is that the intention? [...] Yes, the intention is that if both signature verification and certificate validation are enabled in the configuration file, trusted certificates must be provided (either by the user or by the default list of trusted certs) or the build will be aborted. [...] Also, the enable_certificate_validation option is deprecated and meant for "transition" but what transition is that? And when will we drop the enable_certificate_validation option? [...] The enable_certificate_validation option is meant to help transition from a deployment (or subset) that requires signature verification but does not yet have trusted certs configured, to a deployment (or subset) that can support both signature verification and certificate validation. The enable_certificate_validation option would be dropped once sufficient time was given to set up trusted certs (and then verify_glance_signatures would always cause certificate validation as well), since the end goal is that certificate validation would always be performed along signature verification. [1] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/nova-validate-certificates.html#proposed-change [2] https://review.openstack.org/#/c/560158/40/doc/source/user/certificate-validation.rst at 19 Thanks, ~Brianna On 6/12/18, 10:01, "Matt Riedemann" wrote: Sylvain and I were reviewing https://review.openstack.org/#/c/479949/ today and I'm at least a bit confused over how the enable_certificate_validation config option is meant to be used. The current logic during driver.spawn() on the compute is going to be: if the user supplied trusted certs or verify_glance_signatures: ... 
if user supplied trusted certs: # validate the user supplied trusted certs elif enable_certificate_validation: raise error because the user did not provide certs else: noop I realize from the API change later in the series that if the user does not provide trusted certs when creating or rebuilding a server, and verify_glance_signatures=True, enable_certificate_validation=True and default_trusted_certificate_ids=[something], we use the configured default_trusted_certificate_ids so once we get down to the compute to verify the image signature it will look like the user supplied trusted certs (even if we are using the default_trusted_certificate_ids from config). Is the point that, as an operator, I can say: verify_glance_signatures=True - yes verify image signatures (at least on a subset of compute hosts) enable_certificate_validation - yes verify certs (at least on a subset of compute hosts) default_trusted_certificate_ids=[] - but I don't want to provide default trusted cert IDs, which forces my users to provider their own certs (at least on a subset of compute hosts) In this scenario, if the user fails to provide trusted certs when creating or rebuilding a server, it's going to result in a build abort exception (NoValidHost) from the compute. Is that the intention? Also, the enable_certificate_validation option is deprecated and meant for "transition" but what transition is that? And when will we drop the enable_certificate_validation option? I'm trying to understand some of the upgrade flow here. -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Tue Jun 12 14:54:59 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 12 Jun 2018 07:54:59 -0700 Subject: [openstack-dev] [tc] StarlingX project status update In-Reply-To: <1528814742-sup-5981@lrrr.local> References: <78c82ec8-58fc-38ce-8f59-f3beb7dfbbad@ham.ie> <1528214025-sup-920@lrrr.local> <1528814742-sup-5981@lrrr.local> Message-ID: On Tue, Jun 12, 2018 at 7:46 AM, Doug Hellmann wrote: > Excerpts from Dean Troyer's message of 2018-06-12 09:28:48 -0500: > > On Mon, Jun 11, 2018 at 5:02 PM, Emilien Macchi > wrote: > > > While I agree with Doug that we assume good faith and hope for the > best, I > > > personally think we should help them (what we're doing now) but also > make > > > sure we DO NOT set a precedent. We could probably learn from this > situation > > > and document in our governance what the TC expects when companies have > a > > > fork and need to contribute back at some point. We all know StarlingX > isn't > > > alone and I'm pretty sure there are a lot of deployments out there who > are > > > in the same situation. > > > > /me pus on ex-TC hat for a minute > > > > Emilien, I totally agree with you here but would word it differently: > > we should absolutely set a precedent, but one that exhibits how we > > want to handle what ttx calls 'convergent' forks. These already > > exist, like it or not. What I hope can be established is some > > guidelines and boundaries on how to deal with these rather than just > > reject them out-of-hand. > > Yes, well said. Indeed, thanks Dean. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From jistr at redhat.com  Tue Jun 12 15:03:35 2018
From: jistr at redhat.com (Jiří Stránský)
Date: Tue, 12 Jun 2018 17:03:35 +0200
Subject: [openstack-dev] [tripleo] scenario000-multinode-oooq-container-upgrades
In-Reply-To: 
References: 
Message-ID: 

On 12.6.2018 15:06, James Slagle wrote:
> On Mon, Jun 11, 2018 at 3:34 PM, Wesley Hayutin wrote:
>> Greetings,
>>
>> I wanted to let everyone know that we have a keystone only deployment and
>> upgrade job in check non-voting. I'm asking everyone in TripleO to be
>> mindful of this job and to help make sure it continues to pass as we move it
>> from non-voting check to check and eventually gating.
>
> +1, nice work!
>
>> Upgrade jobs are particularly difficult to keep running successfully because
>> of the complex workflow itself, job run times and other factors. Your help
>> to ensure we don't merge w/o a pass on this job will go a long way in
>> helping the tripleo upgrades team.
>>
>> There is still work to be done here, however it's much easier to do it with
>> the check non-voting job in place.
>
> The job doesn't appear to be passing at all on stable/queens. I see
> this same failure on several patches:
> http://logs.openstack.org/59/571459/1/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/8bbd827/logs/undercloud/home/zuul/overcloud_upgrade_run_Controller.log.txt.gz
>
> Is this a known issue?

I think so, or to put it precisely, I only ever looked into making the job
work for master (and beyond).

We could look into making it work on Queens too, but personally I think
effort would be better spent elsewhere at this point. E.g. upd+upg jobs
with a more complete set of services utilizing the containerized
undercloud (those would not validate the OC workflow at all, but would
give coverage for update_tasks/upgrade_tasks), user and dev docs around
all lifecycle ops (upd, upg, ffwd), upgrade work in the area of TLS by
default, upgrade handling for external_deploy_tasks (= "how do we upgrade
Ceph in Rocky"), also perhaps trying to DRY repeated parts of the upgrade
templates, etc.

If someone wants to step up to iron out the Queens issues with that job
then we can do it, but my 2 cents would be just to disable the job on
Queens and focus on the future.

Jirka

From james.slagle at gmail.com  Tue Jun 12 15:20:51 2018
From: james.slagle at gmail.com (James Slagle)
Date: Tue, 12 Jun 2018 11:20:51 -0400
Subject: [openstack-dev] [tripleo] scenario000-multinode-oooq-container-upgrades
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jun 12, 2018 at 11:03 AM, Jiří Stránský wrote:
> On 12.6.2018 15:06, James Slagle wrote:
>>
>> On Mon, Jun 11, 2018 at 3:34 PM, Wesley Hayutin
>> wrote:
>>>
>>> Greetings,
>>>
>>> I wanted to let everyone know that we have a keystone only deployment and
>>> upgrade job in check non-voting. I'm asking everyone in TripleO to be
>>> mindful of this job and to help make sure it continues to pass as we move
>>> it from non-voting check to check and eventually gating.
>>
>>
>> +1, nice work!
>>
>>> Upgrade jobs are particularly difficult to keep running successfully
>>> because of the complex workflow itself, job run times and other factors.
>>> Your help to ensure we don't merge w/o a pass on this job will go a long
>>> way in helping the tripleo upgrades team.
>>>
>>> There is still work to be done here, however it's much easier to do it
>>> with the check non-voting job in place.
>>
>>
>> The job doesn't appear to be passing at all on stable/queens.
I see >> this same failure on several patches: >> >> http://logs.openstack.org/59/571459/1/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/8bbd827/logs/undercloud/home/zuul/overcloud_upgrade_run_Controller.log.txt.gz >> >> Is this a known issue? > > > I think so, or to put it precisely, i only ever looked into making the job > work for master (and beyond). > > We could look into making it work on Queens too, but personally i think > effort would be better spent elsewhere at this point. E.g. upd+upg jobs with > more complete of services utilizing containerized undercloud (those would > not validate OC workflow at all, but would give coverage for > update_tasks/upgrade_tasks), user and dev docs around all lifecycle ops > (upd, upg, ffwd), upgrade work in the area of TLS by default, upgrade > handling for external_deploy_tasks (= "how do we upgrade Ceph in Rocky"), > also perhaps trying to DRY repeated parts of upgrade templates, etc. > > If someone wants to step up to iron out Queens issues with that job then we > can do it, but my 2 cents would be just to disable the job on Queens and > focus on the future. Sure, I'm just trying to figure out what can safely be ignored. The tone of the original email was encouraging reviewers not to ignore the job. Let's remove it from queens then, as right now it's just noise. -- -- James Slagle -- From persia at shipstone.jp Tue Jun 12 15:26:26 2018 From: persia at shipstone.jp (Emmet Hikory) Date: Tue, 12 Jun 2018 17:26:26 +0200 Subject: [openstack-dev] use of storyboard (was [TC] Stein Goal Selection) In-Reply-To: <1528812949-sup-2713@lrrr.local> References: <143b397e-91cb-103e-9d7d-6834313fde4a@redhat.com> <1528726300-sup-9083@lrrr.local> <67de2c95-5f9e-8ae1-a8a2-87a9c9fa3da1@gmail.com> <1528735833-sup-4611@lrrr.local> <20180611185729.mtxgluskq3rztdtp@yuggoth.org> <1528747807-sup-187@lrrr.local> <13ee87bb-fb18-21e4-62ff-d29d9e61eb7a@gmail.com> <1528812949-sup-2713@lrrr.local> Message-ID: <23138c01-f596-44d3-9073-6db1b78a8da9@Spark> Doug Hellmann wrote: > If an epic is just a list of stories, doesn't that make it a worklist?     Worklists are one way to do epics.  In prior discussion about epics, a number of people have raised the idea that an epic contains more than just a list of stories, but also connective narrative (think about all the “I’ll kill you in the morning” parts of Arabian Nights, for example).  In that case, a wiki page may be a better solution, where the wiki page has the connective narrative, perhaps a list of personae, or whatever is appropriate for the specific requirements analysis procedure being used, and links to individual stories that together will enable the epic (but may also appear in different epics with different personae, etc.).  Such a textual document with links could also be implemented using the specs infrastructure, rather than a wiki page.  I suspect there are other ways that I do not recall at the moment, or I have not heard proposed.     At a high level, I think epics are important, but I think that more experimentation with ways to connect collections of stories is necessary before we can collectively understand what would make sense.  We should try several things (likely adoptions from the various ways we are doing similar tracking now), and I expect us to slowly converge on two or three solutions that are known to work well for our teams, depending on the individual characteristics of the teams. — Emmet HIKORY -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnsomor at gmail.com Tue Jun 12 15:41:57 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 12 Jun 2018 08:41:57 -0700 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: <1528728119-sup-6948@lrrr.local> References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> <1528728119-sup-6948@lrrr.local> Message-ID: I think we should continue with option 1. It is an indicator that a project is active in OpenStack and is explicit about which code should be used together. Both of those statements hold no technical water, but address the "human" factor of "What is OpenStack?", "What do I need to deploy?", "What is an active project and what is not?", and "How do we advertise what OpenStack can provide?". I think 2 and 3 will just lead to confusion and frustration for folks. Michael On Mon, Jun 11, 2018 at 7:50 AM Doug Hellmann wrote: > > Excerpts from Thierry Carrez's message of 2018-06-11 11:53:52 +0200: > > Hi everyone, > > > > As some of the OpenStack deliverables get more mature, we need to adjust > > our release policies to best handle the case of deliverables that do not > > need to be updated that much. This discussion started with how to handle > > those "stable" libraries, but is actually also relevant for "stable" > > services. > > > > Our current models include cycle-tied models (with-intermediary, > > with-milestones, trailing) and a purely cycle-independent model. Main > > OpenStack deliverables (the service components that you can deploy to > > build an OpenStack cloud) are all "released" on a cycle. Libraries are > > typically maintained per-cycle as well. What happens if no change is > > pushed to a service or library during a full cycle ? What should we do > > then ? > > > > Options include: > > > > 1/ Force artificial releases, even if there are no changes > > > > This is the current state. It allows to reuse the exact same process, > > but creates useless churn and version number confusion. > > > > 2/ Do not force releases, but still create branches from latest releases > > > > In this variant we would not force an artificial re-release, but we > > would still create a branch from the last available release, in order to > > be able to land future patches and do bugfix or security releases as needed. > > > > 2bis/ Like 2, but only create the branch when needed > > > > Same as the previous one, except that rather than proactively create the > > stable branch around release time, we'd wait until the branch is > > actually needed to create it. > > > > 3/ Do not force releases, and reuse stable branches from cycle to cycle > > > > In this model, if there is no change in a library in Rocky, stable/rocky > > would never be created, and stable/queens would be used instead. Only > > one branch would get maintained for the 2 cycles. While this reduces the > > churn, it's a bit complex to wrap your head around the consequences, and > > measure how confusing this could be in practice... > > > > 4/ Stop worrying about stable branches at all for those "stable" things > > > > The idea here would be to stop doing stable branches for those things > > that do not release that much anymore. This could be done by switching > > them to the "independent" release model, or to a newly-created model. > > While good for "finished" deliverables, this option could create issues > > for things that are inactive for a couple cycles and then pick up > > activity again -- switching back to being cycle-tied would likely be > > confusing. 
> > > > > > My current preference is option 2. > > > > It's a good trade-off which reduces churn while keeping a compatibility > > with the system used for more active components. Compared to 2bis, it's > > a bit more work (although done in one patch during the release process), > > but creating the branches in advance means they are ready to be used > > when someone wants to backport something there, likely reducing process > > pain. > > > > One caveat with this model is that we need to be careful with version > > numbers. Imagine a library that did a 1.18.0 release for queens (which > > stable/queens is created from). Nothing happens in Rocky, so we create > > stable/rocky from the same 1.18.0 release. Same in Stein, so we create > > stable/stein from the same 1.18.0 release. During the Telluride[1] cycle > > some patches land and we want to release that. In order to leave room > > for rocky and stein point releases, we need to skip 1.18.1 and 1.19.0, > > and go directly to 1.20.0. I think we can build release checks to ensure > > that, but that's something to keep in mind. > > > > Thoughts ? > > > > [1] It's never too early to campaign for your favorite T name > > Although I originally considered it separate, reviewing your summary > I suspect option 2bis is most likely to turn into option 3, in > practice. > > I think having the choice between options 2 and switching to an > independent release model (maybe only for libraries) is going to > be best, at least to start out. > > Stein will be the first series where we have to actually deal with > this, so we can see how it goes and discuss alternatives if we run > into issues. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From whayutin at redhat.com Tue Jun 12 16:29:55 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 12 Jun 2018 10:29:55 -0600 Subject: [openstack-dev] [tripleo] scenario000-multinode-oooq-container-upgrades In-Reply-To: References: Message-ID: On Tue, Jun 12, 2018 at 11:21 AM James Slagle wrote: > On Tue, Jun 12, 2018 at 11:03 AM, Jiří Stránský wrote: > > On 12.6.2018 15:06, James Slagle wrote: > >> > >> On Mon, Jun 11, 2018 at 3:34 PM, Wesley Hayutin > >> wrote: > >>> > >>> Greetings, > >>> > >>> I wanted to let everyone know that we have a keystone only deployment > and > >>> upgrade job in check non-voting. I'm asking everyone in TripleO to be > >>> mindful of this job and to help make sure it continues to pass as we > move > >>> it > >>> from non-voting check to check and eventually gating. > >> > >> > >> +1, nice work! > >> > >>> Upgrade jobs are particularly difficult to keep running successfully > >>> because > >>> of the complex workflow itself, job run times and other factors. Your > >>> help > >>> to ensure we don't merge w/o a pass on this job will go a long way in > >>> helping the tripleo upgrades team. > >>> > >>> There is still work to be done here, however it's much easier to do it > >>> with > >>> the check non-voting job in place. > >> > >> > >> The job doesn't appear to be passing at all on stable/queens. 
I see > >> this same failure on several patches: > >> > >> > http://logs.openstack.org/59/571459/1/check/tripleo-ci-centos-7-scenario000-multinode-oooq-container-upgrades/8bbd827/logs/undercloud/home/zuul/overcloud_upgrade_run_Controller.log.txt.gz > >> > >> Is this a known issue? > > > > > > I think so, or to put it precisely, i only ever looked into making the > job > > work for master (and beyond). > > > > We could look into making it work on Queens too, but personally i think > > effort would be better spent elsewhere at this point. E.g. upd+upg jobs > with > > more complete of services utilizing containerized undercloud (those would > > not validate OC workflow at all, but would give coverage for > > update_tasks/upgrade_tasks), user and dev docs around all lifecycle ops > > (upd, upg, ffwd), upgrade work in the area of TLS by default, upgrade > > handling for external_deploy_tasks (= "how do we upgrade Ceph in Rocky"), > > also perhaps trying to DRY repeated parts of upgrade templates, etc. > > > > If someone wants to step up to iron out Queens issues with that job then > we > > can do it, but my 2 cents would be just to disable the job on Queens and > > focus on the future. > > Sure, I'm just trying to figure out what can safely be ignored. The > tone of the original email was encouraging reviewers not to ignore the > job. Let's remove it from queens then, as right now it's just noise. > I think we missed a patch [1] to correctly set the release for the job. I'll take a look at the results. I may have jumped the gun w/ the tone of the email w/ regards to keeping it running. I'll make the adjustment on queens for now [2]. Thanks for catching that James, Jirka! [1] https://review.openstack.org/#/c/574417/ [2] https://review.openstack.org/574794 > > > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s at cassiba.com Tue Jun 12 16:38:03 2018 From: s at cassiba.com (Samuel Cassiba) Date: Tue, 12 Jun 2018 09:38:03 -0700 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> Message-ID: On Mon, Jun 11, 2018 at 2:53 AM, Thierry Carrez wrote: > > 2bis/ Like 2, but only create the branch when needed > > Same as the previous one, except that rather than proactively create the > stable branch around release time, we'd wait until the branch is actually > needed to create it. > This is basically openstack-chef right now, from a natural progression over time. In ye olden dayes, we were able to branch pretty soon after the RDO and Ubuntu packages stabilized. Now, due to time needed and engagement, it's an informal poll of the developer team to see who objects or sees something showstopping, then carrying on with creating the stable branch and releasing the artifacts to Supermarket. 
From james.slagle at gmail.com Tue Jun 12 17:04:24 2018 From: james.slagle at gmail.com (James Slagle) Date: Tue, 12 Jun 2018 13:04:24 -0400 Subject: [openstack-dev] [TripleO] config-download/ansible next steps Message-ID: I wanted to provide an update on some next steps around config-download/Ansible and TripleO. Now that we've completed transitioning to config-download by default in Rocky, some might be wondering where we're going next. 1. Standalone roles. The idea here is to refactor the ansible tasks lists into standalone ansible roles. From the tripleo-heat-templates side, we then just update the service templates to apply those roles (possibly with a specific task file). Since not all of the interfaces in tripleo-heat-templates are pure ansible tasks lists (docker_config, puppet_config), there is some exploratory work here to determine how we can use those inputs in both a standalone ansible role and tripleo-heat-templates. David Peacock sent out a POC of some inital work[1]. 2. Standalone playbooks. Similar to standalone roles, the idea here is to refactor some of the playbooks into their own proper ansible project directories. These would probably be new git repositories. Again, since some of our playbooks are rendered by jinja2, there is some exploratory work here to see how we can make these more re-usable and not as tightly coupled with tripleo-heat-templates. 3. Native ansible tasks for the per-server deployments in tripleo-heat-templates. Presently we are using a generic ansible task(s) that acts as a shim around the heat-config hooks for the per-server deployments. This is necessary for backwards compatibility. Going forward, we want to take a closer look at how we can use more native ansible tasks for these (e.g., os-net-config ansible module). This will improve our ansible playbook interfaces and make the playbooks more friendly for manual interactions. 4. Ansible driven baremetal deployment Dmitry Tantsur has indicated he's going to be looking at driving TripleO baremetal provisioning with Ironic and ansible directly. This would remove Heat+Nova from the baremetal provisioning workflows we currently use. Obviously we have things to consider here such as backwards compatibility and upgrades, but overall, I think this would be a great simplification to our overall deployment workflow. 5. Other deployment architectures There are various ongoing efforts continuing and spinning up related to the: - all-in-one/standalone installer[2] - the zero footprint installer[3] - split-controlplane[4] I think config-download with ansible is going to drive a lot of these use cases, particularly as it relates to edge deployments. If any of this is an area of interest, please reach out. You can find contacts on the provided links. There may be some upstream squads forming around some of this work in the near future. If you have other ideas about improvements/direction, please chime in. 
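As a purely illustrative sketch of the "standalone roles" idea in item 1
above (this is not an existing TripleO interface, and the role name, task
file, and variable below are all hypothetical), applying such a role with a
specific task file from a plain Ansible playbook might look roughly like
this:

    # Hedged sketch only: role, task file, and variable names are invented
    # for illustration of the "apply a standalone role" pattern.
    - hosts: Controller
      become: true
      tasks:
        - name: Apply the container config step of a standalone service role
          include_role:
            name: tripleo_keystone          # hypothetical standalone role
            tasks_from: docker_config.yml   # hypothetical task file in the role
          vars:
            keystone_debug: false           # hypothetical role variable

The point of the sketch is only that a service role packaged this way could
be reused directly with ansible-playbook, outside of tripleo-heat-templates.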
[1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128887.html [2] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131135.html [3] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131192.html [4] https://specs.openstack.org/openstack/tripleo-specs/specs/rocky/split-controlplane.html -- -- James Slagle -- From emilien at redhat.com Tue Jun 12 17:06:59 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 12 Jun 2018 10:06:59 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 24th Edition Message-ID: Welcome to the twenty-fourth edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131184.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Rocky milestone 3 cycle just started, for a bit less than 1.5 month. +------------------------------+ | Continuous Integration | +------------------------------+ +--> rlandy is our rover and arxcruz is our ruck. Let them know any CI issues! +--> Promotion status: Master: 8d, Queens: 7d, Pike: 5d and Ocata: 3d. +--> Gate is backed-up today, such is RDO CI. Status can be checked on http://zuul.openstack.org and https://review.rdoproject.org/zuul/. +--> Sprint 14 is in flight and focus is on Upgrades testing +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> Standalone documented (first iteration): https://docs.openstack.org/tripleo-docs/latest/install/ +--> containers_deployment/standalone.html +--> Still working on enabling the containerized undercloud everywhere in CI jobs +--> Containerized undercloud upgrade problems were fixed, working on post-upgrade cleanup feature now +--> Good progress on working on updating containers in CI when deploying a containerized undercloud so we can test changes in all repos (need review) +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> Skydive and Octavia integration are now ready for review. +--> UI integration blocked by under-review patches in tripleo-common. +--> the squad is looking at the next steps, which might lead to a new squad. +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status +--------------+ | Integration | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> More validations, need review. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> No updates this week. 
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Owls can eat owls. Not only do owls eat surprisingly large prey (some species, like the eagle owl, can even grab small deer), they also eat other species of owls. Great horned owls, for example, will attack the barred owl. The barred owl, in turn, sometimes eats the Western screech-owl. In fact, owl-on-owl predation may be a reason why Western screech-owl numbers have declined. Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls Thank you all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue Jun 12 18:31:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Jun 2018 14:31:36 -0400 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> <1528728119-sup-6948@lrrr.local> Message-ID: <1528828215-sup-8341@lrrr.local> Excerpts from Michael Johnson's message of 2018-06-12 08:41:57 -0700: > I think we should continue with option 1. > > It is an indicator that a project is active in OpenStack and is > explicit about which code should be used together. > > Both of those statements hold no technical water, but address the > "human" factor of "What is OpenStack?", "What do I need to deploy?", > "What is an active project and what is not?", and "How do we advertise > what OpenStack can provide?". I don't expect for oslo.i18n to see any patches during Stein, but it is still maintained. Is it still part of OpenStack under this definition? > > I think 2 and 3 will just lead to confusion and frustration for folks. > > Michael > On Mon, Jun 11, 2018 at 7:50 AM Doug Hellmann wrote: > > > > Excerpts from Thierry Carrez's message of 2018-06-11 11:53:52 +0200: > > > Hi everyone, > > > > > > As some of the OpenStack deliverables get more mature, we need to adjust > > > our release policies to best handle the case of deliverables that do not > > > need to be updated that much. This discussion started with how to handle > > > those "stable" libraries, but is actually also relevant for "stable" > > > services. > > > > > > Our current models include cycle-tied models (with-intermediary, > > > with-milestones, trailing) and a purely cycle-independent model. Main > > > OpenStack deliverables (the service components that you can deploy to > > > build an OpenStack cloud) are all "released" on a cycle. Libraries are > > > typically maintained per-cycle as well. What happens if no change is > > > pushed to a service or library during a full cycle ? What should we do > > > then ? > > > > > > Options include: > > > > > > 1/ Force artificial releases, even if there are no changes > > > > > > This is the current state. It allows to reuse the exact same process, > > > but creates useless churn and version number confusion. > > > > > > 2/ Do not force releases, but still create branches from latest releases > > > > > > In this variant we would not force an artificial re-release, but we > > > would still create a branch from the last available release, in order to > > > be able to land future patches and do bugfix or security releases as needed. 
> > > > > > 2bis/ Like 2, but only create the branch when needed > > > > > > Same as the previous one, except that rather than proactively create the > > > stable branch around release time, we'd wait until the branch is > > > actually needed to create it. > > > > > > 3/ Do not force releases, and reuse stable branches from cycle to cycle > > > > > > In this model, if there is no change in a library in Rocky, stable/rocky > > > would never be created, and stable/queens would be used instead. Only > > > one branch would get maintained for the 2 cycles. While this reduces the > > > churn, it's a bit complex to wrap your head around the consequences, and > > > measure how confusing this could be in practice... > > > > > > 4/ Stop worrying about stable branches at all for those "stable" things > > > > > > The idea here would be to stop doing stable branches for those things > > > that do not release that much anymore. This could be done by switching > > > them to the "independent" release model, or to a newly-created model. > > > While good for "finished" deliverables, this option could create issues > > > for things that are inactive for a couple cycles and then pick up > > > activity again -- switching back to being cycle-tied would likely be > > > confusing. > > > > > > > > > My current preference is option 2. > > > > > > It's a good trade-off which reduces churn while keeping a compatibility > > > with the system used for more active components. Compared to 2bis, it's > > > a bit more work (although done in one patch during the release process), > > > but creating the branches in advance means they are ready to be used > > > when someone wants to backport something there, likely reducing process > > > pain. > > > > > > One caveat with this model is that we need to be careful with version > > > numbers. Imagine a library that did a 1.18.0 release for queens (which > > > stable/queens is created from). Nothing happens in Rocky, so we create > > > stable/rocky from the same 1.18.0 release. Same in Stein, so we create > > > stable/stein from the same 1.18.0 release. During the Telluride[1] cycle > > > some patches land and we want to release that. In order to leave room > > > for rocky and stein point releases, we need to skip 1.18.1 and 1.19.0, > > > and go directly to 1.20.0. I think we can build release checks to ensure > > > that, but that's something to keep in mind. > > > > > > Thoughts ? > > > > > > [1] It's never too early to campaign for your favorite T name > > > > Although I originally considered it separate, reviewing your summary > > I suspect option 2bis is most likely to turn into option 3, in > > practice. > > > > I think having the choice between options 2 and switching to an > > independent release model (maybe only for libraries) is going to > > be best, at least to start out. > > > > Stein will be the first series where we have to actually deal with > > this, so we can see how it goes and discuss alternatives if we run > > into issues. 
> > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Tue Jun 12 20:09:03 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Jun 2018 16:09:03 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs Message-ID: <1528833992-sup-8052@lrrr.local> I would like to create a version of the jobs that run as part of lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works under python 3. I'm not sure the best way to proceed, since that's a legacy job. I'm not sure I'm familiar enough with the job to port it to be zuulv3 native and allow us to drop the "legacy". Should I just duplicate that job and modify it and keep the new one as "legacy" too? Is there a different job I should base the work on? I don't see anything obvious in the tempest repo's .zuul.yaml file. Thanks, Doug From zbitter at redhat.com Tue Jun 12 21:35:49 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 12 Jun 2018 17:35:49 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <38313d98-14e0-205f-e432-afb24eaffc50@redhat.com> References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> <38313d98-14e0-205f-e432-afb24eaffc50@redhat.com> Message-ID: <2d6b64ac-ca85-d947-136f-78b288e35ab6@redhat.com> On 11/06/18 18:49, Zane Bitter wrote: > It's had a week to percolate (and I've seen quite a few people viewing > the etherpad), so here is the review: > > https://review.openstack.org/574479 In response to comments, I moved the change to the Project Team Guide instead of the Contributor Guide (since the latter is aimed only at new contributors, but this is for everyone). The new review is here: https://review.openstack.org/574888 The first review is still up, but it's now just adding links from the Contributor Guide to this new doc. cheers, Zane. From zbitter at redhat.com Tue Jun 12 21:52:24 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 12 Jun 2018 17:52:24 -0400 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> <1528728119-sup-6948@lrrr.local> Message-ID: <6b008f26-0e0c-ddbf-3956-fe340c6419d4@redhat.com> On 12/06/18 11:41, Michael Johnson wrote: > I think we should continue with option 1. > > It is an indicator that a project is active in OpenStack and is > explicit about which code should be used together. > > Both of those statements hold no technical water, but address the > "human" factor of "What is OpenStack?", "What do I need to deploy?", > "What is an active project and what is not?", and "How do we advertise > what OpenStack can provide?". There's a strong argument that that makes sense for services. Although in practice I'm doubtful that very many services could get through a whole cycle without _any_ patches and still be working at the end of it. (Incidentally, does the release tooling check that the gate still passes at the time of release, even if it has been months since the last patch merged?) It's not clear that it still makes sense for libraries though, and in practice that's what this process will mostly apply to. 
(I tend to agree with others in favouring 2, although the release numbering required to account for possible future backports does leave something to be desired.) >>> One caveat with this model is that we need to be careful with version >>> numbers. Imagine a library that did a 1.18.0 release for queens (which >>> stable/queens is created from). Nothing happens in Rocky, so we create >>> stable/rocky from the same 1.18.0 release. Same in Stein, so we create >>> stable/stein from the same 1.18.0 release. During the Telluride[1] cycle >>> some patches land and we want to release that. In order to leave room >>> for rocky and stein point releases, we need to skip 1.18.1 and 1.19.0, >>> and go directly to 1.20.0. I think we can build release checks to ensure >>> that, but that's something to keep in mind. Would another option be to release T as 1.19.0 and use 1.18.1.0 and 1.18.2.0 for stable/rocky and stable/stein, respectively? There's no *law* that says version numbers can only have 3 components, right? ;) cheers, Zane. From cdent+os at anticdent.org Tue Jun 12 22:19:39 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 12 Jun 2018 23:19:39 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-24 Message-ID: HTML: https://anticdent.org/tc-report-18-24.html Here's TC Report 24. The last one was 4 weeks ago, itself following a gap. In the intervening time new TC members (Graham Hayes, Mohammed Naser, and Zane Bitter) and a new Chair (Doug Hellman) have found their feet very well, we had a successful and moderately dramatic OpenStack Summit in Vancouver, and there have been some tweaks to how the TC reports its actions and engages with the community. There's a refreshed commitment to using email to give regular updates to in-progress TC activity, including a somewhat more detailed [official weekly status](http://lists.openstack.org/pipermail/openstack-dev/2018-June/131359.html), as well as increased use of [StoryBoard](https://storyboard.openstack.org/#!/project/923) to track goverance changes. The [Technical Committee Tracker](https://wiki.openstack.org/wiki/Technical_Committee_Tracker) wiki page now includes expanded information on in-progress [initiatives](https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Other_Initiatives) and there's a [health tracker page](https://wiki.openstack.org/wiki/OpenStack_health_tracker) listing liaisons for working groups, SIGs, and project teams and outstanding issues those groups are facing. The thrice-weekly office hours are now [logged by meetbot](http://eavesdrop.openstack.org/meetings/tc/2018/) to make them easier to find. Here, for example, is [today's](http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-12-09.02.log.html). All of these things are designed to increase the visibility and accessibility of the work being done by the TC and many other people. This is all _great_. Since becoming aware of the TC, enhancing visibility and engagement has been one of my main goals so this is something like success. But it does present a bit of a problem for what to do with these reports; there's a lot of other reporting going on. My intention was always to provide a subjective insight into the activity of the TC and by reflection the entire OpenStack community. Over the course of the year of my first term the subjectivity ebbed and flowed as the value of simply pointing at stuff was made clear. 
Things are different, lots of people are pointing to lots of things, so perhaps now is a good time [to restate](https://www.youtube.com/watch?v=ShdmErv5jvs) [my assumptions](https://www.youtube.com/watch?v=4UuCgIi3Dgk). 1. An important, if not the most important, job of the TC is to represent and improve the situation of the people who elected them, the so-called "technical" contributors. 2. Representing people means listening to, observing, and engaging with people. Improving things means _change_. 3. In either activity, with any group of any size, the possibilities for different understandings of meaning of events, actions and terms are legion. Resolving those differences to some kind of consensual reality, a shared understanding, is what the TC must do with much of its time. It is why we (or any group) chat so much. 4. It is exceptionally easy to believe that a group has reached a shared understanding and for that not to be the case. Earlier today four TC members were struggling to agree on what "cloud" means despite working to build it for several years. 5. The differences of understanding can be incredibly nuanced, but remarkably important. 6. Resolving those differences leads to better representation and better action so highlighting them, though often contentious, can lead to better results. With those in mind, where I imagine these reports can continue to provide some value is subjectively interpreting the meaning emerging from the various TC-related discussions (in all media) to provide a focal point (one of many, I hope) for me and anyone else to speculate on where those meanings fall short, collide or woot, nailed it. We can do a little recursive dance of feedback and refinement. I'm not fully back in the loop, and this is long enough already, so look for something of substance next week. In the meantime, here are a few links to discussions in progress where there's been some emerging understanding: * [recent office hours logs](http://eavesdrop.openstack.org/meetings/tc/2018/) * [review of requirements for affiliation diversity](https://review.openstack.org/#/c/567944/) * [review principles, including "less nitpicking"](https://review.openstack.org/#/c/570940/) * [making Castellan a base service](https://review.openstack.org/#/c/572656/) * [ongoing updates on the status of StarlingX](http://lists.openstack.org/pipermail/openstack-dev/2018-June/131159.html) _Acknowledgement: Thanks [to persia](http://p.anticdent.org/31Ik) for helping to crystallize some of these thoughts._ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Tue Jun 12 22:31:44 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Jun 2018 18:31:44 -0400 Subject: [openstack-dev] [Release-job-failures][cyborg][release] Pre-release of openstack/cyborg failed In-Reply-To: References: Message-ID: <1528842648-sup-2071@lrrr.local> Excerpts from zuul's message of 2018-06-12 21:57:02 +0000: > Build failed. > > - release-openstack-python http://logs.openstack.org/2c/2ca19224a22541ceccf74c6f760ee40a2c90fed2/pre-release/release-openstack-python/124b5be/ : FAILURE in 6m 45s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > The cyborg milestone 2 release failed due to a packaging error. 
error: can't copy 'etc/cyborg/rootwrap.d/*': doesn't exist or not a regular file http://logs.openstack.org/2c/2ca19224a22541ceccf74c6f760ee40a2c90fed2/pre-release/release-openstack-python/124b5be/job-output.txt.gz#_2018-06-12_21_55_33_679298 From doug at doughellmann.com Tue Jun 12 22:35:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 12 Jun 2018 18:35:58 -0400 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: <6b008f26-0e0c-ddbf-3956-fe340c6419d4@redhat.com> References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> <1528728119-sup-6948@lrrr.local> <6b008f26-0e0c-ddbf-3956-fe340c6419d4@redhat.com> Message-ID: <1528842747-sup-6384@lrrr.local> Excerpts from Zane Bitter's message of 2018-06-12 17:52:24 -0400: > On 12/06/18 11:41, Michael Johnson wrote: > > I think we should continue with option 1. > > > > It is an indicator that a project is active in OpenStack and is > > explicit about which code should be used together. > > > > Both of those statements hold no technical water, but address the > > "human" factor of "What is OpenStack?", "What do I need to deploy?", > > "What is an active project and what is not?", and "How do we advertise > > what OpenStack can provide?". > > There's a strong argument that that makes sense for services. Although > in practice I'm doubtful that very many services could get through a > whole cycle without _any_ patches and still be working at the end of it. > (Incidentally, does the release tooling check that the gate still passes > at the time of release, even if it has been months since the last patch > merged?) > > It's not clear that it still makes sense for libraries though, and in > practice that's what this process will mostly apply to. (I tend to agree > with others in favouring 2, although the release numbering required to > account for possible future backports does leave something to be desired.) > > >>> One caveat with this model is that we need to be careful with version > >>> numbers. Imagine a library that did a 1.18.0 release for queens (which > >>> stable/queens is created from). Nothing happens in Rocky, so we create > >>> stable/rocky from the same 1.18.0 release. Same in Stein, so we create > >>> stable/stein from the same 1.18.0 release. During the Telluride[1] cycle > >>> some patches land and we want to release that. In order to leave room > >>> for rocky and stein point releases, we need to skip 1.18.1 and 1.19.0, > >>> and go directly to 1.20.0. I think we can build release checks to ensure > >>> that, but that's something to keep in mind. > > Would another option be to release T as 1.19.0 and use 1.18.1.0 and > 1.18.2.0 for stable/rocky and stable/stein, respectively? There's no > *law* that says version numbers can only have 3 components, right? ;) We have interpreted PEP 440 and SemVer together to mean that regular versions have 3 parts and pre-release versions have a 4th part. That's up for debate, but my counter argument to changing it is that it will require quite a bit of tooling changes to support something other than what we have today, and I think it's going to be easier to build the thing that verifies we're using the right version numbers by counting stable branches. We have to do that verification anyway, so we might as well just do it for 3 part version numbers. 
Doug From zhipengh512 at gmail.com Tue Jun 12 23:01:41 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 13 Jun 2018 07:01:41 +0800 Subject: [openstack-dev] [Release-job-failures][cyborg][release] Pre-release of openstack/cyborg failed In-Reply-To: <1528842648-sup-2071@lrrr.local> References: <1528842648-sup-2071@lrrr.local> Message-ID: Hi Doug, Thanks for raising this, we will check it out On Wed, Jun 13, 2018 at 6:32 AM Doug Hellmann wrote: > Excerpts from zuul's message of 2018-06-12 21:57:02 +0000: > > Build failed. > > > > - release-openstack-python > http://logs.openstack.org/2c/2ca19224a22541ceccf74c6f760ee40a2c90fed2/pre-release/release-openstack-python/124b5be/ > : FAILURE in 6m 45s > > - announce-release announce-release : SKIPPED > > - propose-update-constraints propose-update-constraints : SKIPPED > > > > The cyborg milestone 2 release failed due to a packaging error. > > error: can't copy 'etc/cyborg/rootwrap.d/*': doesn't exist or not a > regular file > > > http://logs.openstack.org/2c/2ca19224a22541ceccf74c6f760ee40a2c90fed2/pre-release/release-openstack-python/124b5be/job-output.txt.gz#_2018-06-12_21_55_33_679298 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Jun 12 23:59:04 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 12 Jun 2018 18:59:04 -0500 Subject: [openstack-dev] [Release-job-failures][cyborg][release] Pre-release of openstack/cyborg failed In-Reply-To: References: <1528842648-sup-2071@lrrr.local> Message-ID: <96b14128-0503-f457-403f-9bce4feb7072@gmx.com> On 06/12/2018 06:01 PM, Zhipeng Huang wrote: > Hi Doug, > > Thanks for raising this, we will check it out > Clark was sharped eyed enough to point out that setuptools does not accept regexes for data files. So it would appear this: https://git.openstack.org/cgit/openstack/cyborg/tree/setup.cfg#n31 Needs to be changed to explicitly list each file that is needed. Hope that helps. Sean > > The cyborg milestone 2 release failed due to a packaging error. > >   error: can't copy 'etc/cyborg/rootwrap.d/*': doesn't exist or > not a regular file > > http://logs.openstack.org/2c/2ca19224a22541ceccf74c6f760ee40a2c90fed2/pre-release/release-openstack-python/124b5be/job-output.txt.gz#_2018-06-12_21_55_33_679298 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cboylan at sapwetik.org Wed Jun 13 00:08:37 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 12 Jun 2018 17:08:37 -0700 Subject: [openstack-dev] [Release-job-failures][cyborg][release] Pre-release of openstack/cyborg failed In-Reply-To: <96b14128-0503-f457-403f-9bce4feb7072@gmx.com> References: <1528842648-sup-2071@lrrr.local> <96b14128-0503-f457-403f-9bce4feb7072@gmx.com> Message-ID: <1528848517.2151846.1406026992.41FF4783@webmail.messagingengine.com> On Tue, Jun 12, 2018, at 4:59 PM, Sean McGinnis wrote: > On 06/12/2018 06:01 PM, Zhipeng Huang wrote: > > Hi Doug, > > > > Thanks for raising this, we will check it out > > > > Clark was sharped eyed enough to point out that setuptools does not > accept regexes for > data files. So it would appear this: > > https://git.openstack.org/cgit/openstack/cyborg/tree/setup.cfg#n31 > > Needs to be changed to explicitly list each file that is needed. > > Hope that helps. > > Sean I thought data_files was a setuptools thing but on further reading it appears to be PBR. And reading the PBR docs trailing globs should be supported. Reading the code I think PBR expects the entire data_file spec on one line so that you end up with 'foo = bar/*' instead of 'foo =\n bar/*'. This is likely a bug that should be fixed. To workaround this you can use 'etc/cyborg/rootwrap.d = etc/cyborg/rootwrap.d/*' all on one line in your config. Clark From zhipengh512 at gmail.com Wed Jun 13 01:00:55 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 13 Jun 2018 09:00:55 +0800 Subject: [openstack-dev] [Release-job-failures][cyborg][release] Pre-release of openstack/cyborg failed In-Reply-To: <1528848517.2151846.1406026992.41FF4783@webmail.messagingengine.com> References: <1528842648-sup-2071@lrrr.local> <96b14128-0503-f457-403f-9bce4feb7072@gmx.com> <1528848517.2151846.1406026992.41FF4783@webmail.messagingengine.com> Message-ID: thx guys , patch has been submitted at https://review.openstack.org/574927 On Wed, Jun 13, 2018 at 8:09 AM Clark Boylan wrote: > On Tue, Jun 12, 2018, at 4:59 PM, Sean McGinnis wrote: > > On 06/12/2018 06:01 PM, Zhipeng Huang wrote: > > > Hi Doug, > > > > > > Thanks for raising this, we will check it out > > > > > > > Clark was sharped eyed enough to point out that setuptools does not > > accept regexes for > > data files. So it would appear this: > > > > https://git.openstack.org/cgit/openstack/cyborg/tree/setup.cfg#n31 > > > > Needs to be changed to explicitly list each file that is needed. > > > > Hope that helps. > > > > Sean > > I thought data_files was a setuptools thing but on further reading it > appears to be PBR. And reading the PBR docs trailing globs should be > supported. Reading the code I think PBR expects the entire data_file spec > on one line so that you end up with 'foo = bar/*' instead of 'foo =\n > bar/*'. This is likely a bug that should be fixed. > > To workaround this you can use 'etc/cyborg/rootwrap.d = > etc/cyborg/rootwrap.d/*' all on one line in your config. > > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From tommylikehu at gmail.com Wed Jun 13 02:32:04 2018 From: tommylikehu at gmail.com (TommyLike Hu) Date: Tue, 12 Jun 2018 22:32:04 -0400 Subject: [openstack-dev] Fwd: [cinder] OpenStack Hackathon for Rocky release In-Reply-To: References: Message-ID: Forward the next hackathon session and add cinder tag :) ---------- Forwarded message --------- From: Fred Li Date: 2018年6月12日周二 下午4:40 Subject: [openstack-dev] OpenStack Hackathon for Rocky release To: OpenStack Development Mailing List (not for usage questions) < openstack-dev at lists.openstack.org> Hi all OpenStackers, Since April 2015, there have been 7 OpenStack Bug Smash events in China. If you are interested, please visit [1]. Now, the 8th OpenStack bug smash is coming. Intel, Huawei, Tecent and CESI will host this event. Just to highlight, this event is changed to Open Source Hackathon as more open source communities are joining. You can find Kata Containers, Ceph, Kubernetes, Cloud Foundry besides OpenStack. This event[2] will be on Jun 19 and 20 in Beijing, just prior to OpenInfra Days China 2018 [3]. To all the projects team leaders, you can discuss with your team in the project meeting and mark the bugs[4] you expect the attendees to work on. If you can arrange core reviewers to take care of the patches during the 2 days, that will be more efficient. [1] https://www.openstack.org/videos/sydney-2017/what-china-developers-brought-to-community-after-6-openstack-bug-smash-events [2] https://etherpad.openstack.org/p/OpenSource-Hackathon-8-beijing [3] http://china.openinfradays.org/ [4] https://etherpad.openstack.org/p/OpenSource-Hackathon-Rocky-Beijing-Bugs-List -- Regards Fred Li (李永乐) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Wed Jun 13 07:34:09 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Wed, 13 Jun 2018 16:34:09 +0900 Subject: [openstack-dev] [tap-as-a-service] core reviewer update In-Reply-To: References: Message-ID: i just made the change as i haven't got any concerns. welcome, kaz! On Thu, May 31, 2018 at 2:36 PM, Takashi Yamamoto wrote: > hi, > > i plan to add Kazuhiro Suzuki to tap-as-a-service-core group. [1] > he is one of active members of the project. > he is also the original author of tap-as-a-service-dashboard. > i'll make the change after a week unless i hear any objections/concerns. > > [1] https://review.openstack.org/#/admin/groups/957,members > http://stackalytics.com/report/contribution/tap-as-a-service/120 From zhang.lei.fly at gmail.com Wed Jun 13 07:38:57 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Wed, 13 Jun 2018 15:38:57 +0800 Subject: [openstack-dev] [kolla] Propose move the weekly meeting one hour earlier Message-ID: As we have more and more developer located in Asia and Europe timezone rather than Americas'. Current weekly meeting time is not proper. 
This was discussed at the last meeting and as a result, seems one hour earlier then now is better than now. So I propose to move the weekly meeting from UTC 16:00 to UTC 15:00 on Wednesday. Feel free to vote on the patch[0] This patch will be opened until next weekly meeting, 20 June. [0] https://review.openstack.org/575011 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jun 13 07:52:33 2018 From: gmann at ghanshyammann.com (Ghanshyam) Date: Wed, 13 Jun 2018 16:52:33 +0900 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1528833992-sup-8052@lrrr.local> References: <1528833992-sup-8052@lrrr.local> Message-ID: <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> ---- On Wed, 13 Jun 2018 05:09:03 +0900 Doug Hellmann wrote ---- > I would like to create a version of the jobs that run as part of > lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works under > python 3. I'm not sure the best way to proceed, since that's a legacy > job. > > I'm not sure I'm familiar enough with the job to port it to be > zuulv3 native and allow us to drop the "legacy". Should I just > duplicate that job and modify it and keep the new one as "legacy" > too? > > Is there a different job I should base the work on? I don't see anything > obvious in the tempest repo's .zuul.yaml file. I had a quick glance of this job (legacy-tempest-dsvm-neutron-src) and it is similar to tempest-full-py3 job except it override the LIBS_FROM_GIT with corresponding lib. tempest-full-py3 job is py3 based with tempest-full tests running and disable the swift services You can create a new job (something tempest-full-py3-src) derived from 'tempest-full-py3' if all set var is ok for you like disable swift OR derived 'devstack-tempest' and then build other var similar to 'tempest-full-py3'. Extra things you need to do is to add libs you want to override in 'required_project' list (FYI- Now LIBS_FROM_GIT is automatically set based on required projects [2]) . Later, old job (legacy-tempest-dsvm-neutron-src) can be migrated separately if needed to run or removed. But I am not sure which repo should own this new job. [1] https://github.com/openstack/tempest/blob/67e99b5b45d18f8fd28dbe3b09bd75008267176e/.zuul.yaml#L61 [2] https://review.openstack.org/#/c/549252/ -gmann > > Thanks, > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From hid-nakamura at vf.jp.nec.com Wed Jun 13 08:18:04 2018 From: hid-nakamura at vf.jp.nec.com (Hidekazu Nakamura) Date: Wed, 13 Jun 2018 08:18:04 +0000 Subject: [openstack-dev] [watcher] Nominating suzhengwei as Watcher core In-Reply-To: <11B68DFD-B14E-4A3C-BAA0-3D6182DB90E5@sbcloud.ru> References: <11B68DFD-B14E-4A3C-BAA0-3D6182DB90E5@sbcloud.ru> Message-ID: <8FC2060F93794D44942D588B8B54728712370932@BPXM03GP.gisp.nec.co.jp> +1 Sorry for the long delay. 
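To make gmann's tempest-full-py3 suggestion above concrete, a minimal sketch of such a derived job could look like the following. This is only an illustration: the job name, the parent settings, and the example repository in required-projects are assumptions for the sake of the sketch, not an agreed definition, and which repo should own the job is still the open question noted above.

    # illustrative only; the job name and the example repo are placeholders
    - job:
        name: tempest-full-py3-src
        parent: tempest-full-py3
        description: |
          Run the tempest-full-py3 tests with the library under test
          installed from source (required-projects feeds LIBS_FROM_GIT).
        required-projects:
          - openstack/oslo.messaging

Since LIBS_FROM_GIT is now derived automatically from required-projects, the consuming library only needs to appear in that list for the job to exercise its master branch.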
> -----Original Message----- > From: Чадин Александр Сергеевич > [mailto:aschadin at sbcloud.ru] > Sent: Tuesday, June 5, 2018 5:27 PM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [openstack-dev] [watcher] Nominating suzhengwei as Watcher core > > Hi Watchers, > > I’d like to nominate suzhengwei for Watcher Core team. > > suzhengwei makes great contribution to the Watcher project including code > reviews and implementations. > > Please vote +1/-1. > > > Best Regards, > ____ > Alex From witold.bedyk at est.fujitsu.com Wed Jun 13 08:28:05 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Wed, 13 Jun 2018 08:28:05 +0000 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer Message-ID: <31f945fc69a648c39792bcecc1b98fb6@R01UKEXCASM126.r01.fujitsu.local> Hello Telemetry Team, We would like to contribute a Monasca publisher to Ceilometer project [1] and add it to the list of currently supported transports [2]. The goal of the plugin is to send Ceilometer samples to Monasca API. I understand Gordon's concerns about adding maintenance overhead for Ceilometer team which he expressed in review but the code is pretty much self-contained and does not affect Ceilometer core. It's not our intention to shift the maintenance effort and Monasca team should still be responsible for this code. Adding this plugin will help in terms of interoperability of both projects and can be useful for wider parts of the OpenStack community. Please let me know your thoughts. I hope we can get this code merged. Cheers Witek [1] https://review.openstack.org/562400 [2] https://docs.openstack.org/ceilometer/latest/contributor/architecture.html#processing-the-data From jean-philippe at evrard.me Wed Jun 13 08:44:19 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 13 Jun 2018 10:44:19 +0200 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> Message-ID: Option 2 for me. And the option switch to independant is IMO just fine. From aditi.s at india.nec.com Wed Jun 13 08:56:54 2018 From: aditi.s at india.nec.com (Aditi Sharma) Date: Wed, 13 Jun 2018 08:56:54 +0000 Subject: [openstack-dev] [watcher] Nominating suzhengwei as Watcher core In-Reply-To: <8FC2060F93794D44942D588B8B54728712370932@BPXM03GP.gisp.nec.co.jp> References: <11B68DFD-B14E-4A3C-BAA0-3D6182DB90E5@sbcloud.ru> <8FC2060F93794D44942D588B8B54728712370932@BPXM03GP.gisp.nec.co.jp> Message-ID: +1 Thanks and Regards, Aditi IRC: adisky -----Original Message----- From: Hidekazu Nakamura [mailto:hid-nakamura at vf.jp.nec.com] Sent: 13 June 2018 13:48 To: OpenStack Development Mailing List (not for usage questions) Cc: Masahiko Hayashi Subject: Re: [openstack-dev] [watcher] Nominating suzhengwei as Watcher core +1 Sorry for the long delay. > -----Original Message----- > From: Чадин Александр Сергеевич > [mailto:aschadin at sbcloud.ru] > Sent: Tuesday, June 5, 2018 5:27 PM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [openstack-dev] [watcher] Nominating suzhengwei as Watcher > core > > Hi Watchers, > > I’d like to nominate suzhengwei for Watcher Core team. > > suzhengwei makes great contribution to the Watcher project including > code reviews and implementations. > > Please vote +1/-1. 
> > > Best Regards, > ____ > Alex __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jean-philippe at evrard.me Wed Jun 13 09:04:51 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 13 Jun 2018 11:04:51 +0200 Subject: [openstack-dev] [tc] [summary] Organizational diversity tag In-Reply-To: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> References: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> Message-ID: > - Drop tags, write a regular report instead that can account for the > subtlety of each situation (ttx). One issue here is that it's obviously a > lot more work than the current situation. That's what I'd prefer personally. We have a website with a nice project navigator now [1]. This is somewhat a reference IMO, and should always be up to date for the published releases. The information is generated, but it could now as well be a static file/source of truth that we update and peer review manually after a cycle. It could allow more flexible and case by case data. This also make the information easy to find: that's naturally where I'd go (if I was an end-user) to see information about a project, not the governance repo. Although now I have learnt, and I am not sure this is worth spending much effort. [1]: https://www.openstack.org/software/project-navigator/ From jean-philippe at evrard.me Wed Jun 13 09:53:52 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 13 Jun 2018 11:53:52 +0200 Subject: [openstack-dev] [openstack-ansible] Restarting our very own "SIG" teams Message-ID: Hello, TL:DR; If you have spare cycles, join one of our interest groups! In the Queens cycle, I have formalised the "liaisons" work, making them an integral part of the Thursday's meeting agenda. Sadly, that initiative didn't work, as almost no liaison worked/reported on those meetings, and I stopped the initiative. Upon common agreement that we now need to change how we scale the team, we will now start our "liaison 2.0" work. So, I have started an etherpad [1], where you could see all the "groups" of people that OSA need, and where you could help. Please don't hesitate to edit this etherpad, adding your new special interest group, or simply joining an existing one if you have spare cycles! Thank you! Jean-Philippe Evrard (evrardjp) [1]: https://etherpad.openstack.org/p/osa-liaisons From dtantsur at redhat.com Wed Jun 13 10:49:48 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 13 Jun 2018 12:49:48 +0200 Subject: [openstack-dev] [TripleO] config-download/ansible next steps In-Reply-To: References: Message-ID: Slightly hijacking the thread to provide a status update on one of the items :) On 06/12/2018 07:04 PM, James Slagle wrote: > I wanted to provide an update on some next steps around config-download/Ansible > and TripleO. Now that we've completed transitioning to config-download by > default in Rocky, some might be wondering where we're going next. > > > 4. Ansible driven baremetal deployment > > Dmitry Tantsur has indicated he's going to be looking at driving TripleO > baremetal provisioning with Ironic and ansible directly. This would remove > Heat+Nova from the baremetal provisioning workflows we currently use. I'm actually already looking, my efforts just have not become visible yet. 
I started with reviving my old metalsmith project [1] to host the code we need to make this happen. This now has a CLI tool and a very dump (for now) ansible role [2] to drive it. Why a new tool? First, I want it to be reusable outside of TripleO (and outside of ansible modules), thus I don't want to put the code directly into, say, tripleo-common. Second, the current OpenStack Ansible modules are not quite sufficient for the task: 1. Both the os_ironic_node module and the underlying openstacksdk library lack support for the critically important VIF attachment API. I'm working on addressing that, but it will take substantial time (e.g. we need to stabilize the microversion support in openstacksdk). 2. Missing support for building configdrive. Again, can probably be added to openstacksdk, and I'll get to it one day. 3. No bulk operations. There is no way, to my best knowledge (please tell me I'm wrong), to provision several nodes in parallel via the current ansible modules. It is probably solvable via a new ansible module, but also see the next points. 4. No scheduling. That is, there is no way out-of-box to pick a suitable node for deployment. It can be done in pure ansible in the simplest case, but our case is not the simplest. Particularly, I don't want to end up parsing capabilities in ansible :) Also one of the goals of this work is to provide better messages than "No valid hosts found". 5. On top of #3 and #4, it is not possible to operate at the deployment level, not on the node level. From the current Heat stack we're going to receive a list of overcloud instances with their roles and other parameters. Some code has to take this input and make a decision on whether to deploy/undeploy something. It's currently done by Heat+Nova together, but they're not doing a great job in some corner cases. Particularly, replacing a node may be painful. So, while I do plan to solve #1 and #2 eventually, #3 - #5 require some place to put the logic. Putting it to TripleO or to ansible itself will preclude reusing it outside of TripleO and ansible accordingly. So, metalsmith is this place for now. I think in the far future I will try proposing a module to ansible itself that will handle #3 - #5 and will be backed by metalsmith. It will probably have a similar interface to the current PoC role [2]. The immediate plan right now is to wait for metalsmith 0.4.0 to hit the repositories, then start experimenting. I need to find a way to 1. make creating nova instances no-op 2. collect the required information from the created stack (I need networks, ports, hostnames, initial SSH keys, capabilities, images) 3. update the config-download code to optionally include the role [2] I'm not entirely sure where to start, so any hints are welcome. [1] https://github.com/openstack/metalsmith [2] https://metalsmith.readthedocs.io/en/latest/user/ansible.html > > Obviously we have things to consider here such as backwards compatibility and > upgrades, but overall, I think this would be a great simplification to our > overall deployment workflow. > Yeah, this is tricky. Can we make Heat "forget" about Nova instances? Maybe by re-defining them to OS::Heat::None? Dmitry From pkovar at redhat.com Wed Jun 13 11:57:19 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 13 Jun 2018 13:57:19 +0200 Subject: [openstack-dev] [docs] Documentation meeting canceled Message-ID: <20180613135719.c7c305fc94ce5b0641ce03c6@redhat.com> Hi all, Apologies but have to cancel today's docs meeting due to a meeting conflict. 
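Returning to the metalsmith plan above, a rough idea of what driving the PoC ansible role referenced as [2] might look like is sketched below. The role name and every variable shown are assumptions made for illustration; the actual interface is whatever the metalsmith documentation defines, and the TripleO integration points are still to be worked out.

    # purely illustrative; role and variable names are assumed, not verified
    - hosts: undercloud
      gather_facts: false
      tasks:
        - include_role:
            name: metalsmith_deployment
          vars:
            metalsmith_instances:
              - hostname: overcloud-controller-0
                image: overcloud-full
                capabilities:
                  profile: control
                nics:
                  - network: ctlplane
                ssh_public_keys: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

A wrapper of roughly this shape is also what config-download would need to generate from the stack data (hostnames, networks, capabilities, images) listed in point 2 of the plan above.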
Have questions for the docs team? We hang out at #openstack-doc. Thanks, pk From zigo at debian.org Wed Jun 13 12:11:16 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 13 Jun 2018 14:11:16 +0200 Subject: [openstack-dev] [all] [release] How to handle "stable" deliverables releases In-Reply-To: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> References: <3857c99a-b25c-e5c7-c553-929b49d0186e@openstack.org> Message-ID: <4b7ae1cf-a908-392f-f2e0-1c5bb4eb73b7@debian.org> On 06/11/2018 11:53 AM, Thierry Carrez wrote: > Hi everyone, > > As some of the OpenStack deliverables get more mature, we need to adjust > our release policies to best handle the case of deliverables that do not > need to be updated that much. This discussion started with how to handle > those "stable" libraries, but is actually also relevant for "stable" > services. > > Our current models include cycle-tied models (with-intermediary, > with-milestones, trailing) and a purely cycle-independent model. Main > OpenStack deliverables (the service components that you can deploy to > build an OpenStack cloud) are all "released" on a cycle. Libraries are > typically maintained per-cycle as well. What happens if no change is > pushed to a service or library during a full cycle ? What should we do > then ? > > Options include: > > 1/ Force artificial releases, even if there are no changes > 2/ Do not force releases, but still create branches from latest releases > 2bis/ Like 2, but only create the branch when needed > 3/ Do not force releases, and reuse stable branches from cycle to cycle > 4/ Stop worrying about stable branches at all for those "stable" things FYI, for downstream distribution maintainers, any evolution from 1/ is fine: it's a bit silly for us to just rebuild a new package when there's no need for it. It's a waste of time for package maintainer, and users who have to download the new version, etc. We're not really concerned by branches, all we care is if there's a new tag to be packaged. Cheers, Thomas Goirand (zigo) From aaronzhu1121 at gmail.com Wed Jun 13 12:53:06 2018 From: aaronzhu1121 at gmail.com (Rong Zhu) Date: Wed, 13 Jun 2018 20:53:06 +0800 Subject: [openstack-dev] [requirements][daisycloud][freezer][fuel][solum][tatu][trove] pycrypto is dead and insecure, you should migrate part 2 In-Reply-To: <20180604190624.tjki5sydsoj45sgo@gentoo.org> References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org> <20180604190624.tjki5sydsoj45sgo@gentoo.org> Message-ID: Hi, Matthew Solum removed pycryto dependency in [0] [0]: https://review.openstack.org/#/c/574244/ -- Thanks, Rong Zhu On Tue, Jun 5, 2018 at 3:07 AM Matthew Thode wrote: > On 18-05-13 12:22:06, Matthew Thode wrote: > > This is a reminder to the projects called out that they are using old, > > unmaintained and probably insecure libraries (it's been dead since > > 2014). Please migrate off to use the cryptography library. We'd like > > to drop pycrypto from requirements for rocky. > > > > See also, the bug, which has most of you cc'd already. 
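As an illustration of the kind of change this pycrypto reminder asks for — swapping a typical Crypto.Cipher AES call for the maintained cryptography library — a minimal sketch follows. It is generic and simplified (key/IV handling, padding and error handling omitted) and is not a patch for any particular project.

    # illustrative migration sketch only
    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)   # where pycrypto code often used Crypto.Random
    iv = os.urandom(16)

    # the pycrypto equivalent was roughly:
    #   from Crypto.Cipher import AES
    #   ciphertext = AES.new(key, AES.MODE_CFB, iv).encrypt(b"secret data")
    encryptor = Cipher(algorithms.AES(key), modes.CFB(iv),
                       backend=default_backend()).encryptor()
    ciphertext = encryptor.update(b"secret data") + encryptor.finalize()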
> > > > https://bugs.launchpad.net/openstack-requirements/+bug/1749574 > > > > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > | Repository | Filename > | Line | Text > | > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > | daisycloud-core | code/daisy/requirements.txt > | 17 | pycrypto>=2.6 # Public > Domain | > | freezer | requirements.txt > | 21 | pycrypto>=2.6 # Public Domain > | > | fuel-dev-tools | > contrib/fuel-setup/requirements.txt | 5 > | pycrypto==2.6.1 | > | fuel-web | nailgun/requirements.txt > | 24 | pycrypto>=2.6.1 > | > | solum | requirements.txt > | 24 | pycrypto # Public Domain > | > | tatu | requirements.txt > | 7 | pycrypto>=2.6.1 > | > | tatu | test-requirements.txt > | 7 | pycrypto>=2.6.1 > | > | trove | > integration/scripts/files/requirements/fedora-requirements.txt | 30 > | pycrypto>=2.6 # Public Domain | > | trove | > integration/scripts/files/requirements/ubuntu-requirements.txt | 29 > | pycrypto>=2.6 # Public Domain | > | trove | requirements.txt > | 47 | pycrypto>=2.6 # Public Domain > | > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > > In order by name, notes follow. > > daisycloud-core - looks like AES / random functions are used > freezer - looks like AES / random functions are used > solum - looks like AES / RSA functions are used > trove - has a review!!! https://review.openstack.org/#/c/560292/ > > The following projects are not tracked so we won't wait on them. > fuel-dev-tools, fuel-web, tatu > > so it looks like progress is being made, so we have that going for us, > which is nice. What can I do to help move this forward? > > -- > Matthew Thode (prometheanfire) > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Rong Zhu -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Wed Jun 13 13:17:55 2018 From: james.slagle at gmail.com (James Slagle) Date: Wed, 13 Jun 2018 09:17:55 -0400 Subject: [openstack-dev] [TripleO] config-download/ansible next steps In-Reply-To: References: Message-ID: On Wed, Jun 13, 2018 at 6:49 AM, Dmitry Tantsur wrote: > Slightly hijacking the thread to provide a status update on one of the items > :) Thanks for jumping in. > The immediate plan right now is to wait for metalsmith 0.4.0 to hit the > repositories, then start experimenting. I need to find a way to > 1. make creating nova instances no-op > 2. collect the required information from the created stack (I need networks, > ports, hostnames, initial SSH keys, capabilities, images) > 3. update the config-download code to optionally include the role [2] > I'm not entirely sure where to start, so any hints are welcome. Here are a couple of possibilities. We could reuse the OS::TripleO::{{role.name}}Server mappings that we already have in place for pre-provisioned nodes (deployed-server). 
This could be mapped to a template that exposes some Ansible tasks as outputs that drives metalsmith to do the deployment. When config-download runs, it would execute these ansible tasks to provision the nodes with Ironic. This has the advantage of maintaining compatibility with our existing Heat parameter interfaces. It removes Nova from the deployment so that from the undercloud perspective you'd roughly have: Mistral -> Heat -> config-download -> Ironic (driven via ansible/metalsmith) A further (or completely different) iteration might look like: Step 1: Mistral -> Ironic (driven via ansible/metalsmith) Step 2: Heat -> config-download Step 2 would use the pre-provisioned node (deployed-server) feature already existing in TripleO and treat the just provisioned by Ironic nodes, as pre-provisioned from the Heat stack perspective. Step 1 and Step 2 would also probably be driven by a higher level Mistral workflow. This has the advantage of minimal impact to tripleo-heat-templates, and also removes Heat from the baremetal provisioning step. However, we'd likely need some python compatibility libraries that could translate Heat parameter values such as HostnameMap to ansible vars for some basic backwards compatibility. > > [1] https://github.com/openstack/metalsmith > [2] https://metalsmith.readthedocs.io/en/latest/user/ansible.html > >> >> Obviously we have things to consider here such as backwards compatibility >> and >> upgrades, but overall, I think this would be a great simplification to our >> overall deployment workflow. >> > > Yeah, this is tricky. Can we make Heat "forget" about Nova instances? Maybe > by re-defining them to OS::Heat::None? Not exactly, as Heat would delete the previous versions of the resources. We'd need some special migrations, or could support the existing method forever for upgrades, and only deprecate it for new deployments. I'd like to help with this work. I'll start by taking a look at what you've got so far. Feel free to reach out if you'd like some additional dev assistance or testing. -- -- James Slagle -- From sgolovat at redhat.com Wed Jun 13 13:32:39 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Wed, 13 Jun 2018 15:32:39 +0200 Subject: [openstack-dev] [TripleO] config-download/ansible next steps In-Reply-To: References: Message-ID: Hi, On Wed, Jun 13, 2018 at 3:17 PM, James Slagle wrote: > On Wed, Jun 13, 2018 at 6:49 AM, Dmitry Tantsur wrote: >> Slightly hijacking the thread to provide a status update on one of the items >> :) > > Thanks for jumping in. > > >> The immediate plan right now is to wait for metalsmith 0.4.0 to hit the >> repositories, then start experimenting. I need to find a way to >> 1. make creating nova instances no-op >> 2. collect the required information from the created stack (I need networks, >> ports, hostnames, initial SSH keys, capabilities, images) >> 3. update the config-download code to optionally include the role [2] >> I'm not entirely sure where to start, so any hints are welcome. > > Here are a couple of possibilities. > > We could reuse the OS::TripleO::{{role.name}}Server mappings that we > already have in place for pre-provisioned nodes (deployed-server). > This could be mapped to a template that exposes some Ansible tasks as > outputs that drives metalsmith to do the deployment. When > config-download runs, it would execute these ansible tasks to > provision the nodes with Ironic. This has the advantage of maintaining > compatibility with our existing Heat parameter interfaces. 
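To illustrate the OS::TripleO::{{role.name}}Server idea being discussed here, an environment-file sketch is shown below. The deployed-server style mapping is the existing mechanism referred to above; the other entry and the template paths are hypothetical, only meant to show where an ansible/metalsmith-backed template could be slotted in.

    # hypothetical environment sketch; template paths are assumptions
    resource_registry:
      # existing pre-provisioned (deployed-server) style mapping
      OS::TripleO::ControllerServer: ../deployed-server/deployed-server.yaml
      # possible variant: a template whose outputs are Ansible tasks that
      # drive metalsmith/Ironic during config-download
      OS::TripleO::ComputeServer: ../deployed-server/metalsmith-server.yaml

Making Heat "forget" a server entirely would instead map the same resource to OS::Heat::None, with the migration caveats raised elsewhere in this thread.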
It removes > Nova from the deployment so that from the undercloud perspective you'd > roughly have: > > Mistral -> Heat -> config-download -> Ironic (driven via ansible/metalsmith) > > A further (or completely different) iteration might look like: > > Step 1: Mistral -> Ironic (driven via ansible/metalsmith) > Step 2: Heat -> config-download I really like this approach. It decouples provisioning level from deployment. As a result we may use better level of parallelism. For instance, when we have 3 provisioned servers that match controller roles we may start controller deployment without waiting other nodes provisioning. For Compute role the strategy may be different such as deploy Compute server when at least one node provisioned. > > Step 2 would use the pre-provisioned node (deployed-server) feature > already existing in TripleO and treat the just provisioned by Ironic > nodes, as pre-provisioned from the Heat stack perspective. Step 1 and > Step 2 would also probably be driven by a higher level Mistral > workflow. This has the advantage of minimal impact to > tripleo-heat-templates, and also removes Heat from the baremetal > provisioning step. However, we'd likely need some python compatibility > libraries that could translate Heat parameter values such as > HostnameMap to ansible vars for some basic backwards compatibility. > >> >> [1] https://github.com/openstack/metalsmith >> [2] https://metalsmith.readthedocs.io/en/latest/user/ansible.html >> >>> >>> Obviously we have things to consider here such as backwards compatibility >>> and >>> upgrades, but overall, I think this would be a great simplification to our >>> overall deployment workflow. >>> >> >> Yeah, this is tricky. Can we make Heat "forget" about Nova instances? Maybe >> by re-defining them to OS::Heat::None? > > Not exactly, as Heat would delete the previous versions of the > resources. We'd need some special migrations, or could support the > existing method forever for upgrades, and only deprecate it for new > deployments. > > I'd like to help with this work. I'll start by taking a look at what > you've got so far. Feel free to reach out if you'd like some > additional dev assistance or testing. > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best Regards, Sergii Golovatiuk From zhipengh512 at gmail.com Wed Jun 13 13:34:28 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 13 Jun 2018 21:34:28 +0800 Subject: [openstack-dev] [cyborg]Team Weekly Meeting 2018.06.13 Message-ID: Hi Team, Kind reminder for the team meeting today about 30 mins later at #openstack-cyborg . -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Wed Jun 13 14:27:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 13 Jun 2018 10:27:12 -0400 Subject: [openstack-dev] [tc] [summary] Organizational diversity tag In-Reply-To: References: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> Message-ID: <1528900004-sup-3478@lrrr.local> Excerpts from Jean-Philippe Evrard's message of 2018-06-13 11:04:51 +0200: > > - Drop tags, write a regular report instead that can account for the > > subtlety of each situation (ttx). One issue here is that it's obviously a > > lot more work than the current situation. > > That's what I'd prefer personally. Who would do that work? Doug From doug at doughellmann.com Wed Jun 13 14:31:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 13 Jun 2018 10:31:00 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> Message-ID: <1528900141-sup-6518@lrrr.local> Excerpts from Ghanshyam's message of 2018-06-13 16:52:33 +0900: > ---- On Wed, 13 Jun 2018 05:09:03 +0900 Doug Hellmann wrote ---- > > I would like to create a version of the jobs that run as part of > > lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works under > > python 3. I'm not sure the best way to proceed, since that's a legacy > > job. > > > > I'm not sure I'm familiar enough with the job to port it to be > > zuulv3 native and allow us to drop the "legacy". Should I just > > duplicate that job and modify it and keep the new one as "legacy" > > too? > > > > Is there a different job I should base the work on? I don't see anything > > obvious in the tempest repo's .zuul.yaml file. > > I had a quick glance of this job (legacy-tempest-dsvm-neutron-src) and it is similar to tempest-full-py3 job except it override the LIBS_FROM_GIT with corresponding lib. tempest-full-py3 job is py3 based with tempest-full tests running and disable the swift services > > You can create a new job (something tempest-full-py3-src) derived from 'tempest-full-py3' if all set var is ok for you like disable swift OR derived 'devstack-tempest' and then build other var similar to 'tempest-full-py3'. Extra things you need to do is to add libs you want to override in 'required_project' list (FYI- > Now LIBS_FROM_GIT is automatically set based on required projects [2]) . > > Later, old job (legacy-tempest-dsvm-neutron-src) can be migrated separately if needed to run or removed. > > But I am not sure which repo should own this new job. Could it be as simple as adding tempest-full-py3 with the required-projects list updated to include the current repository? So there isn't a special separate job, and we would just reuse tempest-full-py3 for this? It would be less "automatic" than the current project-template and job, but still relatively simple to set up. Am I missing something? This feels too easy... 
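The "too easy" option would look roughly like this from the consuming library's side; this is a hypothetical sketch with a placeholder repository name, not an agreed definition.

    # hypothetical .zuul.yaml fragment in the consuming library
    - project:
        check:
          jobs:
            - tempest-full-py3:
                required-projects:
                  - openstack/oslo.messaging

Whether this stays a per-project snippet or gets wrapped into a py3 variant of the lib-forward-testing project-template is the open question in this thread.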
Doug > > [1] https://github.com/openstack/tempest/blob/67e99b5b45d18f8fd28dbe3b09bd75008267176e/.zuul.yaml#L61 > [2] https://review.openstack.org/#/c/549252/ > > -gmann > > > > > Thanks, > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From dstanek at dstanek.com Wed Jun 13 14:44:24 2018 From: dstanek at dstanek.com (David Stanek) Date: Wed, 13 Jun 2018 10:44:24 -0400 Subject: [openstack-dev] [keystone] Signing off In-Reply-To: References: Message-ID: <20180613144424.GB15585@x1.localdomain> On 30-May-2018 08:45, Henry Nash wrote: > Hi > > It is with a somewhat heavy heart that I have decided that it is time to hang > up my keystone core status. Having been involved since the closing stages of > Folsom, I've had a good run! When I look at how far keystone has come since the > v2 days, it is remarkable - and we should all feel a sense of pride in that. > > Thanks to all the hard work, commitment, humour and support from all the > keystone folks over the years - I am sure we will continue to interact and meet > among the many other open source projects that many of us are becoming involved > with. Ad astra! > Hey Henry! It's good to hear from you! You were always fun to work with and I got a lot out of our chats about crazy, enterprisey things. I guess the world with have to wait for another episode of "Stanek & Nash". -- david stanek web: https://dstanek.com twitter: https://twitter.com/dstanek From e0ne at e0ne.info Wed Jun 13 15:01:26 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 13 Jun 2018 18:01:26 +0300 Subject: [openstack-dev] [horizon][plugins] Introduce horizonlib (again) Message-ID: Hi team, Last week on the Horizon meeting we discussed [1] possible options for Horizon release model to address current issues for plugins maintainers. Some background could be found here [2]. The main issue is that we should have some stable API for plugins and be able to release it as needed. We're trying to cover several use cases with this effort. E.g: - do not break plugins with Horizon changes (cross-project CI would help with some issues here too) - provide an easy way to develop plugins which require specific Horizon version and features For now, most of the plugins use 'horizon' package to implement dashboard extensions. Some plugins use parts of 'openstack_dashboard' package. In such case, it becomes complicated to develop plugins based on current master and have CI up and running. The idea is to introduce something like 'horizonlib' or 'horizon-sdk' with a stable API for plugin development. We're going to collect everything needed for this library, so plugins developers could consume only it and do not relate on any internal Horizon things. We'd got horizonlib in the past. Unfortunately, we missed information about what was good or bad but we'll do our best to succeed in this. If you have any comments or questions, please do not hesitate to drop few words into this conversation or ping me in IRC. We're going to collect as much feedback as we can before we'll discuss it in details during the next PTG. 
[1] http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-06-06-15.01.log.html#l-29 [2] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128310.html Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Jun 13 15:14:32 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 13 Jun 2018 10:14:32 -0500 Subject: [openstack-dev] Reminder to add "nova-status upgrade check" to deployment tooling Message-ID: I was going through some recently reported nova bugs and came across [1] which I opened at the Summit during one of the FFU sessions where I realized the nova upgrade docs don't mention the nova-status upgrade check CLI [2] (added in Ocata). As a result, I was wondering how many deployment tools out there support upgrades and from those, which are actually integrating that upgrade status check command. I'm not really familiar with most of them, but I've dabbled in OSA enough to know where the code lived for nova upgrades, so I posted a patch [3]. I'm hoping this can serve as a template for other deployment projects to integrate similar checks into their upgrade (and install verification) flows. [1] https://bugs.launchpad.net/nova/+bug/1772973 [2] https://docs.openstack.org/nova/latest/cli/nova-status.html [3] https://review.openstack.org/#/c/575125/ -- Thanks, Matt From prometheanfire at gentoo.org Wed Jun 13 15:23:45 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 13 Jun 2018 10:23:45 -0500 Subject: [openstack-dev] [requirements][daisycloud][freezer][fuel][tatu][trove] pycrypto is dead and insecure, you should migrate In-Reply-To: References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org> <20180604190624.tjki5sydsoj45sgo@gentoo.org> Message-ID: <20180613152345.k32ph2qvfysiczlf@gentoo.org> On 18-06-13 20:53:06, Rong Zhu wrote: > Hi, Matthew > > Solum removed pycryto dependency in [0] > > [0]: https://review.openstack.org/#/c/574244/ > > -- > Thanks, > Rong Zhu Yep, just in time for the next reminder email too :D > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > | Repository | Filename | Line | Text | > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > | daisycloud-core | code/daisy/requirements.txt | 17 | pycrypto>=2.6 # Public Domain | > | freezer | requirements.txt | 21 | pycrypto>=2.6 # Public Domain | > | fuel-dev-tools | contrib/fuel-setup/requirements.txt | 5 | pycrypto==2.6.1 | > | fuel-web | nailgun/requirements.txt | 24 | pycrypto>=2.6.1 | > | tatu | requirements.txt | 7 | pycrypto>=2.6.1 | > | tatu | test-requirements.txt | 7 | pycrypto>=2.6.1 | > | trove | integration/scripts/files/requirements/fedora-requirements.txt | 30 | pycrypto>=2.6 # Public Domain | > | trove | integration/scripts/files/requirements/ubuntu-requirements.txt | 29 | pycrypto>=2.6 # Public Domain | > | trove | requirements.txt | 47 | pycrypto>=2.6 # Public Domain | > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ Reverse order this time :D trove has https://review.openstack.org/#/c/573070 which is making good progress The rest (tatu, fuel, freezer, daisycloud-core) I don't see any reviews, 
starting to wonder if they watch the list. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Wed Jun 13 15:38:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 13 Jun 2018 11:38:41 -0400 Subject: [openstack-dev] [requirements][daisycloud][freezer][fuel][tatu][trove] pycrypto is dead and insecure, you should migrate In-Reply-To: <20180613152345.k32ph2qvfysiczlf@gentoo.org> References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org> <20180604190624.tjki5sydsoj45sgo@gentoo.org> <20180613152345.k32ph2qvfysiczlf@gentoo.org> Message-ID: <1528904171-sup-6955@lrrr.local> Excerpts from Matthew Thode's message of 2018-06-13 10:23:45 -0500: > On 18-06-13 20:53:06, Rong Zhu wrote: > > Hi, Matthew > > > > Solum removed pycryto dependency in [0] > > > > [0]: https://review.openstack.org/#/c/574244/ > > > > -- > > Thanks, > > Rong Zhu > > Yep, just in time for the next reminder email too :D > > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > > | Repository | Filename | Line | Text | > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > > | daisycloud-core | code/daisy/requirements.txt | 17 | pycrypto>=2.6 # Public Domain | > > | freezer | requirements.txt | 21 | pycrypto>=2.6 # Public Domain | > > | fuel-dev-tools | contrib/fuel-setup/requirements.txt | 5 | pycrypto==2.6.1 | > > | fuel-web | nailgun/requirements.txt | 24 | pycrypto>=2.6.1 | > > | tatu | requirements.txt | 7 | pycrypto>=2.6.1 | > > | tatu | test-requirements.txt | 7 | pycrypto>=2.6.1 | > > | trove | integration/scripts/files/requirements/fedora-requirements.txt | 30 | pycrypto>=2.6 # Public Domain | > > | trove | integration/scripts/files/requirements/ubuntu-requirements.txt | 29 | pycrypto>=2.6 # Public Domain | > > | trove | requirements.txt | 47 | pycrypto>=2.6 # Public Domain | > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > > Reverse order this time :D > > trove has https://review.openstack.org/#/c/573070 which is making good > progress > > The rest (tatu, fuel, freezer, daisycloud-core) I don't see any reviews, > starting to wonder if they watch the list. > Given the requirements team's limited resources, I would focus on freezer and trove. The other projects aren't official, and we can address any issues they have if they apply to become official. Doug From doug at doughellmann.com Wed Jun 13 15:43:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 13 Jun 2018 11:43:00 -0400 Subject: [openstack-dev] [horizon][plugins] Introduce horizonlib (again) In-Reply-To: References: Message-ID: <1528904409-sup-6615@lrrr.local> Excerpts from Ivan Kolodyazhny's message of 2018-06-13 18:01:26 +0300: > Hi team, > > Last week on the Horizon meeting we discussed [1] possible options for > Horizon release model to address current issues for plugins maintainers. > Some background could be found here [2]. > > The main issue is that we should have some stable API for plugins and be > able to release it as needed. 
We're trying to cover several use cases with > this effort. E.g: > - do not break plugins with Horizon changes (cross-project CI would help > with some issues here too) > - provide an easy way to develop plugins which require specific Horizon > version and features > > For now, most of the plugins use 'horizon' package to implement > dashboard extensions. Some plugins use parts of 'openstack_dashboard' > package. In such case, it becomes complicated to develop plugins based on > current master and have CI up and running. > > The idea is to introduce something like 'horizonlib' or 'horizon-sdk' with > a stable API for plugin development. We're going to collect everything > needed for this library, so plugins developers could consume only it and do > not relate on any internal Horizon things. > > We'd got horizonlib in the past. Unfortunately, we missed information about > what was good or bad but we'll do our best to succeed in this. > > > If you have any comments or questions, please do not hesitate to drop few > words into this conversation or ping me in IRC. We're going to collect as > much feedback as we can before we'll discuss it in details during the next > PTG. > > > [1] > http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-06-06-15.01.log.html#l-29 > [2] > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128310.html > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ Another solution that may end up being less work is to release Horizon using the cycle-with-intermediary model and publish the releases to PyPI. Those two changes would let you release changes at any point in the cycle, to support your plugin authors, and would not require reorganizing the code in Horizon to build a new release artifact. The release team would be happy to offer advice about how to make the changes, if you want to talk about it. Doug From emilien at redhat.com Wed Jun 13 15:50:23 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 13 Jun 2018 08:50:23 -0700 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits Message-ID: Alan Bishop has been highly involved in the Storage backends integration in TripleO and Puppet modules, always here to update with new features, fix (nasty and untestable third-party backends) bugs and manage all the backports for stable releases: https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 He's also well knowledgeable of how TripleO works and how containers are integrated, I would like to propose him as core on TripleO projects for patches related to storage things (Cinder, Glance, Swift, Manila, and backends). Please vote -1/+1, Thanks! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gfidente at redhat.com Wed Jun 13 15:57:09 2018 From: gfidente at redhat.com (Giulio Fidente) Date: Wed, 13 Jun 2018 17:57:09 +0200 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: On 06/13/2018 05:50 PM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration > in TripleO and Puppet modules, always here to update with new features, > fix (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > > Please vote -1/+1, +1 :D -- Giulio Fidente GPG KEY: 08D733BA From marios at redhat.com Wed Jun 13 16:03:57 2018 From: marios at redhat.com (Marios Andreou) Date: Wed, 13 Jun 2018 19:03:57 +0300 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: On Wed, Jun 13, 2018 at 6:57 PM, Giulio Fidente wrote: > On 06/13/2018 05:50 PM, Emilien Macchi wrote: > > Alan Bishop has been highly involved in the Storage backends integration > > in TripleO and Puppet modules, always here to update with new features, > > fix (nasty and untestable third-party backends) bugs and manage all the > > backports for stable releases: > > https://review.openstack.org/#/q/owner:%22Alan+Bishop+% > 253Cabishop%2540redhat.com%253E%22 > > > > He's also well knowledgeable of how TripleO works and how containers are > > integrated, I would like to propose him as core on TripleO projects for > > patches related to storage things (Cinder, Glance, Swift, Manila, and > > backends). > > > > Please vote -1/+1, > > +1 :D > +1 > > > -- > Giulio Fidente > GPG KEY: 08D733BA > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Jun 13 16:19:18 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 13 Jun 2018 12:19:18 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1528900141-sup-6518@lrrr.local> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> Message-ID: <1528906598-sup-3505@lrrr.local> Excerpts from Doug Hellmann's message of 2018-06-13 10:31:00 -0400: > Excerpts from Ghanshyam's message of 2018-06-13 16:52:33 +0900: > > ---- On Wed, 13 Jun 2018 05:09:03 +0900 Doug Hellmann wrote ---- > > > I would like to create a version of the jobs that run as part of > > > lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works under > > > python 3. I'm not sure the best way to proceed, since that's a legacy > > > job. > > > > > > I'm not sure I'm familiar enough with the job to port it to be > > > zuulv3 native and allow us to drop the "legacy". Should I just > > > duplicate that job and modify it and keep the new one as "legacy" > > > too? 
> > > > > > Is there a different job I should base the work on? I don't see anything > > > obvious in the tempest repo's .zuul.yaml file. > > > > I had a quick glance of this job (legacy-tempest-dsvm-neutron-src) and it is similar to tempest-full-py3 job except it override the LIBS_FROM_GIT with corresponding lib. tempest-full-py3 job is py3 based with tempest-full tests running and disable the swift services > > > > You can create a new job (something tempest-full-py3-src) derived from 'tempest-full-py3' if all set var is ok for you like disable swift OR derived 'devstack-tempest' and then build other var similar to 'tempest-full-py3'. Extra things you need to do is to add libs you want to override in 'required_project' list (FYI- > > Now LIBS_FROM_GIT is automatically set based on required projects [2]) . > > > > Later, old job (legacy-tempest-dsvm-neutron-src) can be migrated separately if needed to run or removed. > > > > But I am not sure which repo should own this new job. > > Could it be as simple as adding tempest-full-py3 with the > required-projects list updated to include the current repository? So > there isn't a special separate job, and we would just reuse > tempest-full-py3 for this? > > It would be less "automatic" than the current project-template and job, > but still relatively simple to set up. Am I missing something? This > feels too easy... I think I could define a job with a name like tempest-full-py3-src based on tempest-full-py3 and set LIBS_FROM_GIT to include {{zuul.project.name}} in the devstack_localrc vars section. If I understand correctly, that would automatically set LIBS_FROM_GIT to refer to the project that the job is attached to, which would make it easier to use from a project-template (I would also create a lib-forward-testing-py3 project template to supplement lib-forward-testing). Does that sound right? Doug From johfulto at redhat.com Wed Jun 13 16:22:53 2018 From: johfulto at redhat.com (John Fulton) Date: Wed, 13 Jun 2018 12:22:53 -0400 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: On Wed, Jun 13, 2018, 12:04 PM Marios Andreou wrote: > > On Wed, Jun 13, 2018 at 6:57 PM, Giulio Fidente > wrote: > >> On 06/13/2018 05:50 PM, Emilien Macchi wrote: >> > Alan Bishop has been highly involved in the Storage backends integration >> > in TripleO and Puppet modules, always here to update with new features, >> > fix (nasty and untestable third-party backends) bugs and manage all the >> > backports for stable releases: >> > >> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 >> > >> > He's also well knowledgeable of how TripleO works and how containers are >> > integrated, I would like to propose him as core on TripleO projects for >> > patches related to storage things (Cinder, Glance, Swift, Manila, and >> > backends). 
>> > >> > Please vote -1/+1, >> >> +1 :D >> > > +1 > > +1 >> >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Jun 13 17:06:48 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 13 Jun 2018 11:06:48 -0600 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: +1 On Wed, Jun 13, 2018 at 9:50 AM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration in > TripleO and Puppet modules, always here to update with new features, fix > (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > > Please vote -1/+1, > Thanks! > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From Arkady.Kanevsky at dell.com Wed Jun 13 17:51:56 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 13 Jun 2018 17:51:56 +0000 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: <936e708812854222a398679edc0ff398@AUSX13MPS308.AMER.DELL.COM> +1 From: John Fulton [mailto:johfulto at redhat.com] Sent: Wednesday, June 13, 2018 11:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits On Wed, Jun 13, 2018, 12:04 PM Marios Andreou > wrote: On Wed, Jun 13, 2018 at 6:57 PM, Giulio Fidente > wrote: On 06/13/2018 05:50 PM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration > in TripleO and Puppet modules, always here to update with new features, > fix (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). 
> > Please vote -1/+1, +1 :D +1 +1 -- Giulio Fidente GPG KEY: 08D733BA __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From kendall at openstack.org Wed Jun 13 18:29:54 2018 From: kendall at openstack.org (Kendall Waters) Date: Wed, 13 Jun 2018 13:29:54 -0500 Subject: [openstack-dev] PTG Denver 2018 Registration & Hotel Info Message-ID: The fourth Project Teams Gathering will be held September 10-14th back at the Renaissance Stapleton Hotel in Denver, Colorado (3801 Quebec Street, Denver, Colorado 80207). REGISTRATION AND HOTEL Registration is now available here: https://denver2018ptg.eventbrite.com The price is currently USD $399 until August 23 at 6:59 UTC. After that date, the price will be USD $599 so buy your pass before the price increases! We've reserved a very limited block of discounted hotel rooms at $149/night USD (does not include breakfast) with the Renaissance Denver Stapleton Hotel where the event will be held. Please move quickly to reserve a room by August 20th or until they sell out! TRAIN NEAR HOTEL The hotel has informed us that the RTD is anticipating the area near the Renaissance Denver Stapleton Hotel being deemed a quiet zone by end of July, with a more realistic completion date of August 15th. This means there should not be any train horns during the week of the PTG! HELPFUL LINKS: Registration: https://denver2018ptg.eventbrite.com Visa Invitation Letter (deadline August 24): https://openstackfoundation.formstack.com/forms/visa_form_denver_2018_ptg Travel Support Program (first round deadline July 1): https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 Sponsorship: https://www.openstack.org/ptg#tab_sponsor Book a Hotel Room (deadline August 20): https://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Teams%20Gathering%2C%20Openstack%5Edensa%60opnopna%7Copnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes Feel free to reach out to me directly with any questions, looking forward to seeing everyone in Denver! Cheers, Kendall -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Wed Jun 13 19:11:22 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 13 Jun 2018 14:11:22 -0500 Subject: [openstack-dev] PTG Denver 2018 Registration & Hotel Info In-Reply-To: References: Message-ID: <20180613191122.fktqx3ylpyxqyc3z@gentoo.org> On 18-06-13 13:29:54, Kendall Waters wrote: > The fourth Project Teams Gathering will be held September 10-14th back at the Renaissance Stapleton Hotel in Denver, Colorado (3801 Quebec Street, Denver, Colorado 80207). > > REGISTRATION AND HOTEL > Registration is now available here: https://denver2018ptg.eventbrite.com > > The price is currently USD $399 until August 23 at 6:59 UTC. After that date, the price will be USD $599 so buy your pass before the price increases! 
> > We've reserved a very limited block of discounted hotel rooms at $149/night USD (does not include breakfast) with the Renaissance Denver Stapleton Hotel where the event will be held. Please move quickly to reserve a room by August 20th or until they sell out! > > TRAIN NEAR HOTEL > The hotel has informed us that the RTD is anticipating the area near the Renaissance Denver Stapleton Hotel being deemed a quiet zone by end of July, with a more realistic completion date of August 15th. This means there should not be any train horns during the week of the PTG! > > HELPFUL LINKS: > Registration: https://denver2018ptg.eventbrite.com > Visa Invitation Letter (deadline August 24): https://openstackfoundation.formstack.com/forms/visa_form_denver_2018_ptg > Travel Support Program (first round deadline July 1): https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 > Sponsorship: https://www.openstack.org/ptg#tab_sponsor > Book a Hotel Room (deadline August 20): https://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Teams%20Gathering%2C%20Openstack%5Edensa%60opnopna%7Copnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes > Feel free to reach out to me directly with any questions, looking forward to seeing everyone in Denver! > What if we want that train experience. I feel like there will be something missing without it. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From remo at rm.ht Wed Jun 13 19:20:40 2018 From: remo at rm.ht (Remo Mattei) Date: Wed, 13 Jun 2018 12:20:40 -0700 Subject: [openstack-dev] TripleO NeutronNetworkVLANRanges Message-ID: <3D0D0130-B17C-448A-9A65-98D9B4316BCA@rm.ht> Hello guys, just want to double check and make sure that this option can be ignored if using vxlan. NeutronNetworkVLANRanges (used in the network isolation template) Thanks, Remo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Rajini.Karthik at Dell.com Wed Jun 13 19:43:17 2018 From: Rajini.Karthik at Dell.com (Rajini.Karthik at Dell.com) Date: Wed, 13 Jun 2018 19:43:17 +0000 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: <936e708812854222a398679edc0ff398@AUSX13MPS308.AMER.DELL.COM> References: <936e708812854222a398679edc0ff398@AUSX13MPS308.AMER.DELL.COM> Message-ID: Dell - Internal Use - Confidential +1 From: Kanevsky, Arkady Sent: Wednesday, June 13, 2018 12:52 PM To: fulton at redhat.com; openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits +1 From: John Fulton [mailto:johfulto at redhat.com] Sent: Wednesday, June 13, 2018 11:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits On Wed, Jun 13, 2018, 12:04 PM Marios Andreou > wrote: On Wed, Jun 13, 2018 at 6:57 PM, Giulio Fidente > wrote: On 06/13/2018 05:50 PM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration > in TripleO and Puppet modules, always here to update with new features, > fix (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > > Please vote -1/+1, +1 :D +1 +1 -- Giulio Fidente GPG KEY: 08D733BA __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jun 13 19:56:51 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Jun 2018 19:56:51 +0000 Subject: [openstack-dev] PTG Denver 2018 Registration & Hotel Info In-Reply-To: <20180613191122.fktqx3ylpyxqyc3z@gentoo.org> References: <20180613191122.fktqx3ylpyxqyc3z@gentoo.org> Message-ID: <20180613195650.r4hed55ur7e3ital@yuggoth.org> On 2018-06-13 14:11:22 -0500 (-0500), Matthew Thode wrote: [...] > What if we want that train experience. I feel like there will be > something missing without it. Sounds like it may require us to bring our own train whistles. -- Jeremy Stanley From nusiddiq at redhat.com Wed Jun 13 19:58:15 2018 From: nusiddiq at redhat.com (Numan Siddique) Date: Thu, 14 Jun 2018 01:28:15 +0530 Subject: [openstack-dev] TripleO NeutronNetworkVLANRanges In-Reply-To: <3D0D0130-B17C-448A-9A65-98D9B4316BCA@rm.ht> References: <3D0D0130-B17C-448A-9A65-98D9B4316BCA@rm.ht> Message-ID: On Thu, Jun 14, 2018 at 12:50 AM, Remo Mattei wrote: > Hello guys, just want to double check and make sure that this option can > be ignored if using vxlan. 
> > NeutronNetworkVLANRanges (used in the network isolation template) > > Hi Remo, this parameter maps to the neutron config 'network_vlan_range' defined here - https://github.com/openstack/neutron/blob/master/neutron/conf/plugins/ml2/drivers/driver_type.py#L71 You probably need it if your provider network is VLAN. Thanks Numan > Thanks, > Remo > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Jun 13 20:33:13 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 13 Jun 2018 13:33:13 -0700 Subject: [openstack-dev] [nova] review runways check-in and feedback Message-ID: <98ba549a-ca0d-eff0-5fb6-f338d187eaab@gmail.com> Howdy everyone, We've been experimenting with a new process this cycle, Review Runways [1] and we're about at the middle of the cycle now as we had the r-2 milestone last week June 7. I wanted to start a thread and gather thoughts and feedback from the nova community about how they think runways have been working or not working and lend any suggestions to change or improve as we continue on in the rocky cycle. We decided to try the runways process to increase the chances of core reviewers converging on the same changes and thus increasing reviews and merges on approved blueprint work. As of today, we have 69 blueprints approved and 28 blueprints completed, we just passed r-2 June 7 and r-3 is July 26 and rc1 is August 9 [2]. Do people feel like they've been receiving more review on their blueprints? Does it seem like we're completing more blueprints earlier? Is there feedback or suggestions for change that you can share? Thanks all, -melanie [1] https://etherpad.openstack.org/p/nova-runways-rocky [2] https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule From doug at doughellmann.com Wed Jun 13 20:55:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 13 Jun 2018 16:55:55 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1528906598-sup-3505@lrrr.local> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> Message-ID: <1528923244-sup-2628@lrrr.local> Excerpts from Doug Hellmann's message of 2018-06-13 12:19:18 -0400: > Excerpts from Doug Hellmann's message of 2018-06-13 10:31:00 -0400: > > Excerpts from Ghanshyam's message of 2018-06-13 16:52:33 +0900: > > > ---- On Wed, 13 Jun 2018 05:09:03 +0900 Doug Hellmann wrote ---- > > > > I would like to create a version of the jobs that run as part of > > > > lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works under > > > > python 3. I'm not sure the best way to proceed, since that's a legacy > > > > job. > > > > > > > > I'm not sure I'm familiar enough with the job to port it to be > > > > zuulv3 native and allow us to drop the "legacy". Should I just > > > > duplicate that job and modify it and keep the new one as "legacy" > > > > too? > > > > > > > > Is there a different job I should base the work on? I don't see anything > > > > obvious in the tempest repo's .zuul.yaml file. 
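For reference on the NeutronNetworkVLANRanges question in the TripleO thread above, this is roughly how the parameter is set in a network environment file; the physical network name and range below are illustrative, not values taken from the thread:

    parameter_defaults:
      # Only meaningful when VLAN tenant or provider networks are in use;
      # a pure VXLAN tenant-network setup can leave this at its default.
      NeutronNetworkVLANRanges: 'datacentre:100:1000'

On the deployed nodes this ends up as the ml2 option network_vlan_ranges that Numan links to, e.g. network_vlan_ranges = datacentre:100:1000 in the ml2 plugin configuration.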
> > > > > > I had a quick glance of this job (legacy-tempest-dsvm-neutron-src) and it is similar to tempest-full-py3 job except it override the LIBS_FROM_GIT with corresponding lib. tempest-full-py3 job is py3 based with tempest-full tests running and disable the swift services > > > > > > You can create a new job (something tempest-full-py3-src) derived from 'tempest-full-py3' if all set var is ok for you like disable swift OR derived 'devstack-tempest' and then build other var similar to 'tempest-full-py3'. Extra things you need to do is to add libs you want to override in 'required_project' list (FYI- > > > Now LIBS_FROM_GIT is automatically set based on required projects [2]) . > > > > > > Later, old job (legacy-tempest-dsvm-neutron-src) can be migrated separately if needed to run or removed. > > > > > > But I am not sure which repo should own this new job. > > > > Could it be as simple as adding tempest-full-py3 with the > > required-projects list updated to include the current repository? So > > there isn't a special separate job, and we would just reuse > > tempest-full-py3 for this? > > > > It would be less "automatic" than the current project-template and job, > > but still relatively simple to set up. Am I missing something? This > > feels too easy... > > I think I could define a job with a name like tempest-full-py3-src based > on tempest-full-py3 and set LIBS_FROM_GIT to include > {{zuul.project.name}} in the devstack_localrc vars section. If I > understand correctly, that would automatically set LIBS_FROM_GIT to > refer to the project that the job is attached to, which would make it > easier to use from a project-template (I would also create a > lib-forward-testing-py3 project template to supplement > lib-forward-testing). > > Does that sound right? > > Doug This appears to be working. https://review.openstack.org/575164 adds a job to oslo.config and the log shows LIBS_FROM_GIT set to oslo.config's repository: http://logs.openstack.org/64/575164/1/check/tempest-full-py3-src/7a193fa/job-output.txt.gz#_2018-06-13_19_01_22_742338 How does the QA team feel about hosting the job definition in the tempest repository with the tempest-full-py3 job? If you think that will work, I can propose the patch tomorrow. Doug From mriedemos at gmail.com Wed Jun 13 21:33:34 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 13 Jun 2018 16:33:34 -0500 Subject: [openstack-dev] [nova] review runways check-in and feedback In-Reply-To: <98ba549a-ca0d-eff0-5fb6-f338d187eaab@gmail.com> References: <98ba549a-ca0d-eff0-5fb6-f338d187eaab@gmail.com> Message-ID: On 6/13/2018 3:33 PM, melanie witt wrote: > > We've been experimenting with a new process this cycle, Review Runways > [1] and we're about at the middle of the cycle now as we had the r-2 > milestone last week June 7. > > I wanted to start a thread and gather thoughts and feedback from the > nova community about how they think runways have been working or not > working and lend any suggestions to change or improve as we continue on > in the rocky cycle. > > We decided to try the runways process to increase the chances of core > reviewers converging on the same changes and thus increasing reviews and > merges on approved blueprint work. As of today, we have 69 blueprints > approved and 28 blueprints completed, we just passed r-2 June 7 and r-3 > is July 26 and rc1 is August 9 [2]. > > Do people feel like they've been receiving more review on their > blueprints? Does it seem like we're completing more blueprints earlier? 
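For the [qa][python3] lib-forward-testing discussion above, a rough Zuul sketch of the tempest-full-py3-src job Doug describes; the exact definition that lands in the tempest repo may differ:

    - job:
        name: tempest-full-py3-src
        parent: tempest-full-py3
        description: |
          tempest-full-py3, with the library whose repo the job is
          attached to installed from source via LIBS_FROM_GIT.
        vars:
          devstack_localrc:
            LIBS_FROM_GIT: '{{ zuul.project.name }}'

Because LIBS_FROM_GIT points at zuul.project.name, the same job can be reused from a project-template without per-library overrides.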
> Is there feedback or suggestions for change that you can share? Lots of cores are not reviewing stuff in the current runways slots, which defeats the purpose of runways for the most part if the majority of the core team aren't going to review what's in a slot. Lots of people have ready-for-runways blueprint series that aren't queued up in the runways etherpad, and then ask for reviews on those series and I have to tell them, "throw it in the runways queue". I'm not sure if people are thinking subteams need to review series that are ready for wider review first, but especially for the placement stuff, I think those things need to be slotted up if they are ready. Just because it's in the queue doesn't mean interested parties can't review it. I've seen things in queue get merged before they hit a slot, and that's fine. I personally would also like to see stuff in the queue so I can get a better idea of what is being worked on and what's ready, especially as we wind down into the 3rd milestone and we're going to start crunching for major deliverables that aren't yet done. Speaking just for myself, I've got a bit of anxiety going on right now because I have a feeling we have several major efforts that still need to get done for Rocky and they aren't getting the proper focus from the majority of the team (the nested resource provider migration stuff and handling a down cell are the major ones on my list). Having said that, it's clear from the list of things in the runways etherpad that there are some lower priority efforts that have been completed probably because they leveraged runways (there are a few xenapi blueprints for example, and the powervm driver changes). -- Thanks, Matt From mriedemos at gmail.com Wed Jun 13 22:06:11 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 13 Jun 2018 17:06:11 -0500 Subject: [openstack-dev] [cinder] Enabling tempest test for in-use volume extending In-Reply-To: References: Message-ID: <46396bf4-9b27-da76-656b-7da57615d451@gmail.com> On 6/7/2018 8:33 AM, Lucio Seki wrote: > Since Pike release, Cinder supports in-use volume extending [1]. > By default, it assumes that every storage backend is able to perform > this operation. Actually, by default, Tempest assumes that no backends support it, which is why it's disabled by default in Tempest: https://review.openstack.org/#/c/480746/7/tempest/config.py And then only enabled by default in devstack if you're using lvm and libvirt since at the time those were the only backends that supported it. > Thus, the tempest test for this feature should be enabled by default. A > patch was submitted to enable it [2]. > > Please note that, after this patch being merged, the 3rd party CI > maintainers may need to override this configuration, if the backend > being tested does not support in-use volume extending. > > [1] Add ability to extend 'in-use' volume: > https://review.openstack.org/#/c/454287/ > [2] Enable tempest tests for attached volume extending: > https://review.openstack.org/#/c/572188 -- Thanks, Matt From melwittt at gmail.com Wed Jun 13 22:47:33 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 13 Jun 2018 15:47:33 -0700 Subject: [openstack-dev] [nova] nova-cells-v1 intermittent gate failure Message-ID: <48472d08-39a5-307f-0361-9fb2d92d9781@gmail.com> Hi everybody, Just a heads up that we have an intermittent gate failure of the nova-cells-v1 job happening right now [1] and a revert of the tempest change related to it has been approved [2] and will be making its way through the gate. 
The nova-cells-v1 job will be failing until [2] merges. -melanie [1] https://bugs.launchpad.net/nova/+bug/1776684 [2] https://review.openstack.org/575132 From kaz at jp.fujitsu.com Thu Jun 14 00:07:46 2018 From: kaz at jp.fujitsu.com (SUZUKI, Kazuhiro) Date: Thu, 14 Jun 2018 09:07:46 +0900 (JST) Subject: [openstack-dev] [tap-as-a-service] core reviewer update In-Reply-To: References: Message-ID: <20180614.090746.580701269509113008.kaz@jp.fujitsu.com> Hi, Thank you. I'm glad to be able to support the TaaS community. Regards, Kaz From: Takashi Yamamoto Subject: Re: [openstack-dev] [tap-as-a-service] core reviewer update Date: Wed, 13 Jun 2018 16:34:09 +0900 > i just made the change as i haven't got any concerns. > welcome, kaz! > > On Thu, May 31, 2018 at 2:36 PM, Takashi Yamamoto wrote: >> hi, >> >> i plan to add Kazuhiro Suzuki to tap-as-a-service-core group. [1] >> he is one of active members of the project. >> he is also the original author of tap-as-a-service-dashboard. >> i'll make the change after a week unless i hear any objections/concerns. >> >> [1] https://review.openstack.org/#/admin/groups/957,members >> http://stackalytics.com/report/contribution/tap-as-a-service/120 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Thu Jun 14 00:50:13 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 13 Jun 2018 17:50:13 -0700 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read Message-ID: TL;DR: gate queue was 25h+, we put all patches from gate on standby, do not restore/recheck until further announcement. We recently enabled the containerized undercloud for multinode jobs and we believe this was a bit premature as the container download process wasn't optimized so it's not pulling the mirrors for the same containers multiple times yet. It caused the job runtime to increase and probably the load on docker.io mirrors hosted by OpenStack Infra to be a bit slower to provide the same containers multiple times. The time taken to prepare containers on the undercloud and then for the overcloud caused the jobs to randomly timeout therefore the gate to fail in a high amount of times, so we decided to remove all jobs from the gate by abandoning the patches temporarily (I have them in my browser and will restore when things are stable again, please do not touch anything). Steve Baker has been working on a series of patches that optimize the way we prepare the containers but basically the workflow will be: - pull containers needed for the undercloud into a local registry, using infra mirror if available - deploy the containerized undercloud - pull containers needed for the overcloud minus the ones already pulled for the undercloud, using infra mirror if available - update containers on the overcloud - deploy the containerized undercloud With that process, we hope to reduce the runtime of the deployment and therefore reduce the timeouts in the gate. To enable it, we need to land in that order: https://review.openstack.org/#/c/571613/, https://review.openstack.org/#/c/574485/, https://review.openstack.org/#/c/571631/ and https://review.openstack.org/#/c/568403. 
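As background for the [tripleo] "gate is blocked" thread, the mirror-backed flow being worked on amounts to pulling each image once through the nearest mirror and serving it locally afterwards. A hand-written illustration with plain docker commands, not the actual tripleo-ci implementation, and with a made-up mirror host:

    # pull through the (illustrative) infra mirror
    docker pull mirror.example.org:5000/tripleomaster/centos-binary-nova-api:current-tripleo
    # re-tag and push into the undercloud-local registry (commonly 192.168.24.1:8787)
    docker tag mirror.example.org:5000/tripleomaster/centos-binary-nova-api:current-tripleo \
        192.168.24.1:8787/tripleomaster/centos-binary-nova-api:current-tripleo
    docker push 192.168.24.1:8787/tripleomaster/centos-binary-nova-api:current-tripleo

Undercloud and overcloud preparation can then both reference the local registry instead of going back to docker.io for the same image.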
In the meantime, we are disabling the containerized undercloud recently enabled on all scenarios: https://review.openstack.org/#/c/575264/ for mitigation with the hope to stabilize things until Steve's patches land. Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the containerized undercloud on scenarios after checking that we don't have timeouts and reasonable deployment runtimes. That's the plan we came with, if you have any question / feedback please share it. -- Emilien, Steve and Wes -------------- next part -------------- An HTML attachment was scrubbed... URL: From zh.f at outlook.com Thu Jun 14 01:28:14 2018 From: zh.f at outlook.com (Zhang Fan) Date: Thu, 14 Jun 2018 01:28:14 +0000 Subject: [openstack-dev] [requirements][daisycloud][freezer][fuel][tatu][trove] pycrypto is dead andinsecure, you should migrate Message-ID: Hi, Matthew Sorry for the late updates on patches. Trove team members recently are sort of busy with daily work. And it takes me awhile to get back focusing the upstream. Fortunately, we are still there, and trove is still alive :) About removing pycryto dependency, there are two patches, as [0] is merged and [1] is on the way, thanks Zhao Chao for working on [0]: [0].https://review.openstack.org/#/c/560292/ [1].https://review.openstack.org/#/c/573070/ Thanks anyone who helps us on this improments and looks forward to have more contributors joining us in OpenStack/Trove ! Best wishes. Fan Zhang Original Message Sender: Matthew Thode Recipient: openstack-dev at lists.openstack.org Date: Wednesday, Jun 13, 2018 23:23 Subject: Re: [openstack-dev][requirements][daisycloud][freezer][fuel][tatu][trove] pycrypto is dead andinsecure, you should migrate On 18-06-13 20:53:06, Rong Zhu wrote: > Hi, Matthew > > Solum removed pycryto dependency in [0] > > [0]: https://review.openstack.org/#/c/574244/ > > -- > Thanks, > Rong Zhu Yep, just in time for the next reminder email too :D > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > | Repository | Filename | Line | Text | > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > | daisycloud-core | code/daisy/requirements.txt | 17 | pycrypto>=2.6 # Public Domain | > | freezer | requirements.txt | 21 | pycrypto>=2.6 # Public Domain | > | fuel-dev-tools | contrib/fuel-setup/requirements.txt | 5 | pycrypto==2.6.1 | > | fuel-web | nailgun/requirements.txt | 24 | pycrypto>=2.6.1 | > | tatu | requirements.txt | 7 | pycrypto>=2.6.1 | > | tatu | test-requirements.txt | 7 | pycrypto>=2.6.1 | > | trove | integration/scripts/files/requirements/fedora-requirements.txt | 30 | pycrypto>=2.6 # Public Domain | > | trove | integration/scripts/files/requirements/ubuntu-requirements.txt | 29 | pycrypto>=2.6 # Public Domain | > | trove | requirements.txt | 47 | pycrypto>=2.6 # Public Domain | > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ Reverse order this time :D trove has https://review.openstack.org/#/c/573070 which is making good progress The rest (tatu, fuel, freezer, daisycloud-core) I don't see any reviews, starting to wonder if they watch the list. -- Matthew Thode (prometheanfire) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Thu Jun 14 02:08:02 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 13 Jun 2018 20:08:02 -0600 Subject: [openstack-dev] [tripleo] zuul change gating repo name change Message-ID: Greetings, Please be aware the yum repo created in tripleo ci jobs is going to change names to include the release [1]. This is done to ensure that only the appropriate patches are installed when patches from multiple branches are in play. This is especially important to upgrade jobs. If you are working on a patch that uses the gating.repo, this patch [1] will impact your work. Thank you!! [1] https://review.openstack.org/#/c/572736/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Thu Jun 14 04:20:22 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 13 Jun 2018 23:20:22 -0500 Subject: [openstack-dev] [requirements][daisycloud][freezer][fuel][tatu][trove] pycrypto is dead andinsecure, you should migrate In-Reply-To: References: Message-ID: <20180614042022.b5dthx43wa5eguds@gentoo.org> On 18-06-14 01:28:14, Zhang Fan wrote: > Hi, Matthew > > > Sorry for the late updates on patches. Trove team members recently are sort of busy with daily work. And it takes me awhile to get back focusing the upstream. Fortunately, we are still there, and trove is still alive :) > > > About removing pycryto dependency, there are two patches, as [0] is merged and [1] is on the way, thanks Zhao Chao for working on [0]: > > > [0].https://review.openstack.org/#/c/560292/ > > [1].https://review.openstack.org/#/c/573070/ > > Thanks anyone who helps us on this improments and looks forward to have more contributors joining us in OpenStack/Trove ! > > > Best wishes. > Fan Zhang > > Original Message > Sender: Matthew Thode > Recipient: openstack-dev at lists.openstack.org > Date: Wednesday, Jun 13, 2018 23:23 > Subject: Re: [openstack-dev][requirements][daisycloud][freezer][fuel][tatu][trove] pycrypto is dead andinsecure, you should migrate > > > On 18-06-13 20:53:06, Rong Zhu wrote: > > Hi, Matthew > > > > Solum removed pycryto dependency in [0] > > > > [0]: https://review.openstack.org/#/c/574244/ > > > > -- > > Thanks, > > Rong Zhu > > Yep, just in time for the next reminder email too :D > > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > > | Repository | Filename | Line | Text | > > +----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > > | daisycloud-core | code/daisy/requirements.txt | 17 | pycrypto>=2.6 # Public Domain | > > | freezer | requirements.txt | 21 | pycrypto>=2.6 # Public Domain | > > | fuel-dev-tools | contrib/fuel-setup/requirements.txt | 5 | pycrypto==2.6.1 | > > | fuel-web | nailgun/requirements.txt | 24 | pycrypto>=2.6.1 | > > | tatu | requirements.txt | 7 | pycrypto>=2.6.1 | > > | tatu | test-requirements.txt | 7 | pycrypto>=2.6.1 | > > | trove | integration/scripts/files/requirements/fedora-requirements.txt | 30 | pycrypto>=2.6 # Public Domain | > > | trove | integration/scripts/files/requirements/ubuntu-requirements.txt | 29 | pycrypto>=2.6 # Public Domain | > > | trove | requirements.txt | 47 | pycrypto>=2.6 # Public Domain | > > 
+----------------------------------------+---------------------------------------------------------------------+------+---------------------------------------------------+ > > Reverse order this time :D > > trove has https://review.openstack.org/#/c/573070 which is making good > progress > > The rest (tatu, fuel, freezer, daisycloud-core) I don't see any reviews, > starting to wonder if they watch the list. > Thanks for the work on it :D On a related note I've created https://review.openstack.org/575163 for freezer (depends on Doug's work though). -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From skramaja at redhat.com Thu Jun 14 05:29:55 2018 From: skramaja at redhat.com (Saravanan KR) Date: Thu, 14 Jun 2018 10:59:55 +0530 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: +1 Regards, Saravanan KR On Wed, Jun 13, 2018 at 9:20 PM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration in > TripleO and Puppet modules, always here to update with new features, fix > (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > > Please vote -1/+1, > Thanks! > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Thu Jun 14 05:39:31 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 13 Jun 2018 22:39:31 -0700 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read In-Reply-To: References: Message-ID: https://review.openstack.org/575264 just landed (and didn't timeout in check nor gate without recheck, so good sigh it helped to mitigate). I've restore and rechecked some patches that I evacuated from the gate, please do not restore others or recheck or approve anything for now, and see how it goes with a few patches. We're still working with Steve on his patches to optimize the way we deploy containers on the registry and are investigating how we could make it faster with a proxy. Stay tuned and thanks for your patience. On Wed, Jun 13, 2018 at 5:50 PM, Emilien Macchi wrote: > TL;DR: gate queue was 25h+, we put all patches from gate on standby, do > not restore/recheck until further announcement. > > We recently enabled the containerized undercloud for multinode jobs and we > believe this was a bit premature as the container download process wasn't > optimized so it's not pulling the mirrors for the same containers multiple > times yet. > It caused the job runtime to increase and probably the load on docker.io > mirrors hosted by OpenStack Infra to be a bit slower to provide the same > containers multiple times. 
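Back on the pycrypto migration thread: for the projects still carrying the dependency, the switch is usually a mechanical move to pyca/cryptography. A minimal sketch for symmetric encryption, with simplified key and nonce handling for illustration only:

    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt(key, plaintext):
        # Replaces the old Crypto.Cipher.AES / Crypto.Util.Counter pattern
        # from pycrypto; key must be 16, 24 or 32 bytes.
        nonce = os.urandom(16)
        cipher = Cipher(algorithms.AES(key), modes.CTR(nonce),
                        backend=default_backend())
        encryptor = cipher.encryptor()
        return nonce, encryptor.update(plaintext) + encryptor.finalize()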
The time taken to prepare containers on the > undercloud and then for the overcloud caused the jobs to randomly timeout > therefore the gate to fail in a high amount of times, so we decided to > remove all jobs from the gate by abandoning the patches temporarily (I have > them in my browser and will restore when things are stable again, please do > not touch anything). > > Steve Baker has been working on a series of patches that optimize the way > we prepare the containers but basically the workflow will be: > - pull containers needed for the undercloud into a local registry, using > infra mirror if available > - deploy the containerized undercloud > - pull containers needed for the overcloud minus the ones already pulled > for the undercloud, using infra mirror if available > - update containers on the overcloud > - deploy the containerized undercloud > > With that process, we hope to reduce the runtime of the deployment and > therefore reduce the timeouts in the gate. > To enable it, we need to land in that order: https://review. > openstack.org/#/c/571613/, https://review.openstack.org/#/c/574485/, > https://review.openstack.org/#/c/571631/ and https://review.openstack. > org/#/c/568403. > > In the meantime, we are disabling the containerized undercloud recently > enabled on all scenarios: https://review.openstack.org/#/c/575264/ for > mitigation with the hope to stabilize things until Steve's patches land. > Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the > containerized undercloud on scenarios after checking that we don't have > timeouts and reasonable deployment runtimes. > > That's the plan we came with, if you have any question / feedback please > share it. > -- > Emilien, Steve and Wes > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.andre at redhat.com Thu Jun 14 06:32:01 2018 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Thu, 14 Jun 2018 08:32:01 +0200 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: On Wed, Jun 13, 2018 at 5:50 PM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration in > TripleO and Puppet modules, always here to update with new features, fix > (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). 
> > Please vote -1/+1, +1 From gmann at ghanshyammann.com Thu Jun 14 07:54:33 2018 From: gmann at ghanshyammann.com (Ghanshyam) Date: Thu, 14 Jun 2018 16:54:33 +0900 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1528923244-sup-2628@lrrr.local> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> Message-ID: <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> ---- On Thu, 14 Jun 2018 05:55:55 +0900 Doug Hellmann wrote ---- > Excerpts from Doug Hellmann's message of 2018-06-13 12:19:18 -0400: > > Excerpts from Doug Hellmann's message of 2018-06-13 10:31:00 -0400: > > > Excerpts from Ghanshyam's message of 2018-06-13 16:52:33 +0900: > > > > ---- On Wed, 13 Jun 2018 05:09:03 +0900 Doug Hellmann wrote ---- > > > > > I would like to create a version of the jobs that run as part of > > > > > lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works under > > > > > python 3. I'm not sure the best way to proceed, since that's a legacy > > > > > job. > > > > > > > > > > I'm not sure I'm familiar enough with the job to port it to be > > > > > zuulv3 native and allow us to drop the "legacy". Should I just > > > > > duplicate that job and modify it and keep the new one as "legacy" > > > > > too? > > > > > > > > > > Is there a different job I should base the work on? I don't see anything > > > > > obvious in the tempest repo's .zuul.yaml file. > > > > > > > > I had a quick glance of this job (legacy-tempest-dsvm-neutron-src) and it is similar to tempest-full-py3 job except it override the LIBS_FROM_GIT with corresponding lib. tempest-full-py3 job is py3 based with tempest-full tests running and disable the swift services > > > > > > > > You can create a new job (something tempest-full-py3-src) derived from 'tempest-full-py3' if all set var is ok for you like disable swift OR derived 'devstack-tempest' and then build other var similar to 'tempest-full-py3'. Extra things you need to do is to add libs you want to override in 'required_project' list (FYI- > > > > Now LIBS_FROM_GIT is automatically set based on required projects [2]) . > > > > > > > > Later, old job (legacy-tempest-dsvm-neutron-src) can be migrated separately if needed to run or removed. > > > > > > > > But I am not sure which repo should own this new job. > > > > > > Could it be as simple as adding tempest-full-py3 with the > > > required-projects list updated to include the current repository? So > > > there isn't a special separate job, and we would just reuse > > > tempest-full-py3 for this? This can work if lib-forward-testing is going to run against current lib repo only not cross lib or cross project. For example, if neutron want to tests neutron change against neutron-lib src then this will not work. But from history [1] this does not seems to be scope of lib-forward-testing. Even we do not need to add current repo to required-projects list or in LIBS_FROM_GIT . That will always from master + current patch changes. So this makes no change in tempest-full-py3 job and we can directly use tempest-full-py3 job in lib-forward-testing. Testing in [2]. And if anyone needed cross lib/project testing (like i mentioned above) then, it will be very easy by defining a new job derived from tempest-full-py3 and add required lib in required_projects list. 
> > > > > > It would be less "automatic" than the current project-template and job, > > > but still relatively simple to set up. Am I missing something? This > > > feels too easy... > > > > I think I could define a job with a name like tempest-full-py3-src based > > on tempest-full-py3 and set LIBS_FROM_GIT to include > > {{zuul.project.name}} in the devstack_localrc vars section. If I > > understand correctly, that would automatically set LIBS_FROM_GIT to > > refer to the project that the job is attached to, which would make it > > easier to use from a project-template (I would also create a > > lib-forward-testing-py3 project template to supplement > > lib-forward-testing). > > > > Does that sound right? > > > > Doug > > This appears to be working. > > https://review.openstack.org/575164 adds a job to oslo.config and the > log shows LIBS_FROM_GIT set to oslo.config's repository: > > http://logs.openstack.org/64/575164/1/check/tempest-full-py3-src/7a193fa/job-output.txt.gz#_2018-06-13_19_01_22_742338 > > How does the QA team feel about hosting the job definition in the > tempest repository with the tempest-full-py3 job? If you think that will > work, I can propose the patch tomorrow. > [1] https://review.openstack.org/#/c/125433 [2] https://review.openstack.org/#/c/575324 -gmann > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From bdobreli at redhat.com Thu Jun 14 09:08:20 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 14 Jun 2018 12:08:20 +0300 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: On 6/13/18 6:50 PM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration > in TripleO and Puppet modules, always here to update with new features, > fix (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > > Please vote -1/+1, +1 > Thanks! > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From zigo at debian.org Thu Jun 14 09:13:22 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 14 Jun 2018 11:13:22 +0200 Subject: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already? Message-ID: <7dbdc6dd-6f3b-631d-7328-61d06961c96f@debian.org> Hi, When using cinder-backup, it first makes a snapshot, then sends the backup wherever it's configured. The issue is, to perform a backup, one needs to make a snapshot of a volume, meaning that one needs the size of the volume as empty space to be able to make the snapshot. 
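Closing the loop on the lib-forward-testing-py3 template mentioned in the [qa][python3] thread, the companion project-template would be along these lines once the job exists (a sketch only, the final form is up to review):

    - project-template:
        name: lib-forward-testing-py3
        check:
          jobs:
            - tempest-full-py3-src
        gate:
          jobs:
            - tempest-full-py3-src

Libraries that currently carry lib-forward-testing can then add the py3 variant alongside it.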
So, let's say I have a cinder volume of 1 TB, this mean I need 1 TB as empty space on the volume node so I can do a backup of that volume. My question is: is there a way to tell cinder to reserve an amount of space for this kind of operation? The only thing I saw was reserved_percentage, but this looks like for thin provisioning only. If this doesn't exist, would such new option be accepted by the Cinder community, as a per volume node option? Or should we do it as a global setting? Cheers, Thomas Goirand (zigo) From sgolovat at redhat.com Thu Jun 14 09:22:58 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Thu, 14 Jun 2018 11:22:58 +0200 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: +1. Well deserved. On Thu, Jun 14, 2018 at 11:08 AM, Bogdan Dobrelya wrote: > On 6/13/18 6:50 PM, Emilien Macchi wrote: >> >> Alan Bishop has been highly involved in the Storage backends integration >> in TripleO and Puppet modules, always here to update with new features, fix >> (nasty and untestable third-party backends) bugs and manage all the >> backports for stable releases: >> >> https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 >> >> He's also well knowledgeable of how TripleO works and how containers are >> integrated, I would like to propose him as core on TripleO projects for >> patches related to storage things (Cinder, Glance, Swift, Manila, and >> backends). >> >> Please vote -1/+1, > > > +1 > >> Thanks! >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Best Regards, Sergii Golovatiuk From jistr at redhat.com Thu Jun 14 09:39:37 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 14 Jun 2018 11:39:37 +0200 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: +1 On 13.6.2018 17:50, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration in > TripleO and Puppet modules, always here to update with new features, fix > (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > > Please vote -1/+1, > Thanks! 
> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From bdobreli at redhat.com Thu Jun 14 09:47:32 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 14 Jun 2018 12:47:32 +0300 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read In-Reply-To: References: Message-ID: <16059f50-2cbc-c95f-7ca1-607412ece8ae@redhat.com> On 6/14/18 3:50 AM, Emilien Macchi wrote: > TL;DR: gate queue was 25h+, we put all patches from gate on standby, do > not restore/recheck until further announcement. > > We recently enabled the containerized undercloud for multinode jobs and > we believe this was a bit premature as the container download process > wasn't optimized so it's not pulling the mirrors for the same containers > multiple times yet. > It caused the job runtime to increase and probably the load on docker.io > mirrors hosted by OpenStack Infra to be a bit slower > to provide the same containers multiple times. The time taken to prepare > containers on the undercloud and then for the overcloud caused the jobs > to randomly timeout therefore the gate to fail in a high amount of > times, so we decided to remove all jobs from the gate by abandoning the > patches temporarily (I have them in my browser and will restore when > things are stable again, please do not touch anything). > > Steve Baker has been working on a series of patches that optimize the > way we prepare the containers but basically the workflow will be: > - pull containers needed for the undercloud into a local registry, using > infra mirror if available > - deploy the containerized undercloud > - pull containers needed for the overcloud minus the ones already pulled > for the undercloud, using infra mirror if available > - update containers on the overcloud > - deploy the containerized undercloud Let me also note that it's may be time to introduce jobs dependencies [0]. Dependencies might somewhat alleviate registries/mirrors DoS issues, like that one we have currently, by running jobs in batches, and not firing of all at once. We still have options to think of. The undercloud deployment takes longer than standalone, but provides better coverage therefore better extrapolates (and cuts off) future overcloud failures for the dependent jobs. Standalone is less stable yet though. The containers update check may be also an option for the step 1, or step 2, before the remaining multinode jobs execute. Making those dependent jobs skipped, in turn, reduces DoS effects caused to registries and mirrors. [0] https://review.openstack.org/#/q/status:open+project:openstack-infra/tripleo-ci+topic:ci_pipelines > > With that process, we hope to reduce the runtime of the deployment and > therefore reduce the timeouts in the gate. > To enable it, we need to land in that order: > https://review.openstack.org/#/c/571613/, > https://review.openstack.org/#/c/574485/, > https://review.openstack.org/#/c/571631/ and > https://review.openstack.org/#/c/568403. > > In the meantime, we are disabling the containerized undercloud recently > enabled on all scenarios: https://review.openstack.org/#/c/575264/ for > mitigation with the hope to stabilize things until Steve's patches land. 
> Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the > containerized undercloud on scenarios after checking that we don't have > timeouts and reasonable deployment runtimes. > > That's the plan we came with, if you have any question / feedback please > share it. > -- > Emilien, Steve and Wes > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From sombrafam at gmail.com Thu Jun 14 11:10:56 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Thu, 14 Jun 2018 08:10:56 -0300 Subject: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already? In-Reply-To: <7dbdc6dd-6f3b-631d-7328-61d06961c96f@debian.org> References: <7dbdc6dd-6f3b-631d-7328-61d06961c96f@debian.org> Message-ID: Hi Thomas, The reserved_percentage *is* taken in account for non thin provisoning backends. So you can use it to spare the space you need for backups. It is a per backend configuration. If you have already tried to used it and that is not working, please let us know what release you are using, because despite this being the current (and proper) behavior, it might not being like this in the past. Erlon Em qui, 14 de jun de 2018 às 06:13, Thomas Goirand escreveu: > Hi, > > When using cinder-backup, it first makes a snapshot, then sends the > backup wherever it's configured. The issue is, to perform a backup, one > needs to make a snapshot of a volume, meaning that one needs the size of > the volume as empty space to be able to make the snapshot. > > So, let's say I have a cinder volume of 1 TB, this mean I need 1 TB as > empty space on the volume node so I can do a backup of that volume. > > My question is: is there a way to tell cinder to reserve an amount of > space for this kind of operation? The only thing I saw was > reserved_percentage, but this looks like for thin provisioning only. If > this doesn't exist, would such new option be accepted by the Cinder > community, as a per volume node option? Or should we do it as a global > setting? > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hamzy at us.ibm.com Thu Jun 14 12:48:37 2018 From: hamzy at us.ibm.com (Mark Hamzy) Date: Thu, 14 Jun 2018 07:48:37 -0500 Subject: [openstack-dev] [tripleo][heat][jinja] resources.RedisVirtualIP: Property error: resources.VipPort.properties.network: Error validating value 'internal_api': Unable to find network with name or id 'internal_api' Message-ID: I am trying to delete the Storage, StorageMgmt, Tenant, and Management networks and trying to deploy using TripleO. 
The following patch https://hamzy.fedorapeople.org/0001-RedisVipPort-error-internal_api.patch applied on top of /usr/share/openstack-tripleo-heat-templates from openstack-tripleo-heat-templates-8.0.2-14.el7ost.noarch yields the following error: (undercloud) [stack at oscloud5 ~]$ openstack overcloud deploy --templates -e ~/templates/node-info.yaml -e ~/templates/overcloud_images.yaml -e ~/templates/environments/network-environment.yaml -e ~/templates/environments/network-isolation.yaml -e ~/templates/environments/config-debug.yaml --ntp-server pool.ntp.org --control-scale 1 --compute-scale 1 --control-flavor control --compute-flavor compute 2>&1 | tee output.overcloud.deploy ... overcloud.RedisVirtualIP: resource_type: OS::TripleO::Network::Ports::RedisVipPort physical_resource_id: status: CREATE_FAILED status_reason: | resources.RedisVirtualIP: Property error: resources.VipPort.properties.network: Error validating value 'internal_api': Unable to find network with name or id 'internal_api' ... The following patch seems to fix it: 8<-----8<-----8<-----8<-----8<----- diff --git a/environments/network-isolation.j2.yaml b/environments/network-isolation.j2.yaml index 3d4f59b..07cb748 100644 --- a/environments/network-isolation.j2.yaml +++ b/environments/network-isolation.j2.yaml @@ -20,7 +20,13 @@ resource_registry: {%- for network in networks if network.vip and network.enabled|default(true) %} OS::TripleO::Network::Ports::{{network.name}}VipPort: ../network/ports/{{network.name_lower|default(network.name.lower())}}.yaml {%- endfor %} +{%- for role in roles -%} + {%- if internal_api in role.networks|default([]) and internal_api.enabled|default(true) %} OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml + {%- else %} + # Avoid weird jinja2 bugs that don't output a newline... + {%- endif %} +{%- endfor -%} # Port assignments by role, edit role definition to assign networks to roles. {%- for role in roles %} 8<-----8<-----8<-----8<-----8<----- Note that I had to do an else clause because jinja2 would not output the newline that was outside of the for block. Am I following the correct path to fix this issue? -- Mark You must be the change you wish to see in the world. -- Mahatma Gandhi Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present. -- Marcus Aurelius -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Jun 14 13:03:26 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 14 Jun 2018 08:03:26 -0500 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read In-Reply-To: References: Message-ID: <37286fef-15fd-9edc-7e7a-cde6f3f18aed@inaugust.com> On 06/13/2018 07:50 PM, Emilien Macchi wrote: > TL;DR: gate queue was 25h+, we put all patches from gate on standby, do > not restore/recheck until further announcement. > > We recently enabled the containerized undercloud for multinode jobs and > we believe this was a bit premature as the container download process > wasn't optimized so it's not pulling the mirrors for the same containers > multiple times yet. > It caused the job runtime to increase and probably the load on docker.io > mirrors hosted by OpenStack Infra to be a bit slower > to provide the same containers multiple times. 
The time taken to prepare > containers on the undercloud and then for the overcloud caused the jobs > to randomly timeout therefore the gate to fail in a high amount of > times, so we decided to remove all jobs from the gate by abandoning the > patches temporarily (I have them in my browser and will restore when > things are stable again, please do not touch anything). > > Steve Baker has been working on a series of patches that optimize the > way we prepare the containers but basically the workflow will be: > - pull containers needed for the undercloud into a local registry, using > infra mirror if available > - deploy the containerized undercloud > - pull containers needed for the overcloud minus the ones already pulled > for the undercloud, using infra mirror if available > - update containers on the overcloud > - deploy the containerized undercloud That sounds like a great improvement. Well done! > With that process, we hope to reduce the runtime of the deployment and > therefore reduce the timeouts in the gate. > To enable it, we need to land in that order: > https://review.openstack.org/#/c/571613/, > https://review.openstack.org/#/c/574485/, > https://review.openstack.org/#/c/571631/ and > https://review.openstack.org/#/c/568403. > > In the meantime, we are disabling the containerized undercloud recently > enabled on all scenarios: https://review.openstack.org/#/c/575264/ for > mitigation with the hope to stabilize things until Steve's patches land. > Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the > containerized undercloud on scenarios after checking that we don't have > timeouts and reasonable deployment runtimes. > > That's the plan we came with, if you have any question / feedback please > share it. > -- > Emilien, Steve and Wes > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Thu Jun 14 13:32:39 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 14 Jun 2018 09:32:39 -0400 Subject: [openstack-dev] [nova] review runways check-in and feedback In-Reply-To: References: <98ba549a-ca0d-eff0-5fb6-f338d187eaab@gmail.com> Message-ID: <26c26698-c119-765a-03f7-b98aa10f212c@gmail.com> On 06/13/2018 05:33 PM, Matt Riedemann wrote: > On 6/13/2018 3:33 PM, melanie witt wrote: >> >> We've been experimenting with a new process this cycle, Review Runways >> [1] and we're about at the middle of the cycle now as we had the r-2 >> milestone last week June 7. >> >> I wanted to start a thread and gather thoughts and feedback from the >> nova community about how they think runways have been working or not >> working and lend any suggestions to change or improve as we continue >> on in the rocky cycle. >> >> We decided to try the runways process to increase the chances of core >> reviewers converging on the same changes and thus increasing reviews >> and merges on approved blueprint work. As of today, we have 69 >> blueprints approved and 28 blueprints completed, we just passed r-2 >> June 7 and r-3 is July 26 and rc1 is August 9 [2]. >> >> Do people feel like they've been receiving more review on their >> blueprints? Does it seem like we're completing more blueprints >> earlier? Is there feedback or suggestions for change that you can share? 
> > Lots of cores are not reviewing stuff in the current runways slots, > which defeats the purpose of runways for the most part if the majority > of the core team aren't going to review what's in a slot. I know I don't review a ton of stuff like you or Eric, but I just can't any more. It's too much for me to handle. While I have tried to review a few of the runway-slotted efforts, I have gotten burned out on a number of them. Other runway-slotted efforts, I simply don't care enough about or once I've seen some of the code, simply can't bring myself to review it (sorry, just being honest). I like the *concept* of the runways, though. It's good to have a focusing agent to direct reviewer attention to things that are "ready" for final review. Despite this focusing agent, though, we are still realistically limited by the small size of the Nova core team. I'm not sure there are processes (runways or otherwise) that are going to increase the velocity of merging code [1] unless we increase the size of the core team. It's not like we don't look for new core additions and attempt to identify folks that would be good cores and try to help them. We *do* do this. The issue is that Nova is big, scary, messy, fragile (in many ways), complex and more than any other project (no offense to those other projects) has a virtually *endless* stream of feature requests coming (mostly from vendors, sorry) looking to plug their latest and greatest hardware into the virt world. Until that endless stream of feature requests subsides, we will continue to have these problems. And, for those out there that say "well, Jay, then those vendors will just abandon OpenStack and go to more fertile feature-accepting grounds like k8s!", I say "hey, go for it." Not everything is appropriate to jam into Nova (or OpenStack for that matter). Let k8s deal with the never-ending feature velocity (NFV) and vendor/product-enablement requests. And let them collapse under that weight. [1] I say "increase the velocity of merging code" but keep in mind that Nova *already* merges the most code in all of OpenStack. We merge more code in Nova in a week than some service projects merge in three months. Our rate of code merging in just Nova often rivals larger-scoped monoliths like kubernetes/kubernetes. > Lots of people have ready-for-runways blueprint series that aren't > queued up in the runways etherpad, and then ask for reviews on those > series and I have to tell them, "throw it in the runways queue". > > I'm not sure if people are thinking subteams need to review series that > are ready for wider review first, but especially for the placement > stuff, I think those things need to be slotted up if they are ready. I can work with Eric to make sure placement patch series (for the required ones at least that are holding up other work) are queued up properly for runways. That said, I don't feel we are suffering from a lack of reviews in placement-land. Is your concern that placement stuff is getting unfair attention since many of the patch series aren't in the runways? Or is your concern that you'd like to see *more* core reviews on placement stuff outside of the usual placement-y core reviewers (you, me, Alex, Eric, Gibi and Dan)? > Having said that, it's clear from the list of things in the runways > etherpad that there are some lower priority efforts that have been > completed probably because they leveraged runways (there are a few > xenapi blueprints for example, and the powervm driver changes). 
Wasn't that kind of the point of the runways, though? To enable "lower priority" efforts to have a chance at getting reviews? Or are you just stating here the apparent success of that effort? Best, -jay From emilien at redhat.com Thu Jun 14 13:40:16 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 14 Jun 2018 06:40:16 -0700 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read In-Reply-To: References: Message-ID: It sounds like we merged a bunch last night thanks to the revert, so I went ahead and restored/rechecked everything that was out of the gate. I've checked and nothing was left over, but let me know in case I missed something. I'll keep updating this thread with the progress made to improve the situation etc. So from now, situation is back to "normal", recheck/+W is ok. Thanks again for your patience, On Wed, Jun 13, 2018 at 10:39 PM, Emilien Macchi wrote: > https://review.openstack.org/575264 just landed (and didn't timeout in > check nor gate without recheck, so good sigh it helped to mitigate). > > I've restore and rechecked some patches that I evacuated from the gate, > please do not restore others or recheck or approve anything for now, and > see how it goes with a few patches. > We're still working with Steve on his patches to optimize the way we > deploy containers on the registry and are investigating how we could make > it faster with a proxy. > > Stay tuned and thanks for your patience. > > On Wed, Jun 13, 2018 at 5:50 PM, Emilien Macchi > wrote: > >> TL;DR: gate queue was 25h+, we put all patches from gate on standby, do >> not restore/recheck until further announcement. >> >> We recently enabled the containerized undercloud for multinode jobs and >> we believe this was a bit premature as the container download process >> wasn't optimized so it's not pulling the mirrors for the same containers >> multiple times yet. >> It caused the job runtime to increase and probably the load on docker.io >> mirrors hosted by OpenStack Infra to be a bit slower to provide the same >> containers multiple times. The time taken to prepare containers on the >> undercloud and then for the overcloud caused the jobs to randomly timeout >> therefore the gate to fail in a high amount of times, so we decided to >> remove all jobs from the gate by abandoning the patches temporarily (I have >> them in my browser and will restore when things are stable again, please do >> not touch anything). >> >> Steve Baker has been working on a series of patches that optimize the way >> we prepare the containers but basically the workflow will be: >> - pull containers needed for the undercloud into a local registry, using >> infra mirror if available >> - deploy the containerized undercloud >> - pull containers needed for the overcloud minus the ones already pulled >> for the undercloud, using infra mirror if available >> - update containers on the overcloud >> - deploy the containerized undercloud >> >> With that process, we hope to reduce the runtime of the deployment and >> therefore reduce the timeouts in the gate. >> To enable it, we need to land in that order: https://review.openstac >> k.org/#/c/571613/, https://review.openstack.org/#/c/574485/, >> https://review.openstack.org/#/c/571631/ and https://review.openstack.o >> rg/#/c/568403. >> >> In the meantime, we are disabling the containerized undercloud recently >> enabled on all scenarios: https://review.openstack.org/#/c/575264/ for >> mitigation with the hope to stabilize things until Steve's patches land. 
>> Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the >> containerized undercloud on scenarios after checking that we don't have >> timeouts and reasonable deployment runtimes. >> >> That's the plan we came with, if you have any question / feedback please >> share it. >> -- >> Emilien, Steve and Wes >> > > > > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Jun 14 13:49:21 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 14 Jun 2018 08:49:21 -0500 Subject: [openstack-dev] [release] Release countdown for week R-10, June 18-22 Message-ID: <20180614134920.GA2814@sm-workstation> Welcome to the weekly countdown email. Development Focus ----------------- Teams should be focused on implementing planned work for the cycle. It is also a good time to review those plans and reprioritize anything if needed based on the what progress has been made and what looks realistic to complete in the next few weeks. General Information ------------------- Looking ahead to Rocky-3, please be aware of the various freeze dates. This varies for deliverable type, starting with non-client libraries, then client libraries, then finally services. This is to ensure we have time for requirements updates and resolving any issues prior to RC. Just as a reminder, we have freeze dates ahead of the first RC in order to stabilize our requirements. Updating global requirements close to overall code freeze increases the risk of an unforeseen side effect being introduced too late in the cycle to properly fix. For this reason, we first freeze the non-client libraries that may be used by service and client libraries, followed a week later by the client libraries. This minimizes the ripple effects that have caused projects to scamble to fix last minute issues. Please keep these deadlines in mind as you work towards wrapping up feature work that may require library changes to complete. Upcoming Deadlines & Dates -------------------------- Final non-client library release deadline: July 19 Final client library release deadline: July 26 Rocky-3 Milestone: July 26 -- Sean McGinnis (smcginnis) From melwittt at gmail.com Thu Jun 14 13:58:16 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 14 Jun 2018 06:58:16 -0700 Subject: [openstack-dev] [nova] nova-cells-v1 intermittent gate failure In-Reply-To: <48472d08-39a5-307f-0361-9fb2d92d9781@gmail.com> References: <48472d08-39a5-307f-0361-9fb2d92d9781@gmail.com> Message-ID: On Wed, 13 Jun 2018 15:47:33 -0700, Melanie Witt wrote: > Hi everybody, > > Just a heads up that we have an intermittent gate failure of the > nova-cells-v1 job happening right now [1] and a revert of the tempest > change related to it has been approved [2] and will be making its way > through the gate. The nova-cells-v1 job will be failing until [2] merges. > > -melanie > > [1] https://bugs.launchpad.net/nova/+bug/1776684 > [2] https://review.openstack.org/575132 The fix [2] has merged, so it is now safe to recheck your changes that were caught up in the nova-cells-v1 gate failure. 
Thanks, -melanie From dms at danplanet.com Thu Jun 14 14:25:16 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 14 Jun 2018 07:25:16 -0700 Subject: [openstack-dev] [nova] review runways check-in and feedback In-Reply-To: <26c26698-c119-765a-03f7-b98aa10f212c@gmail.com> (Jay Pipes's message of "Thu, 14 Jun 2018 09:32:39 -0400") References: <98ba549a-ca0d-eff0-5fb6-f338d187eaab@gmail.com> <26c26698-c119-765a-03f7-b98aa10f212c@gmail.com> Message-ID: > While I have tried to review a few of the runway-slotted efforts, I > have gotten burned out on a number of them. Other runway-slotted > efforts, I simply don't care enough about or once I've seen some of > the code, simply can't bring myself to review it (sorry, just being > honest). I have the same feeling, although I have reviewed a lot of things I wouldn't have otherwise as a result of them being in the runway. I spent a bunch of time early on with the image signing stuff, which I think was worthwhile, although at this point I'm a bit worn out on it. That's not the fault of runways though. > Is your concern that placement stuff is getting unfair attention since > many of the patch series aren't in the runways? Or is your concern > that you'd like to see *more* core reviews on placement stuff outside > of the usual placement-y core reviewers (you, me, Alex, Eric, Gibi and > Dan)? I think placement has been getting a bit of a free ride, with constant review and insulation from the runway process. However, I don't think that we can stop progress on that effort while we circle around, and the subteam/group of people that focus on placement already has a lot of supporting cores already. So, it's cheating a little bit, but we always said that we're not going to tell cores *not* to review something unless it is in a runway and pragmatially I think it's probably the right thing to do for placement. >> Having said that, it's clear from the list of things in the runways >> etherpad that there are some lower priority efforts that have been >> completed probably because they leveraged runways (there are a few >> xenapi blueprints for example, and the powervm driver changes). > > Wasn't that kind of the point of the runways, though? To enable "lower > priority" efforts to have a chance at getting reviews? Or are you just > stating here the apparent success of that effort? It was, and I think it has worked well for that for several things. The image signing stuff got more review in its first runway slot than it has in years I think. Overall, I don't think we're worse off with runways than we were before it. I think that some things that will get attention regardless are still progressing. I think that some things that are far off on the fringe are still getting ignored. I think that for the huge bulk of things in the middle of those two, runways has helped focus review on specific efforts and thus increased the throughput there. For a first attempt, I'd call that a success. I think maybe a little more monitoring of the review rate of things in the runways and some gentle prodding of people to look at ones that are burning time and not seeing much review would maybe improve things a bit. --Dan From ed at leafe.com Thu Jun 14 16:37:12 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 14 Jun 2018 11:37:12 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <1D0673C8-275D-4A5E-AB53-7C11F617E871@leafe.com> Greetings OpenStack community, A small, intimate meeting today, as only cdent and edleafe were present. 
We discussed the work being done [7] to migrate our bug/issue tracking from Launchpad to StoryBoard [8]. The change to the infra docs will trigger the setup of StoryBoard when it merges. Once StoryBoard is up and running for the API-SIG, we will notify the GraphQL team, so that they can track their stories and tasks there. There was also more progress on updating the name of this group. When we switched from the API-WG to the API-SIG a few months ago, there were several places where we could make the change without much fuss. But some other places, such as Gerrit, required intervention from the infra team. We thought it would be too much bother, so we didn't spend time on it. But it turns out that it's not that difficult for infra to do during Gerrit downtimes, so that change [9] was also submitted. There being no recent changes to pending guidelines nor to bugs, we ended the meeting early. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://review.openstack.org/575120 [8] https://storyboard.openstack.org/#!/page/about [9] https://review.openstack.org/575478 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From sean.mcginnis at gmx.com Thu Jun 14 16:37:18 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 14 Jun 2018 11:37:18 -0500 Subject: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already? 
In-Reply-To: <7dbdc6dd-6f3b-631d-7328-61d06961c96f@debian.org> References: <7dbdc6dd-6f3b-631d-7328-61d06961c96f@debian.org> Message-ID: <20180614163718.GA3634@sm-workstation> On Thu, Jun 14, 2018 at 11:13:22AM +0200, Thomas Goirand wrote: > Hi, > > When using cinder-backup, it first makes a snapshot, then sends the > backup wherever it's configured. The issue is, to perform a backup, one > needs to make a snapshot of a volume, meaning that one needs the size of > the volume as empty space to be able to make the snapshot. > > So, let's say I have a cinder volume of 1 TB, this mean I need 1 TB as > empty space on the volume node so I can do a backup of that volume. > > My question is: is there a way to tell cinder to reserve an amount of > space for this kind of operation? The only thing I saw was > reserved_percentage, but this looks like for thin provisioning only. If > this doesn't exist, would such new option be accepted by the Cinder > community, as a per volume node option? Or should we do it as a global > setting? > I don't believe we have this as a setting anywhere today. It would be best as a per-backend (or backend_defaults) setting as some backends can create volumes from snapshots without consuming any extra space, while others like you point out with LVM needing to allocate a considerable amount of space. Maybe someone else can chime in if they are aware of another way this is already being handled, but I have not had to deal with it, so I'm not aware of anything. From sean.mcginnis at gmx.com Thu Jun 14 16:38:31 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 14 Jun 2018 11:38:31 -0500 Subject: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already? In-Reply-To: References: <7dbdc6dd-6f3b-631d-7328-61d06961c96f@debian.org> Message-ID: <20180614163830.GB3634@sm-workstation> On Thu, Jun 14, 2018 at 08:10:56AM -0300, Erlon Cruz wrote: > Hi Thomas, > > The reserved_percentage *is* taken in account for non thin provisoning > backends. So you can use it to spare the space you need for backups. It is > a per backend configuration. > > If you have already tried to used it and that is not working, please let us > know what release you are using, because despite this being the current > (and proper) behavior, it might not being like this in the past. > > Erlon > Guess I didn't read far enough ahead. Thanks Erlon! 
From doug at doughellmann.com Thu Jun 14 17:02:31 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Jun 2018 13:02:31 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> Message-ID: <1528995441-sup-1746@lrrr.local> Excerpts from Ghanshyam's message of 2018-06-14 16:54:33 +0900: > > > > ---- On Thu, 14 Jun 2018 05:55:55 +0900 Doug Hellmann wrote ---- > > Excerpts from Doug Hellmann's message of 2018-06-13 12:19:18 -0400: > > > Excerpts from Doug Hellmann's message of 2018-06-13 10:31:00 -0400: > > > > Excerpts from Ghanshyam's message of 2018-06-13 16:52:33 +0900: > > > > > ---- On Wed, 13 Jun 2018 05:09:03 +0900 Doug Hellmann wrote ---- > > > > > > I would like to create a version of the jobs that run as part of > > > > > > lib-forward-testing (legacy-tempest-dsvm-neutron-src) that works under > > > > > > python 3. I'm not sure the best way to proceed, since that's a legacy > > > > > > job. > > > > > > > > > > > > I'm not sure I'm familiar enough with the job to port it to be > > > > > > zuulv3 native and allow us to drop the "legacy". Should I just > > > > > > duplicate that job and modify it and keep the new one as "legacy" > > > > > > too? > > > > > > > > > > > > Is there a different job I should base the work on? I don't see anything > > > > > > obvious in the tempest repo's .zuul.yaml file. > > > > > > > > > > I had a quick glance of this job (legacy-tempest-dsvm-neutron-src) and it is similar to tempest-full-py3 job except it override the LIBS_FROM_GIT with corresponding lib. tempest-full-py3 job is py3 based with tempest-full tests running and disable the swift services > > > > > > > > > > You can create a new job (something tempest-full-py3-src) derived from 'tempest-full-py3' if all set var is ok for you like disable swift OR derived 'devstack-tempest' and then build other var similar to 'tempest-full-py3'. Extra things you need to do is to add libs you want to override in 'required_project' list (FYI- > > > > > Now LIBS_FROM_GIT is automatically set based on required projects [2]) . > > > > > > > > > > Later, old job (legacy-tempest-dsvm-neutron-src) can be migrated separately if needed to run or removed. > > > > > > > > > > But I am not sure which repo should own this new job. > > > > > > > > Could it be as simple as adding tempest-full-py3 with the > > > > required-projects list updated to include the current repository? So > > > > there isn't a special separate job, and we would just reuse > > > > tempest-full-py3 for this? > > This can work if lib-forward-testing is going to run against current lib repo only not cross lib or cross project. For example, if neutron want to tests neutron change against neutron-lib src then this will not work. But from history [1] this does not seems to be scope of lib-forward-testing. > > Even we do not need to add current repo to required-projects list or in LIBS_FROM_GIT . That will always from master + current patch changes. So this makes no change in tempest-full-py3 job and we can directly use tempest-full-py3 job in lib-forward-testing. Testing in [2]. Does it? 
So if I add tempest-full-py3 to a *library* that library is installed from source in the job? I know the source for the library will be checked out, but I'm surprised that devstack would be configured to use it. How does that work? > > And if anyone needed cross lib/project testing (like i mentioned above) then, it will be very easy by defining a new job derived from tempest-full-py3 and add required lib in required_projects list. Sure. Someone could do that, but it's not the problem I'm trying to solve right now. > > > > > > > > > It would be less "automatic" than the current project-template and job, > > > > but still relatively simple to set up. Am I missing something? This > > > > feels too easy... > > > > > > I think I could define a job with a name like tempest-full-py3-src based > > > on tempest-full-py3 and set LIBS_FROM_GIT to include > > > {{zuul.project.name}} in the devstack_localrc vars section. If I > > > understand correctly, that would automatically set LIBS_FROM_GIT to > > > refer to the project that the job is attached to, which would make it > > > easier to use from a project-template (I would also create a > > > lib-forward-testing-py3 project template to supplement > > > lib-forward-testing). > > > > > > Does that sound right? > > > > > > Doug > > > > This appears to be working. > > > > https://review.openstack.org/575164 adds a job to oslo.config and the > > log shows LIBS_FROM_GIT set to oslo.config's repository: > > > > http://logs.openstack.org/64/575164/1/check/tempest-full-py3-src/7a193fa/job-output.txt.gz#_2018-06-13_19_01_22_742338 > > > > How does the QA team feel about hosting the job definition in the > > tempest repository with the tempest-full-py3 job? If you think that will > > work, I can propose the patch tomorrow. > > > > [1] https://review.openstack.org/#/c/125433 > [2] https://review.openstack.org/#/c/575324 > > -gmann > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From doug at doughellmann.com Thu Jun 14 19:00:47 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Jun 2018 15:00:47 -0400 Subject: [openstack-dev] [tc][ptl] team, SIG, and working group liaisons In-Reply-To: <1528395799-sup-4077@lrrr.local> References: <1528395799-sup-4077@lrrr.local> Message-ID: <1529002532-sup-6811@lrrr.local> Excerpts from Doug Hellmann's message of 2018-06-07 14:28:02 -0400: > As we discussed in today's office hours, I have set up some space in the > wiki for us to track which TC members are volunteering to act as liaison > to the teams and other groups within the community to ensure they have > the assistance and support they need from the TC. > > https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Liaisons > > For the first round, please sign up for groups you are interested in > helping. We will work out some sort of assignment system for the rest so > we have good coverage. > > The list is quite long, so I don't expect everyone to be checking in > with the groups weekly. But we do need to get a handle on where things > stand now, and work out a way to keep up to date over time. My hope is > that by dividing the work up, we won't *all* have to be tracking all of > the groups and we won't let anyone slip through the cracks. 
> > Doug After giving everyone a week to volunteer as liaisons for project teams, I have filled out the roster so that every team has 2 TC members assigned. I used random.shuffle() and then went down the list and tried to avoid assigning the same person twice while ensuring that everyone had 10. Please check my results. :-) We already have some reports from a few teams on the status page, https://wiki.openstack.org/wiki/OpenStack_health_tracker#Status_updates It would be good if we could complete a first pass for all teams between now and the PTG and post the results to that wiki page. Doug From prometheanfire at gentoo.org Thu Jun 14 19:17:26 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 14 Jun 2018 14:17:26 -0500 Subject: [openstack-dev] [all][requirements][docs] Message-ID: <20180614191726.3eut3j7q44gkkqe4@gentoo.org> Sphinx is being updated from 1.6.7 to 1.7.5. You may need to update your docs/templates to work with it. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From alee at redhat.com Thu Jun 14 20:30:17 2018 From: alee at redhat.com (Ade Lee) Date: Thu, 14 Jun 2018 16:30:17 -0400 Subject: [openstack-dev] [barbican] NEW weekly meeting time In-Reply-To: <1524239859.2972.74.camel@redhat.com> References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> <1518792130.19501.1.camel@redhat.com> <1520280969.25743.54.camel@redhat.com> <1524239859.2972.74.camel@redhat.com> Message-ID: <1529008217.7441.68.camel@redhat.com> The new time slot has been pretty difficult for folks to attend. I'd like to propose a new time slot, which will hopefully be more amenable to everyone. Tuesday 12:00 UTC https://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=00&se c=0 This works out to 8 am EST, around 1pm in Europe, and 8 pm in China. Please vote by responding to this email. Thanks, Ade From dmendiza at redhat.com Thu Jun 14 20:33:22 2018 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Thu, 14 Jun 2018 15:33:22 -0500 Subject: [openstack-dev] [barbican] NEW weekly meeting time In-Reply-To: <1529008217.7441.68.camel@redhat.com> References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> <1518792130.19501.1.camel@redhat.com> <1520280969.25743.54.camel@redhat.com> <1524239859.2972.74.camel@redhat.com> <1529008217.7441.68.camel@redhat.com> Message-ID: <684bbe5e0fe0d1169e12618ae1817656ef5b7b82.camel@redhat.com> +1 The new time slot would definitely make it much easier for me to attend than the current one. - Douglas Mendizábal On Thu, 2018-06-14 at 16:30 -0400, Ade Lee wrote: > The new time slot has been pretty difficult for folks to attend. > I'd like to propose a new time slot, which will hopefully be more > amenable to everyone. > > Tuesday 12:00 UTC > > https://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=00& > se > c=0 > > This works out to 8 am EST, around 1pm in Europe, and 8 pm in China. > Please vote by responding to this email. 
> > Thanks, > Ade > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Jun 14 21:17:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Jun 2018 17:17:34 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1528995441-sup-1746@lrrr.local> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> Message-ID: <1529010884-sup-4343@lrrr.local> Excerpts from Doug Hellmann's message of 2018-06-14 13:02:31 -0400: > Excerpts from Ghanshyam's message of 2018-06-14 16:54:33 +0900: > > > > > Could it be as simple as adding tempest-full-py3 with the > > > > > required-projects list updated to include the current repository? So > > > > > there isn't a special separate job, and we would just reuse > > > > > tempest-full-py3 for this? > > > > This can work if lib-forward-testing is going to run against current lib repo only not cross lib or cross project. For example, if neutron want to tests neutron change against neutron-lib src then this will not work. But from history [1] this does not seems to be scope of lib-forward-testing. > > > > Even we do not need to add current repo to required-projects list or in LIBS_FROM_GIT . That will always from master + current patch changes. So this makes no change in tempest-full-py3 job and we can directly use tempest-full-py3 job in lib-forward-testing. Testing in [2]. > > Does it? So if I add tempest-full-py3 to a *library* that library is > installed from source in the job? I know the source for the library > will be checked out, but I'm surprised that devstack would be configured > to use it. How does that work? Based on my testing, that doesn't seem to be the case. I added it to oslo.config and looking at the logs [1] I do not set LIBS_FROM_GIT set to include oslo.config and the check function is returning false so that it is not installed from source [2]. So, I think we need the tempest-full-py3-src job. I will propose an update to the tempest repo to add that. Doug [1] http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz [2] http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz#_2018-06-14_19_40_56_223136 From sean.mcginnis at gmx.com Thu Jun 14 21:28:14 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 14 Jun 2018 16:28:14 -0500 Subject: [openstack-dev] [tc][ptl] team, SIG, and working group liaisons In-Reply-To: <1529002532-sup-6811@lrrr.local> References: <1528395799-sup-4077@lrrr.local> <1529002532-sup-6811@lrrr.local> Message-ID: <20180614212814.GA18775@sm-workstation> > > After giving everyone a week to volunteer as liaisons for project teams, > I have filled out the roster so that every team has 2 TC members > assigned. I used random.shuffle() and then went down the list and tried > to avoid assigning the same person twice while ensuring that everyone > had 10. Please check my results. 
:-) > > We already have some reports from a few teams on the status page, > https://wiki.openstack.org/wiki/OpenStack_health_tracker#Status_updates > > It would be good if we could complete a first pass for all teams between > now and the PTG and post the results to that wiki page. > > Doug > What is the expectation with these reports Doug? We talked a little about being a TC contact point for these teams or at least reaching out and checking in on the teams periodically, but this is the first I was aware of needing to write some sort of report about it. I can certainly collect some notes, but since I was not aware of this part of it, I'm sure there are probably others that were not as well. Sean From doug at doughellmann.com Thu Jun 14 21:41:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 14 Jun 2018 17:41:21 -0400 Subject: [openstack-dev] [tc][ptl] team, SIG, and working group liaisons In-Reply-To: <20180614212814.GA18775@sm-workstation> References: <1528395799-sup-4077@lrrr.local> <1529002532-sup-6811@lrrr.local> <20180614212814.GA18775@sm-workstation> Message-ID: <1529012417-sup-4016@lrrr.local> Excerpts from Sean McGinnis's message of 2018-06-14 16:28:14 -0500: > > > > After giving everyone a week to volunteer as liaisons for project teams, > > I have filled out the roster so that every team has 2 TC members > > assigned. I used random.shuffle() and then went down the list and tried > > to avoid assigning the same person twice while ensuring that everyone > > had 10. Please check my results. :-) > > > > We already have some reports from a few teams on the status page, > > https://wiki.openstack.org/wiki/OpenStack_health_tracker#Status_updates > > > > It would be good if we could complete a first pass for all teams between > > now and the PTG and post the results to that wiki page. > > > > Doug > > > > What is the expectation with these reports Doug? We talked a little about being > a TC contact point for these teams or at least reaching out and checking in on > the teams periodically, but this is the first I was aware of needing to write > some sort of report about it. > > I can certainly collect some notes, but since I was not aware of this part of > it, I'm sure there are probably others that were not as well. > > Sean > Sorry, that was a poor choice of words on my part. I don't expect a long detailed write up. Just leave your notes in the wiki page, like the other folks have been doing as they have started their reviews. Doug From gmann at ghanshyammann.com Fri Jun 15 00:04:35 2018 From: gmann at ghanshyammann.com (Ghanshyam) Date: Fri, 15 Jun 2018 09:04:35 +0900 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1529010884-sup-4343@lrrr.local> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> <1529010884-sup-4343@lrrr.local> Message-ID: <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> ---- On Fri, 15 Jun 2018 06:17:34 +0900 Doug Hellmann wrote ---- > Excerpts from Doug Hellmann's message of 2018-06-14 13:02:31 -0400: > > Excerpts from Ghanshyam's message of 2018-06-14 16:54:33 +0900: > > > > > > > Could it be as simple as adding tempest-full-py3 with the > > > > > > required-projects list updated to include the current repository? 
So > > > > > > there isn't a special separate job, and we would just reuse > > > > > > tempest-full-py3 for this? > > > > > > This can work if lib-forward-testing is going to run against current lib repo only not cross lib or cross project. For example, if neutron want to tests neutron change against neutron-lib src then this will not work. But from history [1] this does not seems to be scope of lib-forward-testing. > > > > > > Even we do not need to add current repo to required-projects list or in LIBS_FROM_GIT . That will always from master + current patch changes. So this makes no change in tempest-full-py3 job and we can directly use tempest-full-py3 job in lib-forward-testing. Testing in [2]. > > > > Does it? So if I add tempest-full-py3 to a *library* that library is > > installed from source in the job? I know the source for the library > > will be checked out, but I'm surprised that devstack would be configured > > to use it. How does that work? > > Based on my testing, that doesn't seem to be the case. I added it to > oslo.config and looking at the logs [1] I do not set LIBS_FROM_GIT set > to include oslo.config and the check function is returning false so that > it is not installed from source [2]. Yes, It will not be set on LIBS_FROM_GIT as we did not set it explicitly. But gate running on any repo does run job on current change set of that repo which is nothing but "master + current patch changes" . For example, any job running on oslo.config patch will take oslo.config source code from that patch which is "master + current change". You can see the results in this patch - https://review.openstack.org/#/c/575324/ . Where I deleted a module and gate jobs (including tempest-full-py3) fails as they run on current change set of neutron-lib code not on pypi version(which would pass the tests). In that case, lib's proposed change will be tested against integration tests job to check any regression. If we need to run cross lib/project testing of any lib then, yes we need the 'tempest-full-py3-src' job but that is separate things as you mentioned. -gmann > > So, I think we need the tempest-full-py3-src job. I will propose an > update to the tempest repo to add that. 
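A rough sketch of what that job and companion template could look like in tempest's .zuul.yaml, based only on what is described above (the job name, parent, and the LIBS_FROM_GIT handling come from Doug's proposal; the exact layout is an assumption until the patch is posted):

    - job:
        name: tempest-full-py3-src
        parent: tempest-full-py3
        description: |
          Run the tempest-full-py3 job with the library under test
          installed from its Git checkout instead of from PyPI.
        vars:
          devstack_localrc:
            # resolves to the repo the job is attached to, so the same
            # job works from a project-template on any library
            LIBS_FROM_GIT: '{{ zuul.project.name }}'

    - project-template:
        name: lib-forward-testing-py3
        check:
          jobs:
            - tempest-full-py3-src
        gate:
          jobs:
            - tempest-full-py3-src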
> > Doug > > [1] http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz > [2] http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz#_2018-06-14_19_40_56_223136 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From michele at acksyn.org Fri Jun 15 06:03:19 2018 From: michele at acksyn.org (Michele Baldessari) Date: Fri, 15 Jun 2018 08:03:19 +0200 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: <20180615060319.GA4410@palahniuk.int.rhx> +1 On Wed, Jun 13, 2018 at 08:50:23AM -0700, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration in > TripleO and Puppet modules, always here to update with new features, fix > (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > > Please vote -1/+1, > Thanks! > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From lyarwood at redhat.com Fri Jun 15 06:28:29 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Fri, 15 Jun 2018 07:28:29 +0100 Subject: [openstack-dev] Reminder to add "nova-status upgrade check" to deployment tooling In-Reply-To: References: Message-ID: <20180615062829.q6axidhwunw7xofy@lyarwood.usersys.redhat.com> On 13-06-18 10:14:32, Matt Riedemann wrote: > I was going through some recently reported nova bugs and came across [1] > which I opened at the Summit during one of the FFU sessions where I realized > the nova upgrade docs don't mention the nova-status upgrade check CLI [2] > (added in Ocata). > > As a result, I was wondering how many deployment tools out there support > upgrades and from those, which are actually integrating that upgrade status > check command. TripleO doesn't at present but like OSA it looks trivial to add: https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/nova-api.yaml I've created the following bug to track this: https://bugs.launchpad.net/tripleo/+bug/1777060 > I'm not really familiar with most of them, but I've dabbled in OSA enough to > know where the code lived for nova upgrades, so I posted a patch [3]. > > I'm hoping this can serve as a template for other deployment projects to > integrate similar checks into their upgrade (and install verification) > flows. 
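As a rough illustration of the kind of hook being discussed (where it runs and which user it runs as are deployment-specific assumptions, not details from this thread), the check is a plain CLI that reports through its exit status, so an upgrade playbook or script can simply abort when it fails:

    # Run after the control plane services are upgraded; nova-status
    # exits non-zero when the upgrade checks report warnings or failures.
    su -s /bin/sh -c 'nova-status upgrade check' nova || {
        echo 'nova-status upgrade check failed; aborting the upgrade' >&2
        exit 1
    }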
> > [1] https://bugs.launchpad.net/nova/+bug/1772973 > [2] https://docs.openstack.org/nova/latest/cli/nova-status.html > [3] https://review.openstack.org/#/c/575125/ Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From Dave.Chen at Dell.com Fri Jun 15 08:17:09 2018 From: Dave.Chen at Dell.com (Dave.Chen at Dell.com) Date: Fri, 15 Jun 2018 08:17:09 +0000 Subject: [openstack-dev] [neutron] Question on the OVS configuration Message-ID: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> Dear folks, I have setup a pretty simple OpenStack cluster in our lab based on devstack, couples of guest VM are running on one controller node (this doesn't looks like a right behavior anyway), the Neutron network is configured as OVS + vxlan, the bridge "br-ex" configured as below: Bridge br-ex Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal ovs_version: "2.8.0" As you see, there is no external physical NIC bound to "br-ex", so I guess the traffic from the VM to external will use the default route set on the controller node, since there is a NIC (eno2) that can access external so I bind it to "br-ex" like this: ovs-vsctl add-port br-ex eno2. now the "br-ex" is configured as below: Bridge br-ex Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} *Port "eno2"* Interface "eno2" Port br-ex Interface br-ex type: internal ovs_version: "2.8.0" Looks like this is how it should be configured according to lots of wiki/blog suggestion I have googled, but it doesn't work as expected, ping from the VM, the tcpdump shows the traffic still go the "eno1" which is the default route on the controller node. Inside of VM ubuntu at test-br:~$ ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms ... Dump the traffic on the "eno2", got nothing $ sudo tcpdump -nn -i eno2 icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes ... Dump the traffic on the "eno1" (internal NIC), catch it! $ sudo tcpdump -nn -i eno1 icmp tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 1, length 64 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 1, length 64 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 2, length 64 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 2, length 64 $ sudo ip route default via 192.168.18.1 dev eno1 proto static metric 100 default via 192.168.8.1 dev eno2 proto static metric 101 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 metric 100 192.168.16.0/21 dev eno1 proto kernel scope link src 192.168.20.132 metric 100 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1 What's going wrong here? Do I miss something? Or some service need to be restarted? Anyone could help me out? This question made me sick for many days! Huge thanks in the advance! 
Best Regards, Dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Jun 15 09:09:26 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 15 Jun 2018 11:09:26 +0200 Subject: [openstack-dev] [neutron] Question on the OVS configuration In-Reply-To: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> References: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> Message-ID: Hi, If You have vxlan network than traffic from it is going via vxlan tunnel which is in br-tun bridge instead of br-ex. > Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 10:17: > > Dear folks, > > I have setup a pretty simple OpenStack cluster in our lab based on devstack, couples of guest VM are running on one controller node (this doesn’t looks like a right behavior anyway), the Neutron network is configured as OVS + vxlan, the bridge “br-ex” configured as below: > > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > As you see, there is no external physical NIC bound to “br-ex”, so I guess the traffic from the VM to external will use the default route set on the controller node, since there is a NIC (eno2) that can access external so I bind it to “br-ex” like this: ovs-vsctl add-port br-ex eno2. now the “br-ex” is configured as below: > > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > *Port "eno2"* > Interface "eno2" > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > Looks like this is how it should be configured according to lots of wiki/blog suggestion I have googled, but it doesn’t work as expected, ping from the VM, the tcpdump shows the traffic still go the “eno1” which is the default route on the controller node. > > Inside of VM > ubuntu at test-br:~$ ping 8.8.8.8 > PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. > 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms > 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms > … > > Dump the traffic on the “eno2”, got nothing > $ sudo tcpdump -nn -i eno2 icmp > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes > … > > Dump the traffic on the “eno1” (internal NIC), catch it! 
> $ sudo tcpdump -nn -i eno1 icmp > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes > 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 1, length 64 > 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 1, length 64 > 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 2, length 64 > 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 2, length 64 > > > $ sudo ip route > default via 192.168.18.1 dev eno1 proto static metric 100 > default via 192.168.8.1 dev eno2 proto static metric 101 > 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown > 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown > 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 metric 100 > 192.168.16.0/21 dev eno1 proto kernel scope link src 192.168.20.132 metric 100 > 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1 > > > What’s going wrong here? Do I miss something? Or some service need to be restarted? > > Anyone could help me out? This question made me sick for many days! Huge thanks in the advance! > > > Best Regards, > Dave > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From michele at acksyn.org Fri Jun 15 09:12:54 2018 From: michele at acksyn.org (Michele Baldessari) Date: Fri, 15 Jun 2018 11:12:54 +0200 Subject: [openstack-dev] [tripleo] Migration to Storyboard In-Reply-To: References: Message-ID: <20180615091254.GB4410@palahniuk.int.rhx> On Mon, May 21, 2018 at 01:58:26PM -0700, Emilien Macchi wrote: > During the Storyboard session today: > https://etherpad.openstack.org/p/continuing-the-migration-lp-sb > > We mentioned that TripleO would continue to migrate during Rocky cycle. > Like Alex mentioned in this thread, we need to migrate the scripts used by > the CI squad so they work with SB. > Once this is done, we'll proceed to the full migration of all blueprints > and bugs into tripleo-common project in SB. > Projects like tripleo-validations, tripleo-ui (more?) who have 1:1 mapping > between their "name" and project repository could use a dedicated project > in SB, although we need to keep things simple for our users so they know > where to file a bug without confusion. > We hope to proceed during Rocky but it'll probably take some time to update > our scripts and documentation, also educate our community to use the tool, > so we expect the Stein cycle the first cycle where we actually consume SB. > > I really wanted to thank the SB team for their patience and help, TripleO > is big and this migration hasn't been easy but we'll make it :-) Having used storyboard for the first time today to file a bug^Wstory in heat, I'd like to raise a couple of concerns on this migration. And by all means, if I just missed to RTFM, feel free to point me in the right direction. 1. Searching for bugs in a specific project is *extremely* cumbersome and I am not even sure I got it right (first you need to put openstack/project in the search bar, wait and click it. Then you add the term you are looking for. 
I have genuinely no idea if I get all the issues I was looking for or not as it is not obvious on what fields this search is performed 2. Advanced search is either very well hidden or not existant yet? E.g. how do you search for bugs filed by someone or over a certain release, or just generally more complex searches which are super useful in order to avoid filing duplicate bugs. I think Zane's additional list also matches my experience very well: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131365.html So my take is that a migration atm is a bit premature and I would postpone it at least to Stein. Thanks, Michele > Thanks, > > On Tue, May 15, 2018 at 7:53 AM, Alex Schultz wrote: > > > Bumping this up so folks can review this. It was mentioned in this > > week's meeting that it would be a good idea for folks to take a look > > at Storyboard to get familiar with it. The upstream docs have been > > updated[0] to point to the differences when dealing with proposed > > patches. Please take some time to review this and raise any > > concerns/issues now. > > > > Thanks, > > -Alex > > > > [0] https://docs.openstack.org/infra/manual/developers.html# > > development-workflow > > > > On Wed, May 9, 2018 at 1:24 PM, Alex Schultz wrote: > > > Hello tripleo folks, > > > > > > So we've been experimenting with migrating some squads over to > > > storyboard[0] but this seems to be causing more issues than perhaps > > > it's worth. Since the upstream community would like to standardize on > > > Storyboard at some point, I would propose that we do a cut over of all > > > the tripleo bugs/blueprints from Launchpad to Storyboard. > > > > > > In the irc meeting this week[1], I asked that the tripleo-ci team make > > > sure the existing scripts that we use to monitor bugs for CI support > > > Storyboard. I would consider this a prerequisite for the migration. > > > I am thinking it would be beneficial to get this done before or as > > > close to M2. > > > > > > Thoughts, concerns, etc? 
> > > > > > Thanks, > > > -Alex > > > > > > [0] https://storyboard.openstack.org/#!/project_group/76 > > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2018/ > > tripleo.2018-05-08-14.00.log.html#l-42 > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D From skaplons at redhat.com Fri Jun 15 09:16:21 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 15 Jun 2018 11:16:21 +0200 Subject: [openstack-dev] [neutron] Question on the OVS configuration In-Reply-To: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> References: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> Message-ID: <07D119F2-1D5A-4B81-A098-56644841AD7C@redhat.com> Also, I think that You should sent such emails to openstack at lists.openstack.org or openstack-operators at lists.openstack.org in the future :) > Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 10:17: > > Dear folks, > > I have setup a pretty simple OpenStack cluster in our lab based on devstack, couples of guest VM are running on one controller node (this doesn’t looks like a right behavior anyway), the Neutron network is configured as OVS + vxlan, the bridge “br-ex” configured as below: > > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > As you see, there is no external physical NIC bound to “br-ex”, so I guess the traffic from the VM to external will use the default route set on the controller node, since there is a NIC (eno2) that can access external so I bind it to “br-ex” like this: ovs-vsctl add-port br-ex eno2. now the “br-ex” is configured as below: > > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > *Port "eno2"* > Interface "eno2" > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > Looks like this is how it should be configured according to lots of wiki/blog suggestion I have googled, but it doesn’t work as expected, ping from the VM, the tcpdump shows the traffic still go the “eno1” which is the default route on the controller node. > > Inside of VM > ubuntu at test-br:~$ ping 8.8.8.8 > PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. > 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms > 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms > … > > Dump the traffic on the “eno2”, got nothing > $ sudo tcpdump -nn -i eno2 icmp > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes > … > > Dump the traffic on the “eno1” (internal NIC), catch it! 
> $ sudo tcpdump -nn -i eno1 icmp > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode > listening on eno1, link-type EN10MB (Ethernet), capture size 262144 bytes > 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 1, length 64 > 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 1, length 64 > 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id 1439, seq 2, length 64 > 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, seq 2, length 64 > > > $ sudo ip route > default via 192.168.18.1 dev eno1 proto static metric 100 > default via 192.168.8.1 dev eno2 proto static metric 101 > 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown > 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown > 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 metric 100 > 192.168.16.0/21 dev eno1 proto kernel scope link src 192.168.20.132 metric 100 > 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1 > > > What’s going wrong here? Do I miss something? Or some service need to be restarted? > > Anyone could help me out? This question made me sick for many days! Huge thanks in the advance! > > > Best Regards, > Dave > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From Dave.Chen at Dell.com Fri Jun 15 09:18:31 2018 From: Dave.Chen at Dell.com (Dave.Chen at Dell.com) Date: Fri, 15 Jun 2018 09:18:31 +0000 Subject: [openstack-dev] [neutron] Question on the OVS configuration In-Reply-To: References: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> Message-ID: <8946937f8a634d2faa88cd21fe47949f@KULX13MDC117.APAC.DELL.COM> Thanks Slawomir for your reply, so what's the right configuration if I want my VM could be able to access external with physical NIC "eno2"? Do I still need add that NIC into "br-ex"? Best Regards, Dave Chen -----Original Message----- From: Slawomir Kaplonski [mailto:skaplons at redhat.com] Sent: Friday, June 15, 2018 5:09 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Question on the OVS configuration Hi, If You have vxlan network than traffic from it is going via vxlan tunnel which is in br-tun bridge instead of br-ex. > Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 10:17: > > Dear folks, > > I have setup a pretty simple OpenStack cluster in our lab based on devstack, couples of guest VM are running on one controller node (this doesn’t looks like a right behavior anyway), the Neutron network is configured as OVS + vxlan, the bridge “br-ex” configured as below: > > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > As you see, there is no external physical NIC bound to “br-ex”, so I guess the traffic from the VM to external will use the default route set on the controller node, since there is a NIC (eno2) that can access external so I bind it to “br-ex” like this: ovs-vsctl add-port br-ex eno2. 
now the “br-ex” is configured as below: > > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > *Port "eno2"* > Interface "eno2" > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > Looks like this is how it should be configured according to lots of wiki/blog suggestion I have googled, but it doesn’t work as expected, ping from the VM, the tcpdump shows the traffic still go the “eno1” which is the default route on the controller node. > > Inside of VM > ubuntu at test-br:~$ ping 8.8.8.8 > PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. > 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms > 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms … > > Dump the traffic on the “eno2”, got nothing $ sudo tcpdump -nn -i eno2 > icmp > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode listening on eno2, link-type EN10MB (Ethernet), capture size > 262144 bytes … > > Dump the traffic on the “eno1” (internal NIC), catch it! > $ sudo tcpdump -nn -i eno1 icmp > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode listening on eno1, link-type EN10MB (Ethernet), capture size > 262144 bytes > 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id > 1439, seq 1, length 64 > 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, > seq 1, length 64 > 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id > 1439, seq 2, length 64 > 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, > seq 2, length 64 > > > $ sudo ip route > default via 192.168.18.1 dev eno1 proto static metric 100 default > via 192.168.8.1 dev eno2 proto static metric 101 > 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown > 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 > linkdown > 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 > metric 100 > 192.168.16.0/21 dev eno1 proto kernel scope link src 192.168.20.132 > metric 100 > 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1 > > > What’s going wrong here? Do I miss something? Or some service need to be restarted? > > Anyone could help me out? This question made me sick for many days! Huge thanks in the advance! > > > Best Regards, > Dave > > ______________________________________________________________________ > ____ OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Dave.Chen at Dell.com Fri Jun 15 09:19:59 2018 From: Dave.Chen at Dell.com (Dave.Chen at Dell.com) Date: Fri, 15 Jun 2018 09:19:59 +0000 Subject: [openstack-dev] [neutron] Question on the OVS configuration In-Reply-To: <07D119F2-1D5A-4B81-A098-56644841AD7C@redhat.com> References: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> <07D119F2-1D5A-4B81-A098-56644841AD7C@redhat.com> Message-ID: <27e84f8be71f4d4dbc561b73d8154226@KULX13MDC117.APAC.DELL.COM> Sure, I will. I just think neutron guys should know this better. 
Best Regards, Dave Chen -----Original Message----- From: Slawomir Kaplonski [mailto:skaplons at redhat.com] Sent: Friday, June 15, 2018 5:16 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Question on the OVS configuration Also, I think that You should sent such emails to openstack at lists.openstack.org or openstack-operators at lists.openstack.org in the future :) > Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 10:17: > > Dear folks, > > I have setup a pretty simple OpenStack cluster in our lab based on devstack, couples of guest VM are running on one controller node (this doesn’t looks like a right behavior anyway), the Neutron network is configured as OVS + vxlan, the bridge “br-ex” configured as below: > > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > As you see, there is no external physical NIC bound to “br-ex”, so I guess the traffic from the VM to external will use the default route set on the controller node, since there is a NIC (eno2) that can access external so I bind it to “br-ex” like this: ovs-vsctl add-port br-ex eno2. now the “br-ex” is configured as below: > > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > *Port "eno2"* > Interface "eno2" > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > Looks like this is how it should be configured according to lots of wiki/blog suggestion I have googled, but it doesn’t work as expected, ping from the VM, the tcpdump shows the traffic still go the “eno1” which is the default route on the controller node. > > Inside of VM > ubuntu at test-br:~$ ping 8.8.8.8 > PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. > 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms > 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms … > > Dump the traffic on the “eno2”, got nothing $ sudo tcpdump -nn -i eno2 > icmp > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode listening on eno2, link-type EN10MB (Ethernet), capture size > 262144 bytes … > > Dump the traffic on the “eno1” (internal NIC), catch it! > $ sudo tcpdump -nn -i eno1 icmp > tcpdump: verbose output suppressed, use -v or -vv for full protocol > decode listening on eno1, link-type EN10MB (Ethernet), capture size > 262144 bytes > 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id > 1439, seq 1, length 64 > 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, > seq 1, length 64 > 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id > 1439, seq 2, length 64 > 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, > seq 2, length 64 > > > $ sudo ip route > default via 192.168.18.1 dev eno1 proto static metric 100 default > via 192.168.8.1 dev eno2 proto static metric 101 > 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown > 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 > linkdown > 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 > metric 100 > 192.168.16.0/21 dev eno1 proto kernel scope link src 192.168.20.132 > metric 100 > 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1 > > > What’s going wrong here? 
Do I miss something? Or some service need to be restarted? > > Anyone could help me out? This question made me sick for many days! Huge thanks in the advance! > > > Best Regards, > Dave > > ______________________________________________________________________ > ____ OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jichenjc at cn.ibm.com Fri Jun 15 09:36:18 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 15 Jun 2018 17:36:18 +0800 Subject: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt' Message-ID: on patch [1] , PS 50 and PS51 just a minor rebase but PS51 start to fail on requirements-check with following error in [2] Validating lower constraints of test-requirements.txt *** Incompatible requirement found! *** See http://docs.openstack.org/developer/requirements but it doesn't provide enough info to know what's wrong , and because I didn't make too much change , curious on why the job failed... can anyone provide any hint on what happened there? thanks [1]https://review.openstack.org/#/c/523387 [2] http://logs.openstack.org/87/523387/51/check/requirements-check/3598ba0/job-output.txt.gz Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Jun 15 09:43:17 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 15 Jun 2018 11:43:17 +0200 Subject: [openstack-dev] [neutron] Question on the OVS configuration In-Reply-To: <8946937f8a634d2faa88cd21fe47949f@KULX13MDC117.APAC.DELL.COM> References: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> <8946937f8a634d2faa88cd21fe47949f@KULX13MDC117.APAC.DELL.COM> Message-ID: <5E21BE7E-A9D5-4EF2-AAC2-20EF05FAB1A5@redhat.com> Please send info about network to which You vm is connected and config of all bridges from ovs also. > Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 11:18: > > Thanks Slawomir for your reply, so what's the right configuration if I want my VM could be able to access external with physical NIC "eno2"? Do I still need add that NIC into "br-ex"? > > > Best Regards, > Dave Chen > > -----Original Message----- > From: Slawomir Kaplonski [mailto:skaplons at redhat.com] > Sent: Friday, June 15, 2018 5:09 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [neutron] Question on the OVS configuration > > Hi, > > If You have vxlan network than traffic from it is going via vxlan tunnel which is in br-tun bridge instead of br-ex. > >> Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 
10:17: >> >> Dear folks, >> >> I have setup a pretty simple OpenStack cluster in our lab based on devstack, couples of guest VM are running on one controller node (this doesn’t looks like a right behavior anyway), the Neutron network is configured as OVS + vxlan, the bridge “br-ex” configured as below: >> >> Bridge br-ex >> Controller "tcp:127.0.0.1:6633" >> is_connected: true >> fail_mode: secure >> Port phy-br-ex >> Interface phy-br-ex >> type: patch >> options: {peer=int-br-ex} >> Port br-ex >> Interface br-ex >> type: internal >> ovs_version: "2.8.0" >> >> >> >> As you see, there is no external physical NIC bound to “br-ex”, so I guess the traffic from the VM to external will use the default route set on the controller node, since there is a NIC (eno2) that can access external so I bind it to “br-ex” like this: ovs-vsctl add-port br-ex eno2. now the “br-ex” is configured as below: >> >> Bridge br-ex >> Controller "tcp:127.0.0.1:6633" >> is_connected: true >> fail_mode: secure >> Port phy-br-ex >> Interface phy-br-ex >> type: patch >> options: {peer=int-br-ex} >> *Port "eno2"* >> Interface "eno2" >> Port br-ex >> Interface br-ex >> type: internal >> ovs_version: "2.8.0" >> >> >> >> Looks like this is how it should be configured according to lots of wiki/blog suggestion I have googled, but it doesn’t work as expected, ping from the VM, the tcpdump shows the traffic still go the “eno1” which is the default route on the controller node. >> >> Inside of VM >> ubuntu at test-br:~$ ping 8.8.8.8 >> PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. >> 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms >> 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms … >> >> Dump the traffic on the “eno2”, got nothing $ sudo tcpdump -nn -i eno2 >> icmp >> tcpdump: verbose output suppressed, use -v or -vv for full protocol >> decode listening on eno2, link-type EN10MB (Ethernet), capture size >> 262144 bytes … >> >> Dump the traffic on the “eno1” (internal NIC), catch it! >> $ sudo tcpdump -nn -i eno1 icmp >> tcpdump: verbose output suppressed, use -v or -vv for full protocol >> decode listening on eno1, link-type EN10MB (Ethernet), capture size >> 262144 bytes >> 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id >> 1439, seq 1, length 64 >> 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, >> seq 1, length 64 >> 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id >> 1439, seq 2, length 64 >> 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id 1439, >> seq 2, length 64 >> >> >> $ sudo ip route >> default via 192.168.18.1 dev eno1 proto static metric 100 default >> via 192.168.8.1 dev eno2 proto static metric 101 >> 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown >> 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 >> linkdown >> 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 >> metric 100 >> 192.168.16.0/21 dev eno1 proto kernel scope link src 192.168.20.132 >> metric 100 >> 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1 >> >> >> What’s going wrong here? Do I miss something? Or some service need to be restarted? >> >> Anyone could help me out? This question made me sick for many days! Huge thanks in the advance! 
>> >> >> Best Regards, >> Dave >> >> ______________________________________________________________________ >> ____ OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From Dave.Chen at Dell.com Fri Jun 15 10:13:57 2018 From: Dave.Chen at Dell.com (Dave.Chen at Dell.com) Date: Fri, 15 Jun 2018 10:13:57 +0000 Subject: [openstack-dev] [neutron] Question on the OVS configuration In-Reply-To: <5E21BE7E-A9D5-4EF2-AAC2-20EF05FAB1A5@redhat.com> References: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> <8946937f8a634d2faa88cd21fe47949f@KULX13MDC117.APAC.DELL.COM> <5E21BE7E-A9D5-4EF2-AAC2-20EF05FAB1A5@redhat.com> Message-ID: <2e8fb8a136494cb1989db1090ac97f5f@KULX13MDC117.APAC.DELL.COM> Apologize for having sent this question to a dev mailing list first! But I humbly request to continue the discussion here. My VM is connect to a private network under demo project, here is the info of the network: $ openstack network show 64f4f4dc-a851-486a-8789-43b816d9bf3d +---------------------------+----------------------------------------------------------------------------+ | Field | Value | +---------------------------+----------------------------------------------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | nova | | created_at | 2018-06-15T04:26:18Z | | description | | | dns_domain | None | | id | 64f4f4dc-a851-486a-8789-43b816d9bf3d | | ipv4_address_scope | None | | ipv6_address_scope | None | | is_default | None | | is_vlan_transparent | None | | mtu | 1450 | | name | private | | port_security_enabled | True | | project_id | e202899d90ba449d880be42f19cd6a55 | | provider:network_type | vxlan | | provider:physical_network | None | | provider:segmentation_id | 72 | | qos_policy_id | None | | revision_number | 4 | | router:external | Internal | | segments | None | | shared | False | | status | ACTIVE | | subnets | 18a0847e-b733-4ec2-9e25-d7d630a1af2f, 91e91bab-7405-4717-97cd-4ca2cb11589d | | tags | | | updated_at | 2018-06-15T04:26:23Z | +---------------------------+----------------------------------------------------------------------------+ And below is the full output of ovs bridges. 
$ sudo ovs-vsctl show 0ee72d8a-65bc-4c82-884a-61b0e86b9893 Manager "ptcp:6640:127.0.0.1" is_connected: true Bridge br-int Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port "qvo97604b93-55" tag: 1 Interface "qvo97604b93-55" Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex} Port "qr-c3f198ac-0b" tag: 1 Interface "qr-c3f198ac-0b" type: internal Port "sg-8868b1a8-69" tag: 1 Interface "sg-8868b1a8-69" type: internal Port br-int Interface br-int type: internal Port "qvo6f012656-74" tag: 1 Interface "qvo6f012656-74" Port "fg-c4e5dcbc-a3" tag: 2 Interface "fg-c4e5dcbc-a3" type: internal Port "tap10dc7b3e-a7" tag: 1 Interface "tap10dc7b3e-a7" type: internal Port patch-tun Interface patch-tun type: patch options: {peer=patch-int} Port "qg-4014b9e8-ce" tag: 2 Interface "qg-4014b9e8-ce" type: internal Port "qr-883cea95-31" tag: 1 Interface "qr-883cea95-31" type: internal Port "sg-69c838e6-bb" tag: 1 Interface "sg-69c838e6-bb" type: internal Bridge br-tun Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port "vxlan-c0a81218" Interface "vxlan-c0a81218" type: vxlan options: {df_default="true", in_key=flow, local_ip="192.168.20.132", out_key=flow, remote_ip="192.168.18.24"} Port br-tun Interface br-tun type: internal Port patch-int Interface patch-int type: patch options: {peer=patch-tun} Bridge br-ex Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port "eno2" Interface "eno2" Port br-ex Interface br-ex type: internal ovs_version: "2.8.0" Thanks! Best Regards, Dave Chen -----Original Message----- From: Slawomir Kaplonski [mailto:skaplons at redhat.com] Sent: Friday, June 15, 2018 5:43 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [neutron] Question on the OVS configuration Please send info about network to which You vm is connected and config of all bridges from ovs also. > Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 11:18: > > Thanks Slawomir for your reply, so what's the right configuration if I want my VM could be able to access external with physical NIC "eno2"? Do I still need add that NIC into "br-ex"? > > > Best Regards, > Dave Chen > > -----Original Message----- > From: Slawomir Kaplonski [mailto:skaplons at redhat.com] > Sent: Friday, June 15, 2018 5:09 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [neutron] Question on the OVS > configuration > > Hi, > > If You have vxlan network than traffic from it is going via vxlan tunnel which is in br-tun bridge instead of br-ex. > >> Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 
10:17: >> >> Dear folks, >> >> I have setup a pretty simple OpenStack cluster in our lab based on devstack, couples of guest VM are running on one controller node (this doesn’t looks like a right behavior anyway), the Neutron network is configured as OVS + vxlan, the bridge “br-ex” configured as below: >> >> Bridge br-ex >> Controller "tcp:127.0.0.1:6633" >> is_connected: true >> fail_mode: secure >> Port phy-br-ex >> Interface phy-br-ex >> type: patch >> options: {peer=int-br-ex} >> Port br-ex >> Interface br-ex >> type: internal >> ovs_version: "2.8.0" >> >> >> >> As you see, there is no external physical NIC bound to “br-ex”, so I guess the traffic from the VM to external will use the default route set on the controller node, since there is a NIC (eno2) that can access external so I bind it to “br-ex” like this: ovs-vsctl add-port br-ex eno2. now the “br-ex” is configured as below: >> >> Bridge br-ex >> Controller "tcp:127.0.0.1:6633" >> is_connected: true >> fail_mode: secure >> Port phy-br-ex >> Interface phy-br-ex >> type: patch >> options: {peer=int-br-ex} >> *Port "eno2"* >> Interface "eno2" >> Port br-ex >> Interface br-ex >> type: internal >> ovs_version: "2.8.0" >> >> >> >> Looks like this is how it should be configured according to lots of wiki/blog suggestion I have googled, but it doesn’t work as expected, ping from the VM, the tcpdump shows the traffic still go the “eno1” which is the default route on the controller node. >> >> Inside of VM >> ubuntu at test-br:~$ ping 8.8.8.8 >> PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. >> 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms >> 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms … >> >> Dump the traffic on the “eno2”, got nothing $ sudo tcpdump -nn -i >> eno2 icmp >> tcpdump: verbose output suppressed, use -v or -vv for full protocol >> decode listening on eno2, link-type EN10MB (Ethernet), capture size >> 262144 bytes … >> >> Dump the traffic on the “eno1” (internal NIC), catch it! >> $ sudo tcpdump -nn -i eno1 icmp >> tcpdump: verbose output suppressed, use -v or -vv for full protocol >> decode listening on eno1, link-type EN10MB (Ethernet), capture size >> 262144 bytes >> 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id >> 1439, seq 1, length 64 >> 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id >> 1439, seq 1, length 64 >> 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id >> 1439, seq 2, length 64 >> 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id >> 1439, seq 2, length 64 >> >> >> $ sudo ip route >> default via 192.168.18.1 dev eno1 proto static metric 100 default >> via 192.168.8.1 dev eno2 proto static metric 101 >> 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown >> 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 >> linkdown >> 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 >> metric 100 >> 192.168.16.0/21 dev eno1 proto kernel scope link src >> 192.168.20.132 metric 100 >> 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1 >> >> >> What’s going wrong here? Do I miss something? Or some service need to be restarted? >> >> Anyone could help me out? This question made me sick for many days! Huge thanks in the advance! 
>> >> >> Best Regards, >> Dave >> >> _____________________________________________________________________ >> _ ____ OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > ______________________________________________________________________ > ____ OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From balazs.gibizer at ericsson.com Fri Jun 15 11:34:09 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 15 Jun 2018 13:34:09 +0200 Subject: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt' In-Reply-To: References: Message-ID: <1529062449.22989.2@smtp.office365.com> On Fri, Jun 15, 2018 at 11:36 AM, Chen CH Ji wrote: > on patch [1] , PS 50 and PS51 just a minor rebase but PS51 start to > fail on requirements-check with following error in [2] > > Validating lower constraints of test-requirements.txt > *** Incompatible requirement found! > *** See http://docs.openstack.org/developer/requirements > > but it doesn't provide enough info to know what's wrong , and because > I didn't make too much change , curious on why > the job failed... can anyone provide any hint on what happened there? > thanks > > [1]https://review.openstack.org/#/c/523387 > [2]http://logs.openstack.org/87/523387/51/check/requirements-check/3598ba0/job-output.txt.gz > Looking at your change and the state of the global requirements repo I see the following contradiction https://github.com/openstack/requirements/blob/a07ef1c282a37a4bcc93166ddf4cdc97f7626d5d/lower-constraints.txt#L151 says zVMCloudConnector===0.3.2 while https://review.openstack.org/#/c/523387/51/lower-constraints.txt at 173 says zVMCloudConnector==1.1.1 Based on the history of the lower-constraints.txt in the global repo you have to manually bump the lower constraint there as well https://github.com/openstack/requirements/commits/master/lower-constraints.txt Cheers, gibi > Best Regards! > > Kevin (Chen) Ji 纪 晨 > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82451493 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC From jean-philippe at evrard.me Fri Jun 15 12:12:26 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 15 Jun 2018 14:12:26 +0200 Subject: [openstack-dev] [tc] [summary] Organizational diversity tag In-Reply-To: <1528900004-sup-3478@lrrr.local> References: <5d793bcc-f872-8844-b250-e5ef9d2facc0@openstack.org> <1528900004-sup-3478@lrrr.local> Message-ID: I think PTLs would naturally like to have those updated, and for me a TC +w would make sense. But we need to have guidelines, so that's it's more tangible, and the subtlety stays impartial. 
From balazs.gibizer at ericsson.com Fri Jun 15 12:41:43 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Fri, 15 Jun 2018 14:41:43 +0200
Subject: Re: [openstack-dev] [nova] review runways check-in and feedback
In-Reply-To: <98ba549a-ca0d-eff0-5fb6-f338d187eaab@gmail.com>
References: <98ba549a-ca0d-eff0-5fb6-f338d187eaab@gmail.com>
Message-ID: <1529066503.22989.3@smtp.office365.com>

On Wed, Jun 13, 2018 at 10:33 PM, melanie witt wrote:
> Howdy everyone,
>
> We've been experimenting with a new process this cycle, Review
> Runways [1] and we're about at the middle of the cycle now as we had
> the r-2 milestone last week June 7.
>
> I wanted to start a thread and gather thoughts and feedback from the
> nova community about how they think runways have been working or not
> working and lend any suggestions to change or improve as we continue
> on in the rocky cycle.
>
> We decided to try the runways process to increase the chances of core
> reviewers converging on the same changes and thus increasing reviews
> and merges on approved blueprint work. As of today, we have 69
> blueprints approved and 28 blueprints completed, we just passed r-2
> June 7 and r-3 is July 26 and rc1 is August 9 [2].
>
> Do people feel like they've been receiving more review on their
> blueprints? Does it seem like we're completing more blueprints
> earlier? Is there feedback or suggestions for change that you can
> share?

Looking at the Queens burndown chart from Matt [3] we had 11 completed bps at Queens milestone 2. So having 28 completed bps at R-2 means a really nice improvement in our bp completion rate. I think the runways process contributed to this improvement.

Did runways solve the problem that not every equally ready patch gets equal attention from reviewers? Clearly not. But I don't think that would be a realistic goal for runways.

I suggest that in the future we continue the runway process but also revive the priority setting process. Before runways we had 3-4 bps agreed as priority work for a given cycle. I think we had these 3-4 bps in our heads for Rocky as well, we just did not write them down. I feel this causes misunderstandings about priorities, like: a) does reviewer X have the same 3-4 priority bps in her/his head as I do? b) does something that I think is part of the 3-4 priority bps have more importance than what is in a runway slot?

Of course, when I select what to review, priority is only a single factor and there are others, like:
* Do I have knowledge about the feature? (Did I review the related spec? Do I have knowledge in the domain or in the impacted code path?)
* Does it seem easy to review? (e.g. low complexity feature, small patches, well-written commit message)
* Is it something that feels important to me, regardless of the priority set by the community? (e.g. Do I get frequent company-internal questions about the feature? Do I have another feature that depends on this feature as prerequisite work?)

So during the cycle it happened that I selected patches to review even if they weren't in a runway slot, and ignored some patches from the runway slots.
Cheers, gibi [3] https://docs.google.com/spreadsheets/d/e/2PACX-1vRh5glbJ44-Ru2iARidNRa7uFfn2yjiRPjHIEQOc3Fjp5YDAlcMmXkYAEFW0WNhALl010T4rzyChuO9/pubhtml?gid=128173249&single=true > > > Thanks all, > -melanie > > [1] https://etherpad.openstack.org/p/nova-runways-rocky > [2] https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zigo at debian.org Fri Jun 15 12:52:03 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 15 Jun 2018 14:52:03 +0200 Subject: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already? In-Reply-To: References: <7dbdc6dd-6f3b-631d-7328-61d06961c96f@debian.org> Message-ID: <2b9c548c-feb1-43f0-5649-c737b654bedb@debian.org> On 06/14/2018 01:10 PM, Erlon Cruz wrote: > Hi Thomas, > > The reserved_percentage *is* taken in account for non thin provisoning > backends. So you can use it to spare the space you need for backups. It > is a per backend configuration. Oh. Reading the doc, I thought it was only for thin provisioning, it's nice if it works with "normal" cinder LVM then ... :P When you say "per backend", does it means it can be set differently on each volume node? > If you have already tried to used it and that is not working, please let > us know what release you are using, because despite this being the > current (and proper) behavior, it might not being like this in the past. > > Erlon Will do, thanks. Cheers, Thomas From doug at doughellmann.com Fri Jun 15 13:01:24 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 15 Jun 2018 09:01:24 -0400 Subject: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt' In-Reply-To: <1529062449.22989.2@smtp.office365.com> References: <1529062449.22989.2@smtp.office365.com> Message-ID: <1529067528-sup-5035@lrrr.local> Excerpts from Balázs Gibizer's message of 2018-06-15 13:34:09 +0200: > > On Fri, Jun 15, 2018 at 11:36 AM, Chen CH Ji > wrote: > > on patch [1] , PS 50 and PS51 just a minor rebase but PS51 start to > > fail on requirements-check with following error in [2] > > > > Validating lower constraints of test-requirements.txt > > *** Incompatible requirement found! > > *** See http://docs.openstack.org/developer/requirements > > > > but it doesn't provide enough info to know what's wrong , and because > > I didn't make too much change , curious on why > > the job failed... can anyone provide any hint on what happened there? 
> > thanks > > > > [1]https://review.openstack.org/#/c/523387 > > [2]http://logs.openstack.org/87/523387/51/check/requirements-check/3598ba0/job-output.txt.gz > > > > Looking at your change and the state of the global requirements repo I > see the following contradiction > https://github.com/openstack/requirements/blob/a07ef1c282a37a4bcc93166ddf4cdc97f7626d5d/lower-constraints.txt#L151 > says zVMCloudConnector===0.3.2 > while > https://review.openstack.org/#/c/523387/51/lower-constraints.txt at 173 > says zVMCloudConnector==1.1.1 > > Based on the history of the lower-constraints.txt in the global repo > you have to manually bump the lower constraint there as well > https://github.com/openstack/requirements/commits/master/lower-constraints.txt No, that file is not used and does not need to be changed. The lower constraints tests only look at files in the same repo. The minimum versions of dependencies set in requirements.txt, test-requirements.txt, etc. need to match the values in lower-constraints.txt. In this case, the more detailed error message is a few lines above the error quoted by Chen CH Ji. The detail say "Requirement for package retrying has no lower bound" which means that there is a line in requirements.txt indicating a dependency on "retrying" but without specifying a minimum version. That is the problem. Doug From kchamart at redhat.com Fri Jun 15 13:08:07 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 15 Jun 2018 15:08:07 +0200 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <4bd5a90a-46f4-492f-4f13-201872d43919@fried.cc> References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> <20180608093545.GE11695@paraplu> <20180611095529.GA3344@redhat> <20180611150604.GF11695@paraplu> <4bd5a90a-46f4-492f-4f13-201872d43919@fried.cc> Message-ID: <20180615130807.GG11695@paraplu> On Mon, Jun 11, 2018 at 10:14:33AM -0500, Eric Fried wrote: > I thought we were leaning toward the option where nova itself doesn't > impose a limit, but lets the virt driver decide. Yeah, I agree with that, if we can't arrive at a sensible limit for Nova, after testing with all drivers that matter (which I doubt will happen anytime soon). [...] -- /kashyap From openstack at fried.cc Fri Jun 15 13:35:50 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 15 Jun 2018 08:35:50 -0500 Subject: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad In-Reply-To: <3cf75c77-d513-c779-7c74-3211ec9724e8@gmail.com> References: <3cf75c77-d513-c779-7c74-3211ec9724e8@gmail.com> Message-ID: <3e3d9878-78f1-f78c-4148-4b2b3f6271b9@fried.cc> We just merged an initial pass at direct access to the placement service [1]. See the test_direct suite for simple usage examples. Note that this was written primarily to satisfy the FFU use case in blueprint reshape-provider-tree [2] and therefore likely won't have everything cinder needs. So play around with it, but please do not put it anywhere near production until we've had some more collab. Find us in #openstack-placement. -efried [1] https://review.openstack.org/572576 [2] https://review.openstack.org/572583 On 06/04/2018 07:57 AM, Jay S Bryant wrote: > > > On 6/1/2018 7:28 PM, Chris Dent wrote: >> On Wed, 9 May 2018, Chris Dent wrote: >> >>> I've started an etherpad for the forum session in Vancouver devoted >>> to discussing the possibility of tracking and allocation resources >>> in Cinder using the Placement service. 
This is not a done deal. >>> Instead the session is to discuss if it could work and how to make >>> it happen if it seems like a good idea. >>> >>> The etherpad is at >>> >>>    https://etherpad.openstack.org/p/YVR-cinder-placement >> >> The session went well. Some of the members of the cinder team who >> might have had more questions had not been able to be at summit so >> we were unable to get their input. >> >> We clarified some of the things that cinder wants to be able to >> accomplish (run multiple schedulers in active-active and avoid race >> conditions) and the fact that this is what placement is built for. >> We also made it clear that placement itself can be highly available >> (and scalable) because of its nature as a dead-simple web app over a >> database. >> >> The next steps are for the cinder team to talk amongst themselves >> and socialize the capabilities of placement (with the help of >> placement people) and see if it will be suitable. It is unlikely >> there will be much visible progress in this area before Stein. > Chris, > > Thanks for this update.  I have it on the agenda for the Cinder team to > discuss this further.  We ran out of time in last week's meeting but > will hopefully get some time to discuss it this week.  We will keep you > updated as to how things progress on our end and pull in the placement > guys as necessary.  > > Jay >> >> See the etherpad for a bit more detail. >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Fri Jun 15 13:41:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 15 Jun 2018 09:41:41 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> <1529010884-sup-4343@lrrr.local> <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> Message-ID: <1529070040-sup-2028@lrrr.local> Excerpts from Ghanshyam's message of 2018-06-15 09:04:35 +0900: > > > > ---- On Fri, 15 Jun 2018 06:17:34 +0900 Doug Hellmann wrote ---- > > Excerpts from Doug Hellmann's message of 2018-06-14 13:02:31 -0400: > > > Excerpts from Ghanshyam's message of 2018-06-14 16:54:33 +0900: > > > > > > > > > Could it be as simple as adding tempest-full-py3 with the > > > > > > > required-projects list updated to include the current repository? So > > > > > > > there isn't a special separate job, and we would just reuse > > > > > > > tempest-full-py3 for this? > > > > > > > > This can work if lib-forward-testing is going to run against current lib repo only not cross lib or cross project. 
For example, if neutron want to tests neutron change against neutron-lib src then this will not work. But from history [1] this does not seems to be scope of lib-forward-testing. > > > > > > > > Even we do not need to add current repo to required-projects list or in LIBS_FROM_GIT . That will always from master + current patch changes. So this makes no change in tempest-full-py3 job and we can directly use tempest-full-py3 job in lib-forward-testing. Testing in [2]. > > > > > > Does it? So if I add tempest-full-py3 to a *library* that library is > > > installed from source in the job? I know the source for the library > > > will be checked out, but I'm surprised that devstack would be configured > > > to use it. How does that work? > > > > Based on my testing, that doesn't seem to be the case. I added it to > > oslo.config and looking at the logs [1] I do not set LIBS_FROM_GIT set > > to include oslo.config and the check function is returning false so that > > it is not installed from source [2]. > > Yes, It will not be set on LIBS_FROM_GIT as we did not set it explicitly. But gate running on any repo does run job on current change set of that repo which is nothing but "master + current patch changes" . For example, any job running on oslo.config patch will take oslo.config source code from that patch which is "master + current change". You can see the results in this patch - https://review.openstack.org/#/c/575324/ . Where I deleted a module and gate jobs (including tempest-full-py3) fails as they run on current change set of neutron-lib code not on pypi version(which would pass the tests). The tempest-full-py3 job passed for that patch, though. Which seems to indicate that the neutron-lib repository was not used in the test job, even though it was checked out. > > In that case, lib's proposed change will be tested against integration tests job to check any regression. If we need to run cross lib/project testing of any lib then, yes we need the 'tempest-full-py3-src' job but that is separate things as you mentioned. > > -gmann > > > > > So, I think we need the tempest-full-py3-src job. I will propose an > > update to the tempest repo to add that. > > > > Doug > > > > [1] http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz > > [2] http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz#_2018-06-14_19_40_56_223136 > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From bodenvmw at gmail.com Fri Jun 15 13:56:29 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Fri, 15 Jun 2018 07:56:29 -0600 Subject: [openstack-dev] [tricircle] Zuul v3 integration status Message-ID: <922b0570-988e-98d2-56db-615d388de1f6@gmail.com> Is there anyone who can speak to the status of tricircle's adoption of Zuul v3? As per [1] it doesn't seem like the project is setup properly for Zuul v3. Thus, it's difficult/impossible to land patches like [2] that require neutron/master + a depends on patch. Assuming tricircle is still being maintained, IMO we need to find a way to get it up to speed with zuul v3; otherwise some of our neutron efforts will be held up, or tricircle will fall behind with respect to neutron-lib adoption. 
Thanks [1] https://bugs.launchpad.net/tricircle/+bug/1776922 [2] https://review.openstack.org/#/c/565879/ From aschultz at redhat.com Fri Jun 15 13:55:15 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 15 Jun 2018 07:55:15 -0600 Subject: [openstack-dev] [tripleo] Migration to Storyboard In-Reply-To: <20180615091254.GB4410@palahniuk.int.rhx> References: <20180615091254.GB4410@palahniuk.int.rhx> Message-ID: On Fri, Jun 15, 2018 at 3:12 AM, Michele Baldessari wrote: > On Mon, May 21, 2018 at 01:58:26PM -0700, Emilien Macchi wrote: >> During the Storyboard session today: >> https://etherpad.openstack.org/p/continuing-the-migration-lp-sb >> >> We mentioned that TripleO would continue to migrate during Rocky cycle. >> Like Alex mentioned in this thread, we need to migrate the scripts used by >> the CI squad so they work with SB. >> Once this is done, we'll proceed to the full migration of all blueprints >> and bugs into tripleo-common project in SB. >> Projects like tripleo-validations, tripleo-ui (more?) who have 1:1 mapping >> between their "name" and project repository could use a dedicated project >> in SB, although we need to keep things simple for our users so they know >> where to file a bug without confusion. >> We hope to proceed during Rocky but it'll probably take some time to update >> our scripts and documentation, also educate our community to use the tool, >> so we expect the Stein cycle the first cycle where we actually consume SB. >> >> I really wanted to thank the SB team for their patience and help, TripleO >> is big and this migration hasn't been easy but we'll make it :-) > > Having used storyboard for the first time today to file a bug^Wstory in heat, > I'd like to raise a couple of concerns on this migration. And by all > means, if I just missed to RTFM, feel free to point me in the right > direction. > > 1. Searching for bugs in a specific project is *extremely* cumbersome > and I am not even sure I got it right (first you need to put > openstack/project in the search bar, wait and click it. Then you add > the term you are looking for. I have genuinely no idea if I get all > the issues I was looking for or not as it is not obvious on what > fields this search is performed > 2. Advanced search is either very well hidden or not existant yet? > E.g. how do you search for bugs filed by someone or over a certain > release, or just generally more complex searches which are super > useful in order to avoid filing duplicate bugs. > > I think Zane's additional list also matches my experience very well: > http://lists.openstack.org/pipermail/openstack-dev/2018-June/131365.html > > So my take is that a migration atm is a bit premature and I would > postpone it at least to Stein. > Given that my original request was to try and do it prior to M2 and that's past, I think I'd also side with waiting until early Stein to continue. Let's focus on the Rocky work and push this to early Stein instead. I'll talk about this in the next IRC meeting if anyone wishes to discuss further. Thanks, -Alex > Thanks, > Michele > >> Thanks, >> >> On Tue, May 15, 2018 at 7:53 AM, Alex Schultz wrote: >> >> > Bumping this up so folks can review this. It was mentioned in this >> > week's meeting that it would be a good idea for folks to take a look >> > at Storyboard to get familiar with it. The upstream docs have been >> > updated[0] to point to the differences when dealing with proposed >> > patches. Please take some time to review this and raise any >> > concerns/issues now. 
>> > >> > Thanks, >> > -Alex >> > >> > [0] https://docs.openstack.org/infra/manual/developers.html# >> > development-workflow >> > >> > On Wed, May 9, 2018 at 1:24 PM, Alex Schultz wrote: >> > > Hello tripleo folks, >> > > >> > > So we've been experimenting with migrating some squads over to >> > > storyboard[0] but this seems to be causing more issues than perhaps >> > > it's worth. Since the upstream community would like to standardize on >> > > Storyboard at some point, I would propose that we do a cut over of all >> > > the tripleo bugs/blueprints from Launchpad to Storyboard. >> > > >> > > In the irc meeting this week[1], I asked that the tripleo-ci team make >> > > sure the existing scripts that we use to monitor bugs for CI support >> > > Storyboard. I would consider this a prerequisite for the migration. >> > > I am thinking it would be beneficial to get this done before or as >> > > close to M2. >> > > >> > > Thoughts, concerns, etc? >> > > >> > > Thanks, >> > > -Alex >> > > >> > > [0] https://storyboard.openstack.org/#!/project_group/76 >> > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2018/ >> > tripleo.2018-05-08-14.00.log.html#l-42 >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> >> >> -- >> Emilien Macchi > >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cdent+os at anticdent.org Fri Jun 15 14:00:05 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 15 Jun 2018 15:00:05 +0100 (BST) Subject: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad In-Reply-To: <3e3d9878-78f1-f78c-4148-4b2b3f6271b9@fried.cc> References: <3cf75c77-d513-c779-7c74-3211ec9724e8@gmail.com> <3e3d9878-78f1-f78c-4148-4b2b3f6271b9@fried.cc> Message-ID: On Fri, 15 Jun 2018, Eric Fried wrote: > We just merged an initial pass at direct access to the placement service > [1]. See the test_direct suite for simple usage examples. > > Note that this was written primarily to satisfy the FFU use case in > blueprint reshape-provider-tree [2] and therefore likely won't have > everything cinder needs. So play around with it, but please do not put > it anywhere near production until we've had some more collab. Find us > in #openstack-placement. Just to word this a bit more strongly (see also http://p.anticdent.org/2nbF, where this is paraphrased from): It would be bad news for cinder to start from placement direct. Better would be for cinder to figure out how to use placement "normally", and then for the standalone special case, consider placement direct or something derived from it. PlacementDirect, as currently written, is really for special cases only, for use in extremis only. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sombrafam at gmail.com Fri Jun 15 13:59:52 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Fri, 15 Jun 2018 10:59:52 -0300 Subject: [openstack-dev] [cinder] backups need reserved space for LVM snapshots: do we have it implemented already? In-Reply-To: <2b9c548c-feb1-43f0-5649-c737b654bedb@debian.org> References: <7dbdc6dd-6f3b-631d-7328-61d06961c96f@debian.org> <2b9c548c-feb1-43f0-5649-c737b654bedb@debian.org> Message-ID: Hi Thomas, Yes. If you have more than 1 volume node, or 1 volume node with multiple backends definitions. Each volume node should have at least one [backend] that will point to your storage configuration. You can add that config for each of them. Erlon Em sex, 15 de jun de 2018 às 09:52, Thomas Goirand escreveu: > On 06/14/2018 01:10 PM, Erlon Cruz wrote: > > Hi Thomas, > > > > The reserved_percentage *is* taken in account for non thin provisoning > > backends. So you can use it to spare the space you need for backups. It > > is a per backend configuration. > > Oh. Reading the doc, I thought it was only for thin provisioning, it's > nice if it works with "normal" cinder LVM then ... :P > > When you say "per backend", does it means it can be set differently on > each volume node? > > > If you have already tried to used it and that is not working, please let > > us know what release you are using, because despite this being the > > current (and proper) behavior, it might not being like this in the past. > > > > Erlon > > Will do, thanks. > > Cheers, > > Thomas > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmccowan at cisco.com Fri Jun 15 14:41:14 2018 From: dmccowan at cisco.com (Dave McCowan (dmccowan)) Date: Fri, 15 Jun 2018 14:41:14 +0000 Subject: [openstack-dev] [barbican] NEW weekly meeting time In-Reply-To: <1529008217.7441.68.camel@redhat.com> References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> <1518792130.19501.1.camel@redhat.com> <1520280969.25743.54.camel@redhat.com> <1524239859.2972.74.camel@redhat.com> <1529008217.7441.68.camel@redhat.com> Message-ID: +1 This is a great time. On 6/14/18, 4:30 PM, "Ade Lee" wrote: >The new time slot has been pretty difficult for folks to attend. >I'd like to propose a new time slot, which will hopefully be more >amenable to everyone. > >Tuesday 12:00 UTC > >https://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=00&se >c=0 > >This works out to 8 am EST, around 1pm in Europe, and 8 pm in China. >Please vote by responding to this email. 
> >Thanks, >Ade > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Fri Jun 15 15:00:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Jun 2018 15:00:50 +0000 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages Message-ID: <20180615150050.hhz777oa35junk5c@yuggoth.org> Governance tooling change https://review.openstack.org/575554 is currently up for review to start displaying current PTL E-mail addresses on the team specific pages linked from the projects index https://governance.openstack.org/tc/reference/projects/ page. Since https://review.openstack.org/234420 merged a few years ago we've been tracking PTL E-mail addresses in the structured data from which we generate those pages, but the Sphinx extension we're using was never amended to include them. Having somewhere consistent to point members of the community who need to reach out to the PTL of a team in private (and may not have easy access or comfort to do so via IRC privmsg) would be useful. Right now we're stuck telling people to dig around in a YAML file for them, which is not an especially friendly answer. A knee-jerk reaction any time E-mail addresses get displayed somewhere new is that it's going to increase the amount of spam those addresses receive. Keep in mind that we've been publishing all of them on a Web page for years now, just one which is only convenient for spammers and not one which is convenient for people who might have a legitimate need to contact our PTLs. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cdent+os at anticdent.org Fri Jun 15 15:04:27 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 15 Jun 2018 16:04:27 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-24 Message-ID: HTML: https://anticdent.org/placement-update-18-24.html This is placement update 18-24, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). It's been quite a while since the last one, mostly because of travel, but also because coming to grips with the placement universe takes some time. Catching up will mean that this update is likely to be a bit long. Bear with it. This is obviously an _expand_ style update (where we add new stuff). Next week will be a _contract_. One thing I'd like to highlight is that with the merge of change [560459](https://review.openstack.org/#/c/560459/) we've hit a long promised milestone with placement. Thanks to an initial hit by Eric Fried and considerable followups by Bhagyashri Shewale, we now have rudimentary support in nova for libvirt-using compute nodes that use shared disk to accurately report and claim that disk. Using it requires some currently manual set up for the resource provider associated with the disk and creating the aggregate of that disk with the compute nodes that use it. But: this is one of the earliest promises provided by the placement concept, in the works for more than two years by many different people, finally showing up. Open the bubbly or something, a light celebration is in order. 
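To give a flavour of what that manual set up involves, here is a rough sketch done straight against the placement HTTP API; it is an illustration, not the blessed procedure. The provider name, disk size and aggregate uuid are invented, the credentials are placeholders, and microversion 1.19 is assumed so that aggregate updates carry the provider generation:

```python
import uuid

from keystoneauth1 import adapter, loading, session

# Placeholder credentials; build an authenticated adapter for placement.
auth = loading.get_plugin_loader('password').load_from_options(
    auth_url='http://controller/identity', username='admin',
    password='secret', project_name='admin',
    user_domain_id='default', project_domain_id='default')
placement = adapter.Adapter(session=session.Session(auth=auth),
                            service_type='placement', interface='public')

HDR = {'OpenStack-API-Version': 'placement 1.19'}
agg = str(uuid.uuid4())     # the aggregate shared by the disk and computes
shared = str(uuid.uuid4())  # the shared disk resource provider

# Create the provider that represents the shared disk.
placement.post('/resource_providers', headers=HDR,
               json={'name': 'shared-disk-1', 'uuid': shared})

# Report its DISK_GB inventory; a freshly created provider has generation 0.
inv = placement.put(
    '/resource_providers/%s/inventories' % shared, headers=HDR,
    json={'resource_provider_generation': 0,
          'inventories': {'DISK_GB': {'total': 4096}}}).json()

# Mark it as sharing that inventory with the members of its aggregates.
traits = placement.put(
    '/resource_providers/%s/traits' % shared, headers=HDR,
    json={'resource_provider_generation':
              inv['resource_provider_generation'],
          'traits': ['MISC_SHARES_VIA_AGGREGATE']}).json()

# Put the provider in the aggregate. Every compute node provider that
# consumes the shared disk needs the same aggregate set on it too.
placement.put(
    '/resource_providers/%s/aggregates' % shared, headers=HDR,
    json={'aggregates': [agg],
          'resource_provider_generation':
              traits['resource_provider_generation']})
```

The osc-placement plugin exposes roughly the same operations from the command line.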
The flip side of this is that it highlights that we have a growing documentation debt with the many features provided by placement and how to make best use of them in nova (and other services that might like to use placement). Before the end of the cycle we will need to be sure that we set aside a considerable chunk of time to address this gap. # Most Important Getting nested providers and consumer generations working are still the key pieces of work. See the links in the themes below. A lot of complicated work is in progress or recently merged and we are getting deeper into the cycle. There are going to be bugs. The sooner we get stuff merged so it has time to interact and we have time to experiment with it the better. And there's also that documentation gap mentioned above. Also a reminder that for blueprints that have code that is ready for wide review, put it on the [runway](https://etherpad.openstack.org/p/nova-runways-rocky). # What's Changed (This is rather long because of the gap since the last report, but also because we've hit a point where lots of stuff can merge.) Discussion revealed an issue with allocations and inventory that exists on a top-level resource provider which we'd later like to move to a nested provider. An example is VGPU inventory which, until sometime very soon, was represented as inventory on the compute node (I think). Fixing this should be an atomic operation so a spec is in progress for [Handling Reshaped Provider Trees](https://review.openstack.org/#/c/572583/). This suggests a new `/migrator` URI in the placement service, and for the sake of fast-forward-upgrades, a way to reach that URI from a within-process placement service (rather than over HTTP). The [PlacementDirect](https://review.openstack.org/#/c/572576/) tool has been created to allow this and has merged. Quite a lot of work will need to be done to implement that spec, so I'm going to add it as a theme (below). Nova now requires the 1.25 placement microversion. It will go up again soon. The groundwork for consumer generations (including requiring some form of project and user on all allocations) has merged. What remains is exposing it all at the API layer. The placement version discovery document was incomplete, causing trouble for certain ways of using the openstacksdk. This has [been fixed](https://review.openstack.org/#/c/575117/). Placement now supports granular policy (policy per URI) in-code, with customization possible via a policy file. A potential 500 when listing usage information has been fixed. There is now a [heal allocations CLI](https://review.openstack.org/#/c/565886/) which is designed to help people migrate away from the CachingScheduler (which doesn't use placement). Nova host aggregates are now magically mirrored as placement aggregates and, amongst other things, this is used to honor the [availability_zone hint via placement](https://review.openstack.org/#/c/546282/). # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 16, same as last time, but a different set of bugs. * [In progress placement bugs](https://goo.gl/vzGGDQ) 9, -1 on last time. # Specs Total four weeks ago: 13. Now: 13 Spec-freeze has passed, so presumably exceptions will be required for these. There's already a notional exception for "Reshaped Provider Trees". 
* VMware: place instances on resource pool (using update_provider_tree) * Proposes NUMA topology with RPs * Account for host agg allocation ratio in placement * Support default allocation ratios * Spec on preemptible servers * Standardize CPU resource tracking * Propose counting quota usage from placement * Add history behind nullable project_id and user_id * Placement: any traits in allocation_candidate query * Placement: support mixing required traits with any traits * [WIP] Support Placement in Cinder * Handling Reshaped Provider Trees * Count quota based on resource class # Main Themes "Mirror nova host aggregates to placement" and "Granular" are done, so no longer listed as a theme. "Reshaped Provider Trees" is added because we're stuck if we don't do it. ## Nested providers in allocation candidates Quite a bit of the work related to nested providers in allocation candidates has merged. What remains is on this topic: * Eric noticed that in this process we've injected some changes in behavior in Rocky in the response to /allocation_candidates without guarding it by microversion changes. There's [some discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2018-06-14.log.html#t2018-06-14T16:53:06) about it in IRC. First with me and then later with Jay. The gist is that it's unfortunate that happened, but it's not a disaster and the best outcome is that the diff between Queens and Rocky demonstrates the right behavior. ## Consumer Generations This allows multiple agents to "safely" update allocations for a single consumer. The code is in progress: * As noted above, much of this is merged. Most of what is left is exposing the functionality at the API level. ## Reshaped Provider Trees This allows moving inventory and allocations that were on resource provider A to resource provider B in an atomic fashion. Right now this is a spec on the following topic: * A glance at the spec will reveal that this is a multi-faceted and multi-party effort. Nine people are listed in the Assignee section. The placement direct part merged today. # Extraction The placement [db connection](https://review.openstack.org/#/c/362766/) change has been previously +W but since had a few merge conflicts. It presumably will merge soon. This will allow installations to optionally use a separate database for placement data. When that merges a [zuul](https://review.openstack.org/#/c/564067/) change to use it will adjust the nova-next job. The changes required to devstack are already in place. A stack of changes to placement unit tests to make them not rely on nova.test has merged. There are functional tests remaining which still use that. If you are looking for extraction-related work, finding ways in which nova code is imported but isn't really needed is a good way to make progress. A while back, Jay made a first pass at an [os-resource-classes](https://github.com/jaypipes/os-resource-classes/), which needs some additional eyes on it. I personally thought it might be heavier than required. If you have ideas please share them. The placement extraction [forum session](https://etherpad.openstack.org/p/YVR-placement-extraction) went well. There was pretty good consensus from the people in the room and we got some useful feedback from some operators on how things ought to work. An area we will need to prepare for is dealing with the various infra and co-gating issues that will come up once placement is extracted. # Other 19 entries four weeks ago. 23 now. 
Some of the older items in this list are not getting much attention. That's a shame. The list is ordered (oldest first) the way it is on purpose. * Purge comp_node and res_prvdr records during deletion of cells/hosts * A huge pile of improvements to osc-placement * Get resource provider by uuid or name (osc-placement) * placement: Make API history doc more consistent * Handle agg generation conflict in report client * Add unit test for non-placement resize * cover migration cases with functional tests * Bug fixes for sharing resource providers * Move refresh time from report client to prov tree * PCPU resource class * rework how we pass candidate request information * add root parent NULL online migration * add resource_requests field to RequestSpec * replace deprecated accept.best_match * Don't heal allocations for deleted servers * Ignore UserWarning for scope checks during test runs * Enforce placement minimum in nova.cmd.status * normalize_name helper (in os-traits) * Fix nits in nested provider allocation candidates(2) * Convert driver supported capabilities to compute node provider traits * Use placement.inventory.inuse in report client * ironic: Report resources as reserved when needed * Test for multiple limit/group_policy qparams # End Yow. That was long. Thanks for reading. Review some code please. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From prometheanfire at gentoo.org Fri Jun 15 15:23:36 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 15 Jun 2018 10:23:36 -0500 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages In-Reply-To: <20180615150050.hhz777oa35junk5c@yuggoth.org> References: <20180615150050.hhz777oa35junk5c@yuggoth.org> Message-ID: <20180615152336.fles6tu7xerw6x2r@gentoo.org> On 18-06-15 15:00:50, Jeremy Stanley wrote: > Governance tooling change https://review.openstack.org/575554 is > currently up for review to start displaying current PTL E-mail > addresses on the team specific pages linked from the projects index > https://governance.openstack.org/tc/reference/projects/ page. > > Since https://review.openstack.org/234420 merged a few years ago > we've been tracking PTL E-mail addresses in the structured data from > which we generate those pages, but the Sphinx extension we're using > was never amended to include them. Having somewhere consistent to > point members of the community who need to reach out to the PTL of a > team in private (and may not have easy access or comfort to do so via > IRC privmsg) would be useful. Right now we're stuck telling people > to dig around in a YAML file for them, which is not an especially > friendly answer. > > A knee-jerk reaction any time E-mail addresses get displayed > somewhere new is that it's going to increase the amount of spam > those addresses receive. Keep in mind that we've been publishing all > of them on a Web page for years now, just one which is only > convenient for spammers and not one which is convenient for people > who might have a legitimate need to contact our PTLs. > -- > Jeremy Stanley Not sure it'd help but one option we do is to create aliases based on the title. Though since the PTLs don't have addresses on the openstack domain an alias may not make as much sense, it'd have to be a full account forward. It's useful for centralized spam filtering. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Fri Jun 15 15:29:07 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Jun 2018 15:29:07 +0000 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages In-Reply-To: <20180615152336.fles6tu7xerw6x2r@gentoo.org> References: <20180615150050.hhz777oa35junk5c@yuggoth.org> <20180615152336.fles6tu7xerw6x2r@gentoo.org> Message-ID: <20180615152907.5rhw4mfeggrfqtca@yuggoth.org> On 2018-06-15 10:23:36 -0500 (-0500), Matthew Thode wrote: [...] > Not sure it'd help but one option we do is to create aliases based > on the title. Though since the PTLs don't have addresses on the > openstack domain an alias may not make as much sense, it'd have to > be a full account forward. It's useful for centralized spam > filtering. I'm not personally comfortable having someone else decide for me what is or isn't spam, and I doubt I'm alone in that. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From prometheanfire at gentoo.org Fri Jun 15 15:34:33 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 15 Jun 2018 10:34:33 -0500 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages In-Reply-To: <20180615152907.5rhw4mfeggrfqtca@yuggoth.org> References: <20180615150050.hhz777oa35junk5c@yuggoth.org> <20180615152336.fles6tu7xerw6x2r@gentoo.org> <20180615152907.5rhw4mfeggrfqtca@yuggoth.org> Message-ID: <20180615153433.ibw3raotsyvgmex6@gentoo.org> On 18-06-15 15:29:07, Jeremy Stanley wrote: > On 2018-06-15 10:23:36 -0500 (-0500), Matthew Thode wrote: > [...] > > Not sure it'd help but one option we do is to create aliases based > > on the title. Though since the PTLs don't have addresses on the > > openstack domain an alias may not make as much sense, it'd have to > > be a full account forward. It's useful for centralized spam > > filtering. > > I'm not personally comfortable having someone else decide for me > what is or isn't spam, and I doubt I'm alone in that. > -- > Jeremy Stanley That makes sense, it'd only be for openstack ptl emails, still makes sense. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jean-philippe at evrard.me Fri Jun 15 15:37:02 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 15 Jun 2018 17:37:02 +0200 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages In-Reply-To: <20180615152336.fles6tu7xerw6x2r@gentoo.org> References: <20180615150050.hhz777oa35junk5c@yuggoth.org> <20180615152336.fles6tu7xerw6x2r@gentoo.org> Message-ID: > Not sure it'd help but one option we do is to create aliases based on > the title. Though since the PTLs don't have addresses on the openstack > domain an alias may not make as much sense, it'd have to be a full > account forward. It's useful for centralized spam filtering. I foresee this: 1) We create an alias to PTL email 2) PTL think that kind of emails are worth sharing with a team to balance work 3) We now have a project mailing list 4) People stop using openstack-dev lists. But that's maybe me... From corvus at inaugust.com Fri Jun 15 15:46:40 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Fri, 15 Jun 2018 08:46:40 -0700 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1529070040-sup-2028@lrrr.local> (Doug Hellmann's message of "Fri, 15 Jun 2018 09:41:41 -0400") References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> <1529010884-sup-4343@lrrr.local> <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> <1529070040-sup-2028@lrrr.local> Message-ID: <87in6kaw7z.fsf@meyer.lemoncheese.net> Doug Hellmann writes: > Excerpts from Ghanshyam's message of 2018-06-15 09:04:35 +0900: >> Yes, It will not be set on LIBS_FROM_GIT as we did not set it >> explicitly. But gate running on any repo does run job on current >> change set of that repo which is nothing but "master + current patch >> changes" . For example, any job running on oslo.config patch will >> take oslo.config source code from that patch which is "master + >> current change". You can see the results in this patch - >> https://review.openstack.org/#/c/575324/ . Where I deleted a module >> and gate jobs (including tempest-full-py3) fails as they run on >> current change set of neutron-lib code not on pypi version(which >> would pass the tests). > > The tempest-full-py3 job passed for that patch, though. Which seems to > indicate that the neutron-lib repository was not used in the test job, > even though it was checked out. The automatic generation of LIBS_FROM_GIT only includes projects which appear in required-projects. So in this case neutron-lib does not appear in LIBS_FROM_GIT[1], so the change is not actually tested by that job. Doug's approach of adding {{zuul.project}} to LIBS_FROM_GIT would work, but anytime LIBS_FROM_GIT is set explicitly, it turns off the automatic generation, so more complex jobs (which may want to inherit from that job but need multiple libraries) would also have to override LIBS_FROM_GIT and add the full set of projects. The code that automatically sets LIBS_FROM_GIT is fairly simple and could be modified to automatically add the project of the change under test. We could do that for all jobs, or we could add a flag which toggles the behavior. The question to answer here is: is there ever a case where a devstack job should not install the change under test from source? I think the answer is no, and even though under Zuul v2 devstack-gate didn't automatically add the project under test to LIBS_FROM_GIT, we probably had that behavior anyway due to some JJB templating which did. A further thing to consider is what the desired behavior is for a series of changes. If a change to neutron-lib depends on a change to oslo.messaging, when the forward testing job runs on neutron-lib, should it also add oslo.messaging to LIBS_FROM_GIT? That's equally easy to implement (but would certainly need a flag as it essentially would add everything in the change series to LIBS_FROM_GIT which defeats the purpose of the restriction for the jobs which need it), but I honestly am not certain what's desired. For each type of project (service, lib, lib-group (eg oslo.messaging)), what do we want to test from git vs pypi? How many jobs are needed to accomplish that? What should happen with a change series with other projects in it? 
[1] http://logs.openstack.org/24/575324/3/check/tempest-full-py3/d183788/controller/logs/_.localrc_auto.txt -Jim From skaplons at redhat.com Fri Jun 15 15:53:47 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 15 Jun 2018 17:53:47 +0200 Subject: [openstack-dev] [neutron] Question on the OVS configuration In-Reply-To: <2e8fb8a136494cb1989db1090ac97f5f@KULX13MDC117.APAC.DELL.COM> References: <8063e93ac6784f76a8f78844229403cd@KULX13MDC117.APAC.DELL.COM> <8946937f8a634d2faa88cd21fe47949f@KULX13MDC117.APAC.DELL.COM> <5E21BE7E-A9D5-4EF2-AAC2-20EF05FAB1A5@redhat.com> <2e8fb8a136494cb1989db1090ac97f5f@KULX13MDC117.APAC.DELL.COM> Message-ID: You are using vxlan network which is not going through br-ex but via br-tun. In br-tun You have established vxlan tunnel: > Port "vxlan-c0a81218" > Interface "vxlan-c0a81218" > type: vxlan > options: {df_default="true", in_key=flow, local_ip="192.168.20.132", out_key=flow, remote_ip="192.168.18.24”} So traffic from Your vm is going via this tunnel to remote end with IP 192.168.18.24 from local IP 192.168.20.132 This local IP 192.168.20.132 is probably configured on Your eno1 interface. Ovs is sending packets to remote IP according To Your routing table so packets to 192.168.18.24 are going via eno1. If You want to use br-ex to send packets, You should have flat or vlan network created and such networks are going via br-ex basically. > Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 12:13: > > Apologize for having sent this question to a dev mailing list first! But I humbly request to continue the discussion here. > > > My VM is connect to a private network under demo project, here is the info of the network: > > $ openstack network show 64f4f4dc-a851-486a-8789-43b816d9bf3d > +---------------------------+----------------------------------------------------------------------------+ > | Field | Value | > +---------------------------+----------------------------------------------------------------------------+ > | admin_state_up | UP | > | availability_zone_hints | | > | availability_zones | nova | > | created_at | 2018-06-15T04:26:18Z | > | description | | > | dns_domain | None | > | id | 64f4f4dc-a851-486a-8789-43b816d9bf3d | > | ipv4_address_scope | None | > | ipv6_address_scope | None | > | is_default | None | > | is_vlan_transparent | None | > | mtu | 1450 | > | name | private | > | port_security_enabled | True | > | project_id | e202899d90ba449d880be42f19cd6a55 | > | provider:network_type | vxlan | > | provider:physical_network | None | > | provider:segmentation_id | 72 | > | qos_policy_id | None | > | revision_number | 4 | > | router:external | Internal | > | segments | None | > | shared | False | > | status | ACTIVE | > | subnets | 18a0847e-b733-4ec2-9e25-d7d630a1af2f, 91e91bab-7405-4717-97cd-4ca2cb11589d | > | tags | | > | updated_at | 2018-06-15T04:26:23Z | > +---------------------------+----------------------------------------------------------------------------+ > > > > And below is the full output of ovs bridges. 
> > $ sudo ovs-vsctl show > 0ee72d8a-65bc-4c82-884a-61b0e86b9893 > Manager "ptcp:6640:127.0.0.1" > is_connected: true > Bridge br-int > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port "qvo97604b93-55" > tag: 1 > Interface "qvo97604b93-55" > Port int-br-ex > Interface int-br-ex > type: patch > options: {peer=phy-br-ex} > Port "qr-c3f198ac-0b" > tag: 1 > Interface "qr-c3f198ac-0b" > type: internal > Port "sg-8868b1a8-69" > tag: 1 > Interface "sg-8868b1a8-69" > type: internal > Port br-int > Interface br-int > type: internal > Port "qvo6f012656-74" > tag: 1 > Interface "qvo6f012656-74" > Port "fg-c4e5dcbc-a3" > tag: 2 > Interface "fg-c4e5dcbc-a3" > type: internal > Port "tap10dc7b3e-a7" > tag: 1 > Interface "tap10dc7b3e-a7" > type: internal > Port patch-tun > Interface patch-tun > type: patch > options: {peer=patch-int} > Port "qg-4014b9e8-ce" > tag: 2 > Interface "qg-4014b9e8-ce" > type: internal > Port "qr-883cea95-31" > tag: 1 > Interface "qr-883cea95-31" > type: internal > Port "sg-69c838e6-bb" > tag: 1 > Interface "sg-69c838e6-bb" > type: internal > Bridge br-tun > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port "vxlan-c0a81218" > Interface "vxlan-c0a81218" > type: vxlan > options: {df_default="true", in_key=flow, local_ip="192.168.20.132", out_key=flow, remote_ip="192.168.18.24"} > Port br-tun > Interface br-tun > type: internal > Port patch-int > Interface patch-int > type: patch > options: {peer=patch-tun} > Bridge br-ex > Controller "tcp:127.0.0.1:6633" > is_connected: true > fail_mode: secure > Port phy-br-ex > Interface phy-br-ex > type: patch > options: {peer=int-br-ex} > Port "eno2" > Interface "eno2" > Port br-ex > Interface br-ex > type: internal > ovs_version: "2.8.0" > > > > Thanks! > > Best Regards, > Dave Chen > > -----Original Message----- > From: Slawomir Kaplonski [mailto:skaplons at redhat.com] > Sent: Friday, June 15, 2018 5:43 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [neutron] Question on the OVS configuration > > Please send info about network to which You vm is connected and config of all bridges from ovs also. > >> Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 11:18: >> >> Thanks Slawomir for your reply, so what's the right configuration if I want my VM could be able to access external with physical NIC "eno2"? Do I still need add that NIC into "br-ex"? >> >> >> Best Regards, >> Dave Chen >> >> -----Original Message----- >> From: Slawomir Kaplonski [mailto:skaplons at redhat.com] >> Sent: Friday, June 15, 2018 5:09 PM >> To: OpenStack Development Mailing List (not for usage questions) >> Subject: Re: [openstack-dev] [neutron] Question on the OVS >> configuration >> >> Hi, >> >> If You have vxlan network than traffic from it is going via vxlan tunnel which is in br-tun bridge instead of br-ex. >> >>> Wiadomość napisana przez Dave.Chen at Dell.com w dniu 15.06.2018, o godz. 
10:17: >>> >>> Dear folks, >>> >>> I have setup a pretty simple OpenStack cluster in our lab based on devstack, couples of guest VM are running on one controller node (this doesn’t looks like a right behavior anyway), the Neutron network is configured as OVS + vxlan, the bridge “br-ex” configured as below: >>> >>> Bridge br-ex >>> Controller "tcp:127.0.0.1:6633" >>> is_connected: true >>> fail_mode: secure >>> Port phy-br-ex >>> Interface phy-br-ex >>> type: patch >>> options: {peer=int-br-ex} >>> Port br-ex >>> Interface br-ex >>> type: internal >>> ovs_version: "2.8.0" >>> >>> >>> >>> As you see, there is no external physical NIC bound to “br-ex”, so I guess the traffic from the VM to external will use the default route set on the controller node, since there is a NIC (eno2) that can access external so I bind it to “br-ex” like this: ovs-vsctl add-port br-ex eno2. now the “br-ex” is configured as below: >>> >>> Bridge br-ex >>> Controller "tcp:127.0.0.1:6633" >>> is_connected: true >>> fail_mode: secure >>> Port phy-br-ex >>> Interface phy-br-ex >>> type: patch >>> options: {peer=int-br-ex} >>> *Port "eno2"* >>> Interface "eno2" >>> Port br-ex >>> Interface br-ex >>> type: internal >>> ovs_version: "2.8.0" >>> >>> >>> >>> Looks like this is how it should be configured according to lots of wiki/blog suggestion I have googled, but it doesn’t work as expected, ping from the VM, the tcpdump shows the traffic still go the “eno1” which is the default route on the controller node. >>> >>> Inside of VM >>> ubuntu at test-br:~$ ping 8.8.8.8 >>> PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. >>> 64 bytes from 8.8.8.8: icmp_seq=1 ttl=38 time=168 ms >>> 64 bytes from 8.8.8.8: icmp_seq=2 ttl=38 time=168 ms … >>> >>> Dump the traffic on the “eno2”, got nothing $ sudo tcpdump -nn -i >>> eno2 icmp >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> decode listening on eno2, link-type EN10MB (Ethernet), capture size >>> 262144 bytes … >>> >>> Dump the traffic on the “eno1” (internal NIC), catch it! >>> $ sudo tcpdump -nn -i eno1 icmp >>> tcpdump: verbose output suppressed, use -v or -vv for full protocol >>> decode listening on eno1, link-type EN10MB (Ethernet), capture size >>> 262144 bytes >>> 16:08:59.609888 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id >>> 1439, seq 1, length 64 >>> 16:08:59.781042 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id >>> 1439, seq 1, length 64 >>> 16:09:00.611453 IP 192.168.20.132 > 8.8.8.8: ICMP echo request, id >>> 1439, seq 2, length 64 >>> 16:09:00.779550 IP 8.8.8.8 > 192.168.20.132: ICMP echo reply, id >>> 1439, seq 2, length 64 >>> >>> >>> $ sudo ip route >>> default via 192.168.18.1 dev eno1 proto static metric 100 default >>> via 192.168.8.1 dev eno2 proto static metric 101 >>> 169.254.0.0/16 dev docker0 scope link metric 1000 linkdown >>> 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 >>> linkdown >>> 192.168.8.0/24 dev eno2 proto kernel scope link src 192.168.8.101 >>> metric 100 >>> 192.168.16.0/21 dev eno1 proto kernel scope link src >>> 192.168.20.132 metric 100 >>> 192.168.42.0/24 dev br-ex proto kernel scope link src 192.168.42.1 >>> >>> >>> What’s going wrong here? Do I miss something? Or some service need to be restarted? >>> >>> Anyone could help me out? This question made me sick for many days! Huge thanks in the advance! 
>>> >>> >>> Best Regards, >>> Dave >>> >>> _____________________________________________________________________ >>> _ ____ OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> >> ______________________________________________________________________ >> ____ OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From gouthampravi at gmail.com Fri Jun 15 15:55:08 2018 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 15 Jun 2018 08:55:08 -0700 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: <20180615060319.GA4410@palahniuk.int.rhx> References: <20180615060319.GA4410@palahniuk.int.rhx> Message-ID: +1 On Thu, Jun 14, 2018 at 11:03 PM Michele Baldessari wrote: > > +1 > > On Wed, Jun 13, 2018 at 08:50:23AM -0700, Emilien Macchi wrote: > > Alan Bishop has been highly involved in the Storage backends integration in > > TripleO and Puppet modules, always here to update with new features, fix > > (nasty and untestable third-party backends) bugs and manage all the > > backports for stable releases: > > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > > > He's also well knowledgeable of how TripleO works and how containers are > > integrated, I would like to propose him as core on TripleO projects for > > patches related to storage things (Cinder, Glance, Swift, Manila, and > > backends). > > > > Please vote -1/+1, > > Thanks! > > -- > > Emilien Macchi > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Fri Jun 15 16:23:14 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 15 Jun 2018 09:23:14 -0700 Subject: [openstack-dev] [nova] Rocky blueprint status tracking Message-ID: Howdy everyone, Similar to last cycle, we have an etherpad for tracking the status of approved nova blueprints for Rocky here: https://etherpad.openstack.org/p/nova-rocky-blueprint-status that we can use to help us review patches. If I've missed any blueprints or if anything needs an update, please add a note on the etherpad and we'll get it sorted. 
Thanks, -melanie From dms at danplanet.com Fri Jun 15 17:05:26 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 15 Jun 2018 10:05:26 -0700 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: <4bd5a90a-46f4-492f-4f13-201872d43919@fried.cc> (Eric Fried's message of "Mon, 11 Jun 2018 10:14:33 -0500") References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> <4254211e-7f4e-31c8-89f6-0338d6c7464f@gmail.com> <20180608093545.GE11695@paraplu> <20180611095529.GA3344@redhat> <20180611150604.GF11695@paraplu> <4bd5a90a-46f4-492f-4f13-201872d43919@fried.cc> Message-ID: > I thought we were leaning toward the option where nova itself doesn't > impose a limit, but lets the virt driver decide. > > I would really like NOT to see logic like this in any nova code: > >> if kvm|qemu: >> return 256 >> elif POWER: >> return 4000 >> elif: >> ... It's insanity to try to find a limit that will work for everyone. PowerVM supports a billion, libvirt/kvm has some practical and theoretical limits, both of which are higher than what is actually sane. It depends on your virt driver, and how you're attaching your volumes, maybe how tightly you pack your instances, probably how many threads you give to an instance, how big your compute nodes are, and definitely what your workload is. That's a really big matrix, and even if we decide on something, IBM will come out of the woodwork with some other hypervisor that has been around since the Nixon era that uses BCD-encoded volume numbers and thus can only support 10. It's going to depend, and a user isn't going to be able to reasonably probe it using any of our existing APIs. If it's going to depend on all the above factors, I see no reason not to put a conf value in so that operators can pick a reasonably sane limit. Otherwise, the limit we pick will be wrong for everyone. Plus... if we do a conf option we can put this to rest and stop talking about it, which I for one am *really* looking forward to :) --Dan From colleen at gazlene.net Fri Jun 15 18:36:29 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 15 Jun 2018 20:36:29 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 11 June 2018 Message-ID: <1529087789.3505113.1409511520.55B2D71F@webmail.messagingengine.com> # Keystone Team Update - Week of 11 June 2018 ## News ### New Implied Roles in keystone-manage bootstrap We landed one of the first building blocks for Default Roles across OpenStack, which is ensuring they are created during the bootstrap of keystone[1]. Since this is new feature of the bootstrap command, it has implications for people who run the command after their keystone is already bootstrapped. We talked[2] about what the intended purpose of the bootstrap command is versus what it is potentially being used for, for example as part of automation that expects it to be idempotent. We agreed that the bootstrap change for default roles needed to highlight the changing behavior in the upgrade notes so that operators could prepare for it if needed. Separately, it would also be good to implement a way to check whether bootstrap had already been run so that automated tools can make decisions about whether they need to run it, perhaps sidestepping the question of whether the command itself should be considered idempotent. 
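As a rough illustration of the kind of pre-check an automated tool could do in the meantime (a sketch only, not an agreed interface; it assumes the bootstrap admin credentials are already loaded in the environment):

# probe for things bootstrap creates before deciding whether to run it
if openstack token issue >/dev/null 2>&1 \
   && openstack role show reader >/dev/null 2>&1; then
  echo "bootstrap (including the new default roles) appears to have run already"
else
  keystone-manage bootstrap --bootstrap-password "$ADMIN_PASSWORD"
fi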
[1] https://review.openstack.org/572243 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-06-13.log.html#t2018-06-13T12:44:00 ### OPNFV Edge Computing I attended the OPNFV Edge Computing Group's[3] meeting[4] to represent the keystone team and answer their questions about keystone's testing for federated scenarios. They were interested in donating hardware resources to fill out our use case coverage, but I had to inform them that we're still a ways away from having even basic keystone-to-keystone coverage and that the best way to help would to provide people resources to help work on it. [3] https://wiki.opnfv.org/display/PROJ/Edge+cloud [4] https://etherpad.opnfv.org/p/edge_cloud_meeting_minutes ### Flaskification Morgan's work to replace our custom WSGI framework with Flask[5] is well underway. We'll be starting to move our API dispatching to Flask next week. [5] https://review.openstack.org/#/q/topic:flaskification ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 19 changes this week, including the first steps for setting up default reader and member roles[6] and several changes for the Flask work[7]. [6] https://review.openstack.org/572243 [7] https://review.openstack.org/#/q/status:merged+topic:flaskification ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 53 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots, whose authors are waiting for any feedback. We've also started feature implementations, and initial feedback is welcome even if they are not passing tests yet[8][9][10][11]. [8] https://review.openstack.org/572776 [9] https://review.openstack.org/#/q/topic:bp/unified-limits+status:open [10] https://review.openstack.org/#/q/topic:bp/strict-two-level-model+status:open [11] https://review.openstack.org/#/q/topic:bug/1754184+status:open ## Bugs These week we opened 8 new bugs and closed 2. Bugs opened (8) Bug #1776506 (keystone:High) opened by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1776506 Bug #1776504 (keystone:Medium) opened by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1776504 Bug #1776532 (keystone:Medium) opened by John Dennis https://bugs.launchpad.net/keystone/+bug/1776532 Bug #1776541 (keystone:Medium) opened by John Dennis https://bugs.launchpad.net/keystone/+bug/1776541 Bug #1776221 (keystone:Undecided) opened by Yuxin Wang https://bugs.launchpad.net/keystone/+bug/1776221 Bug #1777086 (keystone:Undecided) opened by 徐爱保 https://bugs.launchpad.net/keystone/+bug/1777086 Bug #1776501 (keystoneauth:Undecided) opened by Chris Dent https://bugs.launchpad.net/keystoneauth/+bug/1776501 Bug #1777177 (keystonemiddleware:Medium) opened by Morgan Fainberg https://bugs.launchpad.net/keystonemiddleware/+bug/1777177 Bugs fixed (2) Bug #1776506 (keystone:High) fixed by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1776506 Bug #1776501 (keystoneauth:Undecided) fixed by Eric Fried https://bugs.launchpad.net/keystoneauth/+bug/1776501 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html Next week is our feature *proposal* freeze. If you're working on implementing specs, some initial patches should be proposed by the end of next week. Many feature patchs have already been proposed. Initial feedback on these WIP proposals is encouraged. 
## Shout-outs Thanks to everyone getting started on implementing our major feature work for this cycle: adriant, hrybacki, jaosorior, jgrassler, wxy, thank you! ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From doug at doughellmann.com Fri Jun 15 19:01:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 15 Jun 2018 15:01:45 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <87in6kaw7z.fsf@meyer.lemoncheese.net> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> <1529010884-sup-4343@lrrr.local> <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> <1529070040-sup-2028@lrrr.local> <87in6kaw7z.fsf@meyer.lemoncheese.net> Message-ID: <1529088834-sup-1897@lrrr.local> Excerpts from corvus's message of 2018-06-15 08:46:40 -0700: > Doug Hellmann writes: > > > Excerpts from Ghanshyam's message of 2018-06-15 09:04:35 +0900: > > >> Yes, It will not be set on LIBS_FROM_GIT as we did not set it > >> explicitly. But gate running on any repo does run job on current > >> change set of that repo which is nothing but "master + current patch > >> changes" . For example, any job running on oslo.config patch will > >> take oslo.config source code from that patch which is "master + > >> current change". You can see the results in this patch - > >> https://review.openstack.org/#/c/575324/ . Where I deleted a module > >> and gate jobs (including tempest-full-py3) fails as they run on > >> current change set of neutron-lib code not on pypi version(which > >> would pass the tests). > > > > The tempest-full-py3 job passed for that patch, though. Which seems to > > indicate that the neutron-lib repository was not used in the test job, > > even though it was checked out. > > The automatic generation of LIBS_FROM_GIT only includes projects which > appear in required-projects. So in this case neutron-lib does not > appear in LIBS_FROM_GIT[1], so the change is not actually tested by that > job. > > Doug's approach of adding {{zuul.project}} to LIBS_FROM_GIT would work, > but anytime LIBS_FROM_GIT is set explicitly, it turns off the automatic > generation, so more complex jobs (which may want to inherit from that > job but need multiple libraries) would also have to override > LIBS_FROM_GIT and add the full set of projects. > > The code that automatically sets LIBS_FROM_GIT is fairly simple and > could be modified to automatically add the project of the change under > test. We could do that for all jobs, or we could add a flag which > toggles the behavior. The question to answer here is: is there ever a > case where a devstack job should not install the change under test from > source? I think the answer is no, and even though under Zuul v2 > devstack-gate didn't automatically add the project under test to > LIBS_FROM_GIT, we probably had that behavior anyway due to some JJB > templating which did. Adding the project-under-test to LIBS_FROM_GIT unconditionally feels like the behavior I would expect from the job. 
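For anyone following along, the end result being discussed is just a line in the localrc that devstack consumes; with made-up values, a change to neutron-lib run against a job that lists oslo.messaging in required-projects would ideally end up with something like

LIBS_FROM_GIT=neutron-lib,oslo.messaging

so that devstack pip-installs those two from the checked-out git trees and everything else from PyPI.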
> A further thing to consider is what the desired behavior is for a series > of changes. If a change to neutron-lib depends on a change to > oslo.messaging, when the forward testing job runs on neutron-lib, should > it also add oslo.messaging to LIBS_FROM_GIT? That's equally easy to > implement (but would certainly need a flag as it essentially would add > everything in the change series to LIBS_FROM_GIT which defeats the > purpose of the restriction for the jobs which need it), but I honestly > am not certain what's desired. I think going ahead and adding everything in the dependency chain also makes sense. If I have 2 changes in libraries and a change in a service and I want to test them all together I would expect to be able to do that by using Depends-On and then for all 3 to be installed from source in the job that runs. > > For each type of project (service, lib, lib-group (eg oslo.messaging)), > what do we want to test from git vs pypi? We want to test changes to service projects with libraries from PyPI so that we do not end up with services that rely on unreleased features of libraries. We want to test changes to libraries with some services installed from git so that we know changes to the library do not break (current) master of the service. The set of interesting services may vary, but a default set that represents the tightly coupled services that run in the integrated gate now is reasonable. > How many jobs are needed to accomplish that? Ideally 1? Or 2? That's what I'm trying to work out. > What should happen with a change series with other > projects in it? I expect all of the patches in a series to be installed from source somewhere in the chain. That works today if we have a library patch that depends on a service patch because that patched version of the service is used in the dsvm job run against the library change. If we could make the reverse dependency work, too (where a patch to a service depends on a library change), that would be grand. I think your patch https://review.openstack.org/#/c/575801/ at least lets us go in one direction (library->service) using a single job definition, but I can't tell if it would work the other way around. > > [1] http://logs.openstack.org/24/575324/3/check/tempest-full-py3/d183788/controller/logs/_.localrc_auto.txt > > -Jim > From doug at doughellmann.com Fri Jun 15 19:05:51 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 15 Jun 2018 15:05:51 -0400 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages In-Reply-To: References: <20180615150050.hhz777oa35junk5c@yuggoth.org> <20180615152336.fles6tu7xerw6x2r@gentoo.org> Message-ID: <1529089443-sup-6239@lrrr.local> Excerpts from Jean-Philippe Evrard's message of 2018-06-15 17:37:02 +0200: > > Not sure it'd help but one option we do is to create aliases based on > > the title. Though since the PTLs don't have addresses on the openstack > > domain an alias may not make as much sense, it'd have to be a full > > account forward. It's useful for centralized spam filtering. > > I foresee this: > 1) We create an alias to PTL email > 2) PTL think that kind of emails are worth sharing with a team to balance work > 3) We now have a project mailing list > 4) People stop using openstack-dev lists. > > But that's maybe me... > Yeah, setting all of that up feels like it would just be something else we would have to remember to do every time we have an election. I'm trying to reduce the number those kinds of tasks we have, so let's not add a new one. 
Doug From mriedemos at gmail.com Fri Jun 15 21:12:21 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 15 Jun 2018 16:12:21 -0500 Subject: [openstack-dev] [nova] Rocky blueprint status tracking In-Reply-To: References: Message-ID: On 6/15/2018 11:23 AM, melanie witt wrote: > Similar to last cycle, we have an etherpad for tracking the status of > approved nova blueprints for Rocky here: > > https://etherpad.openstack.org/p/nova-rocky-blueprint-status > > that we can use to help us review patches. If I've missed any blueprints > or if anything needs an update, please add a note on the etherpad and > we'll get it sorted. Thanks for doing this, I find it very useful to get an overall picture of where we're sitting in the final milestone. -- Thanks, Matt From kennelson11 at gmail.com Fri Jun 15 21:13:23 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 15 Jun 2018 14:13:23 -0700 Subject: [openstack-dev] [tripleo] Migration to Storyboard In-Reply-To: <20180615091254.GB4410@palahniuk.int.rhx> References: <20180615091254.GB4410@palahniuk.int.rhx> Message-ID: On Fri, Jun 15, 2018 at 2:13 AM Michele Baldessari wrote: > On Mon, May 21, 2018 at 01:58:26PM -0700, Emilien Macchi wrote: > > During the Storyboard session today: > > https://etherpad.openstack.org/p/continuing-the-migration-lp-sb > > > > We mentioned that TripleO would continue to migrate during Rocky cycle. > > Like Alex mentioned in this thread, we need to migrate the scripts used > by > > the CI squad so they work with SB. > > Once this is done, we'll proceed to the full migration of all blueprints > > and bugs into tripleo-common project in SB. > > Projects like tripleo-validations, tripleo-ui (more?) who have 1:1 > mapping > > between their "name" and project repository could use a dedicated project > > in SB, although we need to keep things simple for our users so they know > > where to file a bug without confusion. > > We hope to proceed during Rocky but it'll probably take some time to > update > > our scripts and documentation, also educate our community to use the > tool, > > so we expect the Stein cycle the first cycle where we actually consume > SB. > > > > I really wanted to thank the SB team for their patience and help, TripleO > > is big and this migration hasn't been easy but we'll make it :-) > > Having used storyboard for the first time today to file a bug^Wstory in > heat, > I'd like to raise a couple of concerns on this migration. And by all > means, if I just missed to RTFM, feel free to point me in the right > direction. > > 1. Searching for bugs in a specific project is *extremely* cumbersome > and I am not even sure I got it right (first you need to put > openstack/project in the search bar, wait and click it. Then you add > the term you are looking for. I have genuinely no idea if I get all > the issues I was looking for or not as it is not obvious on what > fields this search is performed > The wait and click-it part is being resolved- that was a bug where you had to select it to actually get the correct project- it couldn't just be typed. Yes, if you are looking for stories from a certain project- you want to restrict the search by adding the project- same way you do in gerrit. If you feel like the search results aren't showing as many as you think there should be- you can adjust the setting to show more results. 
I suppose we could add some sort of natural language process thing where with the results it says searching for 'y' in project 'x' (where you put x and y into the search- x as the project and y as the term as you did in the description above). Not sure if that would help? I think the search is a lot more intelligent than people assume and so it seems difficult to deal with until you figure out how it works. Checking out the api docs on search might be helpful[1]? [1] https://docs.openstack.org/infra/storyboard/webapi/v1.html > 2. Advanced search is either very well hidden or not existant yet? > E.g. how do you search for bugs filed by someone or over a certain > release, or just generally more complex searches which are super > useful in order to avoid filing duplicate bugs. > > Advanced search is the magnifying glass icon on the left navigation bar (assuming you are using the webclient interface at storyboard.o.o) which I think is what you were describing using above. The search in the top right of the webclient is just a jump to quick search. > I think Zane's additional list also matches my experience very well: > http://lists.openstack.org/pipermail/openstack-dev/2018-June/131365.html > > I think a lot of Zane's comment's have already been addressed, but I can go reply on that thread too and hopefully clear up some of the concerns/fears. > So my take is that a migration atm is a bit premature and I would > postpone it at least to Stein. > > Hopefully- it will be a stein release goal so it wouldn't be that big a deal if you waited, but ideally, if there aren't any real blockers for your team, the sooner the better. > Thanks, > Michele > > > Thanks, > > > > On Tue, May 15, 2018 at 7:53 AM, Alex Schultz > wrote: > > > > > Bumping this up so folks can review this. It was mentioned in this > > > week's meeting that it would be a good idea for folks to take a look > > > at Storyboard to get familiar with it. The upstream docs have been > > > updated[0] to point to the differences when dealing with proposed > > > patches. Please take some time to review this and raise any > > > concerns/issues now. > > > > > > Thanks, > > > -Alex > > > > > > [0] https://docs.openstack.org/infra/manual/developers.html# > > > development-workflow > > > > > > On Wed, May 9, 2018 at 1:24 PM, Alex Schultz > wrote: > > > > Hello tripleo folks, > > > > > > > > So we've been experimenting with migrating some squads over to > > > > storyboard[0] but this seems to be causing more issues than perhaps > > > > it's worth. Since the upstream community would like to standardize > on > > > > Storyboard at some point, I would propose that we do a cut over of > all > > > > the tripleo bugs/blueprints from Launchpad to Storyboard. > > > > > > > > In the irc meeting this week[1], I asked that the tripleo-ci team > make > > > > sure the existing scripts that we use to monitor bugs for CI support > > > > Storyboard. I would consider this a prerequisite for the migration. > > > > I am thinking it would be beneficial to get this done before or as > > > > close to M2. > > > > > > > > Thoughts, concerns, etc? 
> > > > > > > > Thanks, > > > > -Alex > > > > > > > > [0] https://storyboard.openstack.org/#!/project_group/76 > > > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2018/ > > > tripleo.2018-05-08-14.00.log.html#l-42 > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > -- > > Emilien Macchi > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Michele Baldessari > C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Hope this helps :) Thanks! -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Fri Jun 15 21:14:00 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 15 Jun 2018 16:14:00 -0500 Subject: [openstack-dev] [nova] [placement] placement update 18-24 In-Reply-To: References: Message-ID: Thank you as always for doing this, Chris. > Some of the older items in this list are not getting much attention. > That's a shame. The list is ordered (oldest first) the way it is on > purpose. > > * >   Purge comp_node and res_prvdr records during deletion of >   cells/hosts This is still on its first patch set, in merge conflict, with no action for about 3mo. Is it still being worked? > * >   placement: Make API history doc more consistent Reviewed. > * >   Handle agg generation conflict in report client Rebased. This previously had three +1s. > * >   Add unit test for non-placement resize Reviewed. > * >   cover migration cases with functional tests Reviewed. > * >   Bug fixes for sharing resource providers Two patches under this topic. https://review.openstack.org/533437 is abandoned https://review.openstack.org/#/c/519601/ reviewed (again) & rebased > * >   Move refresh time from report client to prov tree This patch is still alive only as a marker on my TODO list - I need to replace it with something completely different as noted by Jay & myself at the bottom. > * >   PCPU resource class Reviewed & rebased. This also made me notice an unused thing, which I've proposed to remove via https://review.openstack.org/575847 > * >   rework how we pass candidate request information This represents a toe in the waters of "we ought to go back and majorly refactor a lot of the placement code - especially nova/api/openstack/placement/objects/resource_provider.py - to make it more readable and maintainable. This particular patch is in merge conflict (pretty majorly, if I'm not mistaken) and probably needs to wait until the dust settles around nrp-in-alloc-candidates to be resurrected. > * >   add root parent NULL online migration Reviewed. (In merge conflict, and needs tests.) 
> * >   add resource_requests field to RequestSpec Active series currently starts at https://review.openstack.org/#/c/570018/ I've been reviewing these; need to catch up on the latest. > * >   replace deprecated accept.best_match Heading to the gate. > * >   Enforce placement minimum in nova.cmd.status Heading to the gate. > * >   normalize_name helper (in os-traits) Needs second core review, please. > * >   Fix nits in nested provider allocation candidates(2) Heading to the gate. > * >   Convert driver supported capabilities to compute node provider >   traits Merge conflict and a bevy of -1s, needs TLC from the author. > * >   Use placement.inventory.inuse in report client Rebased. > * >   ironic: Report resources as reserved when needed Needs merge conflict resolved. > * >   Test for multiple limit/group_policy qparams Another marker for my TODO list. Added -W. > # End > > Yow. That was long. Thanks for reading. Review some code please. ++ -efried From aschultz at redhat.com Fri Jun 15 22:29:46 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 15 Jun 2018 16:29:46 -0600 Subject: [openstack-dev] [tripleo] Status of Standalone installer (aka All-In-One) In-Reply-To: References: Message-ID: On Mon, Jun 4, 2018 at 6:26 PM, Emilien Macchi wrote: > TL;DR: we made nice progress and you can checkout this demo: > https://asciinema.org/a/185533 > > We started the discussion back in Dublin during the last PTG. The idea of > Standalone (aka All-In-One, but can be mistaken with all-in-one overcloud) > is to deploy a single node OpenStack where the provisioning happens on the > same node (there is no notion of {under/over}cloud). > > A kind of a "packstack" or "devstack" but using TripleO which has can offer: > - composable containerized services > - composable upgrades > - composable roles > - Ansible driven deployment > > One of the key features we have been focusing so far are: > - low bar to be able to dev/test TripleO (single machine: VM), with simpler > tooling > - make it fast (being able to deploy OpenStack in minutes) So to provide an update, I spent this week trying to get the network configuration down for the standalone configuration. I've proposed docs[0] for two configurations. I was able to test the two configurations: a) 2 nic (requires second nic with an accessable second "public" network that is optionally route-able for VM connectivity) b) 1 nic (requires 3 ips) Additionally I've proposed an update to the Standalone role[1] that includes Controller + Compute on a single node. With this I was able to try out Keystone, Nova, Neutron (with ovs, floating ips), Glance (backed by Swift), Cinder (lvm). This configuration took about 35 mins to go from 0 to cloud on a single 16gb VM hosted on some old hardware. Thanks, -Alex [0] https://review.openstack.org/#/c/575859/ [1] https://review.openstack.org/#/c/575862/ From gdubreui at redhat.com Fri Jun 15 22:42:02 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Sat, 16 Jun 2018 08:42:02 +1000 Subject: [openstack-dev] [neutron][api[[graphql] A starting point Message-ID: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> Hello, This initial patch [1]  allows to retrieve networks, subnets. This is very easy, thanks to the graphene-sqlalchemy helper. The edges, nodes layer might be confusing at first meanwhile they make the Schema Relay-compliant in order to offer re-fetching, pagination features out of the box. The next priority is to set the unit test in order to implement mutations. 
Could someone help provide a base in order to respect Neutron test requirements? [1] https://review.openstack.org/#/c/574543/ Thanks, Gilles From openstack at fried.cc Fri Jun 15 23:09:49 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 15 Jun 2018 18:09:49 -0500 Subject: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt' In-Reply-To: <1529067528-sup-5035@lrrr.local> References: <1529062449.22989.2@smtp.office365.com> <1529067528-sup-5035@lrrr.local> Message-ID: <19c1044c-a5e9-306f-ddab-572398bd3831@fried.cc> Doug- > The lower constraints tests only look at files in the same repo. > The minimum versions of dependencies set in requirements.txt, > test-requirements.txt, etc. need to match the values in > lower-constraints.txt. > > In this case, the more detailed error message is a few lines above the > error quoted by Chen CH Ji. The detail say "Requirement for package > retrying has no lower bound" which means that there is a line in > requirements.txt indicating a dependency on "retrying" but without > specifying a minimum version. That is the problem. The patch didn't change the retrying constraint in requirements.txt [1]; why isn't this same failure affecting every other patch in nova? [1] https://review.openstack.org/#/c/523387/51/requirements.txt at 65 -efried From doug at doughellmann.com Sat Jun 16 00:13:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 15 Jun 2018 20:13:53 -0400 Subject: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt' In-Reply-To: <19c1044c-a5e9-306f-ddab-572398bd3831@fried.cc> References: <1529062449.22989.2@smtp.office365.com> <1529067528-sup-5035@lrrr.local> <19c1044c-a5e9-306f-ddab-572398bd3831@fried.cc> Message-ID: <1529107561-sup-175@lrrr.local> Excerpts from Eric Fried's message of 2018-06-15 18:09:49 -0500: > Doug- > > > The lower constraints tests only look at files in the same repo. > > The minimum versions of dependencies set in requirements.txt, > > test-requirements.txt, etc. need to match the values in > > lower-constraints.txt. > > > > In this case, the more detailed error message is a few lines above the > > error quoted by Chen CH Ji. The detail say "Requirement for package > > retrying has no lower bound" which means that there is a line in > > requirements.txt indicating a dependency on "retrying" but without > > specifying a minimum version. That is the problem. > > The patch didn't change the retrying constraint in requirements.txt [1]; > why isn't this same failure affecting every other patch in nova? > > [1] https://review.openstack.org/#/c/523387/51/requirements.txt at 65 > > -efried > Earlier this cycle I updated the requirements check job to verify that all of the settings are correct any time any changes to the dependency lists are made. We used to only look at the line being changed, but that allowed incorrect settings to stay in place for a long time so we weren't actually testing with good settings. We still only run that job when the dependency list is modified in some way. Earlier this week, Matt Thode updated the job to be more strict and to require that all dependencies have a minimum version specified [2]. We did this because some project teams thought that after we dropped the minimums from the global-requirements.txt list they were supposed to (or allowed to) drop them from their project dependency lists, too. 
My guess is that this dependency in nova never had a lower bound and that this is the first patch to touch the dependency list, so now it's being blocked on the fact that the list has a validation error. I recommend using a separate patch to fix the minimum version of retrying and then rebasing 523387 on top of the new patch. Doug [2] https://review.openstack.org/#/c/574367/ From lars at redhat.com Sat Jun 16 02:06:30 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Fri, 15 Jun 2018 22:06:30 -0400 Subject: [openstack-dev] DeployArtifacts considered...complicated? Message-ID: <20180616020630.dkivh6ugwjt4k2tn@redhat.com> I've been working on a series of patches to enable support for keystone federation in tripleo. I've been making good use of the DeployArtifacts support for testing puppet modules...until today. I have some patches that teach puppet-keystone about multi-valued configuration options (like trusted_dashboard). They replace the keystone_config provider (and corresponding type) with ones that work with the 'openstackconfig' provider (instead of ini_settings). These work great when I test them in isolation, but whenever I ran them as part of an "overcloud deploy" I would get erroneous output. After digging through the various layers I found myself looking at docker-puppet.py [1], which ultimately ends up calling puppet like this: puppet apply ... --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules ... It's that --modulepath argument that's the culprit. DeployArtifacts (when using the upload-puppet-modules script) works by replacing the symlinks in /etc/puppet/modules with the directories from your upload directory. Even though the 'keystone' module in /etc/puppet/modules takes precedence when doing something like 'include ::keystone', *all the providers and types* in lib/puppet/* in /usr/share/openstack-puppet/modules will be activated. So in this case -- in which I've replaced the keystone_config provider -- we get the old ini_settings provider, and I don't get the output that I expect. The quickest workaround is to generate the tarball by hand and map the modules onto /usr/share/openstack-puppet/modules... tar -cz -f patches/puppet-modules.tar.gz \ --transform "s|patches/puppet-modules|usr/share/openstack-puppet/modules|" \ patches/puppet-modules ...and then use upload-swift-artifacts: upload-swift-artifacts -f patches/puppet-modules.tar.gz Done this way, I get the output I expect. [1]: https://github.com/openstack/tripleo-heat-templates/blob/master/docker/docker-puppet.py -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From emilien at redhat.com Sat Jun 16 06:15:01 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 15 Jun 2018 23:15:01 -0700 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read In-Reply-To: References: Message-ID: Sending an update before the weekend: Gate was in very bad shape today (long queue, lot of failures) again today, and it turns out we had a few more issues that we tracked here: https://etherpad.openstack.org/p/tripleo-gate-issues-june-2018 ## scenario007 broke because of a patch in networking-ovn https://bugs.launchpad.net/tripleo/+bug/1777168 We made the job non voting and meanwhile tried and managed to fix it: https://review.rdoproject.org/r/#/c/14155/ Breaking commit was: https://github.com/openstack/networking-ovn/commit/2365df1cc3e24deb2f3745c925d78d6d8e5bb5df Kudos to Daniel Alvarez for having the patch ready! 
Also thanks to Wes for making the job non voting in the meantime. I've reverted the non-voting things are situation is fixed now, so we can vote again on this one. ## Dockerhub proxy issue Infra using wrong image layer object storage proxy for Dockerhub: https://review.openstack.org/#/c/575787/ Huge thanks to infra team, specially Clark for fixing this super quickly, it clearly helped to stabilize our container jobs, I actually haven't seen timeouts since we merged your patch. Thanks a ton! ## RDO master wasn't consistent anymore, python-cloudkittyclient broke The client was refactored: https://git.openstack.org/cgit/openstack/python-cloudkittyclient/commit/?id=d070f6a68cddf51c57e77107f1b823a8f75770ba And it broke the RPM, we had to completely rewrite the dependencies so we can build the package: https://review.rdoproject.org/r/#/c/14265/ Mille merci Heikel for your responsive help at 3am, so we could come back consistent and have our latest rpms that contained a bunch of fixes. ## Where we are now Gate looks stable now. You can recheck and approve things. I went ahead and rechecked everything and made sure nothing was left abandoned. Steve's work has merged so I think we could re-consider https://review.openstack.org/#/c/575330/ again. Special thanks to everyone involved in these issues and Alex & John who also stepped up to help. Enjoy your weekend! On Thu, Jun 14, 2018 at 6:40 AM, Emilien Macchi wrote: > It sounds like we merged a bunch last night thanks to the revert, so I > went ahead and restored/rechecked everything that was out of the gate. I've > checked and nothing was left over, but let me know in case I missed > something. > I'll keep updating this thread with the progress made to improve the > situation etc. > So from now, situation is back to "normal", recheck/+W is ok. > > Thanks again for your patience, > > On Wed, Jun 13, 2018 at 10:39 PM, Emilien Macchi > wrote: > >> https://review.openstack.org/575264 just landed (and didn't timeout in >> check nor gate without recheck, so good sigh it helped to mitigate). >> >> I've restore and rechecked some patches that I evacuated from the gate, >> please do not restore others or recheck or approve anything for now, and >> see how it goes with a few patches. >> We're still working with Steve on his patches to optimize the way we >> deploy containers on the registry and are investigating how we could make >> it faster with a proxy. >> >> Stay tuned and thanks for your patience. >> >> On Wed, Jun 13, 2018 at 5:50 PM, Emilien Macchi >> wrote: >> >>> TL;DR: gate queue was 25h+, we put all patches from gate on standby, do >>> not restore/recheck until further announcement. >>> >>> We recently enabled the containerized undercloud for multinode jobs and >>> we believe this was a bit premature as the container download process >>> wasn't optimized so it's not pulling the mirrors for the same containers >>> multiple times yet. >>> It caused the job runtime to increase and probably the load on docker.io >>> mirrors hosted by OpenStack Infra to be a bit slower to provide the same >>> containers multiple times. The time taken to prepare containers on the >>> undercloud and then for the overcloud caused the jobs to randomly timeout >>> therefore the gate to fail in a high amount of times, so we decided to >>> remove all jobs from the gate by abandoning the patches temporarily (I have >>> them in my browser and will restore when things are stable again, please do >>> not touch anything). 
>>> >>> Steve Baker has been working on a series of patches that optimize the >>> way we prepare the containers but basically the workflow will be: >>> - pull containers needed for the undercloud into a local registry, using >>> infra mirror if available >>> - deploy the containerized undercloud >>> - pull containers needed for the overcloud minus the ones already pulled >>> for the undercloud, using infra mirror if available >>> - update containers on the overcloud >>> - deploy the containerized undercloud >>> >>> With that process, we hope to reduce the runtime of the deployment and >>> therefore reduce the timeouts in the gate. >>> To enable it, we need to land in that order: https://review.openstac >>> k.org/#/c/571613/, https://review.openstack.org/#/c/574485/, >>> https://review.openstack.org/#/c/571631/ and https://review.openstack.o >>> rg/#/c/568403. >>> >>> In the meantime, we are disabling the containerized undercloud recently >>> enabled on all scenarios: https://review.openstack.org/#/c/575264/ for >>> mitigation with the hope to stabilize things until Steve's patches land. >>> Hopefully, we can merge Steve's work tonight/tomorrow and re-enable the >>> containerized undercloud on scenarios after checking that we don't have >>> timeouts and reasonable deployment runtimes. >>> >>> That's the plan we came with, if you have any question / feedback please >>> share it. >>> -- >>> Emilien, Steve and Wes >>> >> >> >> >> -- >> Emilien Macchi >> > > > > -- > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Sat Jun 16 06:42:37 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sat, 16 Jun 2018 06:42:37 +0000 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> Message-ID: <1529131252.j1u6fixa3j.tristanC@fedora> On June 15, 2018 10:42 pm, Gilles Dubreuil wrote: > Hello, > > This initial patch [1]  allows to retrieve networks, subnets. > > This is very easy, thanks to the graphene-sqlalchemy helper. > > The edges, nodes layer might be confusing at first meanwhile they make > the Schema Relay-compliant in order to offer re-fetching, pagination > features out of the box. > > The next priority is to set the unit test in order to implement mutations. > > Could someone help provide a base in order to respect Neutron test > requirements? > > > [1] [abandoned] Actually, the correct review (proposed on the feature/graphql branch) is: [1] https://review.openstack.org/575898 > > Thanks, > Gilles > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jaosorior at gmail.com Sat Jun 16 08:11:39 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Sat, 16 Jun 2018 11:11:39 +0300 Subject: [openstack-dev] [barbican] NEW weekly meeting time In-Reply-To: References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> <1518792130.19501.1.camel@redhat.com> <1520280969.25743.54.camel@redhat.com> <1524239859.2972.74.camel@redhat.com> <1529008217.7441.68.camel@redhat.com> Message-ID: +1 I dig On Fri, 15 Jun 2018, 17:41 Dave McCowan (dmccowan), wrote: > +1 > This is a great time. > > On 6/14/18, 4:30 PM, "Ade Lee" wrote: > > >The new time slot has been pretty difficult for folks to attend. > >I'd like to propose a new time slot, which will hopefully be more > >amenable to everyone. > > > >Tuesday 12:00 UTC > > > >https://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=00&se > >c=0 > > > >This works out to 8 am EST, around 1pm in Europe, and 8 pm in China. > >Please vote by responding to this email. > > > >Thanks, > >Ade > > > >__________________________________________________________________________ > >OpenStack Development Mailing List (not for usage questions) > >Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Sat Jun 16 12:39:21 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 16 Jun 2018 07:39:21 -0500 Subject: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt' In-Reply-To: <19c1044c-a5e9-306f-ddab-572398bd3831@fried.cc> References: <1529062449.22989.2@smtp.office365.com> <1529067528-sup-5035@lrrr.local> <19c1044c-a5e9-306f-ddab-572398bd3831@fried.cc> Message-ID: <99e3e928-ab49-eb94-0a32-d9b681a4563e@gmail.com> On 6/15/2018 6:09 PM, Eric Fried wrote: > why isn't this same failure affecting every other patch in nova? It is: https://review.openstack.org/#/c/571225/ -- Thanks, Matt From fungi at yuggoth.org Sat Jun 16 12:47:10 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 16 Jun 2018 12:47:10 +0000 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read In-Reply-To: References: Message-ID: <20180616124704.vbt32lms5hc6tjfh@yuggoth.org> On 2018-06-15 23:15:01 -0700 (-0700), Emilien Macchi wrote: [...] > ## Dockerhub proxy issue > Infra using wrong image layer object storage proxy for Dockerhub: > https://review.openstack.org/#/c/575787/ > Huge thanks to infra team, specially Clark for fixing this super quickly, > it clearly helped to stabilize our container jobs, I actually haven't seen > timeouts since we merged your patch. Thanks a ton! [...] As best we can tell from logs, the way Dockerhub served these images changed a few weeks ago (at the end of May) leading to this problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From pabelanger at redhat.com Sat Jun 16 14:20:39 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Sat, 16 Jun 2018 10:20:39 -0400 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read In-Reply-To: <20180616124704.vbt32lms5hc6tjfh@yuggoth.org> References: <20180616124704.vbt32lms5hc6tjfh@yuggoth.org> Message-ID: <20180616142039.GA23813@localhost.localdomain> On Sat, Jun 16, 2018 at 12:47:10PM +0000, Jeremy Stanley wrote: > On 2018-06-15 23:15:01 -0700 (-0700), Emilien Macchi wrote: > [...] > > ## Dockerhub proxy issue > > Infra using wrong image layer object storage proxy for Dockerhub: > > https://review.openstack.org/#/c/575787/ > > Huge thanks to infra team, specially Clark for fixing this super quickly, > > it clearly helped to stabilize our container jobs, I actually haven't seen > > timeouts since we merged your patch. Thanks a ton! > [...] > > As best we can tell from logs, the way Dockerhub served these images > changed a few weeks ago (at the end of May) leading to this problem. > -- > Jeremy Stanley Should also note what we are doing here is a terrible hack, we've only been able to learn the information by sniffing the traffic to hub.docker.io for our reverse proxy cache configuration. It is also possible this can break in the future too, so something to always keep in the back of your mind. It would be great if docker tools just worked with HTTP proxies. -Paul From whayutin at redhat.com Sat Jun 16 16:00:22 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Sat, 16 Jun 2018 10:00:22 -0600 Subject: [openstack-dev] [tripleo] tripleo gate is blocked - please read In-Reply-To: <20180616142039.GA23813@localhost.localdomain> References: <20180616124704.vbt32lms5hc6tjfh@yuggoth.org> <20180616142039.GA23813@localhost.localdomain> Message-ID: On Sat, Jun 16, 2018 at 10:21 AM Paul Belanger wrote: > On Sat, Jun 16, 2018 at 12:47:10PM +0000, Jeremy Stanley wrote: > > On 2018-06-15 23:15:01 -0700 (-0700), Emilien Macchi wrote: > > [...] > > > ## Dockerhub proxy issue > > > Infra using wrong image layer object storage proxy for Dockerhub: > > > https://review.openstack.org/#/c/575787/ > > > Huge thanks to infra team, specially Clark for fixing this super > quickly, > > > it clearly helped to stabilize our container jobs, I actually haven't > seen > > > timeouts since we merged your patch. Thanks a ton! > > [...] > > > > As best we can tell from logs, the way Dockerhub served these images > > changed a few weeks ago (at the end of May) leading to this problem. > > -- > > Jeremy Stanley > > Should also note what we are doing here is a terrible hack, we've only > been able > to learn the information by sniffing the traffic to hub.docker.io for our > reverse > proxy cache configuration. It is also possible this can break in the > future too, > so something to always keep in the back of your mind. > Thanks Paul, Jeremy and the other infra folks involved. The TripleO CI team is working towards tracking the time on some of these container tasks atm. Thanks for doing what you guys could do given the circumstances. > > It would be great if docker tools just worked with HTTP proxies. 
> > -Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Mon Jun 18 00:48:38 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 18 Jun 2018 10:48:38 +1000 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages In-Reply-To: <1529089443-sup-6239@lrrr.local> References: <20180615150050.hhz777oa35junk5c@yuggoth.org> <20180615152336.fles6tu7xerw6x2r@gentoo.org> <1529089443-sup-6239@lrrr.local> Message-ID: <20180618004837.GE18927@thor.bakeyournoodle.com> On Fri, Jun 15, 2018 at 03:05:51PM -0400, Doug Hellmann wrote: > Excerpts from Jean-Philippe Evrard's message of 2018-06-15 17:37:02 +0200: > > > Not sure it'd help but one option we do is to create aliases based on > > > the title. Though since the PTLs don't have addresses on the openstack > > > domain an alias may not make as much sense, it'd have to be a full > > > account forward. It's useful for centralized spam filtering. > > > > I foresee this: > > 1) We create an alias to PTL email > > 2) PTL think that kind of emails are worth sharing with a team to balance work > > 3) We now have a project mailing list > > 4) People stop using openstack-dev lists. > > > > But that's maybe me... > > > > Yeah, setting all of that up feels like it would just be something > else we would have to remember to do every time we have an election. > I'm trying to reduce the number those kinds of tasks we have, so > let's not add a new one. While I'm not sure that JP's scenario would eventuate I am against adding the aliases and adding additional work for the election officials. It's not that this would be terribly hard to automate it just seems like duplication of data/effort whereas the change under review is pretty straight forward. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From namnh at vn.fujitsu.com Mon Jun 18 01:05:26 2018 From: namnh at vn.fujitsu.com (Nguyen Hoai, Nam) Date: Mon, 18 Jun 2018 01:05:26 +0000 Subject: [openstack-dev] [barbican] NEW weekly meeting time In-Reply-To: <1529008217.7441.68.camel@redhat.com> References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> <1518792130.19501.1.camel@redhat.com> <1520280969.25743.54.camel@redhat.com> <1524239859.2972.74.camel@redhat.com> <1529008217.7441.68.camel@redhat.com> Message-ID: +1 from me. > -----Original Message----- > From: Ade Lee [mailto:alee at redhat.com] > Sent: Friday, June 15, 2018 3:30 AM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [openstack-dev] [barbican] NEW weekly meeting time > > The new time slot has been pretty difficult for folks to attend. > I'd like to propose a new time slot, which will hopefully be more amenable to > everyone. > > Tuesday 12:00 UTC > > https://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0 > 0&se > c=0 > > This works out to 8 am EST, around 1pm in Europe, and 8 pm in China. > Please vote by responding to this email. 
> > Thanks, > Ade > > ______________________________________________________________ > ____________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tony at bakeyournoodle.com Mon Jun 18 02:03:52 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 18 Jun 2018 12:03:52 +1000 Subject: [openstack-dev] [stable][horizon] Adding Ivan Kolodyazhny to horizon-stable-maint Message-ID: <20180618020352.GF18927@thor.bakeyournoodle.com> Hello folks, Recently Ivan became the Horizon PTL and as with past PTLs (Hi Rob) isn't a member of the horizon-stable-maint team. Ivan is a member of the Cinder stable team and as such has demonstrated an understanding of the stable policy. Since the Dublin PTG Ivan has been doing consistent high quality reviews on Horizon's stable branches. As such I'm suggesting adding him to the existing stable team. Without strong objections I'll do that on (my) Monday 25th June. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jichenjc at cn.ibm.com Mon Jun 18 10:42:54 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Mon, 18 Jun 2018 18:42:54 +0800 Subject: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt' In-Reply-To: <1529107561-sup-175@lrrr.local> References: <1529062449.22989.2@smtp.office365.com> <1529067528-sup-5035@lrrr.local> <19c1044c-a5e9-306f-ddab-572398bd3831@fried.cc> <1529107561-sup-175@lrrr.local> Message-ID: Thanks all for helping , Saw this patch [1] merged and assume that's the fix for the issue , we will rebase based on it then try again, [1] https://review.openstack.org/#/c/575872/4 Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Doug Hellmann To: openstack-dev Date: 06/16/2018 08:14 AM Subject: Re: [openstack-dev] [requirements][nova] weird error on 'Validating lower constraints of test-requirements.txt' Excerpts from Eric Fried's message of 2018-06-15 18:09:49 -0500: > Doug- > > > The lower constraints tests only look at files in the same repo. > > The minimum versions of dependencies set in requirements.txt, > > test-requirements.txt, etc. need to match the values in > > lower-constraints.txt. > > > > In this case, the more detailed error message is a few lines above the > > error quoted by Chen CH Ji. The detail say "Requirement for package > > retrying has no lower bound" which means that there is a line in > > requirements.txt indicating a dependency on "retrying" but without > > specifying a minimum version. That is the problem. > > The patch didn't change the retrying constraint in requirements.txt [1]; > why isn't this same failure affecting every other patch in nova? > > [1] https://review.openstack.org/#/c/523387/51/requirements.txt at 65 > > -efried > Earlier this cycle I updated the requirements check job to verify that all of the settings are correct any time any changes to the dependency lists are made. 
We used to only look at the line being changed, but that allowed incorrect settings to stay in place for a long time so we weren't actually testing with good settings. We still only run that job when the dependency list is modified in some way. Earlier this week, Matt Thode updated the job to be more strict and to require that all dependencies have a minimum version specified [2]. We did this because some project teams thought that after we dropped the minimums from the global-requirements.txt list they were supposed to (or allowed to) drop them from their project dependency lists, too. My guess is that this dependency in nova never had a lower bound and that this is the first patch to touch the dependency list, so now it's being blocked on the fact that the list has a validation error. I recommend using a separate patch to fix the minimum version of retrying and then rebasing 523387 on top of the new patch. Doug [2] https://review.openstack.org/#/c/574367/ __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From gmann at ghanshyammann.com Mon Jun 18 11:25:02 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 18 Jun 2018 20:25:02 +0900 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1529070040-sup-2028@lrrr.local> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> <1529010884-sup-4343@lrrr.local> <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> <1529070040-sup-2028@lrrr.local> Message-ID: <16412a41d62.f218f57a2621.6054755881129644919@ghanshyammann.com> ---- On Fri, 15 Jun 2018 22:41:41 +0900 Doug Hellmann wrote ---- > Excerpts from Ghanshyam's message of 2018-06-15 09:04:35 +0900: > > > > > > > > ---- On Fri, 15 Jun 2018 06:17:34 +0900 Doug Hellmann wrote ---- > > > Excerpts from Doug Hellmann's message of 2018-06-14 13:02:31 -0400: > > > > Excerpts from Ghanshyam's message of 2018-06-14 16:54:33 +0900: > > > > > > > > > > > Could it be as simple as adding tempest-full-py3 with the > > > > > > > > required-projects list updated to include the current repository? So > > > > > > > > there isn't a special separate job, and we would just reuse > > > > > > > > tempest-full-py3 for this? > > > > > > > > > > This can work if lib-forward-testing is going to run against current lib repo only not cross lib or cross project. For example, if neutron want to tests neutron change against neutron-lib src then this will not work. But from history [1] this does not seems to be scope of lib-forward-testing. > > > > > > > > > > Even we do not need to add current repo to required-projects list or in LIBS_FROM_GIT . That will always from master + current patch changes. So this makes no change in tempest-full-py3 job and we can directly use tempest-full-py3 job in lib-forward-testing. Testing in [2]. 
> > > > > > > > Does it? So if I add tempest-full-py3 to a *library* that library is > > > > installed from source in the job? I know the source for the library > > > > will be checked out, but I'm surprised that devstack would be configured > > > > to use it. How does that work? > > > > > > Based on my testing, that doesn't seem to be the case. I added it to > > > oslo.config and looking at the logs [1] I do not set LIBS_FROM_GIT set > > > to include oslo.config and the check function is returning false so that > > > it is not installed from source [2]. > > > > Yes, It will not be set on LIBS_FROM_GIT as we did not set it explicitly. But gate running on any repo does run job on current change set of that repo which is nothing but "master + current patch changes" . For example, any job running on oslo.config patch will take oslo.config source code from that patch which is "master + current change". You can see the results in this patch - https://review.openstack.org/#/c/575324/ . Where I deleted a module and gate jobs (including tempest-full-py3) fails as they run on current change set of neutron-lib code not on pypi version(which would pass the tests). > > The tempest-full-py3 job passed for that patch, though. Which seems to > indicate that the neutron-lib repository was not used in the test job, > even though it was checked out. oops, I saw the other job failure and overlooked tempest-full-py3 (friday night effect :)). Your point is correct on LIBS_FROM_GIT . -gmann > > > > > In that case, lib's proposed change will be tested against integration tests job to check any regression. If we need to run cross lib/project testing of any lib then, yes we need the 'tempest-full-py3-src' job but that is separate things as you mentioned. > > > > -gmann > > > > > > > > So, I think we need the tempest-full-py3-src job. I will propose an > > > update to the tempest repo to add that. > > > > > > Doug > > > > > > [1] http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz > > > [2] http://logs.openstack.org/64/575164/2/check/tempest-full-py3/9aa50ad/job-output.txt.gz#_2018-06-14_19_40_56_223136 > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From andrea.frittoli at gmail.com Mon Jun 18 11:28:56 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Mon, 18 Jun 2018 13:28:56 +0200 Subject: [openstack-dev] [qa][integrated] Multinode base job Message-ID: Dear all, the Tempest / devstack multinode integration base job in it's Zuulv3 native incarnation is finally working, and the patch [0] to making it voting on Tempest is approved. The name of the new job is "tempest-multinode-full". This effectively replaces the legacy "neutron-tempest-multinode-full". 
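For a project consuming the job directly, the switch is roughly a one-line
change in its Zuul configuration (a sketch only, not the actual
project-config patch):

    # zuul.d/projects.yaml (sketch)
    - project:
        check:
          jobs:
            - tempest-multinode-full:
                voting: false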
Since "neutron-tempest-multinode-full" is used as non-voting by all projects that use the "integrated-gate" job template, I'd propose to: - add "tempest-multinode-full" as non-voting to the integrated gate for master, queens and pike - fix any issue that may show up on queens/pike - remove neutron-tempest-multinode-full legacy job I would also like to make the multinode job voting, at least on devstack, possibly on all integrated gate repos. Please let me know if anyone as any concern with this plan. Andrea -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jun 18 11:38:00 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 18 Jun 2018 20:38:00 +0900 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1529088834-sup-1897@lrrr.local> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> <1529010884-sup-4343@lrrr.local> <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> <1529070040-sup-2028@lrrr.local> <87in6kaw7z.fsf@meyer.lemoncheese.net> <1529088834-sup-1897@lrrr.local> Message-ID: <16412affcf9.d40dd1203330.4903406179594531642@ghanshyammann.com> ---- On Sat, 16 Jun 2018 04:01:45 +0900 Doug Hellmann wrote ---- > Excerpts from corvus's message of 2018-06-15 08:46:40 -0700: > > Doug Hellmann writes: > > > > > Excerpts from Ghanshyam's message of 2018-06-15 09:04:35 +0900: > > > > >> Yes, It will not be set on LIBS_FROM_GIT as we did not set it > > >> explicitly. But gate running on any repo does run job on current > > >> change set of that repo which is nothing but "master + current patch > > >> changes" . For example, any job running on oslo.config patch will > > >> take oslo.config source code from that patch which is "master + > > >> current change". You can see the results in this patch - > > >> https://review.openstack.org/#/c/575324/ . Where I deleted a module > > >> and gate jobs (including tempest-full-py3) fails as they run on > > >> current change set of neutron-lib code not on pypi version(which > > >> would pass the tests). > > > > > > The tempest-full-py3 job passed for that patch, though. Which seems to > > > indicate that the neutron-lib repository was not used in the test job, > > > even though it was checked out. > > > > The automatic generation of LIBS_FROM_GIT only includes projects which > > appear in required-projects. So in this case neutron-lib does not > > appear in LIBS_FROM_GIT[1], so the change is not actually tested by that > > job. Yes, this is now clear to me. I was in impressions of treating the lib same way as service for testing repo always from source but that's not the case. > > > > Doug's approach of adding {{zuul.project}} to LIBS_FROM_GIT would work, > > but anytime LIBS_FROM_GIT is set explicitly, it turns off the automatic > > generation, so more complex jobs (which may want to inherit from that > > job but need multiple libraries) would also have to override > > LIBS_FROM_GIT and add the full set of projects. > > > > The code that automatically sets LIBS_FROM_GIT is fairly simple and > > could be modified to automatically add the project of the change under > > test. We could do that for all jobs, or we could add a flag which > > toggles the behavior. 
The question to answer here is: is there ever a > > case where a devstack job should not install the change under test from > > source? I think the answer is no, and even though under Zuul v2 > > devstack-gate didn't automatically add the project under test to > > LIBS_FROM_GIT, we probably had that behavior anyway due to some JJB > > templating which did. > > Adding the project-under-test to LIBS_FROM_GIT unconditionally feels > like the behavior I would expect from the job. ++ > > > A further thing to consider is what the desired behavior is for a series > > of changes. If a change to neutron-lib depends on a change to > > oslo.messaging, when the forward testing job runs on neutron-lib, should > > it also add oslo.messaging to LIBS_FROM_GIT? That's equally easy to > > implement (but would certainly need a flag as it essentially would add > > everything in the change series to LIBS_FROM_GIT which defeats the > > purpose of the restriction for the jobs which need it), but I honestly > > am not certain what's desired. > > I think going ahead and adding everything in the dependency chain > also makes sense. If I have 2 changes in libraries and a change in > a service and I want to test them all together I would expect to > be able to do that by using Depends-On and then for all 3 to be > installed from source in the job that runs. Yes, I agree on testing all series(either alone repo or depends-on on lib or service) with installed from source. > > > > > For each type of project (service, lib, lib-group (eg oslo.messaging)), > > what do we want to test from git vs pypi? > > We want to test changes to service projects with libraries from > PyPI so that we do not end up with services that rely on unreleased > features of libraries. > > We want to test changes to libraries with some services installed > from git so that we know changes to the library do not break (current) > master of the service. The set of interesting services may vary, > but a default set that represents the tightly coupled services that > run in the integrated gate now is reasonable. > > > How many jobs are needed to accomplish that? > > Ideally 1? Or 2? That's what I'm trying to work out. Currently "tempest-full/-py3" job does not run the 'slow' marked scenario tests and they run in separate job "tempest-scenario-multinode-lvm-multibackend"(which i am working to make it more clean) I think "tempest-full/-py3" will cover the most of the code/lib usage coverage. > > > What should happen with a change series with other > > projects in it? > > I expect all of the patches in a series to be installed from source > somewhere in the chain. That works today if we have a library patch > that depends on a service patch because that patched version of the > service is used in the dsvm job run against the library change. If > we could make the reverse dependency work, too (where a patch to a > service depends on a library change), that would be grand. > > I think your patch https://review.openstack.org/#/c/575801/ at least > lets us go in one direction (library->service) using a single job > definition, but I can't tell if it would work the other way around. 
> > > > > [1] http://logs.openstack.org/24/575324/3/check/tempest-full-py3/d183788/controller/logs/_.localrc_auto.txt > > > > -Jim > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From amotoki at gmail.com Mon Jun 18 12:31:07 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 18 Jun 2018 21:31:07 +0900 Subject: [openstack-dev] [stable][horizon] Adding Ivan Kolodyazhny to horizon-stable-maint In-Reply-To: <20180618020352.GF18927@thor.bakeyournoodle.com> References: <20180618020352.GF18927@thor.bakeyournoodle.com> Message-ID: +1 to add Ivan to the horizon stable maint team. 2018年6月18日(月) 11:04 Tony Breeds : > Hello folks, > Recently Ivan became the Horizon PTL and as with past PTLs (Hi Rob) > isn't a member of the horizon-stable-maint team. Ivan is a member of > the Cinder stable team and as such has demonstrated an understanding of > the stable policy. Since the Dublin PTG Ivan has been doing consistent > high quality reviews on Horizon's stable branches. > > As such I'm suggesting adding him to the existing stable team. > > Without strong objections I'll do that on (my) Monday 25th June. > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shardy at redhat.com Mon Jun 18 12:35:26 2018 From: shardy at redhat.com (Steven Hardy) Date: Mon, 18 Jun 2018 13:35:26 +0100 Subject: [openstack-dev] DeployArtifacts considered...complicated? In-Reply-To: <20180616020630.dkivh6ugwjt4k2tn@redhat.com> References: <20180616020630.dkivh6ugwjt4k2tn@redhat.com> Message-ID: On Sat, Jun 16, 2018 at 3:06 AM, Lars Kellogg-Stedman wrote: > I've been working on a series of patches to enable support for > keystone federation in tripleo. I've been making good use of the > DeployArtifacts support for testing puppet modules...until today. > > I have some patches that teach puppet-keystone about multi-valued > configuration options (like trusted_dashboard). They replace the > keystone_config provider (and corresponding type) with ones that work > with the 'openstackconfig' provider (instead of ini_settings). These > work great when I test them in isolation, but whenever I ran them as > part of an "overcloud deploy" I would get erroneous output. > > After digging through the various layers I found myself looking at > docker-puppet.py [1], which ultimately ends up calling puppet like > this: > > puppet apply ... --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules ... > > It's that --modulepath argument that's the culprit. DeployArtifacts > (when using the upload-puppet-modules script) works by replacing the > symlinks in /etc/puppet/modules with the directories from your upload > directory. Even though the 'keystone' module in /etc/puppet/modules > takes precedence when doing something like 'include ::keystone', *all > the providers and types* in lib/puppet/* in > /usr/share/openstack-puppet/modules will be activated. Is this the same issue Carlos is trying to fix via https://review.openstack.org/#/c/494517/ ? 
I think there was some confusion on that patch around the underlying problem, but I think your explanation here helps, e.g the problem is you can conceivably end up with a mix of old/new modules? Thanks, Steve From shardy at redhat.com Mon Jun 18 12:49:23 2018 From: shardy at redhat.com (Steven Hardy) Date: Mon, 18 Jun 2018 13:49:23 +0100 Subject: [openstack-dev] [tripleo][heat][jinja] resources.RedisVirtualIP: Property error: resources.VipPort.properties.network: Error validating value 'internal_api': Unable to find network with name or id 'internal_api' In-Reply-To: References: Message-ID: On Thu, Jun 14, 2018 at 1:48 PM, Mark Hamzy wrote: > I am trying to delete the Storage, StorageMgmt, Tenant, and Management > networks and trying to deploy using TripleO. > > The following patch > https://hamzy.fedorapeople.org/0001-RedisVipPort-error-internal_api.patchapplied > on top of /usr/share/openstack-tripleo-heat-templates from > openstack-tripleo-heat-templates-8.0.2-14.el7ost.noarch > > yields the following error: > > (undercloud) [stack at oscloud5 ~]$ openstack overcloud deploy --templates -e > ~/templates/node-info.yaml -e ~/templates/overcloud_images.yaml -e > ~/templates/environments/network-environment.yaml -e > ~/templates/environments/network-isolation.yaml -e > ~/templates/environments/config-debug.yaml --ntp-server pool.ntp.org > --control-scale 1 --compute-scale 1 --control-flavor control > --compute-flavor compute 2>&1 | tee output.overcloud.deploy > ... > overcloud.RedisVirtualIP: > resource_type: OS::TripleO::Network::Ports::RedisVipPort > physical_resource_id: > status: CREATE_FAILED > status_reason: | > resources.RedisVirtualIP: Property error: > resources.VipPort.properties.network: Error validating value 'internal_api': > Unable to find network with name or id 'internal_api' > ... > > The following patch seems to fix it: > > 8<-----8<-----8<-----8<-----8<----- > diff --git a/environments/network-isolation.j2.yaml > b/environments/network-isolation.j2.yaml > index 3d4f59b..07cb748 100644 > --- a/environments/network-isolation.j2.yaml > +++ b/environments/network-isolation.j2.yaml > @@ -20,7 +20,13 @@ resource_registry: > {%- for network in networks if network.vip and > network.enabled|default(true) %} > OS::TripleO::Network::Ports::{{network.name}}VipPort: > ../network/ports/{{network.name_lower|default(network.name.lower())}}.yaml > {%- endfor %} > +{%- for role in roles -%} > + {%- if internal_api in role.networks|default([]) and > internal_api.enabled|default(true) %} > OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml > + {%- else %} > + # Avoid weird jinja2 bugs that don't output a newline... > + {%- endif %} > +{%- endfor -%} Does this actually work, or just suppress the error because your network_data.yaml has deleted the internal_api network? >From the diff it looks like you're also deleting the internal_api and external network which won't work with the default ServiceNetMap: https://github.com/openstack/tripleo-heat-templates/blob/master/network/service_net_map.j2.yaml#L27 Can you please provide the network_data.yaml to confirm this? If you really want to delete the internal_api network then you'll need to pass a ServiceNetMap to specify a new bind network for every service (and any other deleted networks where used as a value in the ServiceNetMapDefaults). 
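Such an override would go in an environment file along these lines; only a
few entries are shown and the key names here are placeholders, the full
list of services to rebind has to be taken from ServiceNetMapDefaults in
your templates:

    parameter_defaults:
      ServiceNetMap:
        # rebind services whose default network has been deleted
        RedisNetwork: ctlplane
        MysqlNetwork: ctlplane
        KeystoneAdminApiNetwork: ctlplane
        KeystonePublicApiNetwork: ctlplane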
Thanks, Steve From dtantsur at redhat.com Mon Jun 18 12:51:01 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 18 Jun 2018 14:51:01 +0200 Subject: [openstack-dev] [TripleO] config-download/ansible next steps In-Reply-To: References: Message-ID: <89bcaba1-6eee-30c2-7d6a-a032863a48ba@redhat.com> On 06/13/2018 03:17 PM, James Slagle wrote: > On Wed, Jun 13, 2018 at 6:49 AM, Dmitry Tantsur wrote: >> Slightly hijacking the thread to provide a status update on one of the items >> :) > > Thanks for jumping in. > > >> The immediate plan right now is to wait for metalsmith 0.4.0 to hit the >> repositories, then start experimenting. I need to find a way to >> 1. make creating nova instances no-op >> 2. collect the required information from the created stack (I need networks, >> ports, hostnames, initial SSH keys, capabilities, images) >> 3. update the config-download code to optionally include the role [2] >> I'm not entirely sure where to start, so any hints are welcome. > > Here are a couple of possibilities. > > We could reuse the OS::TripleO::{{role.name}}Server mappings that we > already have in place for pre-provisioned nodes (deployed-server). > This could be mapped to a template that exposes some Ansible tasks as > outputs that drives metalsmith to do the deployment. When > config-download runs, it would execute these ansible tasks to > provision the nodes with Ironic. This has the advantage of maintaining > compatibility with our existing Heat parameter interfaces. It removes > Nova from the deployment so that from the undercloud perspective you'd > roughly have: > > Mistral -> Heat -> config-download -> Ironic (driven via ansible/metalsmith) One thing that came to my mind while planning this work is that I'd prefer all nodes to be processed in one step. This will help avoiding some issues that we have now. For example, the following does not work reliably: compute-0: just any profile:compute compute-1: precise node=abcd control-0: any node This has two issues that will pop up randomly: 1. compute-0 can pick node abcd designated for compute-1 2. control-0 can pick a compute node, failing either compute-0 or compute-1 This problem is hard to fix if all deployment requests are processed separately, but is quite trivial if the decision is done based on the whole deployment plan. I'm going to work on a bulk scheduler like that in metalsmith. > > A further (or completely different) iteration might look like: > > Step 1: Mistral -> Ironic (driven via ansible/metalsmith) > Step 2: Heat -> config-download Step 1 will still use provided environment to figure out the count of nodes for each role, their images, capabilities and (optionally) precise node scheduling? I'm a bit worried about the last bit: IIRC we rely on Heat's %index% variable currently. We can, of course, ask people to replace it with something more explicit on upgrade. > > Step 2 would use the pre-provisioned node (deployed-server) feature > already existing in TripleO and treat the just provisioned by Ironic > nodes, as pre-provisioned from the Heat stack perspective. Step 1 and > Step 2 would also probably be driven by a higher level Mistral > workflow. This has the advantage of minimal impact to > tripleo-heat-templates, and also removes Heat from the baremetal > provisioning step. However, we'd likely need some python compatibility > libraries that could translate Heat parameter values such as > HostnameMap to ansible vars for some basic backwards compatibility. Overall, I like this option better. 
It will allow an operator to isolate the bare metal provisioning step from everything else. > >> >> [1] https://github.com/openstack/metalsmith >> [2] https://metalsmith.readthedocs.io/en/latest/user/ansible.html >> >>> >>> Obviously we have things to consider here such as backwards compatibility >>> and >>> upgrades, but overall, I think this would be a great simplification to our >>> overall deployment workflow. >>> >> >> Yeah, this is tricky. Can we make Heat "forget" about Nova instances? Maybe >> by re-defining them to OS::Heat::None? > > Not exactly, as Heat would delete the previous versions of the > resources. We'd need some special migrations, or could support the > existing method forever for upgrades, and only deprecate it for new > deployments. Do I get it right that if we redefine OS::TripleO::{{role.name}}Server to be OS::Heat::None, Heat will delete the old {{role.name}}Server instances on the next update? This is sad.. I'd prefer not to keep Nova support forever, this is going to be hard to maintain and cover by the CI. Should we extend Heat to support "forgetting" resources? I think it may have a use case outside of TripleO. > > I'd like to help with this work. I'll start by taking a look at what > you've got so far. Feel free to reach out if you'd like some > additional dev assistance or testing. > Thanks! From doug at doughellmann.com Mon Jun 18 13:04:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 18 Jun 2018 09:04:38 -0400 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <16412affcf9.d40dd1203330.4903406179594531642@ghanshyammann.com> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> <1529010884-sup-4343@lrrr.local> <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> <1529070040-sup-2028@lrrr.local> <87in6kaw7z.fsf@meyer.lemoncheese.net> <1529088834-sup-1897@lrrr.local> <16412affcf9.d40dd1203330.4903406179594531642@ghanshyammann.com> Message-ID: <1529326895-sup-5159@lrrr.local> Excerpts from Ghanshyam Mann's message of 2018-06-18 20:38:00 +0900: > ---- On Sat, 16 Jun 2018 04:01:45 +0900 Doug Hellmann wrote ---- > > Excerpts from corvus's message of 2018-06-15 08:46:40 -0700: > > > Doug Hellmann writes: > > > > > > > Excerpts from Ghanshyam's message of 2018-06-15 09:04:35 +0900: > > > > > > >> Yes, It will not be set on LIBS_FROM_GIT as we did not set it > > > >> explicitly. But gate running on any repo does run job on current > > > >> change set of that repo which is nothing but "master + current patch > > > >> changes" . For example, any job running on oslo.config patch will > > > >> take oslo.config source code from that patch which is "master + > > > >> current change". You can see the results in this patch - > > > >> https://review.openstack.org/#/c/575324/ . Where I deleted a module > > > >> and gate jobs (including tempest-full-py3) fails as they run on > > > >> current change set of neutron-lib code not on pypi version(which > > > >> would pass the tests). > > > > > > > > The tempest-full-py3 job passed for that patch, though. Which seems to > > > > indicate that the neutron-lib repository was not used in the test job, > > > > even though it was checked out. 
> > > > > > The automatic generation of LIBS_FROM_GIT only includes projects which > > > appear in required-projects. So in this case neutron-lib does not > > > appear in LIBS_FROM_GIT[1], so the change is not actually tested by that > > > job. > > Yes, this is now clear to me. I was in impressions of treating the lib same way as service for testing repo always from source but that's not the case. > > > > > > > Doug's approach of adding {{zuul.project}} to LIBS_FROM_GIT would work, > > > but anytime LIBS_FROM_GIT is set explicitly, it turns off the automatic > > > generation, so more complex jobs (which may want to inherit from that > > > job but need multiple libraries) would also have to override > > > LIBS_FROM_GIT and add the full set of projects. > > > > > > The code that automatically sets LIBS_FROM_GIT is fairly simple and > > > could be modified to automatically add the project of the change under > > > test. We could do that for all jobs, or we could add a flag which > > > toggles the behavior. The question to answer here is: is there ever a > > > case where a devstack job should not install the change under test from > > > source? I think the answer is no, and even though under Zuul v2 > > > devstack-gate didn't automatically add the project under test to > > > LIBS_FROM_GIT, we probably had that behavior anyway due to some JJB > > > templating which did. > > > > Adding the project-under-test to LIBS_FROM_GIT unconditionally feels > > like the behavior I would expect from the job. > > ++ > > > > > > A further thing to consider is what the desired behavior is for a series > > > of changes. If a change to neutron-lib depends on a change to > > > oslo.messaging, when the forward testing job runs on neutron-lib, should > > > it also add oslo.messaging to LIBS_FROM_GIT? That's equally easy to > > > implement (but would certainly need a flag as it essentially would add > > > everything in the change series to LIBS_FROM_GIT which defeats the > > > purpose of the restriction for the jobs which need it), but I honestly > > > am not certain what's desired. > > > > I think going ahead and adding everything in the dependency chain > > also makes sense. If I have 2 changes in libraries and a change in > > a service and I want to test them all together I would expect to > > be able to do that by using Depends-On and then for all 3 to be > > installed from source in the job that runs. > > Yes, I agree on testing all series(either alone repo or depends-on on lib or service) with installed from source. > > > > > > > > > For each type of project (service, lib, lib-group (eg oslo.messaging)), > > > what do we want to test from git vs pypi? > > > > We want to test changes to service projects with libraries from > > PyPI so that we do not end up with services that rely on unreleased > > features of libraries. > > > > We want to test changes to libraries with some services installed > > from git so that we know changes to the library do not break (current) > > master of the service. The set of interesting services may vary, > > but a default set that represents the tightly coupled services that > > run in the integrated gate now is reasonable. > > > > > How many jobs are needed to accomplish that? > > > > Ideally 1? Or 2? That's what I'm trying to work out. 
> > Currently "tempest-full/-py3" job does not run the 'slow' marked scenario tests and they run in separate job "tempest-scenario-multinode-lvm-multibackend"(which i am working to make it more clean) > > I think "tempest-full/-py3" will cover the most of the code/lib usage coverage. It does seem like enough coverage to start out, and matches what we are doing under python 2. I have proposed https://review.openstack.org/#/c/575925/ to set up a project template using tempest-full-py3. Andreas points out there that we could put the project template in the tempest repo where the job is defined. I don't know which we would prefer, so let me know what you think. I also set up a test patch on oslo.config https://review.openstack.org/#/c/575927/ that depends on the project-template patch above and Jim's devstack patch https://review.openstack.org/#/c/575801/. The lots from the tempest-full-py3 job are at http://logs.openstack.org/27/575927/2/check/tempest-full-py3/16b8922/ > > > > > > What should happen with a change series with other > > > projects in it? > > > > I expect all of the patches in a series to be installed from source > > somewhere in the chain. That works today if we have a library patch > > that depends on a service patch because that patched version of the > > service is used in the dsvm job run against the library change. If > > we could make the reverse dependency work, too (where a patch to a > > service depends on a library change), that would be grand. > > > > I think your patch https://review.openstack.org/#/c/575801/ at least > > lets us go in one direction (library->service) using a single job > > definition, but I can't tell if it would work the other way around. > > > > > > > > [1] http://logs.openstack.org/24/575324/3/check/tempest-full-py3/d183788/controller/logs/_.localrc_auto.txt > > > > > > -Jim > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From e0ne at e0ne.info Mon Jun 18 13:11:00 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 18 Jun 2018 16:11:00 +0300 Subject: [openstack-dev] [horiozn][plugins] Third-party JavaScript libraries and Xstatic Python packages Message-ID: Hi team, As you may know, Horizon uses both Python and JavaScript dependencies as well. All of our JS dependencies are packed into the xstatic-* packages which could be installed like a regular python package. All current xstatic-* packages could be found on the Horizon's deliverables list [1]. We need all of these things to make development and packaging processes easier. Of course, we can't cover all cases and JS libs, so there is a manual on how to create new xstatic package [2]. Historically, all xstatic-* projects were maintained by Horizon Core team. Probably, they were introduced even before we've got plugins implemented but it doesn't matter at this point. Today, when we've got dozens of horizon plugins [3], some of them would like to use new xstatic packages which are not existing at the moment. E.g.: - heat-dashboard uses some JS libs which are nor required by Horizon itself. - vitrage-dashboard team is going to use React in their plugin. We discussed it briefly on the last meeting [4], As Horizon team, we don't want to forbid using some 3rd-party JS lib if it's acceptable by license. 
>From the other side, Horizon Core team can't maintain all xstatic-* packages which could be needed by plugins. That's why we decided to add some folks form heat-dashboard team to xstatic-* core for that projects, which are used only be Heat Dashboard now. IMO, it looks like a reasonable and fair enough solution. I think we're OK if plugin teams will share the responsibility to maintain their xstatic-* dependencies both with Horizon team following current guidelines [2]. Maintaining xstatic-* project means: - create this repo according to the current guidelines - follow community rules according to stable policy - publish a new version of xstatic project if it's required by bugfixes, security fixes, new features - help other teams to integrate this project into their plugin. I hope if we agree on things above it helps both Horizon and plugin teams to deliver new versions faster with a limited teams capacity. [1] https://governance.openstack.org/tc/reference/projects/horizon.html#deliverables [2] https://docs.openstack.org/horizon/latest/contributor/contributing.html#javascript-and-css-libraries-using-xstatic [3] https://docs.openstack.org/horizon/latest/install/plugin-registry.html [4] http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-06-13-15.04.log.html#l-124 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Mon Jun 18 13:11:55 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 18 Jun 2018 16:11:55 +0300 Subject: [openstack-dev] [horizon][plugins] Third-party JavaScript libraries and Xstatic Python packages Message-ID: Fixed horizon tag in the subject :( Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Mon, Jun 18, 2018 at 4:11 PM, Ivan Kolodyazhny wrote: > Hi team, > > As you may know, Horizon uses both Python and JavaScript dependencies as > well. All of our JS dependencies are packed into the xstatic-* packages > which could be installed like a regular python package. All current > xstatic-* packages could be found on the Horizon's deliverables list [1]. > > We need all of these things to make development and packaging processes > easier. > > Of course, we can't cover all cases and JS libs, so there is a manual on > how to create new xstatic package [2]. > > > Historically, all xstatic-* projects were maintained by Horizon Core > team. Probably, they were introduced even before we've got plugins > implemented but it doesn't matter at this point. > > Today, when we've got dozens of horizon plugins [3], some of them would > like to use new xstatic packages which are not existing at the moment. E.g.: > - heat-dashboard uses some JS libs which are nor required by Horizon > itself. > - vitrage-dashboard team is going to use React in their plugin. > > We discussed it briefly on the last meeting [4], As Horizon team, we don't > want to forbid using some 3rd-party JS lib if it's acceptable by license. > From the other side, Horizon Core team can't maintain all xstatic-* > packages which could be needed by plugins. That's why we decided to add > some folks form heat-dashboard team to xstatic-* core for that projects, > which are used only be Heat Dashboard now. IMO, it looks like a reasonable > and fair enough solution. > > I think we're OK if plugin teams will share the responsibility to maintain > their xstatic-* dependencies both with Horizon team following current > guidelines [2]. 
> > Maintaining xstatic-* project means: > - create this repo according to the current guidelines > - follow community rules according to stable policy > - publish a new version of xstatic project if it's required by bugfixes, > security fixes, new features > - help other teams to integrate this project into their plugin. > > > I hope if we agree on things above it helps both Horizon and plugin teams > to deliver new versions faster with a limited teams capacity. > > > > [1] https://governance.openstack.org/tc/reference/projects/horizon.html# > deliverables > [2] https://docs.openstack.org/horizon/latest/contributor/ > contributing.html#javascript-and-css-libraries-using-xstatic > [3] https://docs.openstack.org/horizon/latest/install/plugin-registry.html > [4] http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-06- > 13-15.04.log.html#l-124 > > > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Mon Jun 18 13:16:41 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 18 Jun 2018 16:16:41 +0300 Subject: [openstack-dev] [horizon][plugins] Introduce horizonlib (again) In-Reply-To: <1528904409-sup-6615@lrrr.local> References: <1528904409-sup-6615@lrrr.local> Message-ID: Hi Doug, We discussed this option a bit too. Maybe we need to think about this again. >From my point of view, it would be better to keep current release model for now, because we've got a very small amount of active horizon contributors, so current release model helps us deliver the project in time. From the other side, your option is less complicated and could be easier to implement. Let's get more feedback from the team before further discussion. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Wed, Jun 13, 2018 at 6:43 PM, Doug Hellmann wrote: > Excerpts from Ivan Kolodyazhny's message of 2018-06-13 18:01:26 +0300: > > Hi team, > > > > Last week on the Horizon meeting we discussed [1] possible options for > > Horizon release model to address current issues for plugins maintainers. > > Some background could be found here [2]. > > > > The main issue is that we should have some stable API for plugins and be > > able to release it as needed. We're trying to cover several use cases > with > > this effort. E.g: > > - do not break plugins with Horizon changes (cross-project CI would help > > with some issues here too) > > - provide an easy way to develop plugins which require specific Horizon > > version and features > > > > For now, most of the plugins use 'horizon' package to implement > > dashboard extensions. Some plugins use parts of 'openstack_dashboard' > > package. In such case, it becomes complicated to develop plugins based on > > current master and have CI up and running. > > > > The idea is to introduce something like 'horizonlib' or 'horizon-sdk' > with > > a stable API for plugin development. We're going to collect everything > > needed for this library, so plugins developers could consume only it and > do > > not relate on any internal Horizon things. > > > > We'd got horizonlib in the past. Unfortunately, we missed information > about > > what was good or bad but we'll do our best to succeed in this. > > > > > > If you have any comments or questions, please do not hesitate to drop few > > words into this conversation or ping me in IRC. We're going to collect as > > much feedback as we can before we'll discuss it in details during the > next > > PTG. 
> > > > > > [1] > > http://eavesdrop.openstack.org/meetings/horizon/2018/ > horizon.2018-06-06-15.01.log.html#l-29 > > [2] > > http://lists.openstack.org/pipermail/openstack-dev/2018- > March/128310.html > > > > Regards, > > Ivan Kolodyazhny, > > http://blog.e0ne.info/ > > Another solution that may end up being less work is to release Horizon > using the cycle-with-intermediary model and publish the releases to > PyPI. Those two changes would let you release changes at any point in > the cycle, to support your plugin authors, and would not require > reorganizing the code in Horizon to build a new release artifact. > > The release team would be happy to offer advice about how to make the > changes, if you want to talk about it. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shardy at redhat.com Mon Jun 18 13:43:27 2018 From: shardy at redhat.com (Steven Hardy) Date: Mon, 18 Jun 2018 14:43:27 +0100 Subject: [openstack-dev] [TripleO] config-download/ansible next steps In-Reply-To: <89bcaba1-6eee-30c2-7d6a-a032863a48ba@redhat.com> References: <89bcaba1-6eee-30c2-7d6a-a032863a48ba@redhat.com> Message-ID: On Mon, Jun 18, 2018 at 1:51 PM, Dmitry Tantsur wrote: > On 06/13/2018 03:17 PM, James Slagle wrote: >> >> On Wed, Jun 13, 2018 at 6:49 AM, Dmitry Tantsur >> wrote: >>> >>> Slightly hijacking the thread to provide a status update on one of the >>> items >>> :) >> >> >> Thanks for jumping in. >> >> >>> The immediate plan right now is to wait for metalsmith 0.4.0 to hit the >>> repositories, then start experimenting. I need to find a way to >>> 1. make creating nova instances no-op >>> 2. collect the required information from the created stack (I need >>> networks, >>> ports, hostnames, initial SSH keys, capabilities, images) >>> 3. update the config-download code to optionally include the role [2] >>> I'm not entirely sure where to start, so any hints are welcome. >> >> >> Here are a couple of possibilities. >> >> We could reuse the OS::TripleO::{{role.name}}Server mappings that we >> already have in place for pre-provisioned nodes (deployed-server). >> This could be mapped to a template that exposes some Ansible tasks as >> outputs that drives metalsmith to do the deployment. When >> config-download runs, it would execute these ansible tasks to >> provision the nodes with Ironic. This has the advantage of maintaining >> compatibility with our existing Heat parameter interfaces. It removes >> Nova from the deployment so that from the undercloud perspective you'd >> roughly have: >> >> Mistral -> Heat -> config-download -> Ironic (driven via >> ansible/metalsmith) > > > One thing that came to my mind while planning this work is that I'd prefer > all nodes to be processed in one step. This will help avoiding some issues > that we have now. For example, the following does not work reliably: > > compute-0: just any profile:compute > compute-1: precise node=abcd > control-0: any node > > This has two issues that will pop up randomly: > 1. compute-0 can pick node abcd designated for compute-1 > 2. 
control-0 can pick a compute node, failing either compute-0 or compute-1 > > This problem is hard to fix if all deployment requests are processed > separately, but is quite trivial if the decision is done based on the whole > deployment plan. I'm going to work on a bulk scheduler like that in > metalsmith. > >> >> A further (or completely different) iteration might look like: >> >> Step 1: Mistral -> Ironic (driven via ansible/metalsmith) >> Step 2: Heat -> config-download > > > Step 1 will still use provided environment to figure out the count of nodes > for each role, their images, capabilities and (optionally) precise node > scheduling? > I'm a bit worried about the last bit: IIRC we rely on Heat's %index% > variable currently. We can, of course, ask people to replace it with > something more explicit on upgrade. > >> >> Step 2 would use the pre-provisioned node (deployed-server) feature >> already existing in TripleO and treat the just provisioned by Ironic >> nodes, as pre-provisioned from the Heat stack perspective. Step 1 and >> Step 2 would also probably be driven by a higher level Mistral >> workflow. This has the advantage of minimal impact to >> tripleo-heat-templates, and also removes Heat from the baremetal >> provisioning step. However, we'd likely need some python compatibility >> libraries that could translate Heat parameter values such as >> HostnameMap to ansible vars for some basic backwards compatibility. > > > Overall, I like this option better. It will allow an operator to isolate the > bare metal provisioning step from everything else. > >> >>> >>> [1] https://github.com/openstack/metalsmith >>> [2] https://metalsmith.readthedocs.io/en/latest/user/ansible.html >>> >>>> >>>> Obviously we have things to consider here such as backwards >>>> compatibility >>>> and >>>> upgrades, but overall, I think this would be a great simplification to >>>> our >>>> overall deployment workflow. >>>> >>> >>> Yeah, this is tricky. Can we make Heat "forget" about Nova instances? >>> Maybe >>> by re-defining them to OS::Heat::None? >> >> >> Not exactly, as Heat would delete the previous versions of the >> resources. We'd need some special migrations, or could support the >> existing method forever for upgrades, and only deprecate it for new >> deployments. > > > Do I get it right that if we redefine OS::TripleO::{{role.name}}Server to be > OS::Heat::None, Heat will delete the old {{role.name}}Server instances on > the next update? This is sad.. > > I'd prefer not to keep Nova support forever, this is going to be hard to > maintain and cover by the CI. Should we extend Heat to support "forgetting" > resources? I think it may have a use case outside of TripleO. This is already supported, it's just not the default: https://docs.openstack.org/heat/latest/template_guide/hot_spec.html#resources-section you can used e.g deletion_policy: retain to skip the deletion of the underlying heat-managed resource. Steve From doug at doughellmann.com Mon Jun 18 13:56:31 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 18 Jun 2018 09:56:31 -0400 Subject: [openstack-dev] [tc] Technical Committee status for 18 June Message-ID: <1529330063-sup-3228@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. 
The full list of active items is managed in the wiki:
https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923

== Recent Activity ==

Project updates:

* Add afsmon project under infra, https://review.openstack.org/572527
* Add new roles to OpenStack-Ansible, https://review.openstack.org/572556

Other approved changes:

* document house-rule for chair-proposed typo fixes,
  https://review.openstack.org/572811
* showing PTL and TC email addresses on gov site:
  https://review.openstack.org/#/c/575554/2 and
  https://review.openstack.org/#/c/575797/2
* the principle "We Value Constructive Peer Review":
  https://review.openstack.org/#/c/570940/
* related changes to the project team guide to add review guidelines:
  https://review.openstack.org/#/c/574888/ and
  https://docs.openstack.org/project-team-guide/review-the-openstack-way.html
* The consensus seemed to be that we should go ahead and update past goal
  documents when projects complete them, so I approved the patch to add
  details to the python 3.5 goal for Kolla:
  https://review.openstack.org/#/c/557863

Office hour logs from last week:

* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-12-09.02.html
* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-13-01.00.html
* http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-14-15.01.html

A few folks expressed concern that using the meeting bot to record the
office hours made them more like a meeting. It would be useful to have
some feedback from the community about whether having the logs separate
is helpful, or if linking to the timestamp in the regular channel logs
would be sufficient.

== Ongoing Discussions ==

I filled out the remaining liaison assignments for TC members to be
responsible for reaching out to project teams. Our goal is to check in
with each team between now and the PTG, and record notes in the wiki.
Information about several teams is already available there.

* http://lists.openstack.org/pipermail/openstack-dev/2018-June/131507.html
* https://wiki.openstack.org/wiki/OpenStack_health_tracker

The resolution laying out the Python 2 deprecation timeline and deadline
for supporting Python 3 is approved and I have started writing up a
"python 3 first" goal to be considered for Stein.

* https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html
* https://review.openstack.org/#/c/575933/

Zane's proposal to clarify diversity requirements for new projects is
receiving some discussion.

* https://review.openstack.org/567944

Zane has started a draft of a technical vision for OpenStack.

* https://etherpad.openstack.org/p/tech-vision-2018

== TC member actions/focus/discussions for the coming week(s) ==

Jeremy's proposal to add a "Castellan-compatible key store" to the base
services list seems to have good support but has not yet reached
majority. TC members, please review.

* https://review.openstack.org/572656

Ian's update to the PTI for documentation translation and PDF generation
has some review feedback and needs to be updated.

* https://review.openstack.org/572559

== Contacting the TC ==

The Technical Committee uses a series of weekly "office hour" time slots
for synchronous communication. We hope that by having several such times
scheduled, we will have more opportunities to engage with members of the
community from different timezones.
Office hour times in #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at:https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From alifshit at redhat.com Mon Jun 18 14:16:05 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 18 Jun 2018 10:16:05 -0400 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard Message-ID: Hey all, For Rocky I'm trying to get live migration to work properly for instances that have a NUMA topology [1]. A question that came up on one of patches [2] is how to handle resources claims on the destination, or indeed whether to handle that at all. The previous attempt's approach [3] (call it A) was to use the resource tracker. This is race-free and the "correct" way to do it, but the code is pretty opaque and not easily reviewable, as evidenced by [3] sitting in review purgatory for literally years. A simpler approach (call it B) is to ignore resource claims entirely for now and wait for NUMA in placement to land in order to handle it that way. This is obviously race-prone and not the "correct" way of doing it, but the code would be relatively easy to review. For the longest time, live migration did not keep track of resources (until it started updating placement allocations). The message to operators was essentially "we're giving you this massive hammer, don't break your fingers." Continuing to ignore resource claims for now is just maintaining the status quo. In addition, there is value in improving NUMA live migration *now*, even if the improvement is incomplete because it's missing resource claims. "Best is the enemy of good" and all that. Finally, making use of the resource tracker is just work that we know will get thrown out once we start using placement for NUMA resources. For all those reasons, I would favor approach B, but I wanted to ask the community for their thoughts. Thanks! 
[1] https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration+(status:open+OR+status:merged) [2] https://review.openstack.org/#/c/567242/ [3] https://review.openstack.org/#/c/244489/ From gmann at ghanshyammann.com Mon Jun 18 14:22:13 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 18 Jun 2018 23:22:13 +0900 Subject: [openstack-dev] [qa][python3] advice needed with updating lib-forward-testing jobs In-Reply-To: <1529326895-sup-5159@lrrr.local> References: <1528833992-sup-8052@lrrr.local> <163f821c5ff.b4e8f66b26106.2998204036223302213@ghanshyammann.com> <1528900141-sup-6518@lrrr.local> <1528906598-sup-3505@lrrr.local> <1528923244-sup-2628@lrrr.local> <163fd49f739.10938e48d35712.9143436355419392438@ghanshyammann.com> <1528995441-sup-1746@lrrr.local> <1529010884-sup-4343@lrrr.local> <16400c21218.10d46190117803.5418101651230681301@ghanshyammann.com> <1529070040-sup-2028@lrrr.local> <87in6kaw7z.fsf@meyer.lemoncheese.net> <1529088834-sup-1897@lrrr.local> <16412affcf9.d40dd1203330.4903406179594531642@ghanshyammann.com> <1529326895-sup-5159@lrrr.local> Message-ID: <164134653d2.c0f744fb12368.416031068400747858@ghanshyammann.com> ---- On Mon, 18 Jun 2018 22:04:38 +0900 Doug Hellmann wrote ---- > Excerpts from Ghanshyam Mann's message of 2018-06-18 20:38:00 +0900: > > ---- On Sat, 16 Jun 2018 04:01:45 +0900 Doug Hellmann wrote ---- > > > Excerpts from corvus's message of 2018-06-15 08:46:40 -0700: > > > > Doug Hellmann writes: > > > > > > > > > Excerpts from Ghanshyam's message of 2018-06-15 09:04:35 +0900: > > > > > > > > >> Yes, It will not be set on LIBS_FROM_GIT as we did not set it > > > > >> explicitly. But gate running on any repo does run job on current > > > > >> change set of that repo which is nothing but "master + current patch > > > > >> changes" . For example, any job running on oslo.config patch will > > > > >> take oslo.config source code from that patch which is "master + > > > > >> current change". You can see the results in this patch - > > > > >> https://review.openstack.org/#/c/575324/ . Where I deleted a module > > > > >> and gate jobs (including tempest-full-py3) fails as they run on > > > > >> current change set of neutron-lib code not on pypi version(which > > > > >> would pass the tests). > > > > > > > > > > The tempest-full-py3 job passed for that patch, though. Which seems to > > > > > indicate that the neutron-lib repository was not used in the test job, > > > > > even though it was checked out. > > > > > > > > The automatic generation of LIBS_FROM_GIT only includes projects which > > > > appear in required-projects. So in this case neutron-lib does not > > > > appear in LIBS_FROM_GIT[1], so the change is not actually tested by that > > > > job. > > > > Yes, this is now clear to me. I was in impressions of treating the lib same way as service for testing repo always from source but that's not the case. > > > > > > > > > > Doug's approach of adding {{zuul.project}} to LIBS_FROM_GIT would work, > > > > but anytime LIBS_FROM_GIT is set explicitly, it turns off the automatic > > > > generation, so more complex jobs (which may want to inherit from that > > > > job but need multiple libraries) would also have to override > > > > LIBS_FROM_GIT and add the full set of projects. > > > > > > > > The code that automatically sets LIBS_FROM_GIT is fairly simple and > > > > could be modified to automatically add the project of the change under > > > > test. We could do that for all jobs, or we could add a flag which > > > > toggles the behavior. 
The question to answer here is: is there ever a > > > > case where a devstack job should not install the change under test from > > > > source? I think the answer is no, and even though under Zuul v2 > > > > devstack-gate didn't automatically add the project under test to > > > > LIBS_FROM_GIT, we probably had that behavior anyway due to some JJB > > > > templating which did. > > > > > > Adding the project-under-test to LIBS_FROM_GIT unconditionally feels > > > like the behavior I would expect from the job. > > > > ++ > > > > > > > > > A further thing to consider is what the desired behavior is for a series > > > > of changes. If a change to neutron-lib depends on a change to > > > > oslo.messaging, when the forward testing job runs on neutron-lib, should > > > > it also add oslo.messaging to LIBS_FROM_GIT? That's equally easy to > > > > implement (but would certainly need a flag as it essentially would add > > > > everything in the change series to LIBS_FROM_GIT which defeats the > > > > purpose of the restriction for the jobs which need it), but I honestly > > > > am not certain what's desired. > > > > > > I think going ahead and adding everything in the dependency chain > > > also makes sense. If I have 2 changes in libraries and a change in > > > a service and I want to test them all together I would expect to > > > be able to do that by using Depends-On and then for all 3 to be > > > installed from source in the job that runs. > > > > Yes, I agree on testing all series(either alone repo or depends-on on lib or service) with installed from source. > > > > > > > > > > > > > For each type of project (service, lib, lib-group (eg oslo.messaging)), > > > > what do we want to test from git vs pypi? > > > > > > We want to test changes to service projects with libraries from > > > PyPI so that we do not end up with services that rely on unreleased > > > features of libraries. > > > > > > We want to test changes to libraries with some services installed > > > from git so that we know changes to the library do not break (current) > > > master of the service. The set of interesting services may vary, > > > but a default set that represents the tightly coupled services that > > > run in the integrated gate now is reasonable. > > > > > > > How many jobs are needed to accomplish that? > > > > > > Ideally 1? Or 2? That's what I'm trying to work out. > > > > Currently "tempest-full/-py3" job does not run the 'slow' marked scenario tests and they run in separate job "tempest-scenario-multinode-lvm-multibackend"(which i am working to make it more clean) > > > > I think "tempest-full/-py3" will cover the most of the code/lib usage coverage. > > It does seem like enough coverage to start out, and matches what > we are doing under python 2. > > I have proposed https://review.openstack.org/#/c/575925/ to set up a > project template using tempest-full-py3. Andreas points out there that > we could put the project template in the tempest repo where the job is > defined. I don't know which we would prefer, so let me know what you > think. > > I also set up a test patch on oslo.config > https://review.openstack.org/#/c/575927/ that depends on the > project-template patch above and Jim's devstack patch > https://review.openstack.org/#/c/575801/. The lots from the > tempest-full-py3 job are at > http://logs.openstack.org/27/575927/2/check/tempest-full-py3/16b8922/ Thanks. lgtm. I feel we can keep the template in 'openstack-zuul-jobs' like we have 'integrated-gate' template[1]. 
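For illustration, the stanza itself should be tiny wherever it ends up
living. A rough sketch (the template name below is only a placeholder,
not an agreed name) could look like:

  - project-template:
      name: lib-forward-testing-python3
      check:
        jobs:
          - tempest-full-py3
      gate:
        jobs:
          - tempest-full-py3

Consuming projects would then only need to add that template name to the
templates list in their own project stanza.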
[1] https://github.com/openstack-infra/openstack-zuul-jobs/blob/22cf73ae5a3090a91eb3c81cc4c427b546b0254e/zuul.d/zuul-legacy-project-templates.yaml#L57 -gmann > > > > > > > > > > What should happen with a change series with other > > > > projects in it? > > > > > > I expect all of the patches in a series to be installed from source > > > somewhere in the chain. That works today if we have a library patch > > > that depends on a service patch because that patched version of the > > > service is used in the dsvm job run against the library change. If > > > we could make the reverse dependency work, too (where a patch to a > > > service depends on a library change), that would be grand. > > > > > > I think your patch https://review.openstack.org/#/c/575801/ at least > > > lets us go in one direction (library->service) using a single job > > > definition, but I can't tell if it would work the other way around. > > > > > > > > > > > [1] http://logs.openstack.org/24/575324/3/check/tempest-full-py3/d183788/controller/logs/_.localrc_auto.txt > > > > > > > > -Jim > > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Mon Jun 18 14:28:40 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 18 Jun 2018 07:28:40 -0700 Subject: [openstack-dev] [tripleo] Newton branch is End-Of-Life Message-ID: After many postpones, we finally went ahead and closed Newton branch for TripleO repositories A last tag was created and from now we won't accept backports in this branch. RPMs in RDO will be updated with this last tag. If there is any question or concern, please let us know. PS: Thanks to the stable-maint/release-managers who helped in that process. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Mon Jun 18 14:30:02 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 18 Jun 2018 09:30:02 -0500 Subject: [openstack-dev] Reminder: UC Meeting Today 1800UTC Message-ID: Hey everyone, Please see https://wiki.openstack.org/wiki/Governance/Foundation/ UserCommittee for UC meeting info and add additional agenda items if needed. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Mon Jun 18 15:10:34 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 18 Jun 2018 17:10:34 +0200 Subject: [openstack-dev] [nova]Notification update week 25 Message-ID: <1529334634.13962.0@smtp.office365.com> Hi, Here is the latest notification subteam update. Bugs ---- No update on bugs and we have no new bugs tagged with notifications. 
Features
--------

Sending full traceback in versioned notifications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
We are really close to merge https://review.openstack.org/#/c/564092/ but
some nits still need to be addressed.

Add notification support for trusted_certs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The notification impact of the trusted_certs bp has been merged. \o/

Introduce Pending VM state
~~~~~~~~~~~~~~~~~~~~~~~~~~
The spec https://review.openstack.org/#/c/554212 still does not exactly
define what will be in the select_destination notification payload.

Add the user id and project id of the user who initiated the instance action to the notification
--------------------------------------------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Good progress on the implementation https://review.openstack.org/#/c/536243

No progress:
~~~~~~~~~~~~
* Versioned notification transformation
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* Introduce instance.lock and instance.unlock notifications
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances

Blocked:
~~~~~~~~
* Add versioned notifications for removing a member from a server group
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications

Weekly meeting
--------------
The next meeting will be held on 19th of June on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180619T170000

Cheers,
gibi

From lars at redhat.com  Mon Jun 18 15:13:59 2018
From: lars at redhat.com (Lars Kellogg-Stedman)
Date: Mon, 18 Jun 2018 11:13:59 -0400
Subject: [openstack-dev] Puppet debugging help?
Message-ID: <20180618151359.bfpwu2h6w7pnqqma at redhat.com>

Hey folks,

I'm trying to patch puppet-keystone to support multi-valued
configuration options (like trusted_dashboard). I have a patch that
works, mostly, but I've run into a frustrating problem (frustrating
because it would seem to be orthogonal to my patches, which affect the
keystone_config provider and type).

During the initial deploy, running tripleo::profile::base::keystone
fails with:

  "Error: Could not set 'present' on ensure: undefined method `new'
  for nil:NilClass at
  /etc/puppet/modules/tripleo/manifests/profile/base/keystone.pp:274",

The line in question is:

  70: if $step == 3 and $manage_domain {
  71:   if hiera('heat_engine_enabled', false) {
  72:     # create these seperate and don't use ::heat::keystone::domain since
  73:     # that class writes out the configs
  74:     keystone_domain { $heat_admin_domain:
            ensure  => 'present',
            enabled => true
          }

The thing is, despite the error...it creates the keystone domain
*anyway*, and a subsequent run of the module will complete without any
errors.

I'm not entirely sure what the error is telling me, since *none* of
the puppet types or providers have a "new" method as far as I can see.
Any pointers you can offer would be appreciated.

Thanks!
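(In case it helps with reproducing: the failing resource can be exercised
on its own, outside of the full profile, with something along the lines of

  puppet apply --debug --trace -e \
    'keystone_domain { "heat_stack": ensure => "present", enabled => true }'

substituting whatever $heat_admin_domain resolves to for "heat_stack", and
assuming the keystone admin credentials are available to the provider.
That is roughly what step 3 of the profile boils down to.)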
-- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From sean.mcginnis at gmx.com Mon Jun 18 15:14:53 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 18 Jun 2018 10:14:53 -0500 Subject: [openstack-dev] [stable][horizon] Adding Ivan Kolodyazhny to horizon-stable-maint In-Reply-To: <20180618020352.GF18927@thor.bakeyournoodle.com> References: <20180618020352.GF18927@thor.bakeyournoodle.com> Message-ID: <20180618151453.GA18400@sm-workstation> On Mon, Jun 18, 2018 at 12:03:52PM +1000, Tony Breeds wrote: > Hello folks, > Recently Ivan became the Horizon PTL and as with past PTLs (Hi Rob) > isn't a member of the horizon-stable-maint team. Ivan is a member of > the Cinder stable team and as such has demonstrated an understanding of > the stable policy. Since the Dublin PTG Ivan has been doing consistent > high quality reviews on Horizon's stable branches. > > As such I'm suggesting adding him to the existing stable team. > > Without strong objections I'll do that on (my) Monday 25th June. > > Yours Tony. Speaking with both stable and Cinder hats on, Ivan has been doing good stable reviews and I do not have any concerns about him being in any *-stable-maint groups. Sean From alee at redhat.com Mon Jun 18 15:27:24 2018 From: alee at redhat.com (Ade Lee) Date: Mon, 18 Jun 2018 11:27:24 -0400 Subject: [openstack-dev] [barbican] NEW weekly meeting time In-Reply-To: References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> <1518792130.19501.1.camel@redhat.com> <1520280969.25743.54.camel@redhat.com> <1524239859.2972.74.camel@redhat.com> <1529008217.7441.68.camel@redhat.com> Message-ID: <1529335644.5564.2.camel@redhat.com> Based on popular demand, the new meeting time is now active. We will meet at Tuesday 12:00 UTC starting this week. redrobot and Dave will chair the next two meetings as I'm on vacation. Ade On Sat, 2018-06-16 at 11:11 +0300, Juan Antonio Osorio wrote: > +1 I dig > > On Fri, 15 Jun 2018, 17:41 Dave McCowan (dmccowan), om> wrote: > > +1 > > This is a great time. > > > > On 6/14/18, 4:30 PM, "Ade Lee" wrote: > > > > >The new time slot has been pretty difficult for folks to attend. > > >I'd like to propose a new time slot, which will hopefully be more > > >amenable to everyone. > > > > > >Tuesday 12:00 UTC > > > > > >https://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min= > > 00&se > > >c=0 > > > > > >This works out to 8 am EST, around 1pm in Europe, and 8 pm in > > China. > > >Please vote by responding to this email. 
> > > > > >Thanks, > > >Ade > > > > > >__________________________________________________________________ > > ________ > > >OpenStack Development Mailing List (not for usage questions) > > >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:uns > > ubscribe > > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > ___________________________________________________________________ > > _______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > bscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From moreira.belmiro.email.lists at gmail.com Mon Jun 18 15:30:09 2018 From: moreira.belmiro.email.lists at gmail.com (Belmiro Moreira) Date: Mon, 18 Jun 2018 17:30:09 +0200 Subject: [openstack-dev] [nova] increasing the number of allowed volumes attached per instance > 26 In-Reply-To: References: <4bf3536e-0e3b-0fc4-2894-fabd32ef23dc@gmail.com> Message-ID: Hi, this looks reasonable to me but I would prefer B. In this case the operator can configure the hard limit. I don't think we more granularity or expose it using the API. Belmiro On Fri, Jun 8, 2018 at 3:46 PM Dan Smith wrote: > > Some ideas that have been discussed so far include: > > FYI, these are already in my order of preference. > > > A) Selecting a new, higher maximum that still yields reasonable > > performance on a single compute host (64 or 128, for example). Pros: > > helps prevent the potential for poor performance on a compute host > > from attaching too many volumes. Cons: doesn't let anyone opt-in to a > > higher maximum if their environment can handle it. > > I prefer this because I think it can be done per virt driver, for > whatever actually makes sense there. If powervm can handle 500 volumes > in a meaningful way on one instance, then that's cool. I think libvirt's > limit should likely be 64ish. > > > B) Creating a config option to let operators choose how many volumes > > allowed to attach to a single instance. Pros: lets operators opt-in to > > a maximum that works in their environment. Cons: it's not discoverable > > for those calling the API. > > This is a fine compromise, IMHO, as it lets operators tune it per > compute node based on the virt driver and the hardware. If one compute > is using nothing but iSCSI over a single 10g link, then they may need to > clamp that down to something more sane. > > Like the per virt driver restriction above, it's not discoverable via > the API, but if it varies based on compute node and other factors in a > single deployment, then making it discoverable isn't going to be very > easy anyway. > > > C) Create a configurable API limit for maximum number of volumes to > > attach to a single instance that is either a quota or similar to a > > quota. Pros: lets operators opt-in to a maximum that works in their > > environment. Cons: it's yet another quota? > > Do we have any other quota limits that are per-instance like this would > be? If not, then this would likely be weird, but if so, then this would > also be an option, IMHO. 
However, it's too much work for what is really > not a hugely important problem, IMHO, and both of the above are > lighter-weight ways to solve this and move on. > > --Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Jun 18 15:31:08 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 18 Jun 2018 11:31:08 -0400 Subject: [openstack-dev] Puppet debugging help? In-Reply-To: <20180618151359.bfpwu2h6w7pnqqma@redhat.com> References: <20180618151359.bfpwu2h6w7pnqqma@redhat.com> Message-ID: Hey Lars, Do you have a full job that's running which shows those issues? Thanks, Mohammed On Mon, Jun 18, 2018 at 11:13 AM, Lars Kellogg-Stedman wrote: > Hey folks, > > I'm trying to patch puppet-keystone to support multi-valued > configuration options (like trusted_dashboard). I have a patch that > works, mostly, but I've run into a frustrating problem (frustrating > because it would seem to be orthogonal to my patches, which affect the > keystone_config provider and type). > > During the initial deploy, running tripleo::profile::base::keystone > fails with: > > "Error: Could not set 'present' on ensure: undefined method `new' > for nil:NilClass at > /etc/puppet/modules/tripleo/manifests/profile/base/keystone.pp:274", > > The line in question is: > > 70: if $step == 3 and $manage_domain { > 71: if hiera('heat_engine_enabled', false) { > 72: # create these seperate and don't use ::heat::keystone::domain since > 73: # that class writes out the configs > 74: keystone_domain { $heat_admin_domain: > ensure => 'present', > enabled => true > } > > The thing is, despite the error...it creates the keystone domain > *anyway*, and a subsequent run of the module will complete without any > errors. > > I'm not entirely sure that the error is telling me, since *none* of > the puppet types or providers have a "new" method as far as I can see. > Any pointers you can offer would be appreciated. > > Thanks! > > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From james.page at canonical.com Mon Jun 18 15:45:31 2018 From: james.page at canonical.com (James Page) Date: Mon, 18 Jun 2018 16:45:31 +0100 Subject: [openstack-dev] [sig][upgrade] Upgrade SIG IRC meeting 1600 UTC Tuesday Message-ID: Hi All Just a quick reminder that the Upgrade SIG IRC meeting will be held at 1600 UTC tomorrow (Tuesday) in #openstack-meeting-4. If you're interested in helping improve the OpenStack upgrade experience be sure to attend! See [0] for previous meeting minutes and our standing agenda. Regards James [0] https://etherpad.openstack.org/p/upgrades-sig-meeting -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.friesen at windriver.com Mon Jun 18 15:49:03 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 18 Jun 2018 09:49:03 -0600 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: Message-ID: <5B27D46F.10804@windriver.com> On 06/18/2018 08:16 AM, Artom Lifshitz wrote: > Hey all, > > For Rocky I'm trying to get live migration to work properly for > instances that have a NUMA topology [1]. > > A question that came up on one of patches [2] is how to handle > resources claims on the destination, or indeed whether to handle that > at all. I think getting the live migration to work at all is better than having it stay broken, so even without resource claiming on the destination it's an improvement over the status quo and I think it'd be a desirable change. However, *not* doing resource claiming means that until the migration is complete and the regular resource audit runs on the destination (which could be a minute later by default) you could end up having other instances try to use the same resources, causing various operations to fail. I think we'd want to have a very clear notice in the release notes about the limitations if we go this route. I'm a little bit worried that waiting for support in placement will result in "fully-functional" live migration with dedicated resources being punted out indefinitely. One of the reasons why the spec[1] called for using the existing resource tracker was that we don't expect placement to be functional for all NUMA-related stuff for a while yet. For what it's worth, I think the previous patch languished for a number of reasons other than the complexity of the code...the original author left, the coding style was a bit odd, there was an attempt to make it work even if the source was an earlier version, etc. I think a fresh implementation would be less complicated to review. Given the above, my personal preference would be to merge it even without claims, but then try to get the claims support merged as well. (Adding claims support later on wouldn't change any on-the-wire messaging, it would just make things work more robustly.) Chris [1] https://github.com/openstack/nova-specs/blob/master/specs/rocky/approved/numa-aware-live-migration.rst From aschultz at redhat.com Mon Jun 18 15:59:09 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 18 Jun 2018 09:59:09 -0600 Subject: [openstack-dev] Puppet debugging help? In-Reply-To: <20180618151359.bfpwu2h6w7pnqqma@redhat.com> References: <20180618151359.bfpwu2h6w7pnqqma@redhat.com> Message-ID: On Mon, Jun 18, 2018 at 9:13 AM, Lars Kellogg-Stedman wrote: > Hey folks, > > I'm trying to patch puppet-keystone to support multi-valued > configuration options (like trusted_dashboard). I have a patch that > works, mostly, but I've run into a frustrating problem (frustrating > because it would seem to be orthogonal to my patches, which affect the > keystone_config provider and type). > > During the initial deploy, running tripleo::profile::base::keystone > fails with: > > "Error: Could not set 'present' on ensure: undefined method `new' > for nil:NilClass at > /etc/puppet/modules/tripleo/manifests/profile/base/keystone.pp:274", > It's likely erroring in the keystone_domain provider. 
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_domain/openstack.rb#L115-L122 or https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_domain/openstack.rb#L155-L161 Providers are notoriously bad at their error messaging. Usually this error happens when we get a null back from the underlying command and we're still trying to do something. This could point to a misconfiguration of keystone if it's not getting anything back. > The line in question is: > > 70: if $step == 3 and $manage_domain { > 71: if hiera('heat_engine_enabled', false) { > 72: # create these seperate and don't use ::heat::keystone::domain since > 73: # that class writes out the configs > 74: keystone_domain { $heat_admin_domain: > ensure => 'present', > enabled => true > } > > The thing is, despite the error...it creates the keystone domain > *anyway*, and a subsequent run of the module will complete without any > errors. > > I'm not entirely sure that the error is telling me, since *none* of > the puppet types or providers have a "new" method as far as I can see. > Any pointers you can offer would be appreciated. > > Thanks! > > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lars at redhat.com Mon Jun 18 16:14:54 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 18 Jun 2018 12:14:54 -0400 Subject: [openstack-dev] Puppet debugging help? In-Reply-To: References: <20180618151359.bfpwu2h6w7pnqqma@redhat.com> Message-ID: <20180618161454.irvziysxsd5h2ew3@redhat.com> On Mon, Jun 18, 2018 at 11:31:08AM -0400, Mohammed Naser wrote: > Hey Lars, > > Do you have a full job that's running which shows those issues? I don't. I have a local environment where I'm doing my testing. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From dtantsur at redhat.com Mon Jun 18 16:30:19 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 18 Jun 2018 18:30:19 +0200 Subject: [openstack-dev] [sdk] announcing first release of rust-openstack (+ call for contributors) Message-ID: Hi all, I'd like to announce my hobby project that I've been working on for some time. rust-openstack [1], as the name suggests, is an SDK for OpenStack written in Rust! I released version 0.1.0 last week, and now the project is ready for early testers and contributors. Currently only a small subset of Compute, Networking and Image API is implemented, as well as password authentication against Identity v3. If you're interested in the Rust language, this may be your chance :) I have written a short contributor's guide [2] to help understanding the code structure. Special thanks to the OpenLab project for providing the CI for the project. 
Cheers, Dmitry [1] https://docs.rs/openstack/latest/openstack/ [2] https://github.com/dtantsur/rust-openstack/blob/master/CONTRIBUTING.md From dklyle0 at gmail.com Mon Jun 18 16:30:52 2018 From: dklyle0 at gmail.com (David Lyle) Date: Mon, 18 Jun 2018 10:30:52 -0600 Subject: [openstack-dev] [stable][horizon] Adding Ivan Kolodyazhny to horizon-stable-maint In-Reply-To: <20180618151453.GA18400@sm-workstation> References: <20180618020352.GF18927@thor.bakeyournoodle.com> <20180618151453.GA18400@sm-workstation> Message-ID: +1 On Mon, Jun 18, 2018 at 9:15 AM Sean McGinnis wrote: > > On Mon, Jun 18, 2018 at 12:03:52PM +1000, Tony Breeds wrote: > > Hello folks, > > Recently Ivan became the Horizon PTL and as with past PTLs (Hi Rob) > > isn't a member of the horizon-stable-maint team. Ivan is a member of > > the Cinder stable team and as such has demonstrated an understanding of > > the stable policy. Since the Dublin PTG Ivan has been doing consistent > > high quality reviews on Horizon's stable branches. > > > > As such I'm suggesting adding him to the existing stable team. > > > > Without strong objections I'll do that on (my) Monday 25th June. > > > > Yours Tony. > > Speaking with both stable and Cinder hats on, Ivan has been doing good stable > reviews and I do not have any concerns about him being in any *-stable-maint > groups. > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Mon Jun 18 16:58:07 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 18 Jun 2018 12:58:07 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <2d6b64ac-ca85-d947-136f-78b288e35ab6@redhat.com> References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> <38313d98-14e0-205f-e432-afb24eaffc50@redhat.com> <2d6b64ac-ca85-d947-136f-78b288e35ab6@redhat.com> Message-ID: <5d4cbade-43a2-da47-bd2c-36e157fc757c@redhat.com> Replying to myself one more time... On 12/06/18 17:35, Zane Bitter wrote: > On 11/06/18 18:49, Zane Bitter wrote: >> It's had a week to percolate (and I've seen quite a few people viewing >> the etherpad), so here is the review: >> >> https://review.openstack.org/574479 > > In response to comments, I moved the change to the Project Team Guide > instead of the Contributor Guide (since the latter is aimed only at new > contributors, but this is for everyone). The new review is here: > > https://review.openstack.org/574888 > > The first review is still up, but it's now just adding links from the > Contributor Guide to this new doc. This is now live: https://docs.openstack.org/project-team-guide/review-the-openstack-way.html Thanks to everyone who contributed/reviewed/commented. Let's also remember to make this a living document, so we all keep learning from each other :) cheers, Zane. 
From doug at doughellmann.com Mon Jun 18 17:13:10 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 18 Jun 2018 13:13:10 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <5d4cbade-43a2-da47-bd2c-36e157fc757c@redhat.com> References: <92c5bb71-9e7b-454a-fcc7-95c5862ac0e8@redhat.com> <38313d98-14e0-205f-e432-afb24eaffc50@redhat.com> <2d6b64ac-ca85-d947-136f-78b288e35ab6@redhat.com> <5d4cbade-43a2-da47-bd2c-36e157fc757c@redhat.com> Message-ID: <1529341948-sup-7346@lrrr.local> Excerpts from Zane Bitter's message of 2018-06-18 12:58:07 -0400: > Replying to myself one more time... > > On 12/06/18 17:35, Zane Bitter wrote: > > On 11/06/18 18:49, Zane Bitter wrote: > >> It's had a week to percolate (and I've seen quite a few people viewing > >> the etherpad), so here is the review: > >> > >> https://review.openstack.org/574479 > > > > In response to comments, I moved the change to the Project Team Guide > > instead of the Contributor Guide (since the latter is aimed only at new > > contributors, but this is for everyone). The new review is here: > > > > https://review.openstack.org/574888 > > > > The first review is still up, but it's now just adding links from the > > Contributor Guide to this new doc. > > This is now live: > > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > > Thanks to everyone who contributed/reviewed/commented. Let's also > remember to make this a living document, so we all keep learning from > each other :) > > cheers, > Zane. > +1 Nice work on this initiative, Julia and Zane. Doug From Greg.Waines at windriver.com Mon Jun 18 17:23:27 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Mon, 18 Jun 2018 17:23:27 +0000 Subject: [openstack-dev] [barbican] default devstack barbican secret store ? and big picture question ? Message-ID: Hey ... a couple of NEWBY question for the Barbican Team. I just setup a devstack with Barbican @ stable/queens . Ran through the “Verify operation” commands ( https://docs.openstack.org/barbican/latest/install/verify.html ) ... Everything worked. 
stack at barbican:~/devstack$ openstack secret list stack at barbican:~/devstack$ openstack secret store --name mysecret --payload j4=]d21 +---------------+--------------------------------------------------------------------------------+ | Field | Value | +---------------+--------------------------------------------------------------------------------+ | Secret href | http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 | | Name | mysecret | | Created | None | | Status | None | | Content types | None | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+--------------------------------------------------------------------------------+ stack at barbican:~/devstack$ stack at barbican:~/devstack$ stack at barbican:~/devstack$ openstack secret list +--------------------------------------------------------------------------------+----------+---------------------------+--------+-----------------------------+-----------+------------+-------------+------+------------+ | Secret href | Name | Created | Status | Content types | Algorithm | Bit length | Secret type | Mode | Expiration | +--------------------------------------------------------------------------------+----------+---------------------------+--------+-----------------------------+-----------+------------+-------------+------+------------+ | http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 | mysecret | 2018-06-18T14:47:45+00:00 | ACTIVE | {u'default': u'text/plain'} | aes | 256 | opaque | cbc | None | +--------------------------------------------------------------------------------+----------+---------------------------+--------+-----------------------------+-----------+------------+-------------+------+------------+ stack at barbican:~/devstack$ openstack secret get http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 +---------------+--------------------------------------------------------------------------------+ | Field | Value | +---------------+--------------------------------------------------------------------------------+ | Secret href | http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 | | Name | mysecret | | Created | 2018-06-18T14:47:45+00:00 | | Status | ACTIVE | | Content types | {u'default': u'text/plain'} | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+--------------------------------------------------------------------------------+ stack at barbican:~/devstack$ openstack secret get http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 --payload +---------+---------+ | Field | Value | +---------+---------+ | Payload | j4=]d21 | +---------+---------+ stack at barbican:~/devstack$ QUESTIONS: · In this basic devstack setup, what is being used as the secret store ? o E.g. 
/etc/barbican/barbican.conf for devstack is simply stack at barbican:~/devstack$ more /etc/barbican/barbican.conf [DEFAULT] transport_url = rabbit://stackrabbit:admin at 10.10.10.17:5672 db_auto_create = False sql_connection = mysql+pymysql://root:admin at 127.0.0.1/barbican?charset=utf8 logging_exception_prefix = %(color)s%(asctime)s.%(msecs)03d TRACE %(name)s %(instance)s logging_debug_format_suffix = from (pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d logging_default_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [%(request_id)s %(project_name)s %(user_name)s%(color)s] %(instance)s%(color)s%(message)s use_stderr = True log_file = /opt/stack/logs/barbican.log host_href = http://10.10.10.17/key-manager debug = True [keystone_authtoken] memcached_servers = localhost:11211 signing_dir = /var/cache/barbican cafile = /opt/stack/data/ca-bundle.pem project_domain_name = Default project_name = service user_domain_name = Default password = admin username = barbican auth_url = http://10.10.10.17/identity auth_type = password [keystone_notifications] enable = True stack at barbican:~/devstack$ * What is the basic strategy here wrt Barbican providing secure secret storage ? e.g. * Secrets are stored encrypted in some secret store ? * Again, for default devstack, what is that secret store ? (assuming it is NOT the DB being used for general openstack services’ tables) * i.e. assuming it is separate DB or file or directory of files * What key is used for encryption ? ... * The UUID of the Barbican ‘secret’ object in the Barbican openstack DB Table is the ‘external reference’ for the secret ? * ? and this ‘secret’ object has the internal reference for the secret in the secret store ? * ADMIN privileges are required to access the Barbican ‘secret’ objects ? * Soooo ... the secrets are stored in encrypted format and can only be referenced / retrieved in plain text with ADMIN privileges * Is this the basis of the strategy ? Thanks in advance, Greg. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Jun 18 17:39:33 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 18 Jun 2018 13:39:33 -0400 Subject: [openstack-dev] [Openstack-operators] [openstack-operators][heat][oslo.db] Configure maximum number of db connections In-Reply-To: References: Message-ID: +openstack-dev since I believe this is an issue with the Heat source code. On 06/18/2018 11:19 AM, Spyros Trigazis wrote: > Hello list, > > I'm hitting quite easily this [1] exception with heat. The db server is > configured to have 1000 > max_connnections and 1000 max_user_connections and in the database > section of heat > conf I have these values set: > max_pool_size = 22 > max_overflow = 0 > Full config attached. > > I ended up with this configuration based on this formula: > num_heat_hosts=4 > heat_api_workers=2 > heat_api_cfn_workers=2 > num_engine_workers=4 > max_pool_size=22 > max_overflow=0 > num_heat_hosts * (max_pool_size + max_overflow) * (heat_api_workers + > num_engine_workers + heat_api_cfn_workers) > 704 > > What I have noticed is that the number of connections I expected with > the above formula is not respected. > Based on this formula each node (every node runs the heat-api, > heat-api-cfn and heat-engine) should > use up to 176 connections but they even reach 400 connections. 
> > Has anyone noticed a similar behavior? Looking through the Heat code, I see that there are many methods in the /heat/db/sqlalchemy/api.py module that use a SQLAlchemy session but never actually call session.close() [1] which means that the session will not be released back to the connection pool, which might be the reason why connections keep piling up. Not sure if there's any setting in Heat that will fix this problem. Disabling connection pooling will likely not help since connections are not properly being closed and returned to the connection pool to begin with. Best, -jay [1] Heat apparently doesn't use the oslo.db enginefacade transaction context managers either, which would help with this problem since the transaction context manager would take responsibility for calling session.flush()/close() appropriately. https://github.com/openstack/oslo.db/blob/43af1cf08372006aa46d836ec45482dd4b5b5349/oslo_db/sqlalchemy/enginefacade.py#L626 From alifshit at redhat.com Mon Jun 18 17:51:25 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 18 Jun 2018 13:51:25 -0400 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: <5B27D46F.10804@windriver.com> References: <5B27D46F.10804@windriver.com> Message-ID: > For what it's worth, I think the previous patch languished for a number of > reasons other than the complexity of the code...the original author left, > the coding style was a bit odd, there was an attempt to make it work even if > the source was an earlier version, etc. I think a fresh implementation > would be less complicated to review. I'm afraid of unknowns in the resource tracker and claims mechanism. For snips and giggles, I submitted a quick patch that attempts to use a claim [1] when live migrating instances. Assuming it somehow passes CI, I have no idea if I've just opened rabbit hole of people telling me "oh but you need to do this other thing in this other place." How knows the claims code well, anyways? [1] https://review.openstack.org/576222 From kennelson11 at gmail.com Mon Jun 18 18:54:00 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 18 Jun 2018 11:54:00 -0700 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages In-Reply-To: <20180618004837.GE18927@thor.bakeyournoodle.com> References: <20180615150050.hhz777oa35junk5c@yuggoth.org> <20180615152336.fles6tu7xerw6x2r@gentoo.org> <1529089443-sup-6239@lrrr.local> <20180618004837.GE18927@thor.bakeyournoodle.com> Message-ID: On Sun, Jun 17, 2018 at 5:49 PM Tony Breeds wrote: > On Fri, Jun 15, 2018 at 03:05:51PM -0400, Doug Hellmann wrote: > > Excerpts from Jean-Philippe Evrard's message of 2018-06-15 17:37:02 > +0200: > > > > Not sure it'd help but one option we do is to create aliases based on > > > > the title. Though since the PTLs don't have addresses on the > openstack > > > > domain an alias may not make as much sense, it'd have to be a full > > > > account forward. It's useful for centralized spam filtering. > > > > > > I foresee this: > > > 1) We create an alias to PTL email > > > 2) PTL think that kind of emails are worth sharing with a team to > balance work > > > 3) We now have a project mailing list > > > 4) People stop using openstack-dev lists. > > > > > > But that's maybe me... > > > > > > > Yeah, setting all of that up feels like it would just be something > > else we would have to remember to do every time we have an election. 
> > I'm trying to reduce the number those kinds of tasks we have, so > > let's not add a new one. > > While I'm not sure that JP's scenario would eventuate I am against > adding the aliases and adding additional work for the election > officials. It's not that this would be terribly hard to automate it > just seems like duplication of data/effort whereas the change under > review is pretty straight forward. > I'm 100% in agreement with Doug and Tony about not giving more work to election officials. Speaking as an election official, its already tedious and one more thing to keep track of (even if its automated) is not something I would wish on future election officials. -Kendall (diablo_rojo) > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jun 18 19:31:00 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 18 Jun 2018 19:31:00 +0000 Subject: [openstack-dev] [tc] [ptl] PTL E-mail addresses on rendered team pages In-Reply-To: References: <20180615150050.hhz777oa35junk5c@yuggoth.org> <20180615152336.fles6tu7xerw6x2r@gentoo.org> <1529089443-sup-6239@lrrr.local> <20180618004837.GE18927@thor.bakeyournoodle.com> Message-ID: <20180618193100.jegee4vaj4expueb@yuggoth.org> On 2018-06-18 11:54:00 -0700 (-0700), Kendall Nelson wrote: [...] > I'm 100% in agreement with Doug and Tony about not giving more work to > election officials. Speaking as an election official, its already tedious > and one more thing to keep track of (even if its automated) is not > something I would wish on future election officials. Yep, TC member hat off and election official hat on for a moment, the proposed changes (there's a similar one for showing the TC member E-mail addresses) just involve us copying over one piece of additional information we already have in the election data so is really no increased burden in that regard. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From myoung at redhat.com Mon Jun 18 20:32:54 2018 From: myoung at redhat.com (Matt Young) Date: Mon, 18 Jun 2018 16:32:54 -0400 Subject: [openstack-dev] [tripleo] Sprint 15 Planning Summary (CI, Tempest squads) Message-ID: Greetings, The TripleO CI & Tempest squads have begun work on Sprint 15. Like most of our sprints these are three weeks long and are planned on a Thursday or Friday (depending on squad) and end with a retrospective on Wednesdays. Sprint 15 runs from 2018-06-14 to 2018-07-03 More information regarding our process is available in the tripleo-specs repository [1]. Ongoing meeting notes and other detail are always available in the Squad Etherpads [2,3]. # Ruck / Rover: * weshay + quiquell * https://review.rdoproject.org/etherpad/p/ruckrover-sprint15 # CI Squad Epic: https://trello.com/c/bQuQ9aWF/802-sprint-15-ci-goals Tasks: http://ow.ly/3kDB30kypjH This sprint the CI squad is transitioning to a new topic! We are spending the first of at least three sprints on the topic of migrating tripleo CI to Zuul v3. We’ve split the team into two “tiger teams” [4] for the first portion of the sprint. 
One is focused on issues of networking and multinode support, the other looking at generating job configuration parent jobs, from which other jobs will derive. This work will inform a number of activities in this sprint and sprints to come. This includes topics such as refactoring of TOCI gate scripts (bash → ansible/python), full native ansible tooling, and putting into place jobs to address python3, RHEL8 (future centos), RDO on RHEL, and other topics enabled by support for an RH internal deployment of Software Factory. We will also be monitoring efforts by upstream openstack-infra to transition jobs @ review.rdoproject.org to zuul v3, on standby to assist as needed. # Tempest Squad Epic: https://trello.com/c/6QKG0HkU/801-sprint-15-python-tempestconf Tasks: http://ow.ly/XnB530kyppQ The Tempest Squad this sprint is operating with UA (chandankumar) on PTO for most of the sprint. We are focused on 3 main topics: * Closing out the remaining python-tempestconf refactor tasks for core OpenStack services * Create a presentation covering tempest plugin creation * Targeted tasks addressing RefStack certification for rhos 11,12,13 For any questions please find us in #tripleo Thanks, Matt [1] https://github.com/openstack/tripleo-specs/blob/master/specs/policy/ci-team-structure.rst [2] https://etherpad.openstack.org/p/tripleo-ci-squad-meeting [3] https://etherpad.openstack.org/p/tripleo-tempest-squad-meeting [4] https://en.wikipedia.org/wiki/Tiger_team From tobias.urdin at crystone.com Mon Jun 18 22:08:07 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Mon, 18 Jun 2018 22:08:07 +0000 Subject: [openstack-dev] [cloudkitty] configuration, deployment or packaging issue? Message-ID: <1529359689223.12471@crystone.com> Hello CloudKitty team, I'm having an issue with this review not going through and being stuck after staring at it for a while now [1]. Is there any configuration[2] issue that are causing the error[3]? Or is the package broken? Thanks for helping out! Best regards [1] https://review.openstack.org/#/c/569641/ [2] http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/etc/cloudkitty/ [3] http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/cloudkitty/processor.txt.gz -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Jun 18 22:33:50 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 18 Jun 2018 16:33:50 -0600 Subject: [openstack-dev] [cloudkitty] configuration, deployment or packaging issue? In-Reply-To: <1529359689223.12471@crystone.com> References: <1529359689223.12471@crystone.com> Message-ID: On Mon, Jun 18, 2018 at 4:08 PM, Tobias Urdin wrote: > Hello CloudKitty team, > > > I'm having an issue with this review not going through and being stuck after > staring at it for a while now [1]. > > Is there any configuration[2] issue that are causing the error[3]? Or is the > package broken? > Likely due to https://review.openstack.org/#/c/538256/ which appears to change the metrics.yaml format. It doesn't look backwards compatible so the puppet module probably needs updating. > > Thanks for helping out! 
> > Best regards > > > [1] https://review.openstack.org/#/c/569641/ > > [2] > http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/etc/cloudkitty/ > > [3] > http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/cloudkitty/processor.txt.gz > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From linghucongsong at 163.com Tue Jun 19 02:13:47 2018 From: linghucongsong at 163.com (linghucongsong) Date: Tue, 19 Jun 2018 10:13:47 +0800 (CST) Subject: [openstack-dev] [tricircle] Zuul v3 integration status In-Reply-To: <922b0570-988e-98d2-56db-615d388de1f6@gmail.com> References: <922b0570-988e-98d2-56db-615d388de1f6@gmail.com> Message-ID: Hi Boden! Thanks for report this bug. we will talk about this bug in our meeting this week wednesday 9:00 beijing time. if you have time i would like you join it in the openstack-meeting channel. At 2018-06-15 21:56:29, "Boden Russell" wrote: >Is there anyone who can speak to the status of tricircle's adoption of >Zuul v3? > >As per [1] it doesn't seem like the project is setup properly for Zuul >v3. Thus, it's difficult/impossible to land patches like [2] that >require neutron/master + a depends on patch. > >Assuming tricircle is still being maintained, IMO we need to find a way >to get it up to speed with zuul v3; otherwise some of our neutron >efforts will be held up, or tricircle will fall behind with respect to >neutron-lib adoption. > >Thanks > > >[1] https://bugs.launchpad.net/tricircle/+bug/1776922 >[2] https://review.openstack.org/#/c/565879/ > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Jun 19 07:46:49 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 19 Jun 2018 16:46:49 +0900 Subject: [openstack-dev] [nova] nova API meeting schedule In-Reply-To: References: <163ecbee776.119fa707b82923.1660286070948100787@ghanshyammann.com> Message-ID: <1641702b081.dc3d090430172.8443034120693950361@ghanshyammann.com> Thanks for response. We will go with options 2 which is to cancel the meeting and continue on nova channel. We will start API office hour on every Wednesday (27th June onwards) 06:00 UTC on #openstack-nova channel. I pushed the patch to free the current slot of API meeting[1] and will update the API meeting wiki page for new timing and channel info. [1] https://review.openstack.org/#/c/576398/ -gmann ---- On Mon, 11 Jun 2018 21:40:09 +0900 Chris Dent wrote ---- > On Mon, 11 Jun 2018, Ghanshyam wrote: > > > 2. If no member from USA/Europe TZ then, myself and Alex will > > conduct the API meeting as office hour on Nova channel during our > > day time (something between UTC+1 to UTC + 9). There is not much > > activity on Nova channel during our TZ so it will be ok to use > > Nova channel. In this case, we will release the current occupied > > meeting channel. > > I think this is the better option since it works well for the people > who are already actively interested. 
If that situation changes, you > can always do something different. And if you do some kind of > summary of anything important at the meeting (whenever the time) > then people who can't attend can be in the loop. > > I was trying to attend the API meeting for a while (back when it was > happening) but had to cut it out as its impossible to pay attention > to everything and something had to give. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tobias.urdin at crystone.com Tue Jun 19 08:02:57 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Tue, 19 Jun 2018 08:02:57 +0000 Subject: [openstack-dev] [cloudkitty] configuration, deployment or packaging issue? References: <1529359689223.12471@crystone.com> Message-ID: <19c686430f564274a8aa3b74a9b721dc@mb01.staff.ognet.se> Hello, Thanks Alex, I should probably improve my search-fu. Is that commit in the RPM packages then I assume, so we need to ship a metrics.yaml (which is kind of opinionated unless CloudKitty supplies a default one) and set the fetcher in the config file. Perhaps somebody can confirm the above. Best regards On 06/19/2018 12:35 AM, Alex Schultz wrote: > On Mon, Jun 18, 2018 at 4:08 PM, Tobias Urdin wrote: >> Hello CloudKitty team, >> >> >> I'm having an issue with this review not going through and being stuck after >> staring at it for a while now [1]. >> >> Is there any configuration[2] issue that are causing the error[3]? Or is the >> package broken? >> > Likely due to https://review.openstack.org/#/c/538256/ which appears > to change the metrics.yaml format. It doesn't look backwards > compatible so the puppet module probably needs updating. > >> Thanks for helping out! >> >> Best regards >> >> >> [1] https://review.openstack.org/#/c/569641/ >> >> [2] >> http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/etc/cloudkitty/ >> >> [3] >> http://logs.openstack.org/41/569641/1/check/puppet-openstack-beaker-centos-7/ee4742c/logs/cloudkitty/processor.txt.gz >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From strigazi at gmail.com Tue Jun 19 09:17:58 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 19 Jun 2018 11:17:58 +0200 Subject: [openstack-dev] [Openstack-operators] [openstack-operators][heat][oslo.db][magnum] Configure maximum number of db connections In-Reply-To: References: Message-ID: Hello lists, With heat's team help I figured it out. Thanks Jay for looking into it. The issue is coming from [1], where the max_overflow is set to executor_thread_pool_size if it is set to a lower value to address another issue. In my case, I had a lot of RAM and CPU so I could push for threads but I was "short" in db connections. 
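In pseudo-code, the logic at [1] amounts to roughly this (a paraphrase of
the behaviour described above, not the literal Heat source):

    from oslo_config import cfg

    CONF = cfg.CONF

    # Paraphrase only: the engine bumps SQLAlchemy's max_overflow up to the
    # RPC executor thread pool size, so that every worker thread can hold a
    # DB connection. A smaller max_overflow in heat.conf gets overridden.
    if (CONF.database.max_overflow is None or
            CONF.database.max_overflow < CONF.executor_thread_pool_size):
        CONF.set_override('max_overflow', CONF.executor_thread_pool_size,
                          group='database')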
The formula to calculate the number of connections can be like this: num_heat_hosts=4 heat_api_workers=2 heat_api_cfn_workers=2 num_engine_workers=4 executor_thread_pool_size = 22 max_pool_size=4 max_overflow=executor_thread_pool_size num_heat_hosts * (max_pool_size + max_overflow) * (heat_api_workers + num_engine_workers + heat_api_cfn_workers) 832 And a note for magnum deployments medium to large, see the options we have changed in heat conf and change according to your needs. The db configuration described here and changes we discovered in a previous scale test can help to have a stable magnum and heat service. For large stacks or projects with many stacks you need to change the following in these values or better, according to your needs. [Default] executor_thread_pool_size = 22 max_resources_per_stack = -1 max_stacks_per_tenant = 10000 action_retry_limit = 10 client_retry_limit = 10 engine_life_check_timeout = 600 max_template_size = 5242880 rpc_poll_timeout = 600 rpc_response_timeout = 600 num_engine_workers = 4 [database] max_pool_size = 4 max_overflow = 22 Cheers, Spyros [heat_api] workers = 2 [heat_api_cfn] workers = 2 Cheers, Spyros ps We will update the magnum docs as well [1] http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/service.py#n375 On Mon, 18 Jun 2018 at 19:39, Jay Pipes wrote: > +openstack-dev since I believe this is an issue with the Heat source code. > > On 06/18/2018 11:19 AM, Spyros Trigazis wrote: > > Hello list, > > > > I'm hitting quite easily this [1] exception with heat. The db server is > > configured to have 1000 > > max_connnections and 1000 max_user_connections and in the database > > section of heat > > conf I have these values set: > > max_pool_size = 22 > > max_overflow = 0 > > Full config attached. > > > > I ended up with this configuration based on this formula: > > num_heat_hosts=4 > > heat_api_workers=2 > > heat_api_cfn_workers=2 > > num_engine_workers=4 > > max_pool_size=22 > > max_overflow=0 > > num_heat_hosts * (max_pool_size + max_overflow) * (heat_api_workers + > > num_engine_workers + heat_api_cfn_workers) > > 704 > > > > What I have noticed is that the number of connections I expected with > > the above formula is not respected. > > Based on this formula each node (every node runs the heat-api, > > heat-api-cfn and heat-engine) should > > use up to 176 connections but they even reach 400 connections. > > > > Has anyone noticed a similar behavior? > > Looking through the Heat code, I see that there are many methods in the > /heat/db/sqlalchemy/api.py module that use a SQLAlchemy session but > never actually call session.close() [1] which means that the session > will not be released back to the connection pool, which might be the > reason why connections keep piling up. > > Not sure if there's any setting in Heat that will fix this problem. > Disabling connection pooling will likely not help since connections are > not properly being closed and returned to the connection pool to begin > with. > > Best, > -jay > > [1] Heat apparently doesn't use the oslo.db enginefacade transaction > context managers either, which would help with this problem since the > transaction context manager would take responsibility for calling > session.flush()/close() appropriately. 
> > > https://github.com/openstack/oslo.db/blob/43af1cf08372006aa46d836ec45482dd4b5b5349/oslo_db/sqlalchemy/enginefacade.py#L626 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Tue Jun 19 09:27:51 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 19 Jun 2018 16:27:51 +0700 Subject: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team Message-ID: Hi, I’d like to promote Vitalii Solodilov to the core team of Mistral. In my opinion, Vitalii is a very talented engineer  who has been demonstrating it by providing very high quality code and reviews in the last 6-7 months. He’s one of the people who doesn’t hesitate taking responsibility for solving challenging technical tasks. It’s been a great pleasure to work with Vitalii and I hope can will keep up doing great job. Core members, please vote. Vitalii’s statistics: http://stackalytics.com/?module=mistral-group&metric=marks&user_id=mcdoker18 Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From allprog at gmail.com Tue Jun 19 09:33:47 2018 From: allprog at gmail.com (=?UTF-8?B?QW5kcsOhcyBLw7Z2aQ==?=) Date: Tue, 19 Jun 2018 11:33:47 +0200 Subject: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team In-Reply-To: References: Message-ID: +1 well deserved! Renat Akhmerov ezt írta (időpont: 2018. jún. 19., K, 11:28): > Hi, > > I’d like to promote Vitalii Solodilov to the core team of Mistral. In my > opinion, Vitalii is a very talented engineer who has been demonstrating it > by providing very high quality code and reviews in the last 6-7 months. > He’s one of the people who doesn’t hesitate taking responsibility for > solving challenging technical tasks. It’s been a great pleasure to work > with Vitalii and I hope can will keep up doing great job. > > Core members, please vote. > > Vitalii’s statistics: > http://stackalytics.com/?module=mistral-group&metric=marks&user_id=mcdoker18 > > Thanks > > Renat Akhmerov > @Nokia > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougal at redhat.com Tue Jun 19 09:47:19 2018 From: dougal at redhat.com (Dougal Matthews) Date: Tue, 19 Jun 2018 10:47:19 +0100 Subject: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team In-Reply-To: References: Message-ID: On 19 June 2018 at 10:27, Renat Akhmerov wrote: > Hi, > > I’d like to promote Vitalii Solodilov to the core team of Mistral. In my > opinion, Vitalii is a very talented engineer who has been demonstrating it > by providing very high quality code and reviews in the last 6-7 months. > He’s one of the people who doesn’t hesitate taking responsibility for > solving challenging technical tasks. It’s been a great pleasure to work > with Vitalii and I hope can will keep up doing great job. > > Core members, please vote. > +1 from me. 
Vitalii has been one of the most active reviewers and code contributors through Queens and Rocky. Vitalii’s statistics: http://stackalytics.com/?module= > mistral-group&metric=marks&user_id=mcdoker18 > > Thanks > > Renat Akhmerov > @Nokia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shardy at redhat.com Tue Jun 19 10:14:26 2018 From: shardy at redhat.com (Steven Hardy) Date: Tue, 19 Jun 2018 11:14:26 +0100 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: On Wed, Jun 13, 2018 at 4:50 PM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration in > TripleO and Puppet modules, always here to update with new features, fix > (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > > Please vote -1/+1, +1 From skaplons at redhat.com Tue Jun 19 12:25:20 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 19 Jun 2018 14:25:20 +0200 Subject: [openstack-dev] [neutron] bug deputy report Message-ID: <497BCF6B-A288-4372-803E-5D38EE3921D3@redhat.com> Hi, Last week I was on bug deputy and I basically forgot about it. So I went through bugs from last week yesterday. 
Below is summary of those bugs: Neutron-vpnaas bug: * libreswan ipsec driver doesn't work with libreswan versions 3.23+ - https://bugs.launchpad.net/neutron/+bug/1776840 CI related bugs: * Critical bug for stable/queens https://bugs.launchpad.net/neutron/+bug/1777190 - should be fixed already, * TestHAL3Agent.test_ha_router_restart_agents_no_packet_lost fullstack fails - https://bugs.launchpad.net/neutron/+bug/1776459 - I am checking logs for that, there are some patches related to it proposed but for now I don’t know exactly why this happens, * neutron-rally job failing for stable/pike and stable/ocata - https://bugs.launchpad.net/neutron/+bug/1777506 - I am debugging why it happens like that, DVR related bugs: * DVR: Self recover from the loss of 'fg' ports in FIP Namespace - https://bugs.launchpad.net/neutron/+bug/1776984 - Swami is already working on it, * DVR: FloatingIP create throws an error if the L3 agent is not running in the given host - https://bugs.launchpad.net/neutron/+bug/1776566 - Swami is already working on this one too, DB related issues: * Database connection was found disconnected; reconnecting: DBConnectionError - https://bugs.launchpad.net/neutron/+bug/1776896 - bug marker as Incomplete but IMO it should be closed as it doesn’t look like Neutron issue, QoS issues: * Inaccurate L3 QoS bandwidth - https://bugs.launchpad.net/neutron/+bug/1777598 - reported today, fix already proposed Docs bugs: * [Doc] [FWaaS] Configuration of FWaaS v1 is confused - https://bugs.launchpad.net/neutron/+bug/1777547 - already in progress, Already fixed issues reported this week: * neutron-netns-cleanup explodes when trying to delete an OVS internal port - https://bugs.launchpad.net/neutron/+bug/1776469 * neutron-netns-cleanup does not configure privsep correctly - https://bugs.launchpad.net/neutron/+bug/1776468, DVR scheduling checks wrong port binding profile for host in live-migration - https://bugs.launchpad.net/neutron/+bug/1776255 New RFE bugs: * support vlan transparent in neutron network - https://bugs.launchpad.net/neutron/+bug/1777585 — Slawek Kaplonski Senior software engineer Red Hat From ellorent at redhat.com Tue Jun 19 13:32:48 2018 From: ellorent at redhat.com (Felix Enrique Llorente Pastora) Date: Tue, 19 Jun 2018 15:32:48 +0200 Subject: [openstack-dev] [tripleo] CI is down stop workflowing Message-ID: Hi, We have the following bugs with fixes that need to land to unblock check/gate jobs: https://bugs.launchpad.net/tripleo/+bug/1777451 https://bugs.launchpad.net/tripleo/+bug/1777616 You can check them out at #tripleo ooolpbot. Please stop workflowing temporally until they get merged. BR. -- Quique Llorente Openstack TripleO CI -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Tue Jun 19 14:29:40 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Tue, 19 Jun 2018 10:29:40 -0400 Subject: [openstack-dev] DeployArtifacts considered...complicated? In-Reply-To: Message-ID: <20180619142940.mnhp3k5of6iynhwp@redhat.com> On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote: > Is this the same issue Carlos is trying to fix via > https://review.openstack.org/#/c/494517/ ? That solves part of the problem, but it's not a complete solution. 
In particular, it doesn't solve the problem that bit me: if you're changing puppet providers (e.g., replacing provider/keystone_config/ini_setting.rb with provider/keystone_config/openstackconfig.rb), you still have the old provider sitting around causing problems because unpacking a tarball only *adds* files. > Yeah I think we've never seen this because normally the > /etc/puppet/modules tarball overwrites the symlink, effectively giving > you a new tree (the first time round at least). But it doesn't, and that's the unexpected problem: if you replace the /etc/puppet/modules/keystone symlink with a directory, then /usr/share/openstack-puppet/modules/keystone is still there, and while the manifests won't be used, the contents of the lib/ directory will still be active. > Probably we could add something to the script to enable a forced > cleanup each update: > > https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9 We could: (a) unpack the replacement puppet modules into a temporary location, then (b) for each module; rm -rf the target directory and then copy it into place But! This would require deploy_artifacts.sh to know that it was unpacking puppet modules rather than a generic tarball. > This would have to be optional, so we could add something like a > DeployArtifactsCleanupDirs parameter perhaps? If we went with the above, sure. > One more thought which just occurred to me - we could add support for > a git checkout/pull to the script? Reiterating our conversion in #tripleo, I think rather than adding a bunch of specific functionality to the DeployArtifacts feature, it would make more sense to add the ability to include some sort of user-defined pre/post tasks, either as shell scripts or as ansible playbooks or something. On the other hand, I like your suggestion of just ditching DeployArtifacts for a new composable service that defines host_prep_tasks (or re-implenting DeployArtifacts as a composable service), so I'm going to look at that as a possible alternative to what I'm currently doing. -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From aschultz at redhat.com Tue Jun 19 14:30:21 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 19 Jun 2018 08:30:21 -0600 Subject: [openstack-dev] [tripleo] Proposing Alan Bishop tripleo core on storage bits In-Reply-To: References: Message-ID: On Wed, Jun 13, 2018 at 9:50 AM, Emilien Macchi wrote: > Alan Bishop has been highly involved in the Storage backends integration in > TripleO and Puppet modules, always here to update with new features, fix > (nasty and untestable third-party backends) bugs and manage all the > backports for stable releases: > https://review.openstack.org/#/q/owner:%22Alan+Bishop+%253Cabishop%2540redhat.com%253E%22 > > He's also well knowledgeable of how TripleO works and how containers are > integrated, I would like to propose him as core on TripleO projects for > patches related to storage things (Cinder, Glance, Swift, Manila, and > backends). > Since there are no objections, I have added Alan to the cores list. Thanks, -Alex > Please vote -1/+1, > Thanks! 
> -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jistr at redhat.com Tue Jun 19 15:17:36 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Tue, 19 Jun 2018 17:17:36 +0200 Subject: [openstack-dev] DeployArtifacts considered...complicated? In-Reply-To: <20180619142940.mnhp3k5of6iynhwp@redhat.com> References: <20180619142940.mnhp3k5of6iynhwp@redhat.com> Message-ID: <5469805b-bd93-9508-f4a6-fb91a22d4961@redhat.com> On 19.6.2018 16:29, Lars Kellogg-Stedman wrote: > On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote: >> Is this the same issue Carlos is trying to fix via >> https://review.openstack.org/#/c/494517/ ? > > That solves part of the problem, but it's not a complete solution. > In particular, it doesn't solve the problem that bit me: if you're > changing puppet providers (e.g., replacing > provider/keystone_config/ini_setting.rb with > provider/keystone_config/openstackconfig.rb), you still have the old > provider sitting around causing problems because unpacking a tarball > only *adds* files. > >> Yeah I think we've never seen this because normally the >> /etc/puppet/modules tarball overwrites the symlink, effectively giving >> you a new tree (the first time round at least). > > But it doesn't, and that's the unexpected problem: if you replace the > /etc/puppet/modules/keystone symlink with a directory, then > /usr/share/openstack-puppet/modules/keystone is still there, and while > the manifests won't be used, the contents of the lib/ directory will > still be active. > >> Probably we could add something to the script to enable a forced >> cleanup each update: >> >> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9 > > We could: > > (a) unpack the replacement puppet modules into a temporary location, > then > > (b) for each module; rm -rf the target directory and then copy it into > place > > But! This would require deploy_artifacts.sh to know that it was > unpacking puppet modules rather than a generic tarball. > >> This would have to be optional, so we could add something like a >> DeployArtifactsCleanupDirs parameter perhaps? > > If we went with the above, sure. > >> One more thought which just occurred to me - we could add support for >> a git checkout/pull to the script? > > Reiterating our conversion in #tripleo, I think rather than adding a > bunch of specific functionality to the DeployArtifacts feature, it > would make more sense to add the ability to include some sort of > user-defined pre/post tasks, either as shell scripts or as ansible > playbooks or something. > > On the other hand, I like your suggestion of just ditching > DeployArtifacts for a new composable service that defines > host_prep_tasks (or re-implenting DeployArtifacts as a composable > service), so I'm going to look at that as a possible alternative to > what I'm currently doing. > For the puppet modules specifically, we might also add another directory+mount into the docker-puppet container, which would be blank by default (unlike the existing, already populated /etc/puppet and /usr/share/openstack-puppet/modules). And we'd put that directory at the very start of modulepath. 
Then i *think* puppet would use a particular module from that dir *only*, not merge the contents with the rest of modulepath, so stale files in /etc/... or /usr/share/... wouldn't matter (didn't test it though). That should get us around the "tgz only adds files" problem without any rm -rf. The above is somewhat of an orthogonal suggestion to the composable service approach, they would work well alongside i think. (And +1 on "DeployArtifacts as composable service" as something worth investigating / implementing.) Jirka From aschultz at redhat.com Tue Jun 19 16:12:54 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 19 Jun 2018 10:12:54 -0600 Subject: [openstack-dev] DeployArtifacts considered...complicated? In-Reply-To: <5469805b-bd93-9508-f4a6-fb91a22d4961@redhat.com> References: <20180619142940.mnhp3k5of6iynhwp@redhat.com> <5469805b-bd93-9508-f4a6-fb91a22d4961@redhat.com> Message-ID: On Tue, Jun 19, 2018 at 9:17 AM, Jiří Stránský wrote: > On 19.6.2018 16:29, Lars Kellogg-Stedman wrote: >> >> On Tue, Jun 19, 2018 at 02:18:38PM +0100, Steven Hardy wrote: >>> >>> Is this the same issue Carlos is trying to fix via >>> https://review.openstack.org/#/c/494517/ ? >> >> >> That solves part of the problem, but it's not a complete solution. >> In particular, it doesn't solve the problem that bit me: if you're >> changing puppet providers (e.g., replacing >> provider/keystone_config/ini_setting.rb with >> provider/keystone_config/openstackconfig.rb), you still have the old >> provider sitting around causing problems because unpacking a tarball >> only *adds* files. >> >>> Yeah I think we've never seen this because normally the >>> /etc/puppet/modules tarball overwrites the symlink, effectively giving >>> you a new tree (the first time round at least). >> >> >> But it doesn't, and that's the unexpected problem: if you replace the >> /etc/puppet/modules/keystone symlink with a directory, then >> /usr/share/openstack-puppet/modules/keystone is still there, and while >> the manifests won't be used, the contents of the lib/ directory will >> still be active. >> >>> Probably we could add something to the script to enable a forced >>> cleanup each update: >>> >>> >>> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/deploy-artifacts.sh#L9 >> >> >> We could: >> >> (a) unpack the replacement puppet modules into a temporary location, >> then >> >> (b) for each module; rm -rf the target directory and then copy it into >> place >> >> But! This would require deploy_artifacts.sh to know that it was >> unpacking puppet modules rather than a generic tarball. >> >>> This would have to be optional, so we could add something like a >>> DeployArtifactsCleanupDirs parameter perhaps? >> >> >> If we went with the above, sure. >> >>> One more thought which just occurred to me - we could add support for >>> a git checkout/pull to the script? >> >> >> Reiterating our conversion in #tripleo, I think rather than adding a >> bunch of specific functionality to the DeployArtifacts feature, it >> would make more sense to add the ability to include some sort of >> user-defined pre/post tasks, either as shell scripts or as ansible >> playbooks or something. >> >> On the other hand, I like your suggestion of just ditching >> DeployArtifacts for a new composable service that defines >> host_prep_tasks (or re-implenting DeployArtifacts as a composable >> service), so I'm going to look at that as a possible alternative to >> what I'm currently doing. 
>> > > For the puppet modules specifically, we might also add another > directory+mount into the docker-puppet container, which would be blank by > default (unlike the existing, already populated /etc/puppet and > /usr/share/openstack-puppet/modules). And we'd put that directory at the > very start of modulepath. Then i *think* puppet would use a particular > module from that dir *only*, not merge the contents with the rest of > modulepath, so stale files in /etc/... or /usr/share/... wouldn't matter > (didn't test it though). That should get us around the "tgz only adds files" > problem without any rm -rf. > So the described problem is only a problem with puppet facts and providers as they all get loaded from the entire module path. Normal puppet classes are less conflict-y because it takes the first it finds and stops. > The above is somewhat of an orthogonal suggestion to the composable service > approach, they would work well alongside i think. (And +1 on > "DeployArtifacts as composable service" as something worth investigating / > implementing.) > -1 to more services. We take a Heat time penalty for each new composable service we add and in this case I don't think this should be a service itself. I think for this case, it would be better suited as a host prep task than a defined service. Providing a way for users to define external host prep tasks might make more sense. Thanks, -Alex > Jirka > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zijian1012 at 163.com Tue Jun 19 16:15:30 2018 From: zijian1012 at 163.com (=?UTF-8?B?5p2O5YGl?=) Date: Wed, 20 Jun 2018 00:15:30 +0800 (GMT+08:00) Subject: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client Message-ID: <6d0834b0.6872.16418d4679f.Coremail.zijian1012@163.com> Hello everyone --------------- CentOS Linux release 7.3.1611 OpenStack Version: Newton # rpm -qa | egrep "(openstacksdk|openstackclient)" python-openstackclient-3.2.1-1.el7.noarch python2-openstacksdk-0.9.5-1.el7.noarch ---------------- The openstack CLI is implemented by python-openstackclient. In the python-openstackclient package, the function make_client(instance) is used to obtain the client for each service (openstackclient/xxx/client.py), I noticed that almost all core services are import their own python2-xxxclient to get the client, for example: image/client.py --> import glanceclient.v2.client.Client compute/client.py --> import novaclient.client volume/client.py --> import cinderclient.v2.client.Client But only the network service is import openstacksdk to get the client, as follows: network/client.py --> import openstack.connection.Connection So, my question is, why does the network service not use the python2-neutronclient to get the client like other core projects, but instead uses another separate project(openstacksdk)? My personal opinion, openstacksdk is a project that can be used independently, it is mainly to provide a unified sdk for developers, so there should be no interdependence between python-xxxclient and openstacksdk, right? For any help, thks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From artem.goncharov at gmail.com Tue Jun 19 16:28:54 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 19 Jun 2018 18:28:54 +0200 Subject: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client In-Reply-To: <6d0834b0.6872.16418d4679f.Coremail.zijian1012@163.com> References: <6d0834b0.6872.16418d4679f.Coremail.zijian1012@163.com> Message-ID: Hi, No. Not right. Idea is to unify CLI for all projects inside of the python-openstackclient and obsolete all individual python-XXXclients. This can be achieved by using the openstacksdk. Network module was just first in the row, where the progress stucked a bit. Regards, On Tue, Jun 19, 2018 at 6:15 PM, 李健 wrote: > Hello everyone > --------------- > CentOS Linux release 7.3.1611 > OpenStack Version: Newton > # rpm -qa | egrep "(openstacksdk|openstackclient)" > python-openstackclient-3.2.1-1.el7.noarch > python2-openstacksdk-0.9.5-1.el7.noarch > ---------------- > The openstack CLI is implemented by python-openstackclient. > In the python-openstackclient package, the function make_client(instance) > is used to obtain the client for each service (openstackclient/xxx/client.py), > I noticed that almost all core services are import their own > python2-xxxclient to get the client, for example: > image/client.py --> import glanceclient.v2.client.Client > compute/client.py --> import novaclient.client > volume/client.py --> import cinderclient.v2.client.Client > > But only the network service is import openstacksdk to get the client, as > follows: > network/client.py --> import openstack.connection.Connection > > So, my question is, why does the network service not use the > python2-neutronclient to get the client like other core projects, but > instead uses another separate project(openstacksdk)? > My personal opinion, openstacksdk is a project that can be used > independently, it is mainly to provide a unified sdk for developers, so > there should be no interdependence between python-xxxclient and > openstacksdk, right? > > For any help, thks > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Jun 19 16:51:38 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 19 Jun 2018 12:51:38 -0400 Subject: [openstack-dev] [Openstack-operators] [openstack-operators][heat][oslo.db] Configure maximum number of db connections In-Reply-To: References: Message-ID: <554ff606-3930-d9d8-5152-3f21540edf33@redhat.com> On 18/06/18 13:39, Jay Pipes wrote: > +openstack-dev since I believe this is an issue with the Heat source code. > > On 06/18/2018 11:19 AM, Spyros Trigazis wrote: >> Hello list, >> >> I'm hitting quite easily this [1] exception with heat. The db server >> is configured to have 1000 >> max_connnections and 1000 max_user_connections and in the database >> section of heat >> conf I have these values set: >> max_pool_size = 22 >> max_overflow = 0 >> Full config attached. 
>> >> I ended up with this configuration based on this formula: >> num_heat_hosts=4 >> heat_api_workers=2 >> heat_api_cfn_workers=2 >> num_engine_workers=4 >> max_pool_size=22 >> max_overflow=0 >> num_heat_hosts * (max_pool_size + max_overflow) * (heat_api_workers + >> num_engine_workers + heat_api_cfn_workers) >> 704 >> >> What I have noticed is that the number of connections I expected with >> the above formula is not respected. >> Based on this formula each node (every node runs the heat-api, >> heat-api-cfn and heat-engine) should >> use up to 176 connections but they even reach 400 connections. >> >> Has anyone noticed a similar behavior? > > Looking through the Heat code, I see that there are many methods in the > /heat/db/sqlalchemy/api.py module that use a SQLAlchemy session but > never actually call session.close() [1] which means that the session > will not be released back to the connection pool, which might be the > reason why connections keep piling up. Thanks for looking at this Jay! Maybe I can try to explain our strategy (such as it is) here and you can tell us what we should be doing instead :) Essentially we have one session per 'task', that is used for the duration of the task. Back in the day a 'task' was the processing of an entire stack from start to finish, but with our new distributed architecture it's much more granular - either it's just the initial setup of a change to a stack, or it's the processing of a single resource. (This was a major design change, and it's quite possible that the assumptions we made at the beginning - and tbh I don't think we really knew what we were doing then either - are no longer valid.) So, for example, Heat sees an RPC request come in to update a resource, it starts a greenthread to handle it, that creates a database session that is stored in the request context. At the beginning of the request we load the data needed and update the status of the resource in the DB to IN_PROGRESS. Then we do whatever we need to do to update the resource (mostly this doesn't involve writing to the DB, but there are exceptions). Then we update the status to COMPLETE/FAILED, do some housekeeping stuff in the DB and send out RPC messages for any other work that needs to be done. IIUC that all uses the same session, although I don't know if it gets opened and closed multiple times in the process, and presumably the same object cache. Crucially, we *don't* have a way to retry if we're unable to connect to the database in any of those operations. If we can't connect at the beginning that'd be manageable, because we could (but currently don't) just send out a copy of the incoming RPC message to try again later. But once we've changed something about the resource, we *must* record that in the DB or Bad Stuff(TM) will happen. The way we handled that, as Spyros pointed out, was to adjust the size of the overflow pool to match the size of the greenthread pool. This ensures that every 'task' is able to connect to the DB, because we won't take the message out of the RPC queue until there is a greenthread, and by extension a DB connection, available. This is infinitely preferable to finding out there are no connections available after you've already accepted the message (and oslo_messaging has an annoying 'feature' of acknowledging the message before it has even passed it to the application). It means stuff that we aren't able to handle yet queues up in the message queue, where it belongs, instead of in memory. 
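(As an aside, for anyone who hasn't seen it, the oslo.db enginefacade
pattern Jay mentions looks roughly like the sketch below. It's illustrative
only - the model and function names are made up, and it is not Heat's
actual DB API:

    from oslo_db.sqlalchemy import enginefacade

    # The decorators open a session/transaction for the duration of the
    # call, expose it as context.session, and flush/close it on exit so
    # the connection goes straight back to the pool.
    @enginefacade.reader
    def resource_get(context, resource_id):
        # 'models' is whatever declarative model module the service defines
        return context.session.query(models.Resource).get(resource_id)

    @enginefacade.writer
    def resource_update(context, resource_id, values):
        context.session.query(models.Resource).filter_by(
            id=resource_id).update(values)

i.e. the session lives for one transaction rather than for a whole task.)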
History: https://bugs.launchpad.net/heat/+bug/1491185 Unfortunately now you have to tune the size of the threadpool to trade off not utilising too little of your CPU against not opening too many DB connections. Nobody knows what the 'correct' tradeoff is, and even if we did Heat can't really tune it automatically by default because at startup it only knows the number of worker processes on the local node; it can't tell how many other nodes are [going to be] running and opening connections to the same database. Plus the number of allowed DB connections becomes the bottleneck to how much you can scale out the service horizontally. What is the canonical way of handling this kind of situation? Retry any DB operation where we can't get a connection, and close the session after every transaction? > Not sure if there's any setting in Heat that will fix this problem. > Disabling connection pooling will likely not help since connections are > not properly being closed and returned to the connection pool to begin > with. > > Best, > -jay > > [1] Heat apparently doesn't use the oslo.db enginefacade transaction > context managers either, which would help with this problem since the > transaction context manager would take responsibility for calling > session.flush()/close() appropriately. > > https://github.com/openstack/oslo.db/blob/43af1cf08372006aa46d836ec45482dd4b5b5349/oslo_db/sqlalchemy/enginefacade.py#L626 Oh, I thought we did: https://review.openstack.org/330800 cheers, Zane. From mriedemos at gmail.com Tue Jun 19 17:07:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 19 Jun 2018 12:07:26 -0500 Subject: [openstack-dev] [nova]Notification update week 25 In-Reply-To: <1529334634.13962.0@smtp.office365.com> References: <1529334634.13962.0@smtp.office365.com> Message-ID: On 6/18/2018 10:10 AM, Balázs Gibizer wrote: > * Introduce instance.lock and instance.unlock notifications > https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances This hasn't been updated in quite awhile. I wonder if someone else wants to pick that up now? -- Thanks, Matt From cdent+os at anticdent.org Tue Jun 19 17:48:46 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 19 Jun 2018 18:48:46 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-25 Message-ID: HTML: https://anticdent.org/tc-report-18-25.html Over the time that I've been observing the TC, there's been quite a lot of indecision about how and when to exercise power. The rules and regulations of OpenStack governance have it that the TC has pretty broad powers in terms of allowing and disallowing projects to be "official" and in terms of causing or preventing the merging of _any_ code in _any_ of those official projects. Unfortunately, the negative aspect of these powers make them the sort of powers that no one really wants to use. Instead the TC has a history of, when it wants to pro-actively change things, using techniques of gently nudging or trying to make obvious activities that would be useful. [OpenStack-wide goals](https://governance.openstack.org/tc/goals/index.html) and the [help most-needed list](https://governance.openstack.org/tc/reference/help-most-needed.html) are examples of this sort of thing. Now that OpenStack is no longer sailing high on the hype seas, resources are more scarce and some tactics and strategies are no longer as useful as they once were. Some have expressed a desire for the TC to provide a more active leadership role. 
One that allows the community to adapt more quickly to changing times. There's a delicate balance here that a few different conversations in the past week have highlighted. [Last Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T15:07:31), a discussion about the (vast) volume of code getting review and merged in the nova project led to some discussion on how to either enforce or support a goal of decomposing nova into smaller, less-coupled pieces. It was hard to find middle ground between outright blocking code that didn't fit with that goal and believing nothing could be done. Mixed in with that were valid concerns that the TC [shouldn't be parenting people who are adults](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T16:03:23) and [is unable to be effective](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-14.log.html#t2018-06-14T16:17:31). (_Note: the context of those two linked statements is very important, lest you be inclined to consider them out of context._) And then [today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T09:03:19), some discussion about keeping the help wanted list up to date led to thinking about ways to encourage reorganizing "[work around objectives rather than code boundaries](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T09:17:27)", despite that being a very large cultural shift that may be very difficult to make. So what is the TC (or any vaguely powered governance group) to do? We have some recent examples of the right thing: These are written works—some completed, some in-progress—that layout a vision of how things could or should be that community members can react and refer to. As concrete documents they provide what amounts to an evolving constitution of who we are or what we intend to be that people may point to as a third-party authority that they choose to accept, reject or modify without the complexity of "so and so said…". * [Written principles for peer review](https://governance.openstack.org/tc/reference/principles.html#we-value-constructive-peer-review) and [clear documentation](https://docs.openstack.org/project-team-guide/review-the-openstack-way.html) of the same. * Starting a [Technical Vision for 2018](https://etherpad.openstack.org/p/tech-vision-2018). * There should be more here. There will be more here. Many of the things that get written will start off wrong but the only way they have a chance of becoming right is if they are written in the first place. Providing ideas allows people to say "that's right" or "that's wrong" or "that's right, except...". Writing provides a focal point for including many different people in the generation and refinement of ideas and an archive of long-lived meaning and shared belief. Beliefs are what we use to choose between what matters and what does not. As the community evolves, and in some ways shrinks while demands remain high, we have to make it easier for people to find and understand, with greater alacrity, what we, as a community, choose to care about. We've done a pretty good job in the past talking about things like the [four opens](https://governance.openstack.org/tc/reference/opens.html), but now we need to be more explicit about what we are making and how we make it. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From lbragstad at gmail.com Tue Jun 19 18:11:34 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 19 Jun 2018 13:11:34 -0500 Subject: [openstack-dev] [all] default and implied roles changes Message-ID: Hi all, Keystone recently took a big step in implementing the default roles work that's been a hot topic over the past year [0][1][2][3][4], and a big piece in making RBAC more robust across OpenStack. We merged a patch [5] that ensures the roles described in the specification [6] exist. This was formally a cross-project specification [7], but rescoped to target keystone directly in hopes of making it a future community goal [8]. If you've noticed issues with various CI infrastructure, it could be due to the fact a couple new roles are being populated by keystone's bootstrap command. For example, if your testing infrastructure creates a role named 'Member' or 'member', you could see HTTP 409s since keystone is now creating that role by default. You can safely remove code that ensures that role exists, since keystone will now handle that for you. These types of changes have been working their way into infrastructure and deployment projects [9] this week. If you're seeing something that isn't an HTTP 409 and suspect it is related to these changes, come find us in #openstack-keystone. We'll be around to answer questions about the changes in keystone and can assist in straightening things out. [0] https://etherpad.openstack.org/p/policy-queens-ptg Queens PTG Policy Session [1] https://etherpad.openstack.org/p/queens-PTG-keystone-policy-roadmap Queens PTG Roadmap Outline [2] https://etherpad.openstack.org/p/rbac-and-policy-rocky-ptg Rocky PTG Policy Session [3] https://etherpad.openstack.org/p/baremetal-vm-rocky-ptg Rocky PTG Identity Integration Track [4] https://etherpad.openstack.org/p/YVR-rocky-default-roles Rocky Forum Default Roles Forum Session [5] https://review.openstack.org/#/c/572243/ [6] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html [7] https://review.openstack.org/#/c/523973/ [8] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130208.html [9] https://review.openstack.org/#/q/(status:open+OR+status:merged)+branch:master+topic:fix-member -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From cdent+os at anticdent.org Tue Jun 19 19:06:15 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 19 Jun 2018 20:06:15 +0100 (BST) Subject: [openstack-dev] [oslo] [placement] setting oslo config opts from environment Message-ID: Every now and again I keep working on my experiments to containerize placement in a useful way [1]. At the moment I have it down to using a very small oslo_config-style conf file. I'd like to take it the rest of the way and have no file at all so that my container can be an immutable black box that's presence on the network and use of a database is all external to itself and I can add and remove them at will with very little effort and no mounts or file copies. This is the way placement has been designed from the start. Internal to itself all it really knows is what database it wants to talk to, and how to talk to keystone for auth. That's what's in the conf file. We recently added support for policy, but it is policy-in-code and the defaults are okay, so no policy file required. 
Placement cannot create fully qualified URLs within itself. This is good and correct: it doesn't need to. With that preamble out of the way, what I'd like to be able to do is make it so the placement service can start up and get its necessary configuration information from environment variables (which docker or k8s or whatever other orchestration you're using would set). There are plenty of ways to hack this into the existing code, but I would prefer to do it in a way that is useful and reusable by other people who want to do the same thing. So I'd like people's feedback and ideas on what they think of the following ways, and any other ideas they have. Or if oslo_config already does this and I just missed it, please set me straight. 1) I initially thought that the simplest way to do this would be to set a default when describing the options to do something like `default=os.environ.get('something', the_original_default)` but this has a bit of a flaw. It means that the conf.file wins over the environment and this is backwards from the expected [2] priority. 2) When the service starts up, after it reads its own config, but before it actually does anything, it inspects the environment for a suite of variables which it uses to clobber the settings that came from files with the values in the environment. 3) 2, but it happens in oslo_config instead of the service's own code, perhaps with a `from_env` kwarg when defining the opts. Maybe just for StrOpt, and maybe with some kind of automated env-naming scheme. 4) Something else? What do you think? Note that the main goal here is to avoid files, so solutions that are "read the environment variables to then write a custom config file" are not in this domain (although surely useful in other domains). We had some IRC discussion about this [3] if you want a bit more context. Thanks for your interest and attention. [1] https://anticdent.org/placement-container-playground-6.html [2] https://bugs.launchpad.net/oslo-incubator/+bug/1196368 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-06-19.log.html#t2018-06-19T18:30:12 -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Tue Jun 19 19:36:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 19 Jun 2018 15:36:38 -0400 Subject: [openstack-dev] [oslo] [placement] setting oslo config opts from environment In-Reply-To: References: Message-ID: <1529435957-sup-54@lrrr.local> Excerpts from Chris Dent's message of 2018-06-19 20:06:15 +0100: > > Every now and again I keep working on my experiments to containerize > placement in a useful way [1]. At the moment I have it down to using > a very small oslo_config-style conf file. I'd like to take it the > rest of the way and have no file at all so that my container can be > an immutable black box that's presence on the network and use of a > database is all external to itself and I can add and remove them at > will with very little effort and no mounts or file copies. > > This is the way placement has been designed from the start. Internal > to itself all it really knows is what database it wants to talk to, > and how to talk to keystone for auth. That's what's in the conf > file. We recently added support for policy, but it is policy-in-code > and the defaults are okay, so no policy file required. Placement > cannot create fully qualified URLs within itself. This is good and > correct: it doesn't need to. 
> > With that preamble out of the way, what I'd like to be able to do is > make it so the placement service can start up and get its necessary > configuration information from environment variables (which docker > or k8s or whatever other orchestration you're using would set). > There are plenty of ways to hack this into the existing code, but I > would prefer to do it in a way that is useful and reusable by other > people who want to do the same thing. > > So I'd like people's feedback and ideas on what they think of the > following ways, and any other ideas they have. Or if oslo_config > already does this and I just missed it, please set me straight. > > 1) I initially thought that the simplest way to do this would be to > set a default when describing the options to do something like > `default=os.environ.get('something', the_original_default)` but this > has a bit of a flaw. It means that the conf.file wins over the > environment and this is backwards from the expected [2] priority. > > 2) When the service starts up, after it reads its own config, but > before it actually does anything, it inspects the environment for > a suite of variables which it uses to clobber the settings that came > from files with the values in the environment. > > 3) 2, but it happens in oslo_config instead of the service's own > code, perhaps with a `from_env` kwarg when defining the opts. Maybe > just for StrOpt, and maybe with some kind of automated env-naming > scheme. > > 4) Something else? What do you think? > > Note that the main goal here is to avoid files, so solutions that > are "read the environment variables to then write a custom config > file" are not in this domain (although surely useful in other > domains). > > We had some IRC discussion about this [3] if you want a bit more > context. Thanks for your interest and attention. > > [1] https://anticdent.org/placement-container-playground-6.html > [2] https://bugs.launchpad.net/oslo-incubator/+bug/1196368 > [3] http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-06-19.log.html#t2018-06-19T18:30:12 > I think the TripleO folks were going to look at kubernetes configmaps for passing configuration settings into containers. I don't know how far that research went. I certainly have no objection to doing the work in oslo.config. As I described on IRC today, I think we would want to implement it using the new driver feature we're working on this cycle, even if the driver is enabled automatically so users don't have to turn it on. We already special case command line options and the point of the driver interface is to give us a way to extend the lookup logic without having to add more special cases. This might be worth a short spec, just so we can make sure we're covering all of the details. For example: We will need to consider what to do with configuration settings more complicated than primitive data types like strings and numbers. Lists can probably be expressed with a separator character. Perhaps more complex types like dicts are just not supported. I would like to remove them anyway, although that doesn't seem realistic now. We also need to work out how variable names are constructed from option and group names. Doug From whayutin at redhat.com Tue Jun 19 19:45:02 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 19 Jun 2018 13:45:02 -0600 Subject: [openstack-dev] [tripleo] CI is down stop workflowing In-Reply-To: References: Message-ID: Check and gate jobs look clear. More details on a bit. 
Thanks

Sent from my mobile

On Tue, Jun 19, 2018, 07:33 Felix Enrique Llorente Pastora <
ellorent at redhat.com> wrote:

> Hi,
>
> We have the following bugs with fixes that need to land to unblock
> check/gate jobs:
>
> https://bugs.launchpad.net/tripleo/+bug/1777451
> https://bugs.launchpad.net/tripleo/+bug/1777616
>
> You can check them out at #tripleo ooolpbot.
>
> Please stop workflowing temporarily until they get merged.
>
> BR.
>
> --
> Quique Llorente
>
> Openstack TripleO CI
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alifshit at redhat.com  Tue Jun 19 19:59:37 2018
From: alifshit at redhat.com (Artom Lifshitz)
Date: Tue, 19 Jun 2018 15:59:37 -0400
Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard
In-Reply-To: <5B27D46F.10804@windriver.com>
References: <5B27D46F.10804@windriver.com>
Message-ID: 

> Adding
> claims support later on wouldn't change any on-the-wire messaging, it would
> just make things work more robustly.

I'm not even sure about that. Assuming [1] has at least the right
idea, it looks like it's an either-or kind of thing: either we use
resource tracker claims and get the new instance NUMA topology that
way, or do what was in the spec and have the dest send it to the
source.

That being said, I think I'm still in favor of choosing the
"easy" way out. For instance, [2] should fail because we can't access
the api db from the compute node. So unless there's a simpler way,
using RT claims would involve changing the RPC to add parameters to
check_can_live_migrate_destination, which, while not necessarily
bad, seems like useless complexity for a thing we know will get ripped
out.

[1] https://review.openstack.org/#/c/576222/
[2] https://review.openstack.org/#/c/576222/3/nova/compute/manager.py at 5897

From openstack at fried.cc  Tue Jun 19 20:24:08 2018
From: openstack at fried.cc (Eric Fried)
Date: Tue, 19 Jun 2018 15:24:08 -0500
Subject: [openstack-dev] [nova][oot drivers] Putting a contract out on ComputeDriver.get_traits()
Message-ID: 

All (but especially out-of-tree compute driver maintainers)-

ComputeDriver.get_traits() was introduced mere months ago [1] for
initial implementation by Ironic [2] mainly because the whole
update_provider_tree framework [3] wasn't fully baked yet.

Now that update_provider_tree is a thing, I'm starting work to cut
Ironic over to using it [4]. Since, as of this writing, Ironic still has
the only in-tree implementation of get_traits [5], I'm planning to whack
the ComputeDriver interface [6] and its one callout in the resource
tracker [7] at the same time.

If you maintain an out-of-tree driver and this is going to break you
unbearably, scream now. However, be warned that I will probably just ask
you to cut over to update_provider_tree.
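For anyone who hasn't looked at that interface yet, here is a rough sketch
of the shape of an update_provider_tree() implementation. This is
illustrative only; the driver class, the _node_resources() helper and the
trait name below are placeholders, not code from any real driver:

from nova.virt import driver


class ExampleDriver(driver.ComputeDriver):

    def update_provider_tree(self, provider_tree, nodename,
                             allocations=None):
        # The driver describes its own resource provider(s) in one place:
        # inventory, traits, and (if needed) nested providers.
        vcpus, mem_mb, disk_gb = self._node_resources(nodename)

        # This replaces what get_inventory() used to return.
        provider_tree.update_inventory(nodename, {
            'VCPU': {'total': vcpus, 'reserved': 0, 'min_unit': 1,
                     'max_unit': vcpus, 'step_size': 1,
                     'allocation_ratio': 1.0},
            'MEMORY_MB': {'total': mem_mb, 'reserved': 0, 'min_unit': 1,
                          'max_unit': mem_mb, 'step_size': 1,
                          'allocation_ratio': 1.0},
            'DISK_GB': {'total': disk_gb, 'reserved': 0, 'min_unit': 1,
                        'max_unit': disk_gb, 'step_size': 1,
                        'allocation_ratio': 1.0},
        })
        # And this is the piece that replaces get_traits().
        provider_tree.update_traits(nodename, ['CUSTOM_EXAMPLE_TRAIT'])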
Thanks, efried [1] https://review.openstack.org/#/c/532290/ [2] https://review.openstack.org/#/q/topic:bp/ironic-driver-traits+(status:open+OR+status:merged) [3] http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/update-provider-tree.html [4] https://review.openstack.org/#/c/576588/ [5] https://github.com/openstack/nova/blob/0876b091db6f6f0d6795d5907d3d8314706729a7/nova/virt/ironic/driver.py#L737 [6] https://github.com/openstack/nova/blob/ecaadf6d6d3c94706fdd1fb24676e3bd2370f9f7/nova/virt/driver.py#L886-L895 [7] https://github.com/openstack/nova/blob/ecaadf6d6d3c94706fdd1fb24676e3bd2370f9f7/nova/compute/resource_tracker.py#L915-L926 From dtroyer at gmail.com Tue Jun 19 20:51:38 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 19 Jun 2018 15:51:38 -0500 Subject: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client In-Reply-To: <6d0834b0.6872.16418d4679f.Coremail.zijian1012@163.com> References: <6d0834b0.6872.16418d4679f.Coremail.zijian1012@163.com> Message-ID: On Tue, Jun 19, 2018 at 11:15 AM, 李健 wrote: > So, my question is, why does the network service not use the > python2-neutronclient to get the client like other core projects, but > instead uses another separate project(openstacksdk)? There were multiple reasons to not use neutron client lib for OSC and the SDk was good enough at the time to use ut in spite of not being at a 1.0 release. We have intended to migrate everything to use OpenStackSDK and eliminate OSC's use of the python-*client libraries completely. We are waiting on an SDK 1.0 release, it has stretched on for years longer than originally anticipated but the changes we have had to accommodate in the network commands in the past convinced me to wait until it was declared stable, even though it has been nearly stable for a while now. > My personal opinion, openstacksdk is a project that can be used > independently, it is mainly to provide a unified sdk for developers, so > there should be no interdependence between python-xxxclient and > openstacksdk, right? Correct, OpenStackSDK has no dependency on any of the python-*client libraries.. Its primary dependency is on keystoneauth for the core authentication logic, that was long ago pulled out of the keystone client package. dt -- Dean Troyer dtroyer at gmail.com From feilong at catalyst.net.nz Tue Jun 19 21:13:55 2018 From: feilong at catalyst.net.nz (Fei Long Wang) Date: Wed, 20 Jun 2018 09:13:55 +1200 Subject: [openstack-dev] [magnum] K8S apiserver key sync In-Reply-To: References: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com> Message-ID: <585ca53f-4ef1-01ce-b096-9a949130094e@catalyst.net.nz> Hi there, For people who maybe still interested in this issue. I have proposed a patch, see https://review.openstack.org/576029 And I have verified with Sonobuoy for both multi masters (3 master nodes) and single master clusters, all worked. Any comments will be appreciated. Thanks. On 21/05/18 01:22, Sergey Filatov wrote: > Hi! > I’d like to initiate a discussion about this bug: [1]. > To resolve this issue we need to generate a secret cert and pass it to > master nodes. We also need to store it somewhere to support scaling. > This issue is specific for kubernetes drivers. Currently in magnum we > have a general cert manager which is the same for all the drivers. > > What do you think about moving cert_manager logic into a > driver-specific area? 
> Having this common cert_manager logic forces us to generate client > cert with “admin” and “system:masters” subject & organisation names [2],  > which is really something that we need only for kubernetes drivers. > > [1] https://bugs.launchpad.net/magnum/+bug/1766546 > [2] https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64 > > > ..Sergey Filatov > > > >> On 20 Apr 2018, at 20:57, Sergey Filatov > > wrote: >> >> Hello, >> >> I looked into k8s drivers for magnum I see that each api-server on >> master node generates it’s own service-account-key-file. This causes >> issues with service-accounts authenticating on api-server. (In case >> api-server endpoint moves). >> As far as I understand we should have either all api-server keys >> synced on api-servesr or pre-generate single api-server key. >> >> What is the way for magnum to get over this issue? > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Tue Jun 19 21:16:45 2018 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 19 Jun 2018 16:16:45 -0500 Subject: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client In-Reply-To: References: <6d0834b0.6872.16418d4679f.Coremail.zijian1012@163.com> Message-ID: <1f24d9fe-3905-3bcd-7c97-a5cc58d79ee0@inaugust.com> On 06/19/2018 03:51 PM, Dean Troyer wrote: > On Tue, Jun 19, 2018 at 11:15 AM, 李健 wrote: >> So, my question is, why does the network service not use the >> python2-neutronclient to get the client like other core projects, but >> instead uses another separate project(openstacksdk)? > > There were multiple reasons to not use neutron client lib for OSC and > the SDk was good enough at the time to use ut in spite of not being at > a 1.0 release. We have intended to migrate everything to use > OpenStackSDK and eliminate OSC's use of the python-*client libraries > completely. We are waiting on an SDK 1.0 release, it has stretched on > for years longer than originally anticipated but the changes we have > had to accommodate in the network commands in the past convinced me to > wait until it was declared stable, even though it has been nearly > stable for a while now. Soon. Really soon. I promise. >> My personal opinion, openstacksdk is a project that can be used >> independently, it is mainly to provide a unified sdk for developers, so >> there should be no interdependence between python-xxxclient and >> openstacksdk, right? > > Correct, OpenStackSDK has no dependency on any of the python-*client > libraries.. Its primary dependency is on keystoneauth for the core > authentication logic, that was long ago pulled out of the keystone > client package. 
> > dt > From chris.friesen at windriver.com Tue Jun 19 21:21:04 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 19 Jun 2018 15:21:04 -0600 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: <5B27D46F.10804@windriver.com> Message-ID: <5B2973C0.6040905@windriver.com> On 06/19/2018 01:59 PM, Artom Lifshitz wrote: >> Adding >> claims support later on wouldn't change any on-the-wire messaging, it would >> just make things work more robustly. > > I'm not even sure about that. Assuming [1] has at least the right > idea, it looks like it's an either-or kind of thing: either we use > resource tracker claims and get the new instance NUMA topology that > way, or do what was in the spec and have the dest send it to the > source. One way or another you need to calculate the new topology in ComputeManager.check_can_live_migrate_destination() and communicate that information back to the source so that it can be used in ComputeManager._do_live_migration(). The previous patches communicated the new topoology as part of instance. > That being said, I still think I'm still in favor of choosing the > "easy" way out. For instance, [2] should fail because we can't access > the api db from the compute node. I think you could use objects.ImageMeta.from_instance(instance) instead of request_spec.image. The limits might be an issue. > So unless there's a simpler way, > using RT claims would involve changing the RPC to add parameters to > check_can_live_migration_destination, which, while not necessarily > bad, seems like useless complexity for a thing we know will get ripped > out. I agree that it makes sense to get the "simple" option working first. If we later choose to make it work "properly" I don't think it would require undoing too much. Something to maybe factor in to what you're doing--I think there is currently a bug when migrating an instance with no numa_topology to a host with a different set of host CPUs usable for floating instances--I think it will assume it can still float over the same host CPUs as before. Once we have the ability to re-write the instance XML prior to the live-migration it would be good to fix this. I think this would require passing the set of available CPUs on the destination back to the host for use when recalculating the XML for the guest. (See the "if not guest_cpu_numa_config" case in LibvirtDriver._get_guest_numa_config() where "allowed_cpus" is specified, and LibvirtDriver._get_guest_config() where guest.cpuset is written.) Chris From aschultz at redhat.com Tue Jun 19 23:00:16 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 19 Jun 2018 17:00:16 -0600 Subject: [openstack-dev] [tripleo] CI is down stop workflowing In-Reply-To: References: Message-ID: On Tue, Jun 19, 2018 at 1:45 PM, Wesley Hayutin wrote: > Check and gate jobs look clear. > More details on a bit. > So for a recap of the last 24 hours or so... 
Mistral auth problems - https://bugs.launchpad.net/tripleo/+bug/1777541 - caused by https://review.openstack.org/#/c/574878/ - fixed by https://review.openstack.org/#/c/576336/ Undercloud install failure - https://bugs.launchpad.net/tripleo/+bug/1777616 - caused by https://review.openstack.org/#/c/570307/ - fixed by https://review.openstack.org/#/c/576428/ Keystone duplicate role - https://bugs.launchpad.net/tripleo/+bug/1777451 - caused by https://review.openstack.org/#/c/572243/ - fixed by https://review.openstack.org/#/c/576356 and https://review.openstack.org/#/c/576393/ The puppet issues should be prevented in the future by adding tripleo undercloud jobs back in to the appropriate modules, see https://review.openstack.org/#/q/topic:tripleo-ci+(status:open) I recommended the undercloud jobs because that gives us some basic coverage and the instack-undercloud job still uses puppet without containers. We'll likely want to replace these jobs with standalone versions at somepoint as that configuration gets more mature. We've restored any patches that were abandoned in the gate and it should be ok to recheck. Thanks, -Alex > Thanks > > Sent from my mobile > > On Tue, Jun 19, 2018, 07:33 Felix Enrique Llorente Pastora > wrote: >> >> Hi, >> >> We have the following bugs with fixes that need to land to unblock >> check/gate jobs: >> >> https://bugs.launchpad.net/tripleo/+bug/1777451 >> https://bugs.launchpad.net/tripleo/+bug/1777616 >> >> You can check them out at #tripleo ooolpbot. >> >> Please stop workflowing temporally until they get merged. >> >> BR. >> >> -- >> Quique Llorente >> >> Openstack TripleO CI >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From yikunkero at gmail.com Wed Jun 20 01:29:05 2018 From: yikunkero at gmail.com (Yikun Jiang) Date: Wed, 20 Jun 2018 09:29:05 +0800 Subject: [openstack-dev] [nova]Notification update week 25 In-Reply-To: References: <1529334634.13962.0@smtp.office365.com> Message-ID: I'd like to help it. : ) Regards, Yikun ---------------------------------------- Jiang Yikun(Kero) Mail: yikunkero at gmail.com Matt Riedemann 于2018年6月20日周三 上午1:07写道: > On 6/18/2018 10:10 AM, Balázs Gibizer wrote: > > * Introduce instance.lock and instance.unlock notifications > > > https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances > > This hasn't been updated in quite awhile. I wonder if someone else wants > to pick that up now? > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From gdubreui at redhat.com  Wed Jun 20 01:34:51 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Wed, 20 Jun 2018 11:34:51 +1000
Subject: [openstack-dev] [neutron][neutron-release] feature/graphql branch rebase
Message-ID: <89b44820-47c0-0a8b-a363-cf3ff4e1879a@redhat.com>

Could someone from the Neutron release group rebase the feature/graphql
branch against master/HEAD, please?

Regards,
Gilles

From linghucongsong at 163.com  Wed Jun 20 01:38:42 2018
From: linghucongsong at 163.com (linghucongsong)
Date: Wed, 20 Jun 2018 09:38:42 +0800 (CST)
Subject: [openstack-dev] [tricircle] Zuul v3 integration status
In-Reply-To: 
References: <922b0570-988e-98d2-56db-615d388de1f6@gmail.com>
Message-ID: <27220cea.3678.1641ad806d0.Coremail.linghucongsong@163.com>

Hi Boden! I am Song. I have discussed this with the PTL, Zhiyuan, and we
both think it is not a simple piece of work to finish. We will plan this
as a bp, but we may not be able to finish it in the R release; we promise
to finish it in the next OpenStack release.

At 2018-06-19 10:13:47, "linghucongsong"  wrote:

Hi Boden! Thanks for reporting this bug. We will talk about this bug in
our meeting this week, Wednesday 9:00 Beijing time. If you have time, I
would like you to join it in the openstack-meeting channel.

At 2018-06-15 21:56:29, "Boden Russell"  wrote:
>Is there anyone who can speak to the status of tricircle's adoption of
>Zuul v3?
>
>As per [1] it doesn't seem like the project is setup properly for Zuul
>v3. Thus, it's difficult/impossible to land patches like [2] that
>require neutron/master + a depends on patch.
>
>Assuming tricircle is still being maintained, IMO we need to find a way
>to get it up to speed with zuul v3; otherwise some of our neutron
>efforts will be held up, or tricircle will fall behind with respect to
>neutron-lib adoption.
>
>Thanks
>
>
>[1] https://bugs.launchpad.net/tricircle/+bug/1776922
>[2] https://review.openstack.org/#/c/565879/
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ed at leafe.com  Wed Jun 20 02:41:16 2018
From: ed at leafe.com (Ed Leafe)
Date: Tue, 19 Jun 2018 21:41:16 -0500
Subject: [openstack-dev] [tc] [all] TC Report 18-25
In-Reply-To: 
References: 
Message-ID: 

On Jun 19, 2018, at 12:48 PM, Chris Dent  wrote:
>
> Many of the things that get written will start off wrong but the
> only way they have a chance of becoming right is if they are written
> in the first place.

This. Too many people are so afraid of doing anything that might turn out
to be the "wrong" thing that nothing gets done.

-- Ed Leafe

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From zijian1012 at 163.com Wed Jun 20 02:42:47 2018 From: zijian1012 at 163.com (zijian1012 at 163.com) Date: Wed, 20 Jun 2018 10:42:47 +0800 Subject: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client Message-ID: <2018062010414723945413@163.com> On Tue, Jun 19, 2018 at 11:15 AM, 李健 wrote: >> So, my question is, why does the network service not use the >> python2-neutronclient to get the client like other core projects, but >> instead uses another separate project(openstacksdk)? > There were multiple reasons to not use neutron client lib for OSC and > the SDk was good enough at the time to use ut in spite of not being at > a 1.0 release. We have intended to migrate everything to use > OpenStackSDK and eliminate OSC's use of the python-*client libraries > completely. Thks for replying, just want to confirm, you mentioned "We have intended to migrate everything to use OpenStackSDK", the current use of python-*client is: 1. OSC 2. all services that need to interact with other services (e.g.: nova libraries: self.volume_api = volume_api or cinder.API()) Do you mean that both of the above will be migrated to use the OpenStack SDK? > We are waiting on an SDK 1.0 release, it has stretched on > for years longer than originally anticipated but the changes we have > had to accommodate in the network commands in the past convinced me to > wait until it was declared stable, even though it has been nearly > stable for a while now. >> My personal opinion, openstacksdk is a project that can be used >> independently, it is mainly to provide a unified sdk for developers, so >> there should be no interdependence between python-xxxclient and >> openstacksdk, right? > Correct, OpenStackSDK has no dependency on any of the python-*client > libraries.. Its primary dependency is on keystoneauth for the core > authentication logic, that was long ago pulled out of the keystone > client package. > dt -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Wed Jun 20 02:54:32 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Wed, 20 Jun 2018 12:54:32 +1000 Subject: [openstack-dev] Puppet debugging help? In-Reply-To: References: <20180618151359.bfpwu2h6w7pnqqma@redhat.com> Message-ID: <7d14fece-2375-95f4-92d8-83215e587467@redhat.com> On 19/06/18 01:59, Alex Schultz wrote: > On Mon, Jun 18, 2018 at 9:13 AM, Lars Kellogg-Stedman wrote: >> Hey folks, >> >> I'm trying to patch puppet-keystone to support multi-valued >> configuration options (like trusted_dashboard). I have a patch that >> works, mostly, but I've run into a frustrating problem (frustrating >> because it would seem to be orthogonal to my patches, which affect the >> keystone_config provider and type). >> >> During the initial deploy, running tripleo::profile::base::keystone >> fails with: >> >> "Error: Could not set 'present' on ensure: undefined method `new' >> for nil:NilClass at >> /etc/puppet/modules/tripleo/manifests/profile/base/keystone.pp:274", >> > It's likely erroring in the keystone_domain provider. > > https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_domain/openstack.rb#L115-L122 > or > https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_domain/openstack.rb#L155-L161 > > Providers are notoriously bad at their error messaging. 
Usually this > error happens when we get a null back from the underlying command and > we're still trying to do something. This could point to a > misconfiguration of keystone if it's not getting anything back. Per Alex comment, the keystone_domain class is definitely involved. The provider fails: "Could not set 'present' on ensure" And the propagated error seems to be because the provider could not be set up for some dependent reason and came back empty. $ irb irb(main):001:0> nil.new NoMethodError: undefined method `new' for nil:NilClass The second pass worked because the missing "dependent" bit was set up (in the meantime) and the provider creation was satisfied. To investigate dependent cause within the provider, you could use 'notice("Value: ${variable}")' >> The line in question is: >> >> 70: if $step == 3 and $manage_domain { >> 71: if hiera('heat_engine_enabled', false) { >> 72: # create these seperate and don't use ::heat::keystone::domain since >> 73: # that class writes out the configs >> 74: keystone_domain { $heat_admin_domain: >> ensure => 'present', >> enabled => true >> } >> >> The thing is, despite the error...it creates the keystone domain >> *anyway*, and a subsequent run of the module will complete without any >> errors. >> >> I'm not entirely sure that the error is telling me, since *none* of >> the puppet types or providers have a "new" method as far as I can see. >> Any pointers you can offer would be appreciated. >> >> Thanks! >> >> -- >> Lars Kellogg-Stedman | larsks @ {irc,twitter,github} >> http://blog.oddbit.com/ | >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 From zhipengh512 at gmail.com Wed Jun 20 06:19:12 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 20 Jun 2018 14:19:12 +0800 Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.06.20 Message-ID: Hi Team, Weekly meeting as usual starting UTC1400 at #openstack-cyborg Initial agenda: 1. Rocky task assignment confirmation 2. os-acc discussion -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristian.calin at orange.com Wed Jun 20 06:43:44 2018 From: cristian.calin at orange.com (cristian.calin at orange.com) Date: Wed, 20 Jun 2018 06:43:44 +0000 Subject: [openstack-dev] [ceilometer][panko][pike] elasticsearch integration Message-ID: <8417_1529477025_5B29F7A1_8417_225_1_00dd3b5081db43ca9eac794c4536ec3a@orange.com> Hello, I'm trying to run ceilometer with panko publishers in pike release and when I run the ceilometer-agent-notification I get a trace complaining about NoSuchOptError, but without the actual parameter that is missing (see trace below). I have configured panko.conf with the following: [database] connection = es://user:password at elasticsearch.service.consul.:9200 [storage] es_ssl_enable = False es_index_name = events As far as I can tell from the debug log, the storage.es_ssl_enable and storage.es_index_name parameters are not loaded, they don't show up in the "cotyledon.oslo_config_glue" output so I assume the trace relates to these parameters. Has anybody else seen this error before? PS: sorry for CC'ing the dev list but I hope to reach the right audience ================ TRACE ==================== {"asctime": "2018-06-20 05:49:09.405","process": "59","levelname": "DEBUG","name": "panko.storage", "instance": {},"message":"looking for 'es' driver in panko.storage"} {"funcName": "get_connection","source": {"p ath": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py","lineno": "84"}} {"asctime": "2018-06-20 05:49:10.436","process": "61","levelname": "DEBUG","name": "panko.storage", "instance": {},"message":"looking for 'es' driver in panko.storage"} {"funcName": "get_connection","source": {"p ath": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py","lineno": "84"}} {"asctime": "2018-06-20 05:49:11.409","process": "63","levelname": "DEBUG","name": "panko.storage", "instance": {},"message":"looking for 'es' driver in panko.storage"} {"funcName": "get_connection","source": {"p ath": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py","lineno": "84"}} {"asctime": "2018-06-20 05:49:18.467","process": "57","levelname": "DEBUG","name": "panko.storage", "instance": {},"message":"looking for 'es' driver in panko.storage"} {"funcName": "get_connection","source": {"p ath": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py","lineno": "84"}} {"asctime": "2018-06-20 05:49:18.468","process": "57","levelname": "ERROR","name": "ceilometer.pipeline", "instance": {},"message":"Unable to load publisher panko://"}: RetryError: RetryError[] 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>>Traceback (most recent call last): 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/pipeline.py", line 419, in __init__ 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> self.publishers.append(publisher_manager.get(p)) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/pipeline.py", line 713, in get 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> 'ceilometer.%s.publisher' % self._purpose) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/publisher/__init__.py", line 36, in get_publisher 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> return loaded_driver.driver(parse_result) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File 
"/opt/ceilometer/lib/python2.7/site-packages/panko/publisher/database.py", line 35, in __init__ 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> self.conn = storage.get_connection_from_config(conf) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py", line 73, in get_connection_from_config 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> return _inner() 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 171, in wrapped_f 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> return self.call(f, *args, **kw) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 248, in call 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> start_time=start_time) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 217, in iter 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> six.raise_from(RetryError(fut), fut.exception()) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/six.py", line 718, in raise_from 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> raise value 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>>RetryError: RetryError[] 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From superuser151093 at gmail.com Wed Jun 20 06:49:25 2018 From: superuser151093 at gmail.com (super user) Date: Wed, 20 Jun 2018 15:49:25 +0900 Subject: [openstack-dev] [oslo][osc][cliff][tacker] New release of cmd2 break cliff and tacker client Message-ID: Hi everyone, New release of cmd2 0.9.0 seems to break cliff and python-tackerclient. The cmd2 library changed the way it handles parsing input commands. It now uses a different library, which means the values passed to the commands are no longer PyParsing objects and are instead Statement objects. These objects do not have a “parsed” property, so the receiving code needs to work with them differently. The patch https://review.openstack.org/571524 tries to fix this in the places within cliff where it was failing in interactive mode. Please consider reviewing this patch and have a new release for cliff so that the python-tackerclient pass the py35 tests. 
Thank you, Nguyen Hai -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristian.calin at orange.com Wed Jun 20 07:08:57 2018 From: cristian.calin at orange.com (cristian.calin at orange.com) Date: Wed, 20 Jun 2018 07:08:57 +0000 Subject: [openstack-dev] [ceilometer][panko][pike] elasticsearch integration In-Reply-To: <8417_1529477025_5B29F7A1_8417_225_1_00dd3b5081db43ca9eac794c4536ec3a@orange.com> References: <8417_1529477025_5B29F7A1_8417_225_1_00dd3b5081db43ca9eac794c4536ec3a@orange.com> Message-ID: <22471_1529478538_5B29FD8A_22471_266_1_b3e6711edc72473bab8136aae2bfd6a4@orange.com> Some more details, I tried running with python3 and the error I got with it is a bit more detailed: {"asctime": "2018-06-20 07:06:11.537","process": "24","levelname": "ERROR","name": "ceilometer.pipeline", "instance": {},"message":"Unable to load publisher panko://"}: tenacity.RetryError: RetryError[] 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>>Traceback (most recent call last): 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/tenacity/__init__.py", line 251, in call 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> result = fn(*args, **kwargs) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/panko/storage/__init__.py", line 71, in _inner 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> return get_connection(url, conf) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/panko/storage/__init__.py", line 86, in get_connection 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> return mgr.driver(url, conf) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/panko/storage/impl_elasticsearch.py", line 74, in __init__ 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> use_ssl = conf.database.es_ssl_enabled 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/oslo_config/cfg.py", line 3363, in __getattr__ 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> return self._conf._get(name, self._group) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/oslo_config/cfg.py", line 2925, in _get 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> value = self._do_get(name, group, namespace) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/oslo_config/cfg.py", line 2942, in _do_get 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> info = self._get_opt_info(name, group) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/oslo_config/cfg.py", line 3099, in _get_opt_info 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> raise NoSuchOptError(opt_name, group) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>>oslo_config.cfg.NoSuchOptError: no such option es_ssl_enabled in group [database] 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>>The above exception was the direct cause of the following exception: 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>>Traceback (most recent call last): 2018-06-20 07:06:11.537 24 TRACE 
ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/ceilometer/pipeline.py", line 419, in __init__ 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> self.publishers.append(publisher_manager.get(p)) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/ceilometer/pipeline.py", line 713, in get 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> 'ceilometer.%s.publisher' % self._purpose) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/ceilometer/publisher/__init__.py", line 36, in get_publisher 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> return loaded_driver.driver(parse_result) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/panko/publisher/database.py", line 35, in __init__ 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> self.conn = storage.get_connection_from_config(conf) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/panko/storage/__init__.py", line 73, in get_connection_from_config 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> return _inner() 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/tenacity/__init__.py", line 171, in wrapped_f 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> return self.call(f, *args, **kw) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/tenacity/__init__.py", line 248, in call 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> start_time=start_time) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python3.5/site-packages/tenacity/__init__.py", line 217, in iter 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> six.raise_from(RetryError(fut), fut.exception()) 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> File "", line 2, in raise_from 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>>tenacity.RetryError: RetryError[] 2018-06-20 07:06:11.537 24 TRACE ceilometer.pipeline >>>>> I changed my panko.conf to: [database] connection = es://user:password at elasticsearch.service.consul.:9200 es_ssl_enable = False [storage] es_index_name = events But I get the same error which means that the es_* parameters are not properly merged from panko.conf when ceilometer-agent-notification starts up. From: cristian.calin at orange.com [mailto:cristian.calin at orange.com] Sent: Wednesday, June 20, 2018 9:44 AM To: openstack-operators at lists.openstack.org Cc: openstack-dev at lists.openstack.org Subject: [openstack-dev] [ceilometer][panko][pike] elasticsearch integration Hello, I'm trying to run ceilometer with panko publishers in pike release and when I run the ceilometer-agent-notification I get a trace complaining about NoSuchOptError, but without the actual parameter that is missing (see trace below). I have configured panko.conf with the following: [database] connection = es://user:password at elasticsearch.service.consul.:9200 [storage] es_ssl_enable = False es_index_name = events As far as I can tell from the debug log, the storage.es_ssl_enable and storage.es_index_name parameters are not loaded, they don't show up in the "cotyledon.oslo_config_glue" output so I assume the trace relates to these parameters. Has anybody else seen this error before? 
PS: sorry for CC'ing the dev list but I hope to reach the right audience ================ TRACE ==================== {"asctime": "2018-06-20 05:49:09.405","process": "59","levelname": "DEBUG","name": "panko.storage", "instance": {},"message":"looking for 'es' driver in panko.storage"} {"funcName": "get_connection","source": {"p ath": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py","lineno": "84"}} {"asctime": "2018-06-20 05:49:10.436","process": "61","levelname": "DEBUG","name": "panko.storage", "instance": {},"message":"looking for 'es' driver in panko.storage"} {"funcName": "get_connection","source": {"p ath": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py","lineno": "84"}} {"asctime": "2018-06-20 05:49:11.409","process": "63","levelname": "DEBUG","name": "panko.storage", "instance": {},"message":"looking for 'es' driver in panko.storage"} {"funcName": "get_connection","source": {"p ath": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py","lineno": "84"}} {"asctime": "2018-06-20 05:49:18.467","process": "57","levelname": "DEBUG","name": "panko.storage", "instance": {},"message":"looking for 'es' driver in panko.storage"} {"funcName": "get_connection","source": {"p ath": "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py","lineno": "84"}} {"asctime": "2018-06-20 05:49:18.468","process": "57","levelname": "ERROR","name": "ceilometer.pipeline", "instance": {},"message":"Unable to load publisher panko://"}: RetryError: RetryError[] 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>>Traceback (most recent call last): 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/pipeline.py", line 419, in __init__ 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> self.publishers.append(publisher_manager.get(p)) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/pipeline.py", line 713, in get 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> 'ceilometer.%s.publisher' % self._purpose) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/ceilometer/publisher/__init__.py", line 36, in get_publisher 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> return loaded_driver.driver(parse_result) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/panko/publisher/database.py", line 35, in __init__ 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> self.conn = storage.get_connection_from_config(conf) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/panko/storage/__init__.py", line 73, in get_connection_from_config 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> return _inner() 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 171, in wrapped_f 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> return self.call(f, *args, **kw) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 248, in call 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> start_time=start_time) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File 
"/opt/ceilometer/lib/python2.7/site-packages/tenacity/__init__.py", line 217, in iter 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> six.raise_from(RetryError(fut), fut.exception()) 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> File "/opt/ceilometer/lib/python2.7/site-packages/six.py", line 718, in raise_from 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> raise value 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>>RetryError: RetryError[] 2018-06-20 05:49:18.468 57 TRACE ceilometer.pipeline >>>>> _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschadin at sbcloud.ru Wed Jun 20 07:55:58 2018 From: aschadin at sbcloud.ru (=?koi8-r?B?/sHEyc4g4czFy9PBzsTSIPPF0sfFxdfJ3g==?=) Date: Wed, 20 Jun 2018 07:55:58 +0000 Subject: [openstack-dev] [watcher] weekly meeting Message-ID: Watchers, We have meeting today at 8:00 UTC on #openstack-meeting-3 channel. Best Regards, ____ Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tushar.Patil at nttdata.com Wed Jun 20 07:58:09 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Wed, 20 Jun 2018 07:58:09 +0000 Subject: [openstack-dev] [heat] Need new release of heat-translator library Message-ID: Hi, Few weeks back, we had proposed a patch [1] to add support for translation of placement policies and that patch got merged. This feature will be consumed by tacker specs [2] which we are planning to implement in Rocky release and it's implementation is uploaded in patch [3]. 
Presently, the tests are failing on patch [3] becoz it requires newer version of heat-translator library. Could you please release a new version of heat-translator library so that we can complete specs[2] in Rocky timeframe. Thank you. Regards, Tushar Patil Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged,confidential, and proprietary data. If you are not the intended recipient,please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From zigo at debian.org Wed Jun 20 09:23:24 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Jun 2018 11:23:24 +0200 Subject: [openstack-dev] minimum libvirt version for nova-compute Message-ID: <61c42853-98a5-7d22-8c5c-71a706860cfb@debian.org> Hi, Trying to get puppet-openstack to validate with Debian, I got surprised that mounting encrypted volume didn't work for me, here's the stack dump with libvirt 3.0.0 from Debian Stretch: File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1463, in attach_volume guest.attach_device(conf, persistent=True, live=live) File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 303, in attach_device self._domain.attachDeviceFlags(device_xml, flags=flags) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 186, in doit result = proxy_call(self._autowrap, f, *args, **kwargs) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 144, in proxy_call rv = execute(f, *args, **kwargs) File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 125, in execute six.reraise(c, e, tb) File "/usr/lib/python3/dist-packages/eventlet/support/six.py", line 625, in reraise raise value File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in tworker rv = meth(*args, **kwargs) File "/usr/lib/python3/dist-packages/libvirt.py", line 585, in attachDeviceFlags if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self) libvirt.libvirtError: internal error: unable to execute QEMU command 'object-add': Incorrect number of padding bytes (57) found on decrypted data After switching to libvirt 4.3.0 (my own backport from Debian Testing), it does work. So, while the minimum version of libvirt seems to be enough for normal operation, it isn't for encrypted volumes. Therefore, I wonder if Nova shouldn't declare a minimum version of libvirt higher than it claims at the moment. I'm stating that, especially because we had this topic a few weeks ago. Thoughts anyone? Cheers, Thomas Goirand (zigo) From balazs.gibizer at ericsson.com Wed Jun 20 09:23:14 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 20 Jun 2018 11:23:14 +0200 Subject: [openstack-dev] [nova]Notification update week 25 In-Reply-To: References: <1529334634.13962.0@smtp.office365.com> Message-ID: <1529486594.13962.4@smtp.office365.com> On Tue, Jun 19, 2018 at 7:07 PM, Matt Riedemann wrote: > On 6/18/2018 10:10 AM, Balázs Gibizer wrote: >> * Introduce instance.lock and instance.unlock notifications >> https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances > > This hasn't been updated in quite awhile. I wonder if someone else > wants to pick that up now? I'm OK if somebody picks it up. I will try to give review support. 
Cheers, gibi > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lyarwood at redhat.com Wed Jun 20 11:54:26 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 20 Jun 2018 12:54:26 +0100 Subject: [openstack-dev] minimum libvirt version for nova-compute In-Reply-To: <61c42853-98a5-7d22-8c5c-71a706860cfb@debian.org> References: <61c42853-98a5-7d22-8c5c-71a706860cfb@debian.org> Message-ID: <20180620115426.gqjpv6wrv2edtzz3@lyarwood.usersys.redhat.com> On 20-06-18 11:23:24, Thomas Goirand wrote: > Hi, > > Trying to get puppet-openstack to validate with Debian, I got surprised > that mounting encrypted volume didn't work for me, here's the stack dump > with libvirt 3.0.0 from Debian Stretch: > > File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", > line 1463, in attach_volume > guest.attach_device(conf, persistent=True, live=live) > File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", > line 303, in attach_device > self._domain.attachDeviceFlags(device_xml, flags=flags) > File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 186, in > doit > result = proxy_call(self._autowrap, f, *args, **kwargs) > File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 144, in > proxy_call > rv = execute(f, *args, **kwargs) > File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 125, in > execute > six.reraise(c, e, tb) > File "/usr/lib/python3/dist-packages/eventlet/support/six.py", line > 625, in reraise > raise value > File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 83, in > tworker > rv = meth(*args, **kwargs) > File "/usr/lib/python3/dist-packages/libvirt.py", line 585, in > attachDeviceFlags > if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() > failed', dom=self) > libvirt.libvirtError: internal error: unable to execute QEMU command > 'object-add': Incorrect number of padding bytes (57) found on decrypted data That's actually a bug and not a lack of support in the version of libvirt you're using: Unable to use LUKS passphrase that is exactly 16 bytes long https://bugzilla.redhat.com/show_bug.cgi?id=1447297 [libvirt] [PATCH] Fix padding of encrypted data https://www.redhat.com/archives/libvir-list/2017-May/msg00030.html > After switching to libvirt 4.3.0 (my own backport from Debian Testing), > it does work. So, while the minimum version of libvirt seems to be > enough for normal operation, it isn't for encrypted volumes. > > Therefore, I wonder if Nova shouldn't declare a minimum version of > libvirt higher than it claims at the moment. I'm stating that, > especially because we had this topic a few weeks ago. We can bump the minimum here but then we have to play a game of working out the oldest version the above fix was backported to across the various distros. I'd rather see this address by the Libvirt maintainers in Debian if I'm honest. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From yamamoto at midokura.com Wed Jun 20 11:54:56 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Wed, 20 Jun 2018 20:54:56 +0900 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <20180517035105.GD8215@thor.bakeyournoodle.com> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> Message-ID: hi, On Thu, May 17, 2018 at 12:51 PM, Tony Breeds wrote: > On Wed, May 16, 2018 at 04:14:36PM -0500, Matthew Thode wrote: >> On 18-05-16 17:07:09, Doug Hellmann wrote: >> > Excerpts from Matthew Thode's message of 2018-05-16 15:59:47 -0500: >> > > Sphinx has breaking changes (yet again) and we need to figure out how to >> > > deal with it. I think the fix will be simple for affected projects, but >> > > we should probably move forward on this. The error people are getting >> > > seems to be 'Field list ends without a blank line; unexpected unindent.' >> > > >> > > I'd like to keep on 1.7.4 and have the affected projects fix the error >> > > so we can move on, but the revert has been proposed (and approved to get >> > > gate unbroken for them). https://review.openstack.org/568248 Any >> > > advice from the community is welcome. >> > > >> > >> > Is it sphinx, or docutils? >> > >> > Do you have an example of the error? >> > >> >> From https://bugs.launchpad.net/networking-midonet/+bug/1771092 >> >> 2018-05-13 14:22:06.176410 | ubuntu-xenial | Warning, treated as error: >> 2018-05-13 14:22:06.176967 | ubuntu-xenial | /home/zuul/src/git.openstack.org/openstack/networking-midonet/midonet/neutron/db/l3_db_midonet.py:docstring of midonet.neutron.db.l3_db_midonet.MidonetL3DBMixin.get_router_for_floatingip:8:Field list ends without a blank line; unexpected unindent. >> > > Adding something like: > > (.docs) [tony at thor networking-midonet]$ ( cd ../neutron && git diff ) > diff --git a/neutron/db/l3_db.py b/neutron/db/l3_db.py > index 33b5d99b1..66794542a 100644 > --- a/neutron/db/l3_db.py > +++ b/neutron/db/l3_db.py > @@ -1091,8 +1091,8 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase, > :param internal_subnet: The subnet for the fixed-ip. > :param external_network_id: The external network for floating-ip. > > - :raises: ExternalGatewayForFloatingIPNotFound if no suitable router > - is found. > + :raises: ExternalGatewayForFloatingIPNotFound if no suitable router \ > + is found. > """ > > # Find routers(with router_id and interface address) that > (.docs) [tony at thor networking-midonet]$ ( cd ../os-vif && git diff ) > diff --git a/os_vif/plugin.py b/os_vif/plugin.py > index 56566a6..2a437a6 100644 > --- a/os_vif/plugin.py > +++ b/os_vif/plugin.py > @@ -49,10 +49,11 @@ class PluginBase(object): > Given a model of a VIF, perform operations to plug the VIF properly. > > :param vif: `os_vif.objects.vif.VIFBase` object. > - :param instance_info: `os_vif.objects.instance_info.InstanceInfo` > - object. > - :raises `processutils.ProcessExecutionError`. Plugins implementing > - this method should let `processutils.ProcessExecutionError` > + :param instance_info: `os_vif.objects.instance_info.InstanceInfo` \ > + object. > + > + :raises `processutils.ProcessExecutionError`. Plugins implementing \ > + this method should let `processutils.ProcessExecutionError` \ > bubble up. 
> """ > > @@ -63,9 +64,10 @@ class PluginBase(object): > > :param vif: `os_vif.objects.vif.VIFBase` object. > :param instance_info: `os_vif.objects.instance_info.InstanceInfo` > - object. > - :raises `processutils.ProcessExecutionError`. Plugins implementing > - this method should let `processutils.ProcessExecutionError` > + object. > + > + :raises `processutils.ProcessExecutionError`. Plugins implementing \ > + this method should let `processutils.ProcessExecutionError` \ > bubble up. > """ > > fixes the midonet docs build for me (locally) on sphinx 1.7.4. I'm far from a > sphinx expert but the chnages to neutron and os-vif seem correct to me. do you have a plan to submit these changes on gerrit? > > Perhaps the sphinx parser just got more strict? > > Yours Tony. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From witold.bedyk at est.fujitsu.com Wed Jun 20 11:55:42 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Wed, 20 Jun 2018 11:55:42 +0000 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer Message-ID: Hello Telemetry Team, any opinion on this? Best greetings Witek > -----Original Message----- > From: Bedyk, Witold > Sent: Mittwoch, 13. Juni 2018 10:28 > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca > publisher for Ceilometer > > Hello Telemetry Team, > > We would like to contribute a Monasca publisher to Ceilometer project [1] > and add it to the list of currently supported transports [2]. > The goal of the plugin is to send Ceilometer samples to Monasca API. > > I understand Gordon's concerns about adding maintenance overhead for > Ceilometer team which he expressed in review but the code is pretty much > self-contained and does not affect Ceilometer core. It's not our intention to > shift the maintenance effort and Monasca team should still be responsible > for this code. > > Adding this plugin will help in terms of interoperability of both projects and > can be useful for wider parts of the OpenStack community. > > Please let me know your thoughts. I hope we can get this code merged. > > Cheers > Witek > > > [1] https://review.openstack.org/562400 > [2] > https://docs.openstack.org/ceilometer/latest/contributor/architecture.html > #processing-the-data From julien at danjou.info Wed Jun 20 12:07:17 2018 From: julien at danjou.info (Julien Danjou) Date: Wed, 20 Jun 2018 14:07:17 +0200 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: (Witold Bedyk's message of "Wed, 20 Jun 2018 11:55:42 +0000") References: Message-ID: On Wed, Jun 20 2018, Bedyk, Witold wrote: Same as Gordon. You should maintain that in your own repo. There's just no bandwidth in Ceilometer right now for things like that. :( > Hello Telemetry Team, > > any opinion on this? > > Best greetings > Witek > > >> -----Original Message----- >> From: Bedyk, Witold >> Sent: Mittwoch, 13. 
Juni 2018 10:28 >> To: OpenStack Development Mailing List (not for usage questions) >> >> Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca >> publisher for Ceilometer >> >> Hello Telemetry Team, >> >> We would like to contribute a Monasca publisher to Ceilometer project [1] >> and add it to the list of currently supported transports [2]. >> The goal of the plugin is to send Ceilometer samples to Monasca API. >> >> I understand Gordon's concerns about adding maintenance overhead for >> Ceilometer team which he expressed in review but the code is pretty much >> self-contained and does not affect Ceilometer core. It's not our intention to >> shift the maintenance effort and Monasca team should still be responsible >> for this code. >> >> Adding this plugin will help in terms of interoperability of both projects and >> can be useful for wider parts of the OpenStack community. >> >> Please let me know your thoughts. I hope we can get this code merged. >> >> Cheers >> Witek >> >> >> [1] https://review.openstack.org/562400 >> [2] >> https://docs.openstack.org/ceilometer/latest/contributor/architecture.html >> #processing-the-data > -- Julien Danjou /* Free Software hacker https://julien.danjou.info */ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From pkovar at redhat.com Wed Jun 20 12:21:57 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 20 Jun 2018 14:21:57 +0200 Subject: [openstack-dev] [docs] Office hours instead of regular docs project meetings? Message-ID: <20180620142157.6701a2de41326adba9574ea5@redhat.com> Hi all, Due to low attendance in docs project meetings in recent months, I'd like to propose turning regular docs meetings into office hours, like many other OpenStack teams did. My idea is to hold office hours every Wednesday, same time we held our docs meetings, at 16:00 UTC, in our team channel #openstack-doc where many community members already hang out and ask their questions about OpenStack docs. Objections, comments, thoughts? Would there be interest to also hold office hours during a more APAC-friendly time slot? We'd need to volunteers to take care of it, please let me know! Thanks, pk From mriedemos at gmail.com Wed Jun 20 12:32:08 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 20 Jun 2018 07:32:08 -0500 Subject: [openstack-dev] minimum libvirt version for nova-compute In-Reply-To: <20180620115426.gqjpv6wrv2edtzz3@lyarwood.usersys.redhat.com> References: <61c42853-98a5-7d22-8c5c-71a706860cfb@debian.org> <20180620115426.gqjpv6wrv2edtzz3@lyarwood.usersys.redhat.com> Message-ID: <695a74bf-4fcf-eb3e-2711-122123b12184@gmail.com> On 6/20/2018 6:54 AM, Lee Yarwood wrote: > We can bump the minimum here but then we have to play a game of working > out the oldest version the above fix was backported to across the > various distros. I'd rather see this address by the Libvirt maintainers > in Debian if I'm honest. Just a thought, but in nova we could at least do: 1. Add a 'known issues' release note about the issue and link to the libvirt patch. and/or 2. Handle libvirtError in that case, check for the "Incorrect number of padding bytes" string in the error, and log something with a breadcrumb to the libvirt fix - that would be for people that miss the release note, or hit the issue past rocky and wouldn't have found the release note because they're on Stein+ now. 
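A purely illustrative sketch of what option 2 could look like (the helper name, log wording and where it would live are hypothetical, not an actual Nova patch): wrap the attachDeviceFlags() call seen in the traceback above and log a breadcrumb when the known LUKS padding error is detected.

import logging

import libvirt

LOG = logging.getLogger(__name__)

# Error text emitted by affected libvirt versions, and the upstream fix.
LUKS_PADDING_MSG = 'Incorrect number of padding bytes'
LIBVIRT_FIX_URL = ('https://www.redhat.com/archives/libvir-list/'
                   '2017-May/msg00030.html')


def attach_device_with_breadcrumb(domain, device_xml, flags=0):
    """Attach a device, hinting at the libvirt LUKS padding bug on failure."""
    try:
        return domain.attachDeviceFlags(device_xml, flags=flags)
    except libvirt.libvirtError as ex:
        if LUKS_PADDING_MSG in str(ex):
            LOG.warning('Attaching an encrypted volume failed due to a '
                        'known libvirt LUKS padding bug; upgrade to a '
                        'libvirt build that carries the fix: %s',
                        LIBVIRT_FIX_URL)
        raise

The string match is deliberately crude here; a real change would probably also check that the device being attached is an encrypted volume before logging the hint.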
-- Thanks, Matt From lyarwood at redhat.com Wed Jun 20 12:54:29 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 20 Jun 2018 13:54:29 +0100 Subject: [openstack-dev] minimum libvirt version for nova-compute In-Reply-To: <695a74bf-4fcf-eb3e-2711-122123b12184@gmail.com> References: <61c42853-98a5-7d22-8c5c-71a706860cfb@debian.org> <20180620115426.gqjpv6wrv2edtzz3@lyarwood.usersys.redhat.com> <695a74bf-4fcf-eb3e-2711-122123b12184@gmail.com> Message-ID: <20180620125429.q3dmbhdh34fuzgwl@lyarwood.usersys.redhat.com> On 20-06-18 07:32:08, Matt Riedemann wrote: > On 6/20/2018 6:54 AM, Lee Yarwood wrote: > > We can bump the minimum here but then we have to play a game of working > > out the oldest version the above fix was backported to across the > > various distros. I'd rather see this address by the Libvirt maintainers > > in Debian if I'm honest. > > Just a thought, but in nova we could at least do: > > 1. Add a 'known issues' release note about the issue and link to the libvirt > patch. ACK > and/or > > 2. Handle libvirtError in that case, check for the "Incorrect number of > padding bytes" string in the error, and log something with a breadcrumb to > the libvirt fix - that would be for people that miss the release note, or > hit the issue past rocky and wouldn't have found the release note because > they're on Stein+ now. Yeah that's fair, I'll submit something for both of the above today. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From yamamoto at midokura.com Wed Jun 20 12:57:05 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Wed, 20 Jun 2018 21:57:05 +0900 Subject: [openstack-dev] [neutron] Ports port_binding attribute is changing to an iterable In-Reply-To: References: Message-ID: hi, On Thu, Jun 7, 2018 at 4:32 AM, Miguel Lavalle wrote: > Dear OpenStack Networking community of projects, > > As part of the implementation of multiple port bindings in the Neutron > reference implementation > (https://specs.openstack.org/openstack/neutron-specs/specs/backlog/pike/portbinding_information_for_nova.html), > the port_binding relationship in the Port DB model is changing to be an > iterable: > > https://review.openstack.org/#/c/414251/66/neutron/plugins/ml2/models.py at 64 > > and its name is being changed to port_bindings: > > https://review.openstack.org/#/c/571041/4/neutron/plugins/ml2/models.py at 61 > > Corresponding changes are being made to the Port Oslo Versioned Object: > > https://review.openstack.org/#/c/414251/66/neutron/objects/ports.py at 285 > https://review.openstack.org/#/c/571041/4/neutron/objects/ports.py at 285 > > I did my best to find usages of these attributes in the Neutron Stadium > projects and only found them in networking-odl: > https://review.openstack.org/#/c/572212/2/networking_odl/ml2/mech_driver.py. > These are the other projects that I checked: > > networking-midonet > networking-ovn > networking-bagpipe > networking-bgpvpn > neutron-dynamic-routing > neutron-fwaas > neutron-vpnaas > networking-sfc > > I STRONGLY ENCOURAGE these projects teams to double check and see if you > might be affected. I also encourage projects in the broader OpenStack > Networking community of projects to check for possible impacts. We will be > holding these two patches until June 14th before merging them. i checked the following projects. they looked ok. 
networking-midonet networking-ovn neutron-vpnaas tap-as-a-service > > If you need help dealing with the change, please ping me in the Neutron > channel > > Best regards > > Miguel > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From witold.bedyk at est.fujitsu.com Wed Jun 20 12:58:26 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Wed, 20 Jun 2018 12:58:26 +0000 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: References: Message-ID: Hi Julien, could you please add some transparency to the decision process on which publishers are acceptable and which not? Just two months ago you have added new Prometheus publisher. That's around the same time when our code was submitted to review. We have delivered tested code and offer its maintenance. The code is self-contained and does not touch Ceilometer core. If it's broken, just Monasca interface won't work. Please reconsider it again. Greetings Witek > -----Original Message----- > From: Julien Danjou > Sent: Mittwoch, 20. Juni 2018 14:07 > To: Bedyk, Witold > Cc: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [telemetry][ceilometer][monasca] Monasca > publisher for Ceilometer > > On Wed, Jun 20 2018, Bedyk, Witold wrote: > > Same as Gordon. You should maintain that in your own repo. > There's just no bandwidth in Ceilometer right now for things like that. > :( > > > Hello Telemetry Team, > > > > any opinion on this? > > > > Best greetings > > Witek > > > > > >> -----Original Message----- > >> From: Bedyk, Witold > >> Sent: Mittwoch, 13. Juni 2018 10:28 > >> To: OpenStack Development Mailing List (not for usage questions) > >> > >> Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca > >> publisher for Ceilometer > >> > >> Hello Telemetry Team, > >> > >> We would like to contribute a Monasca publisher to Ceilometer project > >> [1] and add it to the list of currently supported transports [2]. > >> The goal of the plugin is to send Ceilometer samples to Monasca API. > >> > >> I understand Gordon's concerns about adding maintenance overhead for > >> Ceilometer team which he expressed in review but the code is pretty > >> much self-contained and does not affect Ceilometer core. It's not our > >> intention to shift the maintenance effort and Monasca team should > >> still be responsible for this code. > >> > >> Adding this plugin will help in terms of interoperability of both > >> projects and can be useful for wider parts of the OpenStack community. > >> > >> Please let me know your thoughts. I hope we can get this code merged. > >> > >> Cheers > >> Witek > >> > >> > >> [1] https://review.openstack.org/562400 > >> [2] > >> https://docs.openstack.org/ceilometer/latest/contributor/architecture > >> .html > >> #processing-the-data > > > > -- > Julien Danjou > /* Free Software hacker > https://julien.danjou.info */ From usman.awais at gmail.com Wed Jun 20 13:14:30 2018 From: usman.awais at gmail.com (Usman Awais) Date: Wed, 20 Jun 2018 18:14:30 +0500 Subject: [openstack-dev] Openstack-Zun Service Appears down Message-ID: Dear Zuners, I have installed RDO pike. I stopped openstack-nova-compute service on one of the hosts, and installed a zun-compute service. 
Although all the services seem to be running OK on both the controller and the compute node, when I do "openstack appcontainer service list" it gives me the following:

+----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+
| Id | Host         | Binary      | State | Disabled | Disabled Reason | Updated At          | Availability Zone |
+----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+
| 1  | node1.os.lab | zun-compute | down  | False    | None            | 2018-06-20 13:14:31 | nova              |
+----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+

I have checked all ports in both directions and they are open, including the etcd ports and others. All services are running; only the docker service has a warning message saying "failed to retrieve docker-runc version: exec: \"docker-runc\": executable file not found in $PATH". There is also a message at zun-compute "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:161: SAWarning: The IN-predicate on "container.uuid" was invoked with an empty sequence. This results in a contradiction, which nonetheless can be expensive to evaluate. Consider alternative strategies for improved performance."

Please guide...

Regards,
Muhammad Usman Awais

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cdent+os at anticdent.org Wed Jun 20 13:40:43 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 20 Jun 2018 14:40:43 +0100 (BST)
Subject: [openstack-dev] [oslo] [placement] setting oslo config opts from environment
In-Reply-To: <1529435957-sup-54@lrrr.local>
References: <1529435957-sup-54@lrrr.local>
Message-ID:

On Tue, 19 Jun 2018, Doug Hellmann wrote:

> I certainly have no objection to doing the work in oslo.config. As
> I described on IRC today, I think we would want to implement it
> using the new driver feature we're working on this cycle, even if
> the driver is enabled automatically so users don't have to turn it
> on. We already special case command line options and the point of
> the driver interface is to give us a way to extend the lookup logic
> without having to add more special cases.

I've started a draft spec at https://review.openstack.org/#/c/576860/

Some details still need to be filled in, but it's enough to frame the idea.

--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent

From zigo at debian.org Wed Jun 20 14:23:26 2018
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 20 Jun 2018 16:23:26 +0200
Subject: [openstack-dev] [puppet-openstack][announce][debian] puppet-openstack now has full Debian support
Message-ID: <4627b93f-0d27-0ed3-48b9-3cf8fd07b35a@debian.org>

Dear Stackers,

I am glad/overjoyed/jazzed to announce the global availability of puppet-openstack for Debian. Indeed, a few minutes ago, the CI turned all green for Debian:

https://review.openstack.org/#/c/576416

(note: the red one for CentOS is to be ignored, it looks like a non-deterministic error)

This is after 3 months of hard work, and more than 50 patches, sometimes on upstream code base (for example in Cinder, Sahara, and Neutron), often because of Python 3 or uwsgi/mod_wsgi related problems. Some of these patches aren't merged yet upstream, but are included in the Debian packages already. Also note that Debian fully supports SSL and ipv6 endpoints.

I'd like here to publicly thank all of the puppet-openstack core reviewers for their help and enthusiasm.
A big thanks to mnaser, tobasco, EmilienM and mwhahaha. Guys, you've been really awesome and helpful with me. Also a big thanks to these upstream helping with fixing these bits as explained above, and especially annp for fixing the neutron-rpc-server related problems, with the patch also pending reviews at: https://review.openstack.org/#/c/555608/ All of these puppet modules are available directly in Debian in a packaged form. To get them, simply do: apt-get install openstack-puppet-modules in Debian Sid, or using the Debian backports repository at: http://stretch-queens.infomaniak.ch/debian Still to fix, is neutron-fwaas, which seems to not like either Python 3 or using neutron-api over uwsgi (I'm not sure which of these yet). Upstream neutron developers are currently investigating this. For this reason, neutron firewall extension is currently disabled for the l3-agent, but will be reactivated as soon as a proper fix is found. Also, Ceph in Debian is currently a way behind (so we have to use upstream Debian repository for Stretch), as it lacks a proper Python 3 support, and still no Luminous release uploaded to Sid. I intend to attempt to fix this, to get a chance to get this in time for Buster. Cheers, Thomas Goirand (zigo) From openstack at nemebean.com Wed Jun 20 14:25:44 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 20 Jun 2018 09:25:44 -0500 Subject: [openstack-dev] [heat] Need new release of heat-translator library In-Reply-To: References: Message-ID: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> On 06/20/2018 02:58 AM, Patil, Tushar wrote: > Hi, > > Few weeks back, we had proposed a patch [1] to add support for translation of placement policies and that patch got merged. > > This feature will be consumed by tacker specs [2] which we are planning to implement in Rocky release and it's implementation is uploaded in patch [3]. Presently, the tests are failing on patch [3] becoz it requires newer version of heat-translator library. > > Could you please release a new version of heat-translator library so that we can complete specs[2] in Rocky timeframe. Note that you can propose a release to the releases repo[1] and then you just need the PTL or release liaison to sign off on it. 1: http://git.openstack.org/cgit/openstack/releases/tree/README.rst -Ben From julien at danjou.info Wed Jun 20 14:26:23 2018 From: julien at danjou.info (Julien Danjou) Date: Wed, 20 Jun 2018 16:26:23 +0200 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: (Witold Bedyk's message of "Wed, 20 Jun 2018 12:58:26 +0000") References: Message-ID: On Wed, Jun 20 2018, Bedyk, Witold wrote: Hi Witek, It's not a transparency issue. It's a manpower issue. We are only 2 developers active on Ceilometer: me and Mehdi. Neither me nor Mehdi wants to maintain Monasca stuff; meaning, we don't want to spend time reviewing patches, having bug opened, or whatever. There's no interest for us in that. THe Prometheus publisher you mention has been written by Mehdi and we've approved it because it fits the roadmap of the Ceilometer developers that we are — and, again we're just two. We have other projects — such as Panko — that provide Ceilometer publishers and their code is in Panko, not in Ceilometer. It's totally possible and sane. Now, if you really, really, care that much about Ceilometer and its integration with Monasca, and if you have an amazing roadmap that'll make Ceilometer better and awesome, please, do start with that. 
Right now it just looks like more work for us with no gain. :( > could you please add some transparency to the decision process on which > publishers are acceptable and which not? Just two months ago you have added new > Prometheus publisher. That's around the same time when our code was submitted > to review. > > We have delivered tested code and offer its maintenance. The code is > self-contained and does not touch Ceilometer core. If it's broken, just Monasca > interface won't work. > > Please reconsider it again. > > Greetings > Witek > > >> -----Original Message----- >> From: Julien Danjou >> Sent: Mittwoch, 20. Juni 2018 14:07 >> To: Bedyk, Witold >> Cc: OpenStack Development Mailing List (not for usage questions) >> >> Subject: Re: [openstack-dev] [telemetry][ceilometer][monasca] Monasca >> publisher for Ceilometer >> >> On Wed, Jun 20 2018, Bedyk, Witold wrote: >> >> Same as Gordon. You should maintain that in your own repo. >> There's just no bandwidth in Ceilometer right now for things like that. >> :( >> >> > Hello Telemetry Team, >> > >> > any opinion on this? >> > >> > Best greetings >> > Witek >> > >> > >> >> -----Original Message----- >> >> From: Bedyk, Witold >> >> Sent: Mittwoch, 13. Juni 2018 10:28 >> >> To: OpenStack Development Mailing List (not for usage questions) >> >> >> >> Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca >> >> publisher for Ceilometer >> >> >> >> Hello Telemetry Team, >> >> >> >> We would like to contribute a Monasca publisher to Ceilometer project >> >> [1] and add it to the list of currently supported transports [2]. >> >> The goal of the plugin is to send Ceilometer samples to Monasca API. >> >> >> >> I understand Gordon's concerns about adding maintenance overhead for >> >> Ceilometer team which he expressed in review but the code is pretty >> >> much self-contained and does not affect Ceilometer core. It's not our >> >> intention to shift the maintenance effort and Monasca team should >> >> still be responsible for this code. >> >> >> >> Adding this plugin will help in terms of interoperability of both >> >> projects and can be useful for wider parts of the OpenStack community. >> >> >> >> Please let me know your thoughts. I hope we can get this code merged. >> >> >> >> Cheers >> >> Witek >> >> >> >> >> >> [1] https://review.openstack.org/562400 >> >> [2] >> >> https://docs.openstack.org/ceilometer/latest/contributor/architecture >> >> .html >> >> #processing-the-data >> > >> >> -- >> Julien Danjou >> /* Free Software hacker >> https://julien.danjou.info */ > > -- Julien Danjou ;; Free Software hacker ;; https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From zigo at debian.org Wed Jun 20 14:37:40 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Jun 2018 16:37:40 +0200 Subject: [openstack-dev] [puppet-openstack][announce][debian] repository address (was: puppet-openstack now has full Debian support) In-Reply-To: <4627b93f-0d27-0ed3-48b9-3cf8fd07b35a@debian.org> References: <4627b93f-0d27-0ed3-48b9-3cf8fd07b35a@debian.org> Message-ID: On 06/20/2018 04:23 PM, Thomas Goirand wrote: > or using the Debian backports repository at: > > http://stretch-queens.infomaniak.ch/debian I really meant: deb http://stretch-queens.debian.net/debian \ strech-queens-backports main deb http://stretch-queens.debian.net/debian \ strech-queens-backports-nochange main which is the official URL for the 2 Stretch backports repositories. Please use that address, always, as we may point it somewhere in the future at some point (maybe in Infomaniak global mirror). Cheers, Thomas Goirand (zigo) From dtroyer at gmail.com Wed Jun 20 15:23:46 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 20 Jun 2018 10:23:46 -0500 Subject: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client In-Reply-To: <2018062010414723945413@163.com> References: <2018062010414723945413@163.com> Message-ID: On Tue, Jun 19, 2018 at 9:42 PM, zijian1012 at 163.com wrote: > Thks for replying, just want to confirm, you mentioned "We have intended to > migrate everything to use > OpenStackSDK", the current use of python-*client is: > 1. OSC > 2. all services that need to interact with other services (e.g.: nova > libraries: self.volume_api = volume_api or cinder.API()) > Do you mean that both of the above will be migrated to use the OpenStack > SDK? I am only directly speaking for OSC. Initially we did not think that services using the SDK would be feasible, Monty has taken it to a place where that should now be a possibility. I am willing to find out that doing so is a good idea. :) dt -- Dean Troyer dtroyer at gmail.com From zbitter at redhat.com Wed Jun 20 15:33:30 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 20 Jun 2018 11:33:30 -0400 Subject: [openstack-dev] [all] Non-OpenStack projects under the Foundation umbrella Message-ID: <36eb0792-3e12-723b-7aa5-da6bf595efca@redhat.com> You may or may not be aware that the Foundation is in the process of expanding its mission to support projects other than OpenStack. It's a confusing topic and it's hard to find information about it all in one place, so I collected everything I was able to piece together during the Summit into a blog post that I hope will help to clarify the current status: https://www.zerobanana.com/archive/2018/06/14#osf-expansion cheers, Zane. From doug at doughellmann.com Wed Jun 20 15:34:10 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 20 Jun 2018 11:34:10 -0400 Subject: [openstack-dev] [python3] building tools for the transition Message-ID: <1529506563-sup-8170@lrrr.local> I want to thank Nguyễn Trí Hải, Ma Lei, and Huang Zhiping for agreeing to be a part of the goal champion team for the python3 goal for Stein. The next step for us is to build some tools to make the process a little easier. One of the aspects of this goal that makes it difficult is that we need to change so many repositories. There are more than 480 repositories associated with official project teams. I do not think we want to edit their zuul configurations by hand. 
:-) It would be ideal if we had a script that could read the openstack-infra/project-config/zuul.d/projects.yaml file to find the project settings for a repository and copy those settings into the right settings file within the repository. We could then review the patch locally before proposing the change to gerrit. A second script to remove the settings from the project-config file, and then submit that second change as a patch that has a Depends-On listed for the first patch would also be useful. Another aspect that makes it difficult is that zuul is very flexible with how it reads its configuration files. The configuration can be in .zuul.yaml, zuul.yaml, .zuul.d/*.yaml, or zuul.d/*.yaml. Projects have not been consistent with the way they have named their files, and that will make writing a script to automate editing them more difficult. For example, I found automaton/.zuul.yaml, rally/zuul.yaml, oslo.config/.zuul.d, and python-ironicclient/zuul.d. When I was working on adding the lower-constraints jobs, I created some tools in https://github.com/dhellmann/openstack-requirements-stop-sync to help create the patches, and we may be able to reuse some of that code to make similar changes for this goal. https://github.com/dhellmann/openstack-requirements-stop-sync/blob/master/make_patches.sh is the script that makes the patches, and https://github.com/dhellmann/openstack-requirements-stop-sync/blob/master/add_job.py is the python script that edits the YAML file. The task for this goal is a little more complicated, since we are not just adding 1 template to the existing project settings. We may have to add several templates and jobs to the existing settings, merging the two sets together, and removing branch specifiers at the same time. And we may need to do that in several branches. Would a couple of you have time to work on the script to prepare the patches? We can work in the openstack/goal-tools repository so that we can collaborate on the code in an official OpenStack repository (instead of using my GitHub project). Doug From rico.lin.guanyu at gmail.com Wed Jun 20 15:40:26 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 20 Jun 2018 23:40:26 +0800 Subject: [openstack-dev] [heat] Need new release of heat-translator library In-Reply-To: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> References: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> Message-ID: I send a release patch now https://review.openstack.org/#/c/576895/ Also, add Bob Haddleton to this ML who is considering as PTL for heat-translator team Ben Nemec 於 2018年6月20日 週三 下午10:26寫道: > > > On 06/20/2018 02:58 AM, Patil, Tushar wrote: > > Hi, > > > > Few weeks back, we had proposed a patch [1] to add support for > translation of placement policies and that patch got merged. > > > > This feature will be consumed by tacker specs [2] which we are planning > to implement in Rocky release and it's implementation is uploaded in patch > [3]. Presently, the tests are failing on patch [3] becoz it requires newer > version of heat-translator library. > > > > Could you please release a new version of heat-translator library so > that we can complete specs[2] in Rocky timeframe. > > Note that you can propose a release to the releases repo[1] and then you > just need the PTL or release liaison to sign off on it. 
> > 1: http://git.openstack.org/cgit/openstack/releases/tree/README.rst > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Jun 20 15:52:44 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 20 Jun 2018 11:52:44 -0400 Subject: [openstack-dev] [oslo][osc][cliff][tacker] New release of cmd2 break cliff and tacker client In-Reply-To: References: Message-ID: <1529509944-sup-6011@lrrr.local> Excerpts from super user's message of 2018-06-20 15:49:25 +0900: > Hi everyone, > > New release of cmd2 0.9.0 seems to break cliff and python-tackerclient. > > The cmd2 library changed the way it handles parsing input commands. It now > uses a different library, which means the values passed to the commands are > no longer PyParsing objects and are instead Statement objects. These > objects do not have a “parsed” property, so the receiving code needs to > work with them differently. > > The patch https://review.openstack.org/571524 tries to fix this in the > places within cliff where it was failing in interactive mode. > > Please consider reviewing this patch and have a new release for cliff so > that the python-tackerclient pass the py35 tests. > > Thank you, > Nguyen Hai That patch is now merged and I have requested a release of cliff (https://review.openstack.org/576897). Doug From amy at demarco.com Wed Jun 20 15:52:50 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 20 Jun 2018 10:52:50 -0500 Subject: [openstack-dev] [all] Non-OpenStack projects under the Foundation umbrella In-Reply-To: <36eb0792-3e12-723b-7aa5-da6bf595efca@redhat.com> References: <36eb0792-3e12-723b-7aa5-da6bf595efca@redhat.com> Message-ID: Nice write up, thanks Zane! Amy (spotz) On Wed, Jun 20, 2018 at 10:33 AM, Zane Bitter wrote: > You may or may not be aware that the Foundation is in the process of > expanding its mission to support projects other than OpenStack. It's a > confusing topic and it's hard to find information about it all in one > place, so I collected everything I was able to piece together during the > Summit into a blog post that I hope will help to clarify the current status: > > https://www.zerobanana.com/archive/2018/06/14#osf-expansion > > cheers, > Zane. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Wed Jun 20 16:00:26 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 20 Jun 2018 18:00:26 +0200 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: <5B27D46F.10804@windriver.com> Message-ID: On Tue, Jun 19, 2018 at 9:59 PM, Artom Lifshitz wrote: > > Adding > > claims support later on wouldn't change any on-the-wire messaging, it > would > > just make things work more robustly. > > I'm not even sure about that. 
Assuming [1] has at least the right > idea, it looks like it's an either-or kind of thing: either we use > resource tracker claims and get the new instance NUMA topology that > way, or do what was in the spec and have the dest send it to the > source. > > That being said, I still think I'm still in favor of choosing the > "easy" way out. For instance, [2] should fail because we can't access > the api db from the compute node. So unless there's a simpler way, > using RT claims would involve changing the RPC to add parameters to > check_can_live_migration_destination, which, while not necessarily > bad, seems like useless complexity for a thing we know will get ripped > out. > > When we reviewed the spec, we agreed as a community to say that we should still get race conditions once the series is implemented, but at least it helps operators. Quoting : "It would also be possible for another instance to steal NUMA resources from a live migrated instance before the latter’s destination compute host has a chance to claim them. Until NUMA resource providers are implemented [3] and allow for an essentially atomic schedule+claim operation, scheduling and claiming will keep being done at different times on different nodes. Thus, the potential for races will continue to exist." https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/numa-aware-live-migration.html#proposed-change So, my own opinion is that yes, the "easy" way out is better than no way. >From what I undertand (and let's be honest I hadn't time to look at your code yet), your series don't diverge from the proposed implementation so I don't see a problem here. If, for some reasons, you need to write an alternative, just tell us why (and ideally write a spec amendment patch so the spec is consistent with the series). -Sylvain [1] https://review.openstack.org/#/c/576222/ > [2] https://review.openstack.org/#/c/576222/3/nova/compute/manager.py at 5897 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bob.haddleton at nokia.com Wed Jun 20 16:01:10 2018 From: bob.haddleton at nokia.com (HADDLETON, Robert W (Bob)) Date: Wed, 20 Jun 2018 11:01:10 -0500 Subject: [openstack-dev] [heat] Need new release of heat-translator library In-Reply-To: References: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> Message-ID: <70318015-9ec9-b649-ed2c-0bbc69083727@nokia.com> This request had come to me from someone else in the Tacker team and I was working on including a couple of other patchsets in the release, but this is fine. Please tag these requests as [heat-translator] in the subject so they get flagged to me and I'm happy to work them. Bob On 6/20/2018 10:40 AM, Rico Lin wrote: > I send a release patch now https://review.openstack.org/#/c/576895/ > Also, add Bob Haddleton to this ML who is considering as PTL for > heat-translator team > > Ben Nemec > 於 > 2018年6月20日 週三 下午10:26寫道: > > > > On 06/20/2018 02:58 AM, Patil, Tushar wrote: > > Hi, > > > > Few weeks back, we had proposed a patch [1] to add support for > translation of placement policies and that patch got merged. 
> > > > This feature will be consumed by tacker specs [2] which we are > planning to implement in Rocky release and it's implementation is > uploaded in patch [3]. Presently, the tests are failing on patch > [3] becoz it requires newer version of heat-translator library. > > > > Could you please release a new version of heat-translator > library so that we can complete specs[2] in Rocky timeframe. > > Note that you can propose a release to the releases repo[1] and > then you > just need the PTL or release liaison to sign off on it. > > 1: http://git.openstack.org/cgit/openstack/releases/tree/README.rst > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > May The Force of OpenStack Be With You, > */Rico Lin > /*irc: ricolin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bob_haddleton.vcf Type: text/x-vcard Size: 252 bytes Desc: not available URL: From strigazi at gmail.com Wed Jun 20 16:05:11 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Wed, 20 Jun 2018 18:05:11 +0200 Subject: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC Message-ID: Hello list, We are going to have a second weekly meeting for magnum for 3 weeks as a test to reach out to contributors in the Americas. You can join us tomorrow (or today for some?) at 1700UTC in #openstack-containers . Cheers, Spyros -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Wed Jun 20 16:07:49 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 20 Jun 2018 12:07:49 -0400 Subject: [openstack-dev] [heat] Need new release of heat-translator library In-Reply-To: References: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> Message-ID: <63e3312d-29ec-5d1e-d0a6-59c1664c51f8@redhat.com> On 20/06/18 11:40, Rico Lin wrote: > I send a release patch now https://review.openstack.org/#/c/576895/ > Also, add Bob Haddleton to this ML who is considering as PTL for > heat-translator team Is it time to consider moving the heat-translator and tosca-parser repos from being deliverables of Heat to deliverables of Tacker? The current weird structure dates from the days of the experiment with OpenStack 'Programs' (vs. Projects). Heat cores don't really have time to be engaging with heat-translator, and Tacker is clearly the major user and the thing that keeps getting blocked on needing patches merged and releases made. > Ben Nemec > 於 > 2018年6月20日 週三 下午10:26寫道: > > > > On 06/20/2018 02:58 AM, Patil, Tushar wrote: > > Hi, > > > > Few weeks back, we had proposed a patch [1] to add support for > translation of placement policies and that patch got merged. > > > > This feature will be consumed by tacker specs [2] which we are > planning to implement in Rocky release and it's implementation is > uploaded in patch [3]. Presently, the tests are failing on patch [3] > becoz it requires newer version of heat-translator library. > > > > Could you please release a new version of heat-translator library > so that we can complete specs[2] in Rocky timeframe. > > Note that you can propose a release to the releases repo[1] and then > you > just need the PTL or release liaison to sign off on it. 
> > 1: http://git.openstack.org/cgit/openstack/releases/tree/README.rst > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > May The Force of OpenStack Be With You, > */Rico Lin > /*irc: ricolin > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tobias.urdin at crystone.com Wed Jun 20 16:10:15 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Wed, 20 Jun 2018 16:10:15 +0000 Subject: [openstack-dev] [puppet-openstack][announce][debian] puppet-openstack now has full Debian support References: <4627b93f-0d27-0ed3-48b9-3cf8fd07b35a@debian.org> Message-ID: <3daafcbbb75748d296144acf58e37ccb@mb01.staff.ognet.se> Great work Thomas! We also got a good overview on some challenges and issues regarding moving to python3 support for the Puppet modules in the future. But we can easily state that the main work will lay with making sure there is python3 supported distro packages. Which you with Debian python3 support has also paved the way with. Looking forward to all python3 release in the future, all aboard the train! Best regards Tobias On 06/20/2018 04:25 PM, Thomas Goirand wrote: > Dear Stackers, > > I am glad/overjoyed/jazzed to announce the global availability of > puppet-openstack for Debian. Indeed, a few minutes ago, the CI turned > all green for Debian: > > https://review.openstack.org/#/c/576416 > > (note: the red one for CentOS is to be ignored, it looks like > non-deterministic error) > > This is after 3 months of hard work, and more than 50 patches, sometimes > on upstream code base (for example in Cinder, Sahara, and Neutron), > often because of Python 3 or uwsgi/mod_wsgi related problems. Some of > these patches aren't merged yet upstream, but are included in the Debian > packages already. Also note that Debian fully supports SSL and ipv6 > endpoints. > > I'd like here to publicly thanks all of the puppet-openstack core > reviewers for their help and enthusiasm. A big thanks to mnaser, > tobasco, EmilienM and mwhahaha. Guys, you've been really awesome and > helpful with me. Also a big thanks to these upstream helping with fixing > these bits as explained above, and especially annp for fixing the > neutron-rpc-server related problems, with the patch also pending reviews > at: https://review.openstack.org/#/c/555608/ > > All of these puppet modules are available directly in Debian in a > packaged form. To get them, simply do: > > apt-get install openstack-puppet-modules > > in Debian Sid, or using the Debian backports repository at: > > http://stretch-queens.infomaniak.ch/debian > > Still to fix, is neutron-fwaas, which seems to not like either Python 3 > or using neutron-api over uwsgi (I'm not sure which of these yet). > Upstream neutron developers are currently investigating this. For this > reason, neutron firewall extension is currently disabled for the > l3-agent, but will be reactivated as soon as a proper fix is found. 
> > Also, Ceph in Debian is currently a way behind (so we have to use > upstream Debian repository for Stretch), as it lacks a proper Python 3 > support, and still no Luminous release uploaded to Sid. I intend to > attempt to fix this, to get a chance to get this in time for Buster. > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Wed Jun 20 16:28:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 20 Jun 2018 12:28:12 -0400 Subject: [openstack-dev] [heat] Need new release of heat-translator library In-Reply-To: <63e3312d-29ec-5d1e-d0a6-59c1664c51f8@redhat.com> References: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> <63e3312d-29ec-5d1e-d0a6-59c1664c51f8@redhat.com> Message-ID: <1529512019-sup-6593@lrrr.local> Excerpts from Zane Bitter's message of 2018-06-20 12:07:49 -0400: > On 20/06/18 11:40, Rico Lin wrote: > > I send a release patch now https://review.openstack.org/#/c/576895/ > > Also, add Bob Haddleton to this ML who is considering as PTL for > > heat-translator team > > Is it time to consider moving the heat-translator and tosca-parser repos > from being deliverables of Heat to deliverables of Tacker? The current > weird structure dates from the days of the experiment with OpenStack > 'Programs' (vs. Projects). > > Heat cores don't really have time to be engaging with heat-translator, > and Tacker is clearly the major user and the thing that keeps getting > blocked on needing patches merged and releases made. It sounds like it. I had no idea there was any reason to look to anyone other than the Heat PTL or liaison for approval of that release. A WIP on the patch would have been OK, too, but if the Tacker team is really the one responsible we should update the repo governance. Doug From rico.lin.guanyu at gmail.com Wed Jun 20 16:31:38 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 21 Jun 2018 00:31:38 +0800 Subject: [openstack-dev] [heat][tacker][heat-translator] deliverables of heat-translator library In-Reply-To: <1529512019-sup-6593@lrrr.local> References: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> <63e3312d-29ec-5d1e-d0a6-59c1664c51f8@redhat.com> <1529512019-sup-6593@lrrr.local> Message-ID: To continue the discussion in http://lists.openstack.org/pipermail/openstack-dev/2018-June/131681.html Add Tacker and heat-translator to make sure all aware this discussion On Thu, Jun 21, 2018 at 12:28 AM Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-06-20 12:07:49 -0400: > > On 20/06/18 11:40, Rico Lin wrote: > > > I send a release patch now https://review.openstack.org/#/c/576895/ > > > Also, add Bob Haddleton to this ML who is considering as PTL for > > > heat-translator team > > > > Is it time to consider moving the heat-translator and tosca-parser repos > > from being deliverables of Heat to deliverables of Tacker? The current > > weird structure dates from the days of the experiment with OpenStack > > 'Programs' (vs. Projects). > > > > Heat cores don't really have time to be engaging with heat-translator, > > and Tacker is clearly the major user and the thing that keeps getting > > blocked on needing patches merged and releases made. > > It sounds like it. 
I had no idea there was any reason to look to anyone > other than the Heat PTL or liaison for approval of that release. A WIP > on the patch would have been OK, too, but if the Tacker team is really > the one responsible we should update the repo governance. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jun 20 16:37:14 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Jun 2018 16:37:14 +0000 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services In-Reply-To: <20180606012949.b5lxxvcotahkhwv6@yuggoth.org> References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> <1527710294.31249.24.camel@redhat.com> <86bf4382-2bdd-02f9-5544-9bad6190263b@openstack.org> <20180531130047.q2x2gmhkredaqxis@yuggoth.org> <20180606012949.b5lxxvcotahkhwv6@yuggoth.org> Message-ID: <20180620163713.57najuohjasgkps4@yuggoth.org> On 2018-06-06 01:29:49 +0000 (+0000), Jeremy Stanley wrote: [...] > Seeing no further objections, I give you > https://review.openstack.org/572656 for the next step. That change merged just a few minutes ago, and https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services now includes: A Castellan-compatible key store OpenStack components may keep secrets in a key store, using Oslo’s Castellan library as an indirection layer. While OpenStack provides a Castellan-compatible key store service, Barbican, other key store backends are also available for Castellan. Note that in the context of the base services set Castellan is intended only to provide an interface for services to interact with a key store, and it should not be treated as a means to proxy API calls from users to that key store. In order to reduce unnecessary exposure risks, any user interaction with secret material should be left to a dedicated API instead (preferably as provided by Barbican). Thanks to everyone who helped brainstorming/polishing, and here's looking forward to a ubiquity of default security features and functionality in future OpenStack releases! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From bob.haddleton at nokia.com Wed Jun 20 16:38:17 2018 From: bob.haddleton at nokia.com (HADDLETON, Robert W (Bob)) Date: Wed, 20 Jun 2018 11:38:17 -0500 Subject: [openstack-dev] [heat][tacker][heat-translator] deliverables of heat-translator library In-Reply-To: References: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> <63e3312d-29ec-5d1e-d0a6-59c1664c51f8@redhat.com> <1529512019-sup-6593@lrrr.local> Message-ID: <9c426207-b1ce-abe3-1b48-5e0fefe4bf32@nokia.com> The Tacker team is dependent on tosca-parser and heat-translator but they are not the only consumers. I agree the structure is odd, and Sahdev has more of the history than I do. In the past the requests from the Tacker team have come to Sahdev/me and we have created releases as needed.  
For some reason this time a request went to the Heat ML, in addition to a separate request to me directly. I'm open to changes in the structure but I don't think Tacker is the right place to put the deliverables. Bob On 6/20/2018 11:31 AM, Rico Lin wrote: > To continue the discussion in > http://lists.openstack.org/pipermail/openstack-dev/2018-June/131681.html > > Add Tacker and heat-translator to make sure all aware this discussion > > On Thu, Jun 21, 2018 at 12:28 AM Doug Hellmann > wrote: > > Excerpts from Zane Bitter's message of 2018-06-20 12:07:49 -0400: > > On 20/06/18 11:40, Rico Lin wrote: > > > I send a release patch now > https://review.openstack.org/#/c/576895/ > > > Also, add Bob Haddleton to this ML who is considering as PTL for > > > heat-translator team > > > > Is it time to consider moving the heat-translator and > tosca-parser repos > > from being deliverables of Heat to deliverables of Tacker? The > current > > weird structure dates from the days of the experiment with > OpenStack > > 'Programs' (vs. Projects). > > > > Heat cores don't really have time to be engaging with > heat-translator, > > and Tacker is clearly the major user and the thing that keeps > getting > > blocked on needing patches merged and releases made. > > It sounds like it. I had no idea there was any reason to look to > anyone > other than the Heat PTL or liaison for approval of that release. A WIP > on the patch would have been OK, too, but if the Tacker team is really > the one responsible we should update the repo governance. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > May The Force of OpenStack Be With You, > */Rico Lin > /*irc: ricolin > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bob_haddleton.vcf Type: text/x-vcard Size: 252 bytes Desc: not available URL: From mriedemos at gmail.com Wed Jun 20 16:48:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 20 Jun 2018 11:48:05 -0500 Subject: [openstack-dev] [nova] [placement] placement update 18-24 In-Reply-To: References: Message-ID: Thanks for this, a few things inline. On 6/15/2018 10:04 AM, Chris Dent wrote: > > The flip side of this is that it highlights that we have a growing > documentation debt with the many features provided by placement and > how to make best use of them in nova (and other services that might > like to use placement). Before the end of the cycle we will need to > be sure that we set aside a considerable chunk of time to address > this gap. I'm glad you mentioned this so I didn't have to. :) The two things off the top of my head that are going to be important for docs are: 1. How to deploy / model shared disk. Seems fairly straight-forward, and we could even maybe create a multi-node ceph job that does this - wouldn't that be awesome?!?! 2. Adding the placement database stuff to the install guide and the placement upgrade notes. I assume that for fresh installs, we want people using the placement database configuration so they don't have to move later during an upgrade, so that seems like a no-brainer. 
The upgrade docs for placement are per-release and should have a mention of the placement database and any notes / pointers (the spec?) on migration options. What we heard at the summit, for the most part, was 10 minutes of down time in the API for a DB copy and re-config was OK for most, and others that need zero downtime have ways of doing that as well. We don't need a definitive guide, just high level wording on options. > > There is now a [heal allocations > CLI](https://review.openstack.org/#/c/565886/) which is designed to > help people migrate away from the CachingScheduler (which doesn't > use placement). Not only that, but we can use it to heal missing/sentinel consumers from the "consumer generations" blueprint. I've started something for that here: https://review.openstack.org/#/c/574488/ > > Nova host aggregates are now magically mirrored as placement > aggregates and, amongst other things, this is used to honor the > [availability_zone hint via > placement](https://review.openstack.org/#/c/546282/). The aggregates mirror blueprint is not done until the 'nova-manage placement sync_aggregates' command is in place, which is started here: https://review.openstack.org/#/c/575912/ > > # Extraction > All of the functional changes for the placement DB stuff are merged or approved, I think there are just some cleanup follow-ups and docs needed still, but otherwise that's done I think and could probably be removed from the runway slot it's in? > > * >   Convert driver supported capabilities to compute node provider >   traits I (finally) got around to rebasing this yesterday. There are a few TODOs left but it shouldn't be too bad, hopefully I can close that out when I get back next week. -- Thanks, Matt From mordred at inaugust.com Wed Jun 20 16:51:46 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 20 Jun 2018 11:51:46 -0500 Subject: [openstack-dev] [openstackclient][openstacksdk] why does openstackclient rely on openstacksdk for get a network client In-Reply-To: References: <2018062010414723945413@163.com> Message-ID: On 06/20/2018 10:23 AM, Dean Troyer wrote: > On Tue, Jun 19, 2018 at 9:42 PM, zijian1012 at 163.com wrote: >> Thks for replying, just want to confirm, you mentioned "We have intended to >> migrate everything to use >> OpenStackSDK", the current use of python-*client is: >> 1. OSC >> 2. all services that need to interact with other services (e.g.: nova >> libraries: self.volume_api = volume_api or cinder.API()) >> Do you mean that both of the above will be migrated to use the OpenStack >> SDK? > > I am only directly speaking for OSC. Initially we did not think that > services using the SDK would be feasible, Monty has taken it to a > place where that should now be a possibility. I am willing to find > out that doing so is a good idea. :) Yes, I think this is a good idea to explore - but I also think we should be conservative with the effort. There are things we'll need to learn about and improve. We're VERY close to time for making the push to get OSC converted (we need to finish one more patch for version discovery / microversion support first - and I'd really like to get an answer for the Resource/munch/shade interaction in - but that's honestly realistically like 2 or maybe 3 patches, even though they will be 2 or 3 complex patches. I started working a bit on osc-lib patches - I'll try to get those pushed up soon. 
Monty From dtroyer at gmail.com Wed Jun 20 17:29:48 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 20 Jun 2018 12:29:48 -0500 Subject: [openstack-dev] [osc][python-openstackclient] osc-included image signing In-Reply-To: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> References: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> Message-ID: [Apologies for the relay in responding...] On Fri, Jun 1, 2018 at 8:13 AM, Josephine Seifert wrote: > our team has implemented a prototype for an osc-included image signing. We > would like to propose a spec or something like this, but haven't found where > to start at. So here is a brief concept of what we want to contribute: > > https://etherpad.openstack.org/p/osc-included_image_signing > > Please advise us which steps to take next! This looks like a great addition, thanks! I am not familiar with cursive, it is not a current dependency of OSC. Also, does this depend on barbican client at all? That is not a direct dependency of OSC, If it does have a hard dependency on barbican client, we would need to handle the errors if it is not installed. We do not have a formal spec process in OSC, that etherpad[0[ and story [1] look good. Tasks 19810 and 19812 could likely be done in the same review depending on how things are structured. Go ahead and post WIP reviews and we can look at it further. To merge I'll want all of the usual tests, docs, release notes, etc but don't wait if that is not all done up front. dt [0] https://etherpad.openstack.org/p/osc-included_image_signing [1] https://storyboard.openstack.org/?#!/story/2002128 -- Dean Troyer dtroyer at gmail.com From zbitter at redhat.com Wed Jun 20 17:38:36 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 20 Jun 2018 13:38:36 -0400 Subject: [openstack-dev] [heat][tacker][heat-translator] deliverables of heat-translator library In-Reply-To: <9c426207-b1ce-abe3-1b48-5e0fefe4bf32@nokia.com> References: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> <63e3312d-29ec-5d1e-d0a6-59c1664c51f8@redhat.com> <1529512019-sup-6593@lrrr.local> <9c426207-b1ce-abe3-1b48-5e0fefe4bf32@nokia.com> Message-ID: On 20/06/18 12:38, HADDLETON, Robert W (Bob) wrote: > The Tacker team is dependent on tosca-parser and heat-translator but > they are not the only consumers. > > I agree the structure is odd, and Sahdev has more of the history than I do. History lesson: At the time (2014) OpenStack was organised into 'Programs', that could contain multiple projects. It seemed to make sense to bring heat-translator into the Orchestration Program. It had its own PTL (Sahdev) and its own core team (although Heat cores from the time still have +2 rights on it), and operated essentially independently. There were discussions about eventually combining it with heatclient or even Heat itself once it was mature, but that hasn't come up in quite a while and there are no resources available to work on it now anyway. When we scrapped 'Programs', it just got converted to a deliverable of the Heat project, instead of its own separate project. In practice, however, nothing actually changed and it kept its own (technically unofficial, I think) PTL and operated independently. That's the source of the weirdness. Since then the number of core reviewers has dropped considerably and people have difficulty getting patches in and releases made. Most of the people bugging me about that have been from Tacker, hence the suggestion to move the project over there: since they are the biggest users they could help maintain it. 
> In the past the requests from the Tacker team have come to Sahdev/me and > we have created > releases as needed.  For some reason this time a request went to the > Heat ML, in addition to > a separate request to me directly. > > I'm open to changes in the structure but I don't think Tacker is the > right place to put the > deliverables. What would you suggest? cheers, Zane. > Bob > > On 6/20/2018 11:31 AM, Rico Lin wrote: >> To continue the discussion in >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/131681.html >> >> Add Tacker and heat-translator to make sure all aware this discussion >> >> On Thu, Jun 21, 2018 at 12:28 AM Doug Hellmann > > wrote: >> >> Excerpts from Zane Bitter's message of 2018-06-20 12:07:49 -0400: >> > On 20/06/18 11:40, Rico Lin wrote: >> > > I send a release patch now >> https://review.openstack.org/#/c/576895/ >> > > Also, add Bob Haddleton to this ML who is considering as PTL for >> > > heat-translator team >> > >> > Is it time to consider moving the heat-translator and >> tosca-parser repos >> > from being deliverables of Heat to deliverables of Tacker? The >> current >> > weird structure dates from the days of the experiment with >> OpenStack >> > 'Programs' (vs. Projects). >> > >> > Heat cores don't really have time to be engaging with >> heat-translator, >> > and Tacker is clearly the major user and the thing that keeps >> getting >> > blocked on needing patches merged and releases made. >> >> It sounds like it. I had no idea there was any reason to look to >> anyone >> other than the Heat PTL or liaison for approval of that release. A WIP >> on the patch would have been OK, too, but if the Tacker team is really >> the one responsible we should update the repo governance. >> >> Doug >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> May The Force of OpenStack Be With You, >> */Rico Lin >> /*irc: ricolin >> >> >> >> > From kennelson11 at gmail.com Wed Jun 20 18:24:38 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 20 Jun 2018 11:24:38 -0700 Subject: [openstack-dev] [PTG] Updates! Message-ID: Hello Everyone! Wanted to give you some updates on PTG4 planning. We have finalized the list of SIGs/ Groups/WGs/Teams that are attending. They are as follows: - Airship - API SIG - Barbican/Security SIG - Blazar - Chef OpenStack - Cinder - Cyborg - Designate - Documentation - Edge Computing Group - First Contact SIG - Glance - Heat - Horizon - Infrastructure - Interop WG - Ironic - Kata - Keystone - Kolla - LOCI - Manila - Masakari - Mistral - Monasca - Neutron - Nova - Octavia - OpenStack Ansible - OpenStack Charms - OpenStack Helm - OpenStackClient - Operator Meetup Puppet OpenStack - QA - Oslo - Public Cloud WG - Release Management - Requirements - Sahara - Scientific SIG - Self-Healing SIG - SIG- K8s - StarlingX - Swift - TC - TripleO - Upgrades SIG - Watcher - Zuul (pending confirmation) Thierry and I are working on placing them into a strawman schedule to reduce conflicts between related or overlapping groups. We should have more on what that will look like and a draft for you all to review in the next few weeks. We also wanted to remind you all of the Travel Support Program. We are again doing a two phase selection. The first deadline is approaching: July 1st. 
At this point we have less than a dozen applicants so if you need it or even think you need it, I urge you to apply here[1]. Also! Reminder that we have a finite number of rooms in the hotel block so please book early to make sure you get the discounted rate before they run out. You can book those rooms here[2] (pardon the ugly URL). Can't wait to see you all there! -Kendall Nelson (diablo_rojo) P.S. Gonna try to do a game night again since you all seemed to enjoy it so much last time :) [1] https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 [2] https://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Teams%20Gathering%2C%20Openstack%5Edensa%60opnopna%7Copnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes -------------- next part -------------- An HTML attachment was scrubbed... URL: From bodenvmw at gmail.com Wed Jun 20 19:45:24 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Wed, 20 Jun 2018 13:45:24 -0600 Subject: [openstack-dev] [tricircle] Zuul v3 integration status In-Reply-To: <27220cea.3678.1641ad806d0.Coremail.linghucongsong@163.com> References: <922b0570-988e-98d2-56db-615d388de1f6@gmail.com> <27220cea.3678.1641ad806d0.Coremail.linghucongsong@163.com> Message-ID: <7e79bab0-2c9c-ffa9-079b-a307263c49b3@gmail.com> Thanks for that. I'm a bit concerned about how to proceed with dependencies in the meantime, it's not realistic to hold all such patches until S. Perhaps we can continue this discussion in [1]? [1] https://bugs.launchpad.net/tricircle/+bug/1776922 On 6/19/18 7:38 PM, linghucongsong wrote: > we will plan this as a bp, but maybe can not finished it > > in the R release, we promise must be finish it in the next openstack > version. > From remo at rm.ht Wed Jun 20 20:56:20 2018 From: remo at rm.ht (Remo Mattei) Date: Wed, 20 Jun 2018 13:56:20 -0700 Subject: [openstack-dev] [magnum] K8S apiserver key sync In-Reply-To: <585ca53f-4ef1-01ce-b096-9a949130094e@catalyst.net.nz> References: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com> <585ca53f-4ef1-01ce-b096-9a949130094e@catalyst.net.nz> Message-ID: <59464D3F-D22C-4F8F-A774-E13102948C2C@rm.ht> Hello guys, what will be the right channel to as a question about having K8 (magnum working with Tripleo)? I have the following errors.. http://pastebin.mattei.co/index.php/view/2d1156f1 Any tips are appreciated. Thanks Remo > On Jun 19, 2018, at 2:13 PM, Fei Long Wang wrote: > > Hi there, > > For people who maybe still interested in this issue. I have proposed a patch, see https://review.openstack.org/576029 And I have verified with Sonobuoy for both multi masters (3 master nodes) and single master clusters, all worked. Any comments will be appreciated. Thanks. > > > On 21/05/18 01:22, Sergey Filatov wrote: >> Hi! >> I’d like to initiate a discussion about this bug: [1]. >> To resolve this issue we need to generate a secret cert and pass it to master nodes. We also need to store it somewhere to support scaling. >> This issue is specific for kubernetes drivers. Currently in magnum we have a general cert manager which is the same for all the drivers. >> >> What do you think about moving cert_manager logic into a driver-specific area? >> Having this common cert_manager logic forces us to generate client cert with “admin” and “system:masters” subject & organisation names [2], >> which is really something that we need only for kubernetes drivers. 
>> >> [1] https://bugs.launchpad.net/magnum/+bug/1766546 >> [2] https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64 >> >> >> ..Sergey Filatov >> >> >> >>> On 20 Apr 2018, at 20:57, Sergey Filatov > wrote: >>> >>> Hello, >>> >>> I looked into k8s drivers for magnum I see that each api-server on master node generates it’s own service-account-key-file. This causes issues with service-accounts authenticating on api-server. (In case api-server endpoint moves). >>> As far as I understand we should have either all api-server keys synced on api-servesr or pre-generate single api-server key. >>> >>> What is the way for magnum to get over this issue? >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sombrafam at gmail.com Wed Jun 20 20:55:56 2018 From: sombrafam at gmail.com (Erlon Cruz) Date: Wed, 20 Jun 2018 17:55:56 -0300 Subject: [openstack-dev] [PTG] Updates! In-Reply-To: References: Message-ID: +1 on the game night! Reserve a room only for that :) Em qua, 20 de jun de 2018 às 15:25, Kendall Nelson escreveu: > Hello Everyone! > > Wanted to give you some updates on PTG4 planning. We have finalized the > list of SIGs/ Groups/WGs/Teams that are attending. They are as follows: > > - > > Airship > - > > API SIG > - > > Barbican/Security SIG > - > > Blazar > - > > Chef OpenStack > - > > Cinder > - > > Cyborg > - > > Designate > - > > Documentation > - > > Edge Computing Group > - > > First Contact SIG > - > > Glance > - > > Heat > - > > Horizon > - > > Infrastructure > - > > Interop WG > > > - > > Ironic > - > > Kata > - > > Keystone > - > > Kolla > - > > LOCI > - > > Manila > - > > Masakari > - > > Mistral > - > > Monasca > - > > Neutron > - > > Nova > - > > Octavia > - > > OpenStack Ansible > - > > OpenStack Charms > - > > OpenStack Helm > - > > OpenStackClient > > > > - > > Operator Meetup > Puppet OpenStack > - > > QA > - > > Oslo > - > > Public Cloud WG > - > > Release Management > - > > Requirements > - > > Sahara > - > > Scientific SIG > - > > Self-Healing SIG > - > > SIG- K8s > - > > StarlingX > - > > Swift > - > > TC > - > > TripleO > - > > Upgrades SIG > - > > Watcher > - > > Zuul (pending confirmation) > > Thierry and I are working on placing them into a strawman schedule to > reduce conflicts between related or overlapping groups. We should have more > on what that will look like and a draft for you all to review in the next > few weeks. > > We also wanted to remind you all of the Travel Support Program. 
We are > again doing a two phase selection. The first deadline is approaching: July > 1st. At this point we have less than a dozen applicants so if you need it > or even think you need it, I urge you to apply here[1]. > > Also! Reminder that we have a finite number of rooms in the hotel block so > please book early to make sure you get the discounted rate before they run > out. You can book those rooms here[2] (pardon the ugly URL). > > Can't wait to see you all there! > > -Kendall Nelson (diablo_rojo) > > P.S. Gonna try to do a game night again since you all seemed to enjoy it > so much last time :) > > [1] > https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 > > [2] > https://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Teams%20Gathering%2C%20Openstack%5Edensa%60opnopna%7Copnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From flux.adam at gmail.com Wed Jun 20 21:59:30 2018 From: flux.adam at gmail.com (Adam Harwell) Date: Wed, 20 Jun 2018 16:59:30 -0500 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services In-Reply-To: <20180620163713.57najuohjasgkps4@yuggoth.org> References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> <1527710294.31249.24.camel@redhat.com> <86bf4382-2bdd-02f9-5544-9bad6190263b@openstack.org> <20180531130047.q2x2gmhkredaqxis@yuggoth.org> <20180606012949.b5lxxvcotahkhwv6@yuggoth.org> <20180620163713.57najuohjasgkps4@yuggoth.org> Message-ID: Looks like I missed this so I'm late to the party, but: Ade is technically correct, Octavia doesn't explicitly depend on Barbican, as we do support castellan generically. *HOWEVER*: we don't just store and retrieve our own secrets -- we rely on loading up user created secrets. This means that for Octavia to work, even if we use castellan, we still need some way for users to interact with the secret store via an API, and what that means in openstack in still Barbican. So I would still say that Barbican is a dependency for us logistically, if not technically. For example, internally at GoDaddy we were investigating deploying Vault with a custom user-facing API/UI for allowing users to store secrets that could be consumed by Octavia with castellan (don't get me started on how dumb that is, but it's what we were investigating). The correct way to do this in an openstack environment is the openstack secret store API, which is Barbican. So, while I am personally dealing with an example of very painfully avoiding Barbican (which may have been a non-issue if Barbican were a base service), I have a tough time reconciling not including Barbican itself as a requirement. --Adam (rm_work) On Wed, Jun 20, 2018, 11:37 Jeremy Stanley wrote: > On 2018-06-06 01:29:49 +0000 (+0000), Jeremy Stanley wrote: > [...] > > Seeing no further objections, I give you > > https://review.openstack.org/572656 for the next step. 
> > That change merged just a few minutes ago, and > > https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services > now includes: > > A Castellan-compatible key store > > OpenStack components may keep secrets in a key store, using > Oslo’s Castellan library as an indirection layer. While > OpenStack provides a Castellan-compatible key store service, > Barbican, other key store backends are also available for > Castellan. Note that in the context of the base services set > Castellan is intended only to provide an interface for services > to interact with a key store, and it should not be treated as a > means to proxy API calls from users to that key store. In order > to reduce unnecessary exposure risks, any user interaction with > secret material should be left to a dedicated API instead > (preferably as provided by Barbican). > > Thanks to everyone who helped brainstorming/polishing, and here's > looking forward to a ubiquity of default security features and > functionality in future OpenStack releases! > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doantungbk.203 at gmail.com Wed Jun 20 22:35:38 2018 From: doantungbk.203 at gmail.com (Tung Doan) Date: Thu, 21 Jun 2018 00:35:38 +0200 Subject: [openstack-dev] [heat] Need new release of heat-translator library Message-ID: I agree with Bobh. Considering both Heat Translator and Tosca Parser under the management of Tacker could affect other projects. We have recently announced OpenStack Apmec [1] (MEC Orchestration Service) which consumed these two projects as well. In case Heat PTL does not have enough bandwidth to take care of the release of these two projects. I just wonder whether it is reasonable to release them when having only the approval of their PTL. [1] https://github.com/openstack/apmec/tree/master/apmec -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Jun 20 22:59:37 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 20 Jun 2018 18:59:37 -0400 Subject: [openstack-dev] [heat] Need new release of heat-translator library In-Reply-To: References: Message-ID: <1529534483-sup-9862@lrrr.local> Excerpts from Tung Doan's message of 2018-06-21 00:35:38 +0200: > I agree with Bobh. Considering both Heat Translator and Tosca Parser under > the management of Tacker could affect other projects. We have recently > announced OpenStack Apmec [1] (MEC Orchestration Service) which > consumed these two projects as well. > In case Heat PTL does not have enough bandwidth to take care of the release > of these two projects. I just wonder whether it is reasonable to release > them when having only the approval of their PTL. > > [1] https://github.com/openstack/apmec/tree/master/apmec According to https://governance.openstack.org/tc/reference/projects/heat.html the Heat PTL *is* the PTL for heat-translators. Any internal team structure that implies otherwise is just that, an internal team structure. I'm really unclear on what the problem is here. The PTL requested a release; it looked fine; I approved it; it was completed. 
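(As an aside, for anyone unfamiliar with the mechanics being discussed here: a release request is simply a small patch to the openstack/releases repository that adds a new entry to the relevant deliverable file, which the PTL or release liaison then acks. A rough sketch of what such an entry looks like is below -- the version, release model and commit hash are illustrative placeholders, not the actual approved request.)

    # deliverables/rocky/heat-translator.yaml -- sketch only; values below
    # are placeholders, not the real patch that was approved.
    launchpad: heat-translator
    team: heat
    type: library
    release-model: cycle-with-intermediary
    releases:
      - version: 1.1.0
        projects:
          - repo: openstack/heat-translator
            hash: 0000000000000000000000000000000000000000
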
The release team tries to facilitate releases happening as quickly as possible so we don't block work anyone else is trying to do. There was no apparent reason to wait for this one. If the teams using heat-translator want to coordinate on when releases happen for some reason, then please deal with that before requesting the releases (and use a W-1 on the patch if you want us to hold off until you get approval). The release team has said we do not want to have to keep up with separate liaisons for individual deliverables because it's too much to for us to track. In the mean time, releases are cheap and we can have as many of them as we want, so if there are additional features in the pipeline that will be ready to be released soon we can just do that when they merge. And if changes are going into heat-translator that might break consuming projects, we should deal with that the way we do in other libraries and set up cross-project gating to try to catch the problems ahead of time. (Maybe that testing is already in place?) We can also use the constraint system to block "bad" releases if they do happen. But it's generally better for us to be releasing libraries and tools as often as possible, so that any breaking changes come as part of a small set and so new features are available shortly after they are implemented. Doug From doantungbk.203 at gmail.com Wed Jun 20 23:16:52 2018 From: doantungbk.203 at gmail.com (Tung Doan) Date: Thu, 21 Jun 2018 01:16:52 +0200 Subject: [openstack-dev] [heat] Need new release of heat-translator library In-Reply-To: <1529534483-sup-9862@lrrr.local> References: <1529534483-sup-9862@lrrr.local> Message-ID: Hi Doug, I posted in wrong thread :) Sorry for that. The right one is http://lists.openstack.org/pipermail/openstack-dev/2018-June/131688.html Vào Th 5, 21 thg 6, 2018 vào lúc 01:00 Doug Hellmann < doug at doughellmann.com> đã viết: > Excerpts from Tung Doan's message of 2018-06-21 00:35:38 +0200: > > I agree with Bobh. Considering both Heat Translator and Tosca Parser > under > > the management of Tacker could affect other projects. We have recently > > announced OpenStack Apmec [1] (MEC Orchestration Service) which > > consumed these two projects as well. > > In case Heat PTL does not have enough bandwidth to take care of the > release > > of these two projects. I just wonder whether it is reasonable to release > > them when having only the approval of their PTL. > > > > [1] https://github.com/openstack/apmec/tree/master/apmec > > According to > https://governance.openstack.org/tc/reference/projects/heat.html the > Heat PTL *is* the PTL for heat-translators. Any internal team structure > that implies otherwise is just that, an internal team structure. > > I'm really unclear on what the problem is here. The PTL requested a > release; it looked fine; I approved it; it was completed. > > The release team tries to facilitate releases happening as quickly > as possible so we don't block work anyone else is trying to do. > There was no apparent reason to wait for this one. If the teams > using heat-translator want to coordinate on when releases happen > for some reason, then please deal with that before requesting the > releases (and use a W-1 on the patch if you want us to hold off > until you get approval). The release team has said we do not want > to have to keep up with separate liaisons for individual deliverables > because it's too much to for us to track. 
> > In the mean time, releases are cheap and we can have as many of > them as we want, so if there are additional features in the pipeline > that will be ready to be released soon we can just do that when > they merge. > > And if changes are going into heat-translator that might break > consuming projects, we should deal with that the way we do in other > libraries and set up cross-project gating to try to catch the > problems ahead of time. (Maybe that testing is already in place?) > We can also use the constraint system to block "bad" releases if > they do happen. But it's generally better for us to be releasing > libraries and tools as often as possible, so that any breaking > changes come as part of a small set and so new features are available > shortly after they are implemented. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Jun 21 01:38:09 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 21 Jun 2018 10:38:09 +0900 Subject: [openstack-dev] [qa]Canceling the today QA Office Hour Message-ID: <1641ffde023.e832265025215.8815680015691228983@ghanshyammann.com> Hi All, Most of the QA members are in Open Source Summit Japan, so we will skip the today office hour. -gmann From chris.friesen at windriver.com Thu Jun 21 02:26:13 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 20 Jun 2018 20:26:13 -0600 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: <5B27D46F.10804@windriver.com> Message-ID: <5B2B0CC5.8010602@windriver.com> On 06/20/2018 10:00 AM, Sylvain Bauza wrote: > When we reviewed the spec, we agreed as a community to say that we should still > get race conditions once the series is implemented, but at least it helps operators. > Quoting : > "It would also be possible for another instance to steal NUMA resources from a > live migrated instance before the latter’s destination compute host has a chance > to claim them. Until NUMA resource providers are implemented [3] > and allow for an essentially atomic > schedule+claim operation, scheduling and claiming will keep being done at > different times on different nodes. Thus, the potential for races will continue > to exist." > https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/numa-aware-live-migration.html#proposed-change My understanding of that quote was that we were acknowledging the fact that when using the ResourceTracker there was an unavoidable race window between the time when the scheduler selected a compute node and when the resources were claimed on that compute node in check_can_live_migrate_destination(). And in this model no resources are actually *used* until they are claimed. As I understand it, Artom is proposing to have a larger race window, essentially from when the scheduler selects a node until the resource audit runs on that node. 
Chris From tony at bakeyournoodle.com Thu Jun 21 03:13:38 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 21 Jun 2018 13:13:38 +1000 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> Message-ID: <20180621031338.GK18927@thor.bakeyournoodle.com> On Wed, Jun 20, 2018 at 08:54:56PM +0900, Takashi Yamamoto wrote: > do you have a plan to submit these changes on gerrit? I didn't but I have now: * https://review.openstack.org/577028 * https://review.openstack.org/577029 Feel free to edit/test as you like. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From feilong at catalyst.net.nz Thu Jun 21 03:27:14 2018 From: feilong at catalyst.net.nz (Fei Long Wang) Date: Thu, 21 Jun 2018 15:27:14 +1200 Subject: [openstack-dev] [magnum] K8S apiserver key sync In-Reply-To: <59464D3F-D22C-4F8F-A774-E13102948C2C@rm.ht> References: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com> <585ca53f-4ef1-01ce-b096-9a949130094e@catalyst.net.nz> <59464D3F-D22C-4F8F-A774-E13102948C2C@rm.ht> Message-ID: <863438f4-440a-3118-ad82-4ef4ad110557@catalyst.net.nz> Hi Remo, I can't see obvious issue from the log you posted. You can pop up at #openstack-containers IRC channel as for Magnum questions. Cheers. On 21/06/18 08:56, Remo Mattei wrote: > Hello guys, what will be the right channel to as a question about > having K8 (magnum working with Tripleo)?  > > I have the following errors.. > > http://pastebin.mattei.co/index.php/view/2d1156f1 > > Any tips are appreciated.  > > Thanks  > Remo  > >> On Jun 19, 2018, at 2:13 PM, Fei Long Wang > > wrote: >> >> Hi there, >> >> For people who maybe still interested in this issue. I have proposed >> a patch, see https://review.openstack.org/576029 And I have verified >> with Sonobuoy for both multi masters (3 master nodes) and single >> master clusters, all worked. Any comments will be appreciated. Thanks. >> >> >> On 21/05/18 01:22, Sergey Filatov wrote: >>> Hi! >>> I’d like to initiate a discussion about this bug: [1]. >>> To resolve this issue we need to generate a secret cert and pass it >>> to master nodes. We also need to store it somewhere to support scaling. >>> This issue is specific for kubernetes drivers. Currently in magnum >>> we have a general cert manager which is the same for all the drivers. >>> >>> What do you think about moving cert_manager logic into a >>> driver-specific area? >>> Having this common cert_manager logic forces us to generate client >>> cert with “admin” and “system:masters” subject & organisation names >>> [2],  >>> which is really something that we need only for kubernetes drivers. >>> >>> [1] https://bugs.launchpad.net/magnum/+bug/1766546 >>> [2] https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64 >>> >>> >>> ..Sergey Filatov >>> >>> >>> >>>> On 20 Apr 2018, at 20:57, Sergey Filatov >>> > wrote: >>>> >>>> Hello, >>>> >>>> I looked into k8s drivers for magnum I see that each api-server on >>>> master node generates it’s own service-account-key-file. This >>>> causes issues with service-accounts authenticating on api-server. >>>> (In case api-server endpoint moves). 
>>>> As far as I understand we should have either all api-server keys >>>> synced on api-servesr or pre-generate single api-server key. >>>> >>>> What is the way for magnum to get over this issue? >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org >> ?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From remo at rm.ht Thu Jun 21 03:33:32 2018 From: remo at rm.ht (Remo Mattei) Date: Wed, 20 Jun 2018 20:33:32 -0700 Subject: [openstack-dev] [magnum] K8S apiserver key sync In-Reply-To: <863438f4-440a-3118-ad82-4ef4ad110557@catalyst.net.nz> References: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com> <585ca53f-4ef1-01ce-b096-9a949130094e@catalyst.net.nz> <59464D3F-D22C-4F8F-A774-E13102948C2C@rm.ht> <863438f4-440a-3118-ad82-4ef4ad110557@catalyst.net.nz> Message-ID: <00698D32-797C-42EB-BC01-F9978EBD87F8@rm.ht> Thanks Fei, I did post the question on that channel no much noise there though.. I would really like to get this configured since we are pushing for production. Thanks > On Jun 20, 2018, at 8:27 PM, Fei Long Wang wrote: > > Hi Remo, > > I can't see obvious issue from the log you posted. You can pop up at #openstack-containers IRC channel as for Magnum questions. Cheers. > > > On 21/06/18 08:56, Remo Mattei wrote: >> Hello guys, what will be the right channel to as a question about having K8 (magnum working with Tripleo)? >> >> I have the following errors.. >> >> http://pastebin.mattei.co/index.php/view/2d1156f1 >> >> Any tips are appreciated. >> >> Thanks >> Remo >> >>> On Jun 19, 2018, at 2:13 PM, Fei Long Wang > wrote: >>> >>> Hi there, >>> >>> For people who maybe still interested in this issue. I have proposed a patch, see https://review.openstack.org/576029 And I have verified with Sonobuoy for both multi masters (3 master nodes) and single master clusters, all worked. Any comments will be appreciated. Thanks. >>> >>> >>> On 21/05/18 01:22, Sergey Filatov wrote: >>>> Hi! 
>>>> I’d like to initiate a discussion about this bug: [1]. >>>> To resolve this issue we need to generate a secret cert and pass it to master nodes. We also need to store it somewhere to support scaling. >>>> This issue is specific for kubernetes drivers. Currently in magnum we have a general cert manager which is the same for all the drivers. >>>> >>>> What do you think about moving cert_manager logic into a driver-specific area? >>>> Having this common cert_manager logic forces us to generate client cert with “admin” and “system:masters” subject & organisation names [2], >>>> which is really something that we need only for kubernetes drivers. >>>> >>>> [1] https://bugs.launchpad.net/magnum/+bug/1766546 >>>> [2] https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64 >>>> >>>> >>>> ..Sergey Filatov >>>> >>>> >>>> >>>>> On 20 Apr 2018, at 20:57, Sergey Filatov > wrote: >>>>> >>>>> Hello, >>>>> >>>>> I looked into k8s drivers for magnum I see that each api-server on master node generates it’s own service-account-key-file. This causes issues with service-accounts authenticating on api-server. (In case api-server endpoint moves). >>>>> As far as I understand we should have either all api-server keys synced on api-servesr or pre-generate single api-server key. >>>>> >>>>> What is the way for magnum to get over this issue? >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> -------------------------------------------------------------------------- >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> -------------------------------------------------------------------------- >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > -------------------------------------------------------------------------- > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > -------------------------------------------------------------------------- > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lars at redhat.com Thu Jun 21 03:44:43 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Wed, 20 Jun 2018 23:44:43 -0400 Subject: [openstack-dev] [tripleo] 'overcloud deploy' doesn't restart haproxy (Pike) Message-ID: <20180621034443.ja4lxombexjduntx@redhat.com> I've noticed that when updating the overcloud with 'overcloud deploy', the deploy process does not restart the haproxy containers when there are changes to the haproxy configuration. Is this expected behavior? -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From Tushar.Patil at nttdata.com Thu Jun 21 04:08:35 2018 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Thu, 21 Jun 2018 04:08:35 +0000 Subject: [openstack-dev] [heat] [heat-translator] Need new release of heat-translator library In-Reply-To: <70318015-9ec9-b649-ed2c-0bbc69083727@nokia.com> References: <06f6fffd-75fa-e0e1-9613-5cddf15e01b8@nemebean.com> <70318015-9ec9-b649-ed2c-0bbc69083727@nokia.com> Message-ID: <4CA384CC-E63B-4054-BC84-61F9AD1AA3BE@nttdata.com> Thank you all for your support. Updating upper constraints patch is already merged. https://review.openstack.org/#/c/576917/1 We have proposed a patch to update lower constraints of heat-translator library to 1.1.0 in openstack/requirements project. https://review.openstack.org/#/c/577021/ Regards, Tushar Patil On Jun 21, 2018, at 1:01 AM, HADDLETON, Robert W (Bob) > wrote: This request had come to me from someone else in the Tacker team and I was working on including a couple of other patchsets in the release, but this is fine. Please tag these requests as [heat-translator] in the subject so they get flagged to me and I'm happy to work them. Bob On 6/20/2018 10:40 AM, Rico Lin wrote: I send a release patch now https://review.openstack.org/#/c/576895/ Also, add Bob Haddleton to this ML who is considering as PTL for heat-translator team Ben Nemec > 於 2018年6月20日 週三 下午10:26寫道: On 06/20/2018 02:58 AM, Patil, Tushar wrote: > Hi, > > Few weeks back, we had proposed a patch [1] to add support for translation of placement policies and that patch got merged. > > This feature will be consumed by tacker specs [2] which we are planning to implement in Rocky release and it's implementation is uploaded in patch [3]. Presently, the tests are failing on patch [3] becoz it requires newer version of heat-translator library. > > Could you please release a new version of heat-translator library so that we can complete specs[2] in Rocky timeframe. Note that you can propose a release to the releases repo[1] and then you just need the PTL or release liaison to sign off on it. 1: http://git.openstack.org/cgit/openstack/releases/tree/README.rst -Ben __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged,confidential, and proprietary data. If you are not the intended recipient,please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yamamoto at midokura.com Thu Jun 21 04:52:34 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Thu, 21 Jun 2018 13:52:34 +0900 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <20180621031338.GK18927@thor.bakeyournoodle.com> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> <20180621031338.GK18927@thor.bakeyournoodle.com> Message-ID: On Thu, Jun 21, 2018 at 12:13 PM, Tony Breeds wrote: > On Wed, Jun 20, 2018 at 08:54:56PM +0900, Takashi Yamamoto wrote: > >> do you have a plan to submit these changes on gerrit? > > I didn't but I have now: > > * https://review.openstack.org/577028 > * https://review.openstack.org/577029 > > Feel free to edit/test as you like. thank you! > > Yours Tony. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaosorior at gmail.com Thu Jun 21 05:41:05 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Thu, 21 Jun 2018 08:41:05 +0300 Subject: [openstack-dev] [tripleo] 'overcloud deploy' doesn't restart haproxy (Pike) In-Reply-To: <20180621034443.ja4lxombexjduntx@redhat.com> References: <20180621034443.ja4lxombexjduntx@redhat.com> Message-ID: It is unfortunately a known issue and is present in queens and master as well. I think Michele (bandini on IRC) was working on it. On Thu, 21 Jun 2018, 06:45 Lars Kellogg-Stedman, wrote: > I've noticed that when updating the overcloud with 'overcloud deploy', > the deploy process does not restart the haproxy containers when there > are changes to the haproxy configuration. > > Is this expected behavior? > > -- > Lars Kellogg-Stedman | larsks @ {irc,twitter,github} > http://blog.oddbit.com/ | > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.rydberg at citynetwork.eu Thu Jun 21 08:00:39 2018 From: tobias.rydberg at citynetwork.eu (Tobias Rydberg) Date: Thu, 21 Jun 2018 10:00:39 +0200 Subject: [openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG Message-ID: <267af813-4983-c7a6-48e7-c30040615529@citynetwork.eu> Hi folks, Time for a new meeting for the Public Cloud WG. Agenda draft can be found at https://etherpad.openstack.org/p/publiccloud-wg, feel free to add items to that list. See you all at IRC 1400 UTC in #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Twitter & IRC: tobberydberg www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED From hongbin034 at gmail.com Thu Jun 21 08:14:11 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Thu, 21 Jun 2018 04:14:11 -0400 Subject: [openstack-dev] Openstack-Zun Service Appears down In-Reply-To: References: Message-ID: HI Muhammad, Here is the code (run in controller node) that decides whether a service is up https://github.com/openstack/zun/blob/master/zun/api/servicegroup.py . 
There are several possibilities to cause a service to be 'down': 1. The service was being 'force_down' via API (e.g. explicitly issued a command like "appcontainer service forcedown") 2. The zun compute process is not doing the heartbeat for a certain period of time (CONF.service_down_time). 3. The zun compute process is doing the heartbeat properly but the time between controller node and compute node is out of sync. In before, #3 is the common pitfall that people ran into. If it is not #3, you might want to check if the zun compute process is doing the heartbeat properly. Each zun compute process is running a periodic task to update its state in DB: https://github.com/openstack/zun/blob/master/zun/servicegroup/zun_service_periodic.py . The call of ' report_state_up ' will record this service is up in DB at current time. You might want to check if this periodic task is running properly, or if the current state is updated in the DB. Above is my best guess. Please feel free to follow it up with me or other team members if you need further assistant for this issue. Best regards, Hongbin On Wed, Jun 20, 2018 at 9:14 AM Usman Awais wrote: > Dear Zuners, > > I have installed RDO pike. I stopped openstack-nova-compute service on one > of the hosts, and installed a zun-compute service. Although all the > services seems to be running ok on both controller and compute but when I > do > > openstack appcontainer service list > > It gives me following > > > +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+ > | Id | Host | Binary | State | Disabled | Disabled Reason | > Updated At | Availability Zone | > > +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+ > | 1 | node1.os.lab | zun-compute | down | False | None | > 2018-06-20 13:14:31 | nova | > > +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+ > > I have checked all ports in both directions they are open, including etcd > ports and others. All services are running, only docker service has the > warning message saying "failed to retrieve docker-runc version: exec: > \"docker-runc\": executable file not found in $PATH". There is also a > message at zun-compute > "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:161: > SAWarning: The IN-predicate on "container.uuid" was invoked with an empty > sequence. This results in a contradiction, which nonetheless can be > expensive to evaluate. Consider alternative strategies for improved > performance." > > Please guide... > > Regards, > Muhammad Usman Awais > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
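To make the explanation earlier in this thread a bit more concrete: the liveness decision described above boils down to comparing the last heartbeat timestamp that the compute process wrote to the database against the controller's clock. A minimal sketch of that style of check follows -- this is not the actual Zun servicegroup code, and the function and variable names are invented for illustration; only CONF.service_down_time corresponds to a real option mentioned above.

    from datetime import datetime, timedelta

    SERVICE_DOWN_TIME = 60  # seconds; plays the role of CONF.service_down_time

    def service_is_up(forced_down, last_heartbeat_utc):
        # A service explicitly forced down via the API is always reported down.
        if forced_down:
            return False
        # Otherwise the service is "up" only if its last heartbeat, as recorded
        # in the DB by the compute process, is recent enough according to the
        # controller's clock. This is why clock skew between controller and
        # compute nodes can make a healthy service appear down.
        elapsed = datetime.utcnow() - last_heartbeat_utc
        return elapsed <= timedelta(seconds=SERVICE_DOWN_TIME)

If the heartbeat rows in the database are being updated but the service still shows as down, checking NTP/chrony synchronization on both the controller and compute nodes is usually the quickest next step.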
URL: From sferdjao at redhat.com Thu Jun 21 08:22:04 2018 From: sferdjao at redhat.com (Sahid Orentino Ferdjaoui) Date: Thu, 21 Jun 2018 10:22:04 +0200 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: Message-ID: <20180621082204.GA7584@redhat> On Mon, Jun 18, 2018 at 10:16:05AM -0400, Artom Lifshitz wrote: > Hey all, > > For Rocky I'm trying to get live migration to work properly for > instances that have a NUMA topology [1]. > > A question that came up on one of patches [2] is how to handle > resources claims on the destination, or indeed whether to handle that > at all. > > The previous attempt's approach [3] (call it A) was to use the > resource tracker. This is race-free and the "correct" way to do it, > but the code is pretty opaque and not easily reviewable, as evidenced > by [3] sitting in review purgatory for literally years. > > A simpler approach (call it B) is to ignore resource claims entirely > for now and wait for NUMA in placement to land in order to handle it > that way. This is obviously race-prone and not the "correct" way of > doing it, but the code would be relatively easy to review. Hello Artom, The problem I have with B approach is that. It's based on something which has not been designed for which will end-up with the same bugs that you are trying to solve (1417667, 1289064). The live migration is a sensitive operation that operators need to have trust on, if we take case of a host evacuation the result would be terrible, no? If you want continue with B, I think you will have to find at least a mechanism to update the host NUMA topology resources of the destination during the on-going migrations. But again that should be done early to avoid a too big window where an other instance can be scheduled and be assigned of the same CPU topology. Also does this really make sense when we now that at some point placement will take care of such things for NUMA resources? The A approach already handles what you need: - Test whether destination host can accept the guest CPU policy - Build new instance NUMA topology based on destination host - Hold and update NUMA topology resources of destination host - Store the destination host NUMA topology so it can be used by source ... My preference is A because it reuses something which is used for every guests that are scheduled today (not only for pci or numa things), we have trust on it, it's also used for some move operations, it limits the race window to a one we already have, and finally we limit the code introduced. Thanks, s. > For the longest time, live migration did not keep track of resources > (until it started updating placement allocations). The message to > operators was essentially "we're giving you this massive hammer, don't > break your fingers." Continuing to ignore resource claims for now is > just maintaining the status quo. In addition, there is value in > improving NUMA live migration *now*, even if the improvement is > incomplete because it's missing resource claims. "Best is the enemy of > good" and all that. Finally, making use of the resource tracker is > just work that we know will get thrown out once we start using > placement for NUMA resources. > > For all those reasons, I would favor approach B, but I wanted to ask > the community for their thoughts. > > Thanks! 
> > [1] https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration+(status:open+OR+status:merged) > [2] https://review.openstack.org/#/c/567242/ > [3] https://review.openstack.org/#/c/244489/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From apetrich at redhat.com Thu Jun 21 08:51:49 2018 From: apetrich at redhat.com (Adriano Petrich) Date: Thu, 21 Jun 2018 09:51:49 +0100 Subject: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team In-Reply-To: References: Message-ID: +1 On 19 June 2018 at 10:47, Dougal Matthews wrote: > > > On 19 June 2018 at 10:27, Renat Akhmerov wrote: > >> Hi, >> >> I’d like to promote Vitalii Solodilov to the core team of Mistral. In my >> opinion, Vitalii is a very talented engineer who has been demonstrating it >> by providing very high quality code and reviews in the last 6-7 months. >> He’s one of the people who doesn’t hesitate taking responsibility for >> solving challenging technical tasks. It’s been a great pleasure to work >> with Vitalii and I hope can will keep up doing great job. >> >> Core members, please vote. >> > > +1 from me. Vitalii has been one of the most active reviewers and code > contributors through Queens and Rocky. > > > Vitalii’s statistics: http://stackalytics.com/?module=mistral-group& >> metric=marks&user_id=mcdoker18 >> >> Thanks >> >> Renat Akhmerov >> @Nokia >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dougal at redhat.com Thu Jun 21 09:13:51 2018 From: dougal at redhat.com (Dougal Matthews) Date: Thu, 21 Jun 2018 10:13:51 +0100 Subject: [openstack-dev] [mistral] Promoting Vitalii Solodilov to the Mistral core team In-Reply-To: References: Message-ID: On 19 June 2018 at 10:27, Renat Akhmerov wrote: > Hi, > > I’d like to promote Vitalii Solodilov to the core team of Mistral. In my > opinion, Vitalii is a very talented engineer who has been demonstrating it > by providing very high quality code and reviews in the last 6-7 months. > He’s one of the people who doesn’t hesitate taking responsibility for > solving challenging technical tasks. It’s been a great pleasure to work > with Vitalii and I hope can will keep up doing great job. > > Core members, please vote. > Thanks all for the votes and thank you Renat for nominating. I have added Vitalii to the core reviewers. Welcome aboard, you can now +2! 
:-) > > Vitalii’s statistics: http://stackalytics.com/?module= > mistral-group&metric=marks&user_id=mcdoker18 > > Thanks > > Renat Akhmerov > @Nokia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Thu Jun 21 10:27:22 2018 From: abishop at redhat.com (Alan Bishop) Date: Thu, 21 Jun 2018 06:27:22 -0400 Subject: [openstack-dev] [tripleo] 'overcloud deploy' doesn't restart haproxy (Pike) In-Reply-To: References: <20180621034443.ja4lxombexjduntx@redhat.com> Message-ID: On Thu, Jun 21, 2018 at 1:41 AM, Juan Antonio Osorio wrote: > It is unfortunately a known issue and is present in queens and master as > well. I think Michele (bandini on IRC) was working on it. > See [1], and note that [2] merged to stable/queens just a couple days ago. [1] https://bugs.launchpad.net/tripleo/+bug/1775196 [2] https://review.openstack.org/574264 Alan > > On Thu, 21 Jun 2018, 06:45 Lars Kellogg-Stedman, wrote: > >> I've noticed that when updating the overcloud with 'overcloud deploy', >> the deploy process does not restart the haproxy containers when there >> are changes to the haproxy configuration. >> >> Is this expected behavior? >> >> -- >> Lars Kellogg-Stedman | larsks @ {irc,twitter,github} >> http://blog.oddbit.com/ | >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From viktor.tikkanen at nokia.com Thu Jun 21 10:44:28 2018 From: viktor.tikkanen at nokia.com (Tikkanen, Viktor (Nokia - FI/Espoo)) Date: Thu, 21 Jun 2018 10:44:28 +0000 Subject: [openstack-dev] [heat][heat-templates] Creating a role with no domain Message-ID: Hi! There was a new 'domain' property added to OS::Keystone::Role (https://storyboard.openstack.org/#!/story/1684558, https://review.openstack.org/#/c/459033/). With "openstack role create" CLI command it is still possible to create roles with no associated domains; but it seems that the same cannot be done with heat templates. 
An example: if I create two roles, CliRole (with "openstack role create CliRole" command) and SimpleRole with the following heat template: heat_template_version: 2015-04-30 description: Creates a role resources: role_resource: type: OS::Keystone::Role properties: name: SimpleRole the result in the keystone database will be: MariaDB [keystone]> select * from role; +----------------------------------+------------------+-------+-----------+ | id | name | extra | domain_id | +----------------------------------+------------------+-------+-----------+ | 5de0eee4990e4a59b83dae93af9c0951 | SimpleRole | {} | default | | 79472e6e1bf341208bd88e1c2dcf7f85 | CliRole | {} | <> | | 7dd5e4ea87e54a13897eb465fdd0e950 | heat_stack_owner | {} | <> | | 80fd61edbe8842a7abb47fd7c91ba9d7 | heat_stack_user | {} | <> | | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | {} | <> | | e174c27e79b84ea392d28224eb0af7c9 | admin | {} | <> | +----------------------------------+------------------+-------+-----------+ Should it be possible to create a role without associated domain with a heat template? -V. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Thu Jun 21 11:39:40 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Thu, 21 Jun 2018 17:09:40 +0530 Subject: [openstack-dev] [heat][heat-templates] Creating a role with no domain In-Reply-To: References: Message-ID: Looks like that's a bug where we create a domain specific role for 'default' domain[1], when domain is not specified. [1] https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/keystone/role.py#L54 You're welcome to raise a bug and propose a fix where we should be just removing the default. On Thu, Jun 21, 2018 at 4:14 PM, Tikkanen, Viktor (Nokia - FI/Espoo) < viktor.tikkanen at nokia.com> wrote: > Hi! > > There was a new ’domain’ property added to OS::Keystone::Role ( > *https://storyboard.openstack.org/#!/story/1684558* > , > *https://review.openstack.org/#/c/459033/* > ). > > With “openstack role create” CLI command it is still possible to create > roles with no associated domains; but it seems that the same cannot be done > with heat templates. > > An example: if I create two roles, CliRole (with “openstack role create > CliRole” command) and SimpleRole with the following heat template: > > heat_template_version: 2015-04-30 > description: Creates a role > resources: > role_resource: > type: OS::Keystone::Role > properties: > name: SimpleRole > > the result in the keystone database will be: > > MariaDB [keystone]> select * from role; > +----------------------------------+------------------+----- > --+-----------+ > | id | name | extra | domain_id > | > +----------------------------------+------------------+----- > --+-----------+ > | 5de0eee4990e4a59b83dae93af9c0951 | SimpleRole | {} | default > | > | 79472e6e1bf341208bd88e1c2dcf7f85 | CliRole | {} | <> > | > | 7dd5e4ea87e54a13897eb465fdd0e950 | heat_stack_owner | {} | <> > | > | 80fd61edbe8842a7abb47fd7c91ba9d7 | heat_stack_user | {} | <> > | > | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | {} | <> > | > | e174c27e79b84ea392d28224eb0af7c9 | admin | {} | <> > | > +----------------------------------+------------------+----- > --+-----------+ > > Should it be possible to create a role without associated domain with a > heat template? > > -V. 
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jun 21 12:51:36 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 21 Jun 2018 12:51:36 +0000 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services In-Reply-To: References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> <1527710294.31249.24.camel@redhat.com> <86bf4382-2bdd-02f9-5544-9bad6190263b@openstack.org> <20180531130047.q2x2gmhkredaqxis@yuggoth.org> <20180606012949.b5lxxvcotahkhwv6@yuggoth.org> <20180620163713.57najuohjasgkps4@yuggoth.org> Message-ID: <20180621125136.7xkvhy3ms77pfdds@yuggoth.org> On 2018-06-20 16:59:30 -0500 (-0500), Adam Harwell wrote: > Looks like I missed this so I'm late to the party, but: > > Ade is technically correct, Octavia doesn't explicitly depend on Barbican, > as we do support castellan generically. > > *HOWEVER*: we don't just store and retrieve our own secrets -- we rely on > loading up user created secrets. This means that for Octavia to work, even > if we use castellan, we still need some way for users to interact with the > secret store via an API, and what that means in openstack in still > Barbican. So I would still say that Barbican is a dependency for us > logistically, if not technically. > > For example, internally at GoDaddy we were investigating deploying Vault > with a custom user-facing API/UI for allowing users to store secrets that > could be consumed by Octavia with castellan (don't get me started on how > dumb that is, but it's what we were investigating). > The correct way to do this in an openstack environment is the openstack > secret store API, which is Barbican. So, while I am personally dealing with > an example of very painfully avoiding Barbican (which may have been a > non-issue if Barbican were a base service), I have a tough time reconciling > not including Barbican itself as a requirement. [...] The past pushback we received from operators and deployers was that they didn't want to be required to care and feed for yet one more API service. As a compromise, it was suggested that we at least provide a guaranteed means for services to handle their own secrets in a centralized and standardized manner. In practice, the wording we arrived at is intended to drive projects to strongly recommend deploying Barbican in cases where operators want to take advantage of any features of other services which require user interaction with the key store. Making a user-facing API service a "required project" from that perspective is a bigger discussion, in my opinion. I'm in favor of trying, but to me this piece is the first step in such a direction. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From alifshit at redhat.com Thu Jun 21 13:04:23 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 21 Jun 2018 09:04:23 -0400 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: <5B2B0CC5.8010602@windriver.com> References: <5B27D46F.10804@windriver.com> <5B2B0CC5.8010602@windriver.com> Message-ID: > > As I understand it, Artom is proposing to have a larger race window, > essentially > from when the scheduler selects a node until the resource audit runs on > that node. > Exactly. When writing the spec I thought we could just call the resource tracker to claim the resources when the migration was done. However, when I started looking at the code in reaction to Sahid's feedback, I noticed that there's no way to do it without the MoveClaim context (right?) Keep in mind, we're not making any race windows worse - I'm proposing keeping the status quo and fixing it later with NUMA in placement (or the resource tracker if we can swing it). The resource tracker stuff is just so... opaque. For instance, the original patch [1] uses a mutated_migration_context around the pre_live_migration call to the libvirt driver. Would I still need to do that? Why or why not? At this point we need to commit to something and roll with it, so I'm sticking to the "easy way". If it gets shut down in code review, at least we'll have certainty on how to approach this next cycle. [1] https://review.openstack.org/#/c/244489/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Thu Jun 21 13:36:58 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 21 Jun 2018 09:36:58 -0400 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: Message-ID: On 06/18/2018 10:16 AM, Artom Lifshitz wrote: > Hey all, > > For Rocky I'm trying to get live migration to work properly for > instances that have a NUMA topology [1]. > > A question that came up on one of patches [2] is how to handle > resources claims on the destination, or indeed whether to handle that > at all. > > The previous attempt's approach [3] (call it A) was to use the > resource tracker. This is race-free and the "correct" way to do it, > but the code is pretty opaque and not easily reviewable, as evidenced > by [3] sitting in review purgatory for literally years. > > A simpler approach (call it B) is to ignore resource claims entirely > for now and wait for NUMA in placement to land in order to handle it > that way. This is obviously race-prone and not the "correct" way of > doing it, but the code would be relatively easy to review. > > For the longest time, live migration did not keep track of resources > (until it started updating placement allocations). The message to > operators was essentially "we're giving you this massive hammer, don't > break your fingers." Continuing to ignore resource claims for now is > just maintaining the status quo. In addition, there is value in > improving NUMA live migration *now*, even if the improvement is > incomplete because it's missing resource claims. "Best is the enemy of > good" and all that. Finally, making use of the resource tracker is > just work that we know will get thrown out once we start using > placement for NUMA resources. > > For all those reasons, I would favor approach B, but I wanted to ask > the community for their thoughts. 
Side question... does either approach touch PCI device management during live migration? I ask because the only workloads I've ever seen that pin guest vCPU threads to specific host processors -- or make use of huge pages consumed from a specific host NUMA node -- have also made use of SR-IOV and/or PCI passthrough. [1] If workloads that use PCI passthrough or SR-IOV VFs cannot be live migrated (due to existing complications in the lower-level virt layers) I don't see much of a point spending lots of developer resources trying to "fix" this situation when in the real world, only a mythical workload that uses CPU pinning or huge pages but *doesn't* use PCI passthrough or SR-IOV VFs would be helped by it. Best, -jay [1 I know I'm only one person, but every workload I've seen that requires pinned CPUs and/or huge pages is a VNF that has been essentially an ASIC that a telco OEM/vendor has converted into software and requires the same guarantees that the ASIC and custom hardware gave the original hardware-based workload. These VNFs, every single one of them, used either PCI passthrough or SR-IOV VFs to handle latency-sensitive network I/O. From lyarwood at redhat.com Thu Jun 21 13:42:47 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 21 Jun 2018 14:42:47 +0100 Subject: [openstack-dev] minimum libvirt version for nova-compute In-Reply-To: <20180620125429.q3dmbhdh34fuzgwl@lyarwood.usersys.redhat.com> References: <61c42853-98a5-7d22-8c5c-71a706860cfb@debian.org> <20180620115426.gqjpv6wrv2edtzz3@lyarwood.usersys.redhat.com> <695a74bf-4fcf-eb3e-2711-122123b12184@gmail.com> <20180620125429.q3dmbhdh34fuzgwl@lyarwood.usersys.redhat.com> Message-ID: <20180621134247.wqevxv3c72pqnmun@lyarwood.usersys.redhat.com> On 20-06-18 13:54:29, Lee Yarwood wrote: > On 20-06-18 07:32:08, Matt Riedemann wrote: > > On 6/20/2018 6:54 AM, Lee Yarwood wrote: > > > We can bump the minimum here but then we have to play a game of working > > > out the oldest version the above fix was backported to across the > > > various distros. I'd rather see this address by the Libvirt maintainers > > > in Debian if I'm honest. > > > > Just a thought, but in nova we could at least do: > > > > 1. Add a 'known issues' release note about the issue and link to the libvirt > > patch. > > ACK > > > and/or > > > > 2. Handle libvirtError in that case, check for the "Incorrect number of > > padding bytes" string in the error, and log something with a breadcrumb to > > the libvirt fix - that would be for people that miss the release note, or > > hit the issue past rocky and wouldn't have found the release note because > > they're on Stein+ now. > > Yeah that's fair, I'll submit something for both of the above today. libvirt: Log breadcrumb for known encryption bug https://review.openstack.org/577164 Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From sean.k.mooney at intel.com Thu Jun 21 13:50:41 2018 From: sean.k.mooney at intel.com (Mooney, Sean K) Date: Thu, 21 Jun 2018 13:50:41 +0000 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: Message-ID: <4B1BB321037C0849AAE171801564DFA688B4F844@IRSMSX107.ger.corp.intel.com> > -----Original Message----- > From: Jay Pipes [mailto:jaypipes at gmail.com] > Sent: Thursday, June 21, 2018 2:37 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [nova] NUMA-aware live migration: easy but > incomplete vs complete but hard > > On 06/18/2018 10:16 AM, Artom Lifshitz wrote: > > Hey all, > > > > For Rocky I'm trying to get live migration to work properly for > > instances that have a NUMA topology [1]. > > > > A question that came up on one of patches [2] is how to handle > > resources claims on the destination, or indeed whether to handle that > > at all. > > > > The previous attempt's approach [3] (call it A) was to use the > > resource tracker. This is race-free and the "correct" way to do it, > > but the code is pretty opaque and not easily reviewable, as evidenced > > by [3] sitting in review purgatory for literally years. > > > > A simpler approach (call it B) is to ignore resource claims entirely > > for now and wait for NUMA in placement to land in order to handle it > > that way. This is obviously race-prone and not the "correct" way of > > doing it, but the code would be relatively easy to review. > > > > For the longest time, live migration did not keep track of resources > > (until it started updating placement allocations). The message to > > operators was essentially "we're giving you this massive hammer, > don't > > break your fingers." Continuing to ignore resource claims for now is > > just maintaining the status quo. In addition, there is value in > > improving NUMA live migration *now*, even if the improvement is > > incomplete because it's missing resource claims. "Best is the enemy > of > > good" and all that. Finally, making use of the resource tracker is > > just work that we know will get thrown out once we start using > > placement for NUMA resources. > > > > For all those reasons, I would favor approach B, but I wanted to ask > > the community for their thoughts. > > Side question... does either approach touch PCI device management > during live migration? > > I ask because the only workloads I've ever seen that pin guest vCPU > threads to specific host processors -- or make use of huge pages > consumed from a specific host NUMA node -- have also made use of SR-IOV > and/or PCI passthrough. [1] > > If workloads that use PCI passthrough or SR-IOV VFs cannot be live > migrated (due to existing complications in the lower-level virt layers) > I don't see much of a point spending lots of developer resources trying > to "fix" this situation when in the real world, only a mythical > workload that uses CPU pinning or huge pages but *doesn't* use PCI > passthrough or SR-IOV VFs would be helped by it. > > Best, > -jay > > [1 I know I'm only one person, but every workload I've seen that > requires pinned CPUs and/or huge pages is a VNF that has been > essentially an ASIC that a telco OEM/vendor has converted into software > and requires the same guarantees that the ASIC and custom hardware gave > the original hardware-based workload. 
These VNFs, every single one of > them, used either PCI passthrough or SR-IOV VFs to handle latency- > sensitive network I/O. [Mooney, Sean K] I would generally agree but with the extention of include dpdk based vswitch like ovs-dpdk or vpp. Cpu pinned or hugepage backed guests generally also have some kind of high performance networking solution or use a hardware Acclaortor like a gpu to justify the performance assertion that pinning of cores or ram is required. Dpdk networking stack would however not require the pci remaping to be addressed though I belive that is planned to be added in stine. > > _______________________________________________________________________ > ___ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From rafal.zielinski at linguamatics.com Thu Jun 21 14:08:19 2018 From: rafal.zielinski at linguamatics.com (Rafal Zielinski) Date: Thu, 21 Jun 2018 15:08:19 +0100 Subject: [openstack-dev] [Vitrage] naming issues Message-ID: <6DEBC08B-6ED8-4744-956A-D8B92FD077C1@linguamatics.com> Hello, As suggested by eyalb on irc I am posting my problem here. Basically I have 10 nova hosts named in nags as follows: nova0 nova1 . . . nova10 I’ve made config file for the vitrage to map hosts to real hosts in Openstack named like: nova0.domain.com nova1.domain.com . . . nova10.domain.com And the issue: When provoking nagios alert on host nova10 Vitrage is displaying error on nova1, when provoking nagios alert on host nova1 vitrage is not showing any alert. Can somebody have a look at this issue ? Thank you, Rafal Zielinski From alifshit at redhat.com Thu Jun 21 14:13:45 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 21 Jun 2018 10:13:45 -0400 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: Message-ID: > Side question... does either approach touch PCI device management during > live migration? Nope. I'd need to do some research to see what, if anything, is needed at the lower levels (kernel, libvirt) to enable this. > I ask because the only workloads I've ever seen that pin guest vCPU threads > to specific host processors -- or make use of huge pages consumed from a > specific host NUMA node -- have also made use of SR-IOV and/or PCI > passthrough. [1] > > If workloads that use PCI passthrough or SR-IOV VFs cannot be live migrated > (due to existing complications in the lower-level virt layers) I don't see > much of a point spending lots of developer resources trying to "fix" this > situation when in the real world, only a mythical workload that uses CPU > pinning or huge pages but *doesn't* use PCI passthrough or SR-IOV VFs would > be helped by it. It's definitely a paint point for at least some of our customers - I don't know their use cases exactly, but live migration with CPU pinning but no other "high performance" features has come up a few times in our downstream bug tracker. In any case, incremental progress is better than no progress at all, so if we can improve how NUMA live migration works, we'll be in a better position to make it work with PCI devices down the road. > [Mooney, Sean K] I would generally agree but with the extention of include dpdk based vswitch like ovs-dpdk or vpp. 
> Cpu pinned or hugepage backed guests generally also have some kind of high performance networking solution or use a hardware > Acclaortor like a gpu to justify the performance assertion that pinning of cores or ram is required. > Dpdk networking stack would however not require the pci remaping to be addressed though I belive that is planned to be added in stine. I think Stephen Finucane's NUMA-aware vswitches work depends on mine to work with live migration - ie, it'll work just fine on its own, but to live migrate an instance with a NUMA vswitch (I know I'm abusing language here, apologies) this spec will need to be implemented first. From sferdjao at redhat.com Thu Jun 21 14:28:44 2018 From: sferdjao at redhat.com (Sahid Orentino Ferdjaoui) Date: Thu, 21 Jun 2018 16:28:44 +0200 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: Message-ID: <20180621142844.GA13765@redhat> On Thu, Jun 21, 2018 at 09:36:58AM -0400, Jay Pipes wrote: > On 06/18/2018 10:16 AM, Artom Lifshitz wrote: > > Hey all, > > > > For Rocky I'm trying to get live migration to work properly for > > instances that have a NUMA topology [1]. > > > > A question that came up on one of patches [2] is how to handle > > resources claims on the destination, or indeed whether to handle that > > at all. > > > > The previous attempt's approach [3] (call it A) was to use the > > resource tracker. This is race-free and the "correct" way to do it, > > but the code is pretty opaque and not easily reviewable, as evidenced > > by [3] sitting in review purgatory for literally years. > > > > A simpler approach (call it B) is to ignore resource claims entirely > > for now and wait for NUMA in placement to land in order to handle it > > that way. This is obviously race-prone and not the "correct" way of > > doing it, but the code would be relatively easy to review. > > > > For the longest time, live migration did not keep track of resources > > (until it started updating placement allocations). The message to > > operators was essentially "we're giving you this massive hammer, don't > > break your fingers." Continuing to ignore resource claims for now is > > just maintaining the status quo. In addition, there is value in > > improving NUMA live migration *now*, even if the improvement is > > incomplete because it's missing resource claims. "Best is the enemy of > > good" and all that. Finally, making use of the resource tracker is > > just work that we know will get thrown out once we start using > > placement for NUMA resources. > > > > For all those reasons, I would favor approach B, but I wanted to ask > > the community for their thoughts. > > Side question... does either approach touch PCI device management during > live migration? > > I ask because the only workloads I've ever seen that pin guest vCPU threads > to specific host processors -- or make use of huge pages consumed from a > specific host NUMA node -- have also made use of SR-IOV and/or PCI > passthrough. [1] Not really. There are lot of virtual switches that we do support like OVS-DPDK, Contrail Virtual Router... that support vhostuser interfaces which is one use-case. 
(We do support live-migration of vhostuser interface) > If workloads that use PCI passthrough or SR-IOV VFs cannot be live migrated > (due to existing complications in the lower-level virt layers) I don't see > much of a point spending lots of developer resources trying to "fix" this > situation when in the real world, only a mythical workload that uses CPU > pinning or huge pages but *doesn't* use PCI passthrough or SR-IOV VFs would > be helped by it. > > Best, > -jay > > [1 I know I'm only one person, but every workload I've seen that requires > pinned CPUs and/or huge pages is a VNF that has been essentially an ASIC > that a telco OEM/vendor has converted into software and requires the same > guarantees that the ASIC and custom hardware gave the original > hardware-based workload. These VNFs, every single one of them, used either > PCI passthrough or SR-IOV VFs to handle latency-sensitive network I/O. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From nate.johnston at redhat.com Thu Jun 21 14:48:05 2018 From: nate.johnston at redhat.com (Nate Johnston) Date: Thu, 21 Jun 2018 10:48:05 -0400 Subject: [openstack-dev] [neutron][fwaas] Investigation into debian/l3/wsgi/fwaas error Message-ID: <20180621144805.qa4y3kv4girzdlti@bishop> [bringing a side email conversation onto the main mailing list] I have been looking into the issue with neutron_fwaas having an error when running under the neutron-l3-agent on Debian when using wsgi. Here's what I have tracked it down to at this point. I am going to lay it all out there, including points that you already know, because I am going to bring in another party or two at this point. To make sure we are on solid ground, let me restate what are the parameters of the problem: 1. The error does not occur when neutron_fwaas is disabled. 2. The error does not occur if wsgi is in use. If standard eventlet is used, the error is not observed. 3. The error only occurs on debian; centos and ubuntu do not manifest the problem. As the neutron-l3-agent loads, it is trying to initialize the fwaas_v2 driver. The driver initializes without incident, and then proceeds to attempt to fetch firewall groups. Note that you do not need to exercise tempest to see this behavior; it is visible in the logs without anything else going on. Running pdb, I was able to trace the attempt to send the message deep into the RPC transmission code; I saw very little there to be suspicious of. 2018-06-20 21:06:34.761 915007 DEBUG neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 [-] Fetch firewall groups from plugin get_firewall_groups_for_project /usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent_v2.py:44 ... 2018-06-20 21:07:05.551 915007 ERROR neutron.common.rpc [-] Timeout in RPC method get_firewall_groups_for_project. Waiting for 10 seconds before next attempt. If the server is not down, consider increasing the rpc_response_timeout option as Neutron server(s) may be overloaded and unable to respond quickly enough.: oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 8616e98dd8d943eea1dcf99c04bd2be6 You can see, the RPC message goes into the ether and does not return. This results in the stacktraces in neutron-l3-agent.log. 
This example is from a later transaction. 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 [-] FWaaS router add RPC info call failed for 8c13b5d7-7b93-4b91-ae4c-c387abe96734: oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 44851518f5ee4d40a2cdbcabc27e3c92 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 Traceback (most recent call last): 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 324, in get 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 return self._queues[msg_id].get(block=True, timeout=timeout) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/eventlet/queue.py", line 313, in get 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 return waiter.wait() 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/eventlet/queue.py", line 141, in wait 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 return get_hub().switch() 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 294, in switch 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 return self.greenlet.switch() 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 queue.Empty 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 During handling of the above exception, another exception occurred: 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 Traceback (most recent call last): 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent_v2.py", line 292, in add_router 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 self._process_router_update(context, new_router) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent_v2.py", line 256, in _process_router_update 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 fwg_list = self.fwplugin_rpc.get_firewall_groups_for_project(ctx) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/neutron_fwaas/services/firewall/agents/l3reference/firewall_l3_agent_v2.py", 
line 45, in get_firewall_groups_for_project 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 host=self.host) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/neutron/common/rpc.py", line 185, in call 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 time.sleep(wait) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 self.force_reraise() 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 six.reraise(self.type_, self.value, self.tb) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/six.py", line 686, in reraise 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 raise value 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/neutron/common/rpc.py", line 162, in call 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 return self._original_context.call(ctxt, method, **kwargs) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/client.py", line 174, in call 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 retry=self.retry) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_messaging/transport.py", line 131, in _send 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 timeout=timeout, retry=retry) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 retry=retry) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 548, in _send 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 result = self._waiter.wait(msg_id, timeout) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 440, in wait 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 message = self.waiters.get(msg_id, timeout=timeout) 2018-06-20 21:23:24.772 919205 ERROR 
neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 File "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 328, in get 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 'to message ID %s' % msg_id) 2018-06-20 21:23:24.772 919205 ERROR neutron_fwaas.services.firewall.agents.l3reference.firewall_l3_agent_v2 oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 44851518f5ee4d40a2cdbcabc27e3c92 So it seems the RPC message for get_firewall_groups_for_project gets lost. I can tell that there is no shortage of AMQP messaging occurring between the neutron-l3-agent and RabbitMQ. Looking at the RabbitMQ error log, I am concerned that the message is not even really being transmitted, because RabbitMQ is registering premature disconnections from only the neutron_l3_agent at times that correspond to this testing. =WARNING REPORT==== 20-Jun-2018::19:42:41 === closing AMQP connection <0.16171.21> (127.0.0.1:47942 -> 127.0.0.1:5671 - neutron-l3-agent:804510:ecc4ca7d-361f-49f7-94f3-9b4a07d102fc): client unexpectedly closed TCP connection =WARNING REPORT==== 20-Jun-2018::19:42:41 === closing AMQP connection <0.16110.21> (127.0.0.1:47934 -> 127.0.0.1:5671 - neutron-l3-agent:804510:8974dfd0-cb22-4925-bf95-ed5dcd905e41): client unexpectedly closed TCP connection =WARNING REPORT==== 20-Jun-2018::19:42:41 === closing AMQP connection <0.15410.21> (127.0.0.1:47814 -> 127.0.0.1:5671 - neutron-l3-agent:804510:ce915f0c-e46d-442f-aba2-5c204589fb0f): client unexpectedly closed TCP connection =WARNING REPORT==== 20-Jun-2018::19:42:41 === closing AMQP connection <0.15388.21> (127.0.0.1:47810 -> 127.0.0.1:5671 - neutron-l3-agent:804510:ab9b2193-199b-4312-9eca-2fbfc6cf27cd): client unexpectedly closed TCP connection I will continue to debug the issue tomorrow. I see no lonkage at this point with any of the previously listed constraints on this scenario. So I am going to copy Brian Haley for his L3 expertise, as well as the 3 FWAaaS cores to see if this directs their thoughts in any particular direction. I hope to continue the investigation tomorrow. Thanks, Nate Johnston njohnston From chris.friesen at windriver.com Thu Jun 21 15:51:46 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 21 Jun 2018 09:51:46 -0600 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: <4B1BB321037C0849AAE171801564DFA688B4F844@IRSMSX107.ger.corp.intel.com> References: <4B1BB321037C0849AAE171801564DFA688B4F844@IRSMSX107.ger.corp.intel.com> Message-ID: <5B2BC992.6080609@windriver.com> On 06/21/2018 07:50 AM, Mooney, Sean K wrote: >> -----Original Message----- >> From: Jay Pipes [mailto:jaypipes at gmail.com] >> Side question... does either approach touch PCI device management >> during live migration? >> >> I ask because the only workloads I've ever seen that pin guest vCPU >> threads to specific host processors -- or make use of huge pages >> consumed from a specific host NUMA node -- have also made use of SR-IOV >> and/or PCI passthrough. 
[1] >> >> If workloads that use PCI passthrough or SR-IOV VFs cannot be live >> migrated (due to existing complications in the lower-level virt layers) >> I don't see much of a point spending lots of developer resources trying >> to "fix" this situation when in the real world, only a mythical >> workload that uses CPU pinning or huge pages but *doesn't* use PCI >> passthrough or SR-IOV VFs would be helped by it. > [Mooney, Sean K] I would generally agree but with the extention of include dpdk based vswitch like ovs-dpdk or vpp. > Cpu pinned or hugepage backed guests generally also have some kind of high performance networking solution or use a hardware > Acclaortor like a gpu to justify the performance assertion that pinning of cores or ram is required. > Dpdk networking stack would however not require the pci remaping to be addressed though I belive that is planned to be added in stine. Jay, you make a good point but I'll second what Sean says...for the last few years my organization has been using a DPDK-accelerated vswitch which performs well enough for many high-performance purposes. In the general case, I think live migration while using physical devices would require coordinating the migration with the guest software. Chris From chris.friesen at windriver.com Thu Jun 21 16:53:28 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 21 Jun 2018 10:53:28 -0600 Subject: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard In-Reply-To: References: <5B27D46F.10804@windriver.com> <5B2B0CC5.8010602@windriver.com> Message-ID: <5B2BD808.4020508@windriver.com> On 06/21/2018 07:04 AM, Artom Lifshitz wrote: > As I understand it, Artom is proposing to have a larger race window, > essentially > from when the scheduler selects a node until the resource audit runs on that > node. > > > Exactly. When writing the spec I thought we could just call the resource tracker > to claim the resources when the migration was done. However, when I started > looking at the code in reaction to Sahid's feedback, I noticed that there's no > way to do it without the MoveClaim context (right?) In the previous patch, the MoveClaim is the thing that calculates the dest NUMA topology given the flavor/image, then calls hardware.numa_fit_instance_to_host() to figure out what specific host resources to consume. That claim is then associated with the migration object and the instance.migration_context, and then we call _update_usage_from_migration() to actually consume the resources on the destination. This all happens within check_can_live_migrate_destination(). As an improvement over what you've got, I think you could just kick off an early call of update_available_resource() once the migration is done. It'd be potentially computationally expensive, but it'd reduce the race window. > Keep in mind, we're not making any race windows worse - I'm proposing keeping > the status quo and fixing it later with NUMA in placement (or the resource > tracker if we can swing it). Well, right now live migration is totally broken so nobody's doing it. You're going to make it kind of work but with racy resource tracking, which could lead to people doing it then getting in trouble. We'll want to make sure there's a suitable release note for this. > The resource tracker stuff is just so... opaque. For instance, the original > patch [1] uses a mutated_migration_context around the pre_live_migration call to > the libvirt driver. Would I still need to do that? Why or why not? 
The mutated context applies the "new" numa_topology and PCI stuff. The reason for the mutated context for pre_live_migration() is so that the plug_vifs(instance) call will make use of the new macvtap device information. See Moshe's comment from Dec 8 2016 at https://review.openstack.org/#/c/244489/46. I think the mutated context around the call to self.driver.live_migration() is so that the new XML represents the newly-claimed pinned CPUs on the destination. > At this point we need to commit to something and roll with it, so I'm sticking > to the "easy way". If it gets shut down in code review, at least we'll have > certainty on how to approach this next cycle. Yep, I'm cool with incremental improvement. Chris From msm at redhat.com Thu Jun 21 16:56:33 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 21 Jun 2018 12:56:33 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was on the shorter side but covered several topics. We discussed the migration to StoryBoard, and noted that we need to send word to Gilles and the GraphQL experimentors that the board is in place and ready for their usage. The GraphQL work was also highlighted as there has been a review[7] posted along with an update[8] to the mailing list. This work is in its early stages, but significant progress has already been made. Kudos to Gilles and the neutron team! We talked briefly about the upcoming PTG and which days might be available for the SIG, but it is too early for such speculation and the group has tabled the idea for now. The API-SIG StoryBoard is now live[9], although it still has the old "api-wg" name. That will be updated when the infra team does the project rename for us. We encourage all new activity to take place here and we are in the process of cleaning up the older links; stay tuned for more information. There is one new guideline change that is just starting its life in the review process[10]. This is an addition to the guideline on errors and although this review is still in the early stages of development any comments are greatly appreciated. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. 
None # Guidelines Currently Under Review [3] * Expand error code document to expect clarity https://review.openstack.org/#/c/577118/ * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://review.openstack.org/575120 [8] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131557.html [9] https://storyboard.openstack.org/#!/project/1039 [10] https://review.openstack.org/#/c/577118/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From gael.therond at gmail.com Thu Jun 21 17:32:49 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 21 Jun 2018 19:32:49 +0200 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: <1529131252.j1u6fixa3j.tristanC@fedora> References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <1529131252.j1u6fixa3j.tristanC@fedora> Message-ID: Hi everyone, sorry for the late answer but I’m currently trapped into a cluster issue with cinder-volume that doesn’t give me that much time. That being said, I’ll have some times to work on this feature during the summer (July/August) and so do some coding once I’ll have catched up with your work. Did you created a specific tree or did you created a new graphql folder within the neutron/neutron/api/ path regarding the schemas etc? Le sam. 16 juin 2018 à 08:42, Tristan Cacqueray a écrit : > On June 15, 2018 10:42 pm, Gilles Dubreuil wrote: > > Hello, > > > > This initial patch [1] allows to retrieve networks, subnets. > > > > This is very easy, thanks to the graphene-sqlalchemy helper. > > > > The edges, nodes layer might be confusing at first meanwhile they make > > the Schema Relay-compliant in order to offer re-fetching, pagination > > features out of the box. > > > > The next priority is to set the unit test in order to implement > mutations. > > > > Could someone help provide a base in order to respect Neutron test > > requirements? 
> > > > > > [1] [abandoned] > > Actually, the correct review (proposed on the feature/graphql branch) > is: > > [1] https://review.openstack.org/575898 > > > > > Thanks, > > Gilles > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu Jun 21 19:17:45 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 21 Jun 2018 15:17:45 -0400 Subject: [openstack-dev] [heat][heat-templates] Creating a role with no domain In-Reply-To: References: Message-ID: On 21/06/18 07:39, Rabi Mishra wrote: > Looks like that's a bug where we create a domain specific role for > 'default' domain[1], when domain is not specified. > > [1] > https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/keystone/role.py#L54 You can _probably_ pass domain: null in your template. Worth a try, anyway. - ZB > You're welcome to raise a bug and propose a fix where we should be just > removing the default. > > On Thu, Jun 21, 2018 at 4:14 PM, Tikkanen, Viktor (Nokia - FI/Espoo) > > wrote: > > Hi! > There was a new ’domain’ property added to OS::Keystone::Role > (_https://storyboard.openstack.org/#!/story/1684558_ > , > _https://review.openstack.org/#/c/459033/_ > ). > With “openstack role create” CLI command it is still possible to > create roles with no associated domains; but it seems that the same > cannot be done with heat templates. > An example: if I create two roles, CliRole (with “openstack role > create CliRole” command)  and SimpleRole with the following heat > template: > heat_template_version: 2015-04-30 > description: Creates a role > resources: >   role_resource: >     type: OS::Keystone::Role >     properties: >       name: SimpleRole > the result in the keystone database will be: > MariaDB [keystone]> select * from role; > +----------------------------------+------------------+-------+-----------+ > | id    | name             | extra | domain_id | > +----------------------------------+------------------+-------+-----------+ > | 5de0eee4990e4a59b83dae93af9c0951 | SimpleRole       | {}    | > default   | > | 79472e6e1bf341208bd88e1c2dcf7f85 | CliRole          | {}    | > <>  | > | 7dd5e4ea87e54a13897eb465fdd0e950 | heat_stack_owner | {}    | > <>  | > | 80fd61edbe8842a7abb47fd7c91ba9d7 | heat_stack_user  | {}    | > <>  | > | 9fe2ff9ee4384b1894a90878d3e92bab | _member_         | {}    | > <>  | > | e174c27e79b84ea392d28224eb0af7c9 | admin            | {}    | > <>  | > +----------------------------------+------------------+-------+-----------+ > Should it be possible to create a role without associated domain > with a heat template? > -V. 
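(To make the suggestion above concrete: with the template from the original
report, that would mean something along these lines. This is untested, so
treat it as a guess rather than a confirmed workaround.)

heat_template_version: 2015-04-30
description: Creates a role
resources:
  role_resource:
    type: OS::Keystone::Role
    properties:
      name: SimpleRole
      # explicitly null the property so the resource does not pick up
      # the 'default' domain from the property schema's default value
      domain: null
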
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Regards, > Rabi Mishra > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Thu Jun 21 19:38:50 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 21 Jun 2018 15:38:50 -0400 Subject: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library In-Reply-To: <1529534483-sup-9862@lrrr.local> References: <1529534483-sup-9862@lrrr.local> Message-ID: On 20/06/18 18:59, Doug Hellmann wrote: > According to > https://governance.openstack.org/tc/reference/projects/heat.html the > Heat PTL*is* the PTL for heat-translators. Any internal team structure > that implies otherwise is just that, an internal team structure. Yes, correct. > I'm really unclear on what the problem is here. From my perspective (wearing my Heat hat), the problem is that the official team structure no longer represents reality. The folks who were working on both heat and heat-translator are long gone. Bob is responsive to direct email, but heat-translator is effectively in maintenance mode at best. A few weeks back I made the mistake of reviewing a patch (Gerrit confirms that it was literally the first patch I have ever reviewed in heat-translator) to update the docs PTI since (a) I know a bit about that, and (b) I technically have +2 rights. Immediately people started pinging me every day for reviews and adding stuff to my review queue, some of which was labelled 'trivial' right there in the patch headline, until I asked them to knock it off. That's how much demand there is for maintainers. Apparently heat-translator has a healthy ecosystem of contributors and users, but not of maintainers, and it remaining a deliverable of the Heat project is doing nothing to alleviate the latter problem. I'd like to find it a home that _would_ help. cheers, Zane. From zigo at debian.org Thu Jun 21 19:52:54 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 21 Jun 2018 21:52:54 +0200 Subject: [openstack-dev] [neutron][fwaas] How to reproduce in Debian (was: Investigation into debian/l3/wsgi/fwaas error) In-Reply-To: <20180621144805.qa4y3kv4girzdlti@bishop> References: <20180621144805.qa4y3kv4girzdlti@bishop> Message-ID: <06e5a40a-2946-892f-f465-c870e393370d@debian.org> On 06/21/2018 04:48 PM, Nate Johnston wrote: > I will continue to debug the issue tomorrow. I see no lonkage at this > point with any of the previously listed constraints on this scenario. > So I am going to copy Brian Haley for his L3 expertise, as well as the 3 > FWAaaS cores to see if this directs their thoughts in any particular > direction. I hope to continue the investigation tomorrow. > > Thanks, > > Nate Johnston > njohnston Hi there, As per IRC discussion, let me explain to everyone the difference between what I've done in Debian, and what's in the other distributions. First, I would like to highlight the fact that this isn't at all Debian specific. 
1/ Why doing: neutron-server -> neutron-api + neutron-rpc-server On other distros, we use Python 2, therefore neutron-server can be in use, and that works with or without SSL. In Debian, since we've switched to Python 3, an Eventlet based API daemon cannot work with SSL, due to a bug in Eventlet itself. That bug has been known since 2015, with no fix coming. What happens is that when a client connects to the API server, due to Eventlet's monkey patching, non-blocking stuff produce an SSL handshake. As a consequence, the only way to run Neutron with Python 3 and SSL, is to avoid neutron-server, and use either uwsgi, or mod_uwsgi. In Debian, most daemons are using uwsgi when possible. In such mode, the WSGI application is /usr/bin/neutron-api. But this WSGI application, as it's not a daemon (but an API only, served by a web server), cannot have a a thread to listen to the RPC bus. So instead, there's neutron-rpc-server to do that job. 2/ Bugs already fixed but not merged in neutron for this mode An Nguyen Phong (annp on IRC) has fixed stuff in neutron for that mode of operation described above, but it's not yet merged: https://review.openstack.org/#/c/555608/ Without this patch, the l3 agent doesn't know about ml2 extensions, it's impossible to pass startup --config-file= parameters correctly, and the openvswitch agent never applies security-group firewall rules. Please consider reviewing this patch and merging it. 3/ How to reproduce the Debian environment You can always simply install stuff by hand with packages, but that's boringly long to do. The easiest way is to pop a fresh Stretch, and have puppet to run in it to install everything for you. Here's the steps: a) Boot-up a stretch machine with access to the net. b) git clone https://github.com/openstack/puppet-openstack-integration c) cd puppet-openstack-integration d) git review -d 577281 This will re-enable FWaaS for the l3 agent. Hopefully, we'll get to the point where this patch can be applied and FWaaS re-enabled. e) edit all-in-one.sh line 69: --- a/all-in-one.sh +++ b/all-in-one.sh @@ -66,7 +66,7 @@ export GEM_HOME=`pwd`/.bundled_gems gem install bundler --no-rdoc --no-ri --verbose set -e -./run_tests.sh +SCENARIO=scenario001 ./run_tests.sh RESULT=$? set +e if [ $RESULT -ne 0 ]; then f) git commit -a -m "test scenario001" g) ./all-in-one.sh Note that you may as well test scenario 2 & 4 which are also using OVS, or scenario 3 that is using linuxbridge. After approx one hour, you'll get a full Debian all-in-one installation using packages. If you're not used to it, all the code is in /usr/lib/python3/dist-packages. You may edit code there for your tests. If you need to re-run a single test, you can do this: cp /tmp/openstack/tempest/etc/tempest.conf /etc/tempest cd /var/lib/tempest tempest_debian_shell_wrapper \ tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops Just have a look at /usr/bin/tempest_debian_shell_wrapper, it's a tiny small shell script to run tests easily. Also, feel free to attempt switching to firewall_v2 in configuration files in /etc/neutron, and then restart the daemons. By default, it's still v1, but if it works with v2, we'll happily apply patches in puppet-openstack for that (which will apply for all distros). If you need me, just type "zigo" on IRC (I'm in most channels, including #openstack-neutron and #openstack-fwaas), and I'll reply if it's office hours in Geneva/France, or late in my evening. 
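To make the split described in 1/ concrete, here is a minimal sketch of the two
processes involved. The flags, socket and config paths below are illustrative
assumptions, not the exact Debian packaging defaults:

# API side: uwsgi serves the WSGI application (no eventlet, so SSL works)
uwsgi --wsgi-file /usr/bin/neutron-api --http-socket 127.0.0.1:9696 --processes 4

# RPC side: a separate daemon listens on the message bus, which the WSGI app cannot do
neutron-rpc-server --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini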
I hope the above helps, Cheers, Thomas Goirand (zigo) From zbitter at redhat.com Thu Jun 21 20:21:17 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 21 Jun 2018 16:21:17 -0400 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services In-Reply-To: References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> <1527710294.31249.24.camel@redhat.com> <86bf4382-2bdd-02f9-5544-9bad6190263b@openstack.org> <20180531130047.q2x2gmhkredaqxis@yuggoth.org> <20180606012949.b5lxxvcotahkhwv6@yuggoth.org> <20180620163713.57najuohjasgkps4@yuggoth.org> Message-ID: On 20/06/18 17:59, Adam Harwell wrote: > Looks like I missed this so I'm late to the party, but: > > Ade is technically correct, Octavia doesn't explicitly depend on > Barbican, as we do support castellan generically. > > *HOWEVER*: we don't just store and retrieve our own secrets -- we rely > on loading up user created secrets. This means that for Octavia to work, > even if we use castellan, we still need some way for users to interact > with the secret store via an API, and what that means in openstack in > still Barbican. So I would still say that Barbican is a dependency for > us logistically, if not technically. Right, yeah, I'd call that a dependency on Barbican. There are reportedly, however, other use cases where the keys are generated internally that don't depend on Barbican but can benefit from Castellan. It might be a worthwhile exercise to make a list of all of the proposed features that have been blocked on this and whether they require user interaction with the key store. > For example, internally at GoDaddy we were investigating deploying Vault > with a custom user-facing API/UI for allowing users to store secrets > that could be consumed by Octavia with castellan (don't get me started > on how dumb that is, but it's what we were investigating). > The correct way to do this in an openstack environment is the openstack > secret store API, which is Barbican. This is the correct answer, and thank you for being awesome :) > So, while I am personally dealing > with an example of very painfully avoiding Barbican (which may have been > a non-issue if Barbican were a base service), I have a tough time > reconciling not including Barbican itself as a requirement. On the bright side, getting everyone to deploy either Barbican or Vault makes it easier even for the folks who chose Vault to deploy Barbican later. I don't think we've given up on making Barbican a base service, just recognised that it's a longer-term effort whereas this is something we can do to start down the path right now. cheers, Zane. >    --Adam (rm_work) > > On Wed, Jun 20, 2018, 11:37 Jeremy Stanley > wrote: > > On 2018-06-06 01:29:49 +0000 (+0000), Jeremy Stanley wrote: > [...] > > Seeing no further objections, I give you > > https://review.openstack.org/572656 for the next step. > > That change merged just a few minutes ago, and > https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services > now includes: > >     A Castellan-compatible key store > >     OpenStack components may keep secrets in a key store, using >     Oslo’s Castellan library as an indirection layer. While >     OpenStack provides a Castellan-compatible key store service, >     Barbican, other key store backends are also available for >     Castellan. 
Note that in the context of the base services set >     Castellan is intended only to provide an interface for services >     to interact with a key store, and it should not be treated as a >     means to proxy API calls from users to that key store. In order >     to reduce unnecessary exposure risks, any user interaction with >     secret material should be left to a dedicated API instead >     (preferably as provided by Barbican). > > Thanks to everyone who helped brainstorming/polishing, and here's > looking forward to a ubiquity of default security features and > functionality in future OpenStack releases! > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Thu Jun 21 20:37:51 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 21 Jun 2018 15:37:51 -0500 Subject: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library In-Reply-To: References: <1529534483-sup-9862@lrrr.local> Message-ID: <20180621203751.GB17928@sm-workstation> > > Apparently heat-translator has a healthy ecosystem of contributors and > users, but not of maintainers, and it remaining a deliverable of the Heat > project is doing nothing to alleviate the latter problem. I'd like to find > it a home that _would_ help. > I'd be interested to hear thoughts if this is somewhere where we (the TC) should step in and make a few people cores on this project? Or are the existing contributors a healthy amount but not involved enough to trust to be cores? From sean.mcginnis at gmx.com Thu Jun 21 22:48:17 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 21 Jun 2018 17:48:17 -0500 Subject: [openstack-dev] [release] Release countdown for week R-9, June 25-29 Message-ID: <20180621224817.GA8810@sm-workstation> A nice and short one this week... Development Focus ----------------- Teams should be focused on implementing planned work for the cycle. It is also a good time to review those plans and reprioritize anything if needed based on the what progress has been made and what looks realistic to complete in the next few weeks. General Information ------------------- We have a few deadlines coming up as we get closer to the end of the cycle: * Non-client libraries (generally, any library that is not python-${PROJECT}client) must have a final release by July 19. Only critical bugfixes will be allowed past this point. Please make sure any important feature works has required library changes by this time. * Client libraries must have a final release by July 26. 
Upcoming Deadlines & Dates -------------------------- Final non-client library release deadline: July 19 Final client library release deadline: July 26 Rocky-3 Milestone: July 26 -- Sean McGinnis (smcginnis) From tdecacqu at redhat.com Thu Jun 21 23:21:12 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 21 Jun 2018 23:21:12 +0000 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <1529131252.j1u6fixa3j.tristanC@fedora> Message-ID: <1529622920.4qf1x4aqgc.tristanC@fedora> Hi Flint, On June 21, 2018 5:32 pm, Flint WALRUS wrote: > Hi everyone, sorry for the late answer but I’m currently trapped into a > cluster issue with cinder-volume that doesn’t give me that much time. > > That being said, I’ll have some times to work on this feature during the > summer (July/August) and so do some coding once I’ll have catched up with > your work. > That's great to hear! The next step is to understand how to deal with oslo policy and control objects access/modification. > Did you created a specific tree or did you created a new graphql folder > within the neutron/neutron/api/ path regarding the schemas etc? There is a feature/graphql branch were an initial patch[1] adds a new neutron/api/graphql directory as well as a new test_graphql.py functional tests. The api-paste is also updated to expose the '/graphql' http endpoint. Not sure if we want to keep on updating that change, or propose further code as new change on top of this skeleton? Regards, -Tristan > Le sam. 16 juin 2018 à 08:42, Tristan Cacqueray a > écrit : > >> On June 15, 2018 10:42 pm, Gilles Dubreuil wrote: >> > Hello, >> > >> > This initial patch [1] allows to retrieve networks, subnets. >> > >> > This is very easy, thanks to the graphene-sqlalchemy helper. >> > >> > The edges, nodes layer might be confusing at first meanwhile they make >> > the Schema Relay-compliant in order to offer re-fetching, pagination >> > features out of the box. >> > >> > The next priority is to set the unit test in order to implement >> mutations. >> > >> > Could someone help provide a base in order to respect Neutron test >> > requirements? >> > >> > >> > [1] [abandoned] >> >> Actually, the correct review (proposed on the feature/graphql branch) >> is: >> >> [1] https://review.openstack.org/575898 >> >> > >> > Thanks, >> > Gilles >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From joe at topjian.net Fri Jun 22 03:00:16 2018 From: joe at topjian.net (Joe Topjian) Date: Thu, 21 Jun 2018 21:00:16 -0600 Subject: [openstack-dev] [sahara] Anti-Affinity Broke Message-ID: Hello, I originally posted this to the general openstack list to get a sanity check on what I was seeing. Jeremy F reached out and confirmed that, so I'm going to re-post the details here to begin a discussion. >From what I can see, anti-affinity is not working at all in Sahara. I was able to get it working locally by making the following changes: 1. ng.count is either invalid, always returns 0, or isn't being set somewhere else. https://github.com/openstack/sahara/blob/master/sahara/service/heat/templates.py#L276 Instead, I've used ng_count = self.node_groups_extra[ng.id]['node_count'] 2. An uninitialized Python key: https://github.com/openstack/sahara/blob/master/sahara/service/heat/templates.py#L283 3. Incorrect bounds in range(): https://github.com/openstack/sahara/blob/master/sahara/service/heat/templates.py#L255-L256 I believe this should be: for i in range(0, self.cluster.anti_affinity_ratio): resources.update(self._serialize_aa_server_group(i+1)) https://github.com/openstack/sahara/blob/master/sahara/service/heat/templates.py#L278 I believe this should be: for i in range(0, ng_count): With the above in place, anti-affinity began working. The above changes were all quick fixes to get it up and running, so there might be better ways of going about this. I can also create something on StoryBoard for this, too. Thanks, Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Fri Jun 22 04:44:21 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 22 Jun 2018 14:44:21 +1000 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: <1529622920.4qf1x4aqgc.tristanC@fedora> References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <1529131252.j1u6fixa3j.tristanC@fedora> <1529622920.4qf1x4aqgc.tristanC@fedora> Message-ID: <3fc9b17a-5046-3f92-f366-bd1876654306@redhat.com> On 22/06/18 09:21, Tristan Cacqueray wrote: > Hi Flint, > > On June 21, 2018 5:32 pm, Flint WALRUS wrote: >> Hi everyone, sorry for the late answer but I’m currently trapped into a >> cluster issue with cinder-volume that doesn’t give me that much time. >> >> That being said, I’ll have some times to work on this feature during the >> summer (July/August) and so do some coding once I’ll have catched up >> with >> your work. >> > That's great to hear! The next step is to understand how to deal with > oslo policy and control objects access/modification. > >> Did you created a specific tree or did you created a new graphql folder >> within the neutron/neutron/api/ path regarding the schemas etc? > > There is a feature/graphql branch were an initial patch[1] adds a new > neutron/api/graphql directory as well as a new test_graphql.py > functional tests. > The api-paste is also updated to expose the '/graphql' http endpoint. > > Not sure if we want to keep on updating that change, or propose further > code as new change on top of this skeleton? > Makes sense to merge it, I think we have the base we needed to get going. I'll make it green so we can get merge it. > Regards, > -Tristan > > >> Le sam. 
16 juin 2018 à 08:42, Tristan Cacqueray a >> écrit : >> >>> On June 15, 2018 10:42 pm, Gilles Dubreuil wrote: >>> > Hello, >>> > >>> > This initial patch [1]  allows to retrieve networks, subnets. >>> > >>> > This is very easy, thanks to the graphene-sqlalchemy helper. >>> > >>> > The edges, nodes layer might be confusing at first meanwhile they >>> make >>> > the Schema Relay-compliant in order to offer re-fetching, pagination >>> > features out of the box. >>> > >>> > The next priority is to set the unit test in order to implement >>> mutations. >>> > >>> > Could someone help provide a base in order to respect Neutron test >>> > requirements? >>> > >>> > >>> > [1] [abandoned] >>> >>> Actually, the correct review (proposed on the feature/graphql branch) >>> is: >>> >>> [1] https://review.openstack.org/575898 >>> >>> > >>> > Thanks, >>> > Gilles >>> > >>> > >>> __________________________________________________________________________ >>> >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Fri Jun 22 05:57:38 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Fri, 22 Jun 2018 07:57:38 +0200 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: <3fc9b17a-5046-3f92-f366-bd1876654306@redhat.com> References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <1529131252.j1u6fixa3j.tristanC@fedora> <1529622920.4qf1x4aqgc.tristanC@fedora> <3fc9b17a-5046-3f92-f366-bd1876654306@redhat.com> Message-ID: Hi everyone, Thanks for the updates and support, that appreciated. @Gilles, did you already implemented all the service types? What is left to do? You already want to merge the feature branch with master? @tristan I’d like to work on the feature branch but I’ll wait for gilles answers as I don’t want to mess up having pieces of code everywhere. Thanks! Le ven. 22 juin 2018 à 06:44, Gilles Dubreuil a écrit : > > On 22/06/18 09:21, Tristan Cacqueray wrote: > > Hi Flint, > > On June 21, 2018 5:32 pm, Flint WALRUS wrote: > > Hi everyone, sorry for the late answer but I’m currently trapped into a > cluster issue with cinder-volume that doesn’t give me that much time. 
> > That being said, I’ll have some times to work on this feature during the > summer (July/August) and so do some coding once I’ll have catched up with > your work. > > That's great to hear! The next step is to understand how to deal with > oslo policy and control objects access/modification. > > Did you created a specific tree or did you created a new graphql folder > within the neutron/neutron/api/ path regarding the schemas etc? > > > There is a feature/graphql branch were an initial patch[1] adds a new > neutron/api/graphql directory as well as a new test_graphql.py > functional tests. > The api-paste is also updated to expose the '/graphql' http endpoint. > > Not sure if we want to keep on updating that change, or propose further > code as new change on top of this skeleton? > > > Makes sense to merge it, I think we have the base we needed to get going. > I'll make it green so we can get merge it. > > > Regards, > -Tristan > > > Le sam. 16 juin 2018 à 08:42, Tristan Cacqueray > a > écrit : > > On June 15, 2018 10:42 pm, Gilles Dubreuil wrote: > > Hello, > > > > This initial patch [1] allows to retrieve networks, subnets. > > > > This is very easy, thanks to the graphene-sqlalchemy helper. > > > > The edges, nodes layer might be confusing at first meanwhile they make > > the Schema Relay-compliant in order to offer re-fetching, pagination > > features out of the box. > > > > The next priority is to set the unit test in order to implement > mutations. > > > > Could someone help provide a base in order to respect Neutron test > > requirements? > > > > > > [1] [abandoned] > > Actually, the correct review (proposed on the feature/graphql branch) > is: > > [1] https://review.openstack.org/575898 > > > > > Thanks, > > Gilles > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gdubreui at redhat.com Fri Jun 22 06:42:02 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 22 Jun 2018 16:42:02 +1000 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <1529131252.j1u6fixa3j.tristanC@fedora> <1529622920.4qf1x4aqgc.tristanC@fedora> <3fc9b17a-5046-3f92-f366-bd1876654306@redhat.com> Message-ID: On 22/06/18 15:57, Flint WALRUS wrote: > Hi everyone, > > Thanks for the updates and support, that appreciated. > > @Gilles, did you already implemented all the service types? We have query types for networks and subnets for now. Before we add more we are going to focus on oslo policies so we can access and modify those items in respect of the existing security approach. Then we will have a solid foundation to add more types. > > What is left to do? You already want to merge the feature branch with > master? The feature branch graphql is the Proof of Concept and won't be merged to master until we have it full ready to share/demonstrate it to others. So we're pushing patches against that branch. The initial one to be hopefully merged soon. > > @tristan I’d like to work on the feature branch but I’ll wait for > gilles answers as I don’t want to mess up having pieces of code > everywhere. > > Thanks! > Le ven. 22 juin 2018 à 06:44, Gilles Dubreuil > a écrit : > > > On 22/06/18 09:21, Tristan Cacqueray wrote: >> Hi Flint, >> >> On June 21, 2018 5:32 pm, Flint WALRUS wrote: >>> Hi everyone, sorry for the late answer but I’m currently trapped >>> into a >>> cluster issue with cinder-volume that doesn’t give me that much >>> time. >>> >>> That being said, I’ll have some times to work on this feature >>> during the >>> summer (July/August) and so do some coding once I’ll have >>> catched up with >>> your work. >>> >> That's great to hear! The next step is to understand how to deal >> with >> oslo policy and control objects access/modification. >> >>> Did you created a specific tree or did you created a new graphql >>> folder >>> within the neutron/neutron/api/ path regarding the schemas etc? >> >> There is a feature/graphql branch were an initial patch[1] adds a >> new >> neutron/api/graphql directory as well as a new test_graphql.py >> functional tests. >> The api-paste is also updated to expose the '/graphql' http >> endpoint. >> >> Not sure if we want to keep on updating that change, or propose >> further >> code as new change on top of this skeleton? >> > > Makes sense to merge it, I think we have the base we needed to get > going. > I'll make it green so we can get merge it. > > >> Regards, >> -Tristan >> >> >>> Le sam. 16 juin 2018 à 08:42, Tristan Cacqueray >>> a >>> écrit : >>> >>>> On June 15, 2018 10:42 pm, Gilles Dubreuil wrote: >>>> > Hello, >>>> > >>>> > This initial patch [1]  allows to retrieve networks, subnets. >>>> > >>>> > This is very easy, thanks to the graphene-sqlalchemy helper. >>>> > >>>> > The edges, nodes layer might be confusing at first meanwhile >>>> they make >>>> > the Schema Relay-compliant in order to offer re-fetching, >>>> pagination >>>> > features out of the box. >>>> > >>>> > The next priority is to set the unit test in order to implement >>>> mutations. >>>> > >>>> > Could someone help provide a base in order to respect Neutron >>>> test >>>> > requirements? 
>>>> > >>>> > >>>> > [1] [abandoned] >>>> >>>> Actually, the correct review (proposed on the feature/graphql >>>> branch) >>>> is: >>>> >>>> [1] https://review.openstack.org/575898 >>>> >>>> > >>>> > Thanks, >>>> > Gilles >>>> > >>>> > >>>> __________________________________________________________________________ >>>> >>>> > OpenStack Development Mailing List (not for usage questions) >>>> > Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >>>> >>>> > >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> > >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> >>>> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe:OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email:gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Fri Jun 22 08:01:11 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 22 Jun 2018 10:01:11 +0200 Subject: [openstack-dev] [sahara] Anti-Affinity Broke In-Reply-To: References: Message-ID: <1854445.BzLhQUzhMP@whitebase.usersys.redhat.com> On Friday, 22 June 2018 05:00:16 CEST Joe Topjian wrote: > Hello, > > I originally posted this to the general openstack list to get a sanity > check on what I was seeing. Jeremy F reached out and confirmed that, so I'm > going to re-post the details here to begin a discussion. Hi, thanks for investigating the issue; it's not the most trivial thing to test without a real CI system based on baremetal, and we don't have one at this time. > I can also create something on StoryBoard for this, too. Yes, that would be preferred; could you please open it describing the symptoms that you found in addition to the workarounds? 
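For reference, a consolidated, hedged sketch of the workarounds Joe described
earlier in this thread, as they would sit in sahara/service/heat/templates.py.
Only the quoted lines come from his report; the surrounding structure is an
assumption, and his point 2 (the uninitialised key) is not spelled out in the
email so it is omitted here:

# 1. use the configured node count instead of ng.count (which came back as 0)
ng_count = self.node_groups_extra[ng.id]['node_count']

# 3. iterate once per anti-affinity server group, naming the groups from 1 upwards
for i in range(0, self.cluster.anti_affinity_ratio):
    resources.update(self._serialize_aa_server_group(i + 1))

# 4. iterate over the actual number of instances in the node group
for i in range(0, ng_count):
    ...  # per-instance anti-affinity resources go here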
Ciao -- Luigi From gael.therond at gmail.com Fri Jun 22 08:01:36 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Fri, 22 Jun 2018 10:01:36 +0200 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <1529131252.j1u6fixa3j.tristanC@fedora> <1529622920.4qf1x4aqgc.tristanC@fedora> <3fc9b17a-5046-3f92-f366-bd1876654306@redhat.com> Message-ID: Hi Gilles, I just had a look at your patch, cool, thanks for the work. ok, that good to start with a limited subsets of query types indeed, you're right. Ok, perfect for the patch to branch, I don't know why but I had the feeling that you were requesting for the branch to be merged back and not for the patch :D I just read my emails too quickly I suppose. About your code, I feel that we should extract the schemas from the base.py under neutron/api/graphql/schemas/ right now before the code became to large, that would then allows for a better granularity. Thanks. Le ven. 22 juin 2018 à 08:42, Gilles Dubreuil a écrit : > > > On 22/06/18 15:57, Flint WALRUS wrote: > > Hi everyone, > > Thanks for the updates and support, that appreciated. > > @Gilles, did you already implemented all the service types? > > > We have query types for networks and subnets for now. > Before we add more we are going to focus on oslo policies so we can access > and modify those items in respect of the existing security approach. > Then we will have a solid foundation to add more types. > > > > What is left to do? You already want to merge the feature branch with > master? > > > The feature branch graphql is the Proof of Concept and won't be merged to > master until we have it full ready to share/demonstrate it to others. > So we're pushing patches against that branch. The initial one to be > hopefully merged soon. > > > > @tristan I’d like to work on the feature branch but I’ll wait for gilles > answers as I don’t want to mess up having pieces of code everywhere. > > Thanks! > Le ven. 22 juin 2018 à 06:44, Gilles Dubreuil a > écrit : > >> >> On 22/06/18 09:21, Tristan Cacqueray wrote: >> >> Hi Flint, >> >> On June 21, 2018 5:32 pm, Flint WALRUS wrote: >> >> Hi everyone, sorry for the late answer but I’m currently trapped into a >> cluster issue with cinder-volume that doesn’t give me that much time. >> >> That being said, I’ll have some times to work on this feature during the >> summer (July/August) and so do some coding once I’ll have catched up with >> your work. >> >> That's great to hear! The next step is to understand how to deal with >> oslo policy and control objects access/modification. >> >> Did you created a specific tree or did you created a new graphql folder >> within the neutron/neutron/api/ path regarding the schemas etc? >> >> >> There is a feature/graphql branch were an initial patch[1] adds a new >> neutron/api/graphql directory as well as a new test_graphql.py >> functional tests. >> The api-paste is also updated to expose the '/graphql' http endpoint. >> >> Not sure if we want to keep on updating that change, or propose further >> code as new change on top of this skeleton? >> >> >> Makes sense to merge it, I think we have the base we needed to get going. >> I'll make it green so we can get merge it. >> >> >> Regards, >> -Tristan >> >> >> Le sam. 16 juin 2018 à 08:42, Tristan Cacqueray >> a >> écrit : >> >> On June 15, 2018 10:42 pm, Gilles Dubreuil wrote: >> > Hello, >> > >> > This initial patch [1] allows to retrieve networks, subnets. 
>> > >> > This is very easy, thanks to the graphene-sqlalchemy helper. >> > >> > The edges, nodes layer might be confusing at first meanwhile they make >> > the Schema Relay-compliant in order to offer re-fetching, pagination >> > features out of the box. >> > >> > The next priority is to set the unit test in order to implement >> mutations. >> > >> > Could someone help provide a base in order to respect Neutron test >> > requirements? >> > >> > >> > [1] [abandoned] >> >> Actually, the correct review (proposed on the feature/graphql branch) >> is: >> >> [1] https://review.openstack.org/575898 >> >> > >> > Thanks, >> > Gilles >> > >> > >> __________________________________________________________________________ >> >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> -- >> Gilles Dubreuil >> Senior Software Engineer - Red Hat - Openstack DFG Integration >> Email: gilles at redhat.com >> GitHub/IRC: gildub >> Mobile: +61 400 894 219 <+61%20400%20894%20219> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 <+61%20400%20894%20219> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Fri Jun 22 08:16:12 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 22 Jun 2018 08:16:12 +0000 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <1529131252.j1u6fixa3j.tristanC@fedora> <1529622920.4qf1x4aqgc.tristanC@fedora> <3fc9b17a-5046-3f92-f366-bd1876654306@redhat.com> Message-ID: <1529655250.tk79avv3wz.tristanC@fedora> On June 22, 2018 8:01 am, Flint WALRUS wrote: > About your code, I feel that we should extract the schemas from the base.py > under neutron/api/graphql/schemas/ right now before the code became to > large, that would then allows for a better granularity. > > Thanks. > Since this is the graphql branch, maybe we should use the neutron model directly by adding the graphene SQLAlchemyObjectType parent and the Meta class. 
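A hedged sketch of that approach with graphene-sqlalchemy -- the Neutron model
import path below is an assumption and may need adjusting:

from graphene import relay
from graphene_sqlalchemy import SQLAlchemyObjectType

from neutron.db.models_v2 import Network  # assumed model location

class NetworkType(SQLAlchemyObjectType):
    # Expose the Neutron SQLAlchemy model directly through graphene
    class Meta:
        model = Network
        interfaces = (relay.Node,)  # keeps the schema Relay-compliant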
-Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From usman.awais at gmail.com Fri Jun 22 09:04:35 2018 From: usman.awais at gmail.com (Usman Awais) Date: Fri, 22 Jun 2018 14:04:35 +0500 Subject: [openstack-dev] Openstack-Zun Service Appears down In-Reply-To: References: Message-ID: Hi Hongbin, Many thanks, got it running, that was awesom... :) The problem was unsynched time. I installed chrony and it started working. Now I am running into another problem; the networking of the container. The container gets started, I can shell into it through appcontainer API, it even gets the correct IP address of my private network (named priv-net) in openstack, through DHCP. But when I ping any address, even the address of the priv-net's gateway, it does nothing. I have following configuration neutron-openvswitch-agent is running neutron-ovs-cleanup is running neutron-destroy-patch-ports is running kuryr-libnetwork is running docker is running zun-compute is running The eth0 network card has standard configuration of an OVSBridge. When I create a new container it also creates taps and patch ports on the compute node. Now I am going to try to use kuryr script to test what happens with "bridged" and "host" networks. Muhammad Usman Awais On Thu, Jun 21, 2018 at 1:14 PM, Hongbin Lu wrote: > HI Muhammad, > > Here is the code (run in controller node) that decides whether a service > is up https://github.com/openstack/zun/blob/master/zun/api/servicegroup.py > . There are several possibilities to cause a service to be 'down': > 1. The service was being 'force_down' via API (e.g. explicitly issued a > command like "appcontainer service forcedown") > 2. The zun compute process is not doing the heartbeat for a certain period > of time (CONF.service_down_time). > 3. The zun compute process is doing the heartbeat properly but the time > between controller node and compute node is out of sync. > > In before, #3 is the common pitfall that people ran into. If it is not #3, > you might want to check if the zun compute process is doing the heartbeat > properly. Each zun compute process is running a periodic task to update its > state in DB: https://github.com/openstack/zun/blob/master/zun/ > servicegroup/zun_service_periodic.py . The call of ' report_state_up ' > will record this service is up in DB at current time. You might want to > check if this periodic task is running properly, or if the current state is > updated in the DB. > > Above is my best guess. Please feel free to follow it up with me or other > team members if you need further assistant for this issue. > > Best regards, > Hongbin > > On Wed, Jun 20, 2018 at 9:14 AM Usman Awais wrote: > >> Dear Zuners, >> >> I have installed RDO pike. I stopped openstack-nova-compute service on >> one of the hosts, and installed a zun-compute service. 
Although all the >> services seems to be running ok on both controller and compute but when I >> do >> >> openstack appcontainer service list >> >> It gives me following >> >> +----+--------------+-------------+-------+----------+------ >> -----------+---------------------+-------------------+ >> | Id | Host | Binary | State | Disabled | Disabled Reason | >> Updated At | Availability Zone | >> +----+--------------+-------------+-------+----------+------ >> -----------+---------------------+-------------------+ >> | 1 | node1.os.lab | zun-compute | down | False | None | >> 2018-06-20 13:14:31 | nova | >> +----+--------------+-------------+-------+----------+------ >> -----------+---------------------+-------------------+ >> >> I have checked all ports in both directions they are open, including etcd >> ports and others. All services are running, only docker service has the >> warning message saying "failed to retrieve docker-runc version: exec: >> \"docker-runc\": executable file not found in $PATH". There is also a >> message at zun-compute "/usr/lib64/python2.7/site- >> packages/sqlalchemy/sql/default_comparator.py:161: SAWarning: The >> IN-predicate on "container.uuid" was invoked with an empty sequence. This >> results in a contradiction, which nonetheless can be expensive to >> evaluate. Consider alternative strategies for improved performance." >> >> Please guide... >> >> Regards, >> Muhammad Usman Awais >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Fri Jun 22 11:53:56 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 22 Jun 2018 13:53:56 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 18 June 2018 Message-ID: <1529668436.812212.1416856536.294995F8@webmail.messagingengine.com> # Keystone Team Update - Week of 18 June 2018 ## News ### Default Roles Fallout Our change to automatically create the 'reader' and 'member' roles during bootstrap[1] caused some problems in the CI of other projects[2]. One problem was that projects were manually creating a 'Member' role, and with the database backend being case-insensitve, there would be a conflict with the 'member' role that keystone is now creating. The immediate fix is to ensure the clients in CI are checking for the 'member' role rather than the 'Member' role before trying to create either role, but in the longer term, clients would benefit from decoupling the API case sensitivity from the configuration of the database backend[3]. Another problem was a bug related to implied roles in trusts[4]. If a role implies another, but a trust is created with both roles explicitly, the token will contain duplicate role names, which breaks the usage of trusts and hit Sahara. This issue would have existed before, but was only discovered now that we have implied roles by default. 
[1] https://review.openstack.org/572243 [2] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-06-19-16.00.log.html#l-24 [3] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-06-19-16.00.log.html#l-175 [4] https://bugs.launchpad.net/keystone/+bug/1778109 ### Limits Schema Restructuring Morgan discovered some problems with the database schemas[5] for registered limits and project limits and proposed that we can improve performance and reduce data duplication by doing some restructuring and adding some indexes. The migration path to the new schema is tricky[6] and we're still trying to come up with a strategy that avoids triggers[7]. [5] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-06-19-16.00.log.html#l-184 [6] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-06-19.log.html#t2018-06-19T21:04:05 [7] https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad ### No-nitpicking Culture Following the community discussion on fostering a healthier culture by avoiding needlessly nitpicking changes[8], the keystone team had a thoughtful discussion on what constitutes nitpicking and how we should be voting on changes[9]. Context is always important, and considering who the author is, how significant the imperfection is, and how much effort it will take the author to correct it should to be considered when deciding whether to ask them to change something about their patch versus proposing yor own fix in a folllowup. I've always been proud of keystone's no-nitpicking culture and it's encouraging to see continuous introspection. [8] https://governance.openstack.org/tc/reference/principles.html [9] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-06-19.log.html#t2018-06-19T21:18:01 ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 16 changes this week, including client support for limits and a major bugfix for implied roles. ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 57 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots, so their authors are waiting for any feedback. ## Bugs This week we opened 5 new bugs and closed 4. Bugs opened (5) Bug #1777671 (keystone:Medium) opened by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1777671 Bug #1777892 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1777892 Bug #1777893 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1777893 Bug #1778023 (keystone:Undecided) opened by kirandevraaj https://bugs.launchpad.net/keystone/+bug/1778023 Bug #1778109 (keystone:Undecided) opened by Jeremy Freudberg https://bugs.launchpad.net/keystone/+bug/1778109 Bugs closed (2) Bug #1758460 (keystone:Low) https://bugs.launchpad.net/keystone/+bug/1758460 Bug #1774654 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1774654 Bugs fixed (2) Bug #1754184 (keystone:Medium) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1754184 Bug #1774229 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1774229 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html This week is our feature proposal freeze deadline. All our major efforts seem to have at least one initial patch proposed for them. The keystone feature freeze is only 3 weeks away. 
The final release for non-client libraries is the week after that[10], so we need to ensure that all the work needed especially for keystonemiddleware is completed by them. [10] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131732.html ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From witold.bedyk at est.fujitsu.com Fri Jun 22 12:43:03 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Fri, 22 Jun 2018 12:43:03 +0000 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: References: Message-ID: <4afe855aaec3468d8b81091a84e0e181@R01UKEXCASM126.r01.fujitsu.local> Hi Julien and Mehdi, I obviously care more about Monasca and integration with other OpenStack projects. If the publisher wouldn't be an important piece of the puzzle I wouldn't be pushing this. I have stressed a couple of times that we are ready to take the complete responsibility for the code and its maintenance. If manpower is an issue, what about an idea of adding one or two of us to the core reviewers group? We don't have the expertise to approve the changes in the core agent, but we could help on simple maintenance tasks and of course keeping our publisher running and bugfixing. I know it's not how it normally works, but in that case it seems to be a clear win-win situation. What do you say? Wish you a nice weekend Witek > -----Original Message----- > From: Julien Danjou > Sent: Mittwoch, 20. Juni 2018 16:26 > To: Bedyk, Witold > Cc: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [telemetry][ceilometer][monasca] Monasca > publisher for Ceilometer > > On Wed, Jun 20 2018, Bedyk, Witold wrote: > > Hi Witek, > > It's not a transparency issue. It's a manpower issue. We are only 2 developers > active on Ceilometer: me and Mehdi. Neither me nor Mehdi wants to > maintain Monasca stuff; meaning, we don't want to spend time reviewing > patches, having bug opened, or whatever. There's no interest for us in that. > > THe Prometheus publisher you mention has been written by Mehdi and > we've approved it because it fits the roadmap of the Ceilometer developers > that we are — and, again we're just two. > > We have other projects — such as Panko — that provide Ceilometer > publishers and their code is in Panko, not in Ceilometer. It's totally possible > and sane. > > Now, if you really, really, care that much about Ceilometer and its integration > with Monasca, and if you have an amazing roadmap that'll make Ceilometer > better and awesome, please, do start with that. > > Right now it just looks like more work for us with no gain. :( > > > could you please add some transparency to the decision process on > > which publishers are acceptable and which not? Just two months ago you > > have added new Prometheus publisher. That's around the same time > when > > our code was submitted to review. > > > > We have delivered tested code and offer its maintenance. The code is > > self-contained and does not touch Ceilometer core. If it's broken, > > just Monasca interface won't work. > > > > Please reconsider it again. > > > > Greetings > > Witek > > > > > >> -----Original Message----- > >> From: Julien Danjou > >> Sent: Mittwoch, 20. 
Juni 2018 14:07 > >> To: Bedyk, Witold > >> Cc: OpenStack Development Mailing List (not for usage questions) > >> > >> Subject: Re: [openstack-dev] [telemetry][ceilometer][monasca] Monasca > >> publisher for Ceilometer > >> > >> On Wed, Jun 20 2018, Bedyk, Witold wrote: > >> > >> Same as Gordon. You should maintain that in your own repo. > >> There's just no bandwidth in Ceilometer right now for things like that. > >> :( > >> > >> > Hello Telemetry Team, > >> > > >> > any opinion on this? > >> > > >> > Best greetings > >> > Witek > >> > > >> > > >> >> -----Original Message----- > >> >> From: Bedyk, Witold > >> >> Sent: Mittwoch, 13. Juni 2018 10:28 > >> >> To: OpenStack Development Mailing List (not for usage questions) > >> >> > >> >> Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca > >> >> publisher for Ceilometer > >> >> > >> >> Hello Telemetry Team, > >> >> > >> >> We would like to contribute a Monasca publisher to Ceilometer > >> >> project [1] and add it to the list of currently supported transports [2]. > >> >> The goal of the plugin is to send Ceilometer samples to Monasca API. > >> >> > >> >> I understand Gordon's concerns about adding maintenance overhead > >> >> for Ceilometer team which he expressed in review but the code is > >> >> pretty much self-contained and does not affect Ceilometer core. > >> >> It's not our intention to shift the maintenance effort and Monasca > >> >> team should still be responsible for this code. > >> >> > >> >> Adding this plugin will help in terms of interoperability of both > >> >> projects and can be useful for wider parts of the OpenStack > community. > >> >> > >> >> Please let me know your thoughts. I hope we can get this code > merged. > >> >> > >> >> Cheers > >> >> Witek > >> >> > >> >> > >> >> [1] https://review.openstack.org/562400 > >> >> [2] > >> >> https://docs.openstack.org/ceilometer/latest/contributor/architect > >> >> ure > >> >> .html > >> >> #processing-the-data > >> > > >> > >> -- > >> Julien Danjou > >> /* Free Software hacker > >> https://julien.danjou.info */ > > > > > > -- > Julien Danjou > ;; Free Software hacker > ;; https://julien.danjou.info From mark at stackhpc.com Fri Jun 22 13:32:59 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 22 Jun 2018 14:32:59 +0100 Subject: [openstack-dev] [kayobe] Kayobe update Message-ID: Hi all, I thought it might be useful to start an irregular update on Kayobe, covering recent and upcoming events. This follows on from the update blog article [1] that I wrote a couple of months ago (which includes a video and asciinema demo!). # OpenStack-related The switch to using OpenStack infrastructure was completed successfully, and went relatively smoothly. The more thorough CI testing enabled by Zuul has helped to increase confidence in new changes, and has caught a few bugs that might previously have slipped through the net. We have a good baseline of test coverage, but should continue working to ensure coverage continues to improve. # Queens & beyond Kayobe tends to trail the OpenStack and Kolla release cycles somewhat, largely driven by demand for new Kayobe features on stable OpenStack releases. We recently added support support for deploying the Queens release to the Kayobe master branch, and now enforce a stable policy on the stable/pike branch. There is a patch [2] up to switch to master branches of dependencies that allows us to keep on top of the required changes for Rocky. 
# Recently added features * Custom Ansible playbooks [3] allow us to keep the core of Kayobe relatively lightweight, while supporting arbitrary extensions. * Automatic naming of bare metal compute nodes based on inventory & IPMI address [4] * Separate cleaning network for ironic [5] # Upcoming features * Deployment pipeline configuration [6] will provide enhancements to the kayobe-config structure to support deploying to multiple environments (think dev/staging/prod) from a single kayobe-config repository * Ansible 2.5 support [7] * Admin/SSH network [8] allows us to stop using the provisioning network for SSH access * Support for deploying Monasca [9] (most work is in kolla) # Will Szumski Based on Will's recent contributions to Kayobe, and how quickly he's come up to speed, he's been added to kayobe-core. Congratulations Will! As always, feel free to drop into IRC with questions: #openstack-kayobe [1] http://www.stackhpc.com/kayobe-update.html [2] https://review.openstack.org/#/c/568804 [3] https://kayobe.readthedocs.io/en/latest/custom-ansible-playbooks.html [4] https://storyboard.openstack.org/#!/story/2002176 [5] https://storyboard.openstack.org/#!/story/2002097 [6] https://storyboard.openstack.org/#!/story/2002009 [7] https://storyboard.openstack.org/#!/story/2001649 [8] https://storyboard.openstack.org/#!/story/2002096 [9] https://storyboard.openstack.org/#!/story/2001627 Cheers, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From julien at danjou.info Fri Jun 22 13:47:38 2018 From: julien at danjou.info (Julien Danjou) Date: Fri, 22 Jun 2018 15:47:38 +0200 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: <4afe855aaec3468d8b81091a84e0e181@R01UKEXCASM126.r01.fujitsu.local> (Witold Bedyk's message of "Fri, 22 Jun 2018 12:43:03 +0000") References: <4afe855aaec3468d8b81091a84e0e181@R01UKEXCASM126.r01.fujitsu.local> Message-ID: On Fri, Jun 22 2018, Bedyk, Witold wrote: > I know it's not how it normally works, but in that case it seems to be a clear > win-win situation. I'm sorry, I really don't see what's the win for Ceilometer or its developer. Could you elaborate? -- Julien Danjou ;; Free Software hacker ;; https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From witold.bedyk at est.fujitsu.com Fri Jun 22 14:40:01 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Fri, 22 Jun 2018 14:40:01 +0000 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: References: <4afe855aaec3468d8b81091a84e0e181@R01UKEXCASM126.r01.fujitsu.local> Message-ID: <0816b423428448caa36a53583f4000a1@R01UKEXCASM126.r01.fujitsu.local> You've said lacking manpower is currently the main issue in Ceilometer which stops you from accepting new publishers and that you don't want to add maintenance overhead. We're offering help on maintaining the project. Cheers Witek > -----Original Message----- > From: Julien Danjou > Sent: Freitag, 22. Juni 2018 15:48 > To: Bedyk, Witold > Cc: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [telemetry][ceilometer][monasca] Monasca > publisher for Ceilometer > > On Fri, Jun 22 2018, Bedyk, Witold wrote: > > > I know it's not how it normally works, but in that case it seems to be > > a clear win-win situation. 
> > I'm sorry, I really don't see what's the win for Ceilometer or its developer. > Could you elaborate? > > -- > Julien Danjou > ;; Free Software hacker > ;; https://julien.danjou.info From cdent+os at anticdent.org Fri Jun 22 14:45:24 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 22 Jun 2018 15:45:24 +0100 (BST) Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: <0816b423428448caa36a53583f4000a1@R01UKEXCASM126.r01.fujitsu.local> References: <4afe855aaec3468d8b81091a84e0e181@R01UKEXCASM126.r01.fujitsu.local> <0816b423428448caa36a53583f4000a1@R01UKEXCASM126.r01.fujitsu.local> Message-ID: On Fri, 22 Jun 2018, Bedyk, Witold wrote: > You've said lacking manpower is currently the main issue in > Ceilometer which stops you from accepting new publishers and that > you don't want to add maintenance overhead. I've lost track of the details of the thread, can you remind me why keeping the plugin as an external (perhaps packaged with monasca itself) is not a good option? As I understood things, that was the benefit of the plugin architecture. > We're offering help on maintaining the project. I think this could potentially be a great option, if everyone involved thinks it is a good idea, but it is somewhat orthogonal to the question above about being an external plugin. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From openstack at fried.cc Fri Jun 22 14:46:41 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 22 Jun 2018 09:46:41 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 18 June 2018 In-Reply-To: <1529668436.812212.1416856536.294995F8@webmail.messagingengine.com> References: <1529668436.812212.1416856536.294995F8@webmail.messagingengine.com> Message-ID: <6598808f-e0ac-752d-67dc-6fe4e7609561@fried.cc> Also: keystoneauth1 3.9.0 was released. Its new feature is the ability to set raise_exc on the Adapter object so you don't have to do it per request. Here's a patch that makes use of the feature: https://review.openstack.org/#/c/577437/ -efried On 06/22/2018 06:53 AM, Colleen Murphy wrote: > # Keystone Team Update - Week of 18 June 2018 > > ## News > > ### Default Roles Fallout > > Our change to automatically create the 'reader' and 'member' roles during bootstrap[1] caused some problems in the CI of other projects[2]. One problem was that projects were manually creating a 'Member' role, and with the database backend being case-insensitve, there would be a conflict with the 'member' role that keystone is now creating. The immediate fix is to ensure the clients in CI are checking for the 'member' role rather than the 'Member' role before trying to create either role, but in the longer term, clients would benefit from decoupling the API case sensitivity from the configuration of the database backend[3]. > > Another problem was a bug related to implied roles in trusts[4]. If a role implies another, but a trust is created with both roles explicitly, the token will contain duplicate role names, which breaks the usage of trusts and hit Sahara. This issue would have existed before, but was only discovered now that we have implied roles by default. 
> > [1] https://review.openstack.org/572243 > [2] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-06-19-16.00.log.html#l-24 > [3] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-06-19-16.00.log.html#l-175 > [4] https://bugs.launchpad.net/keystone/+bug/1778109 > > ### Limits Schema Restructuring > > Morgan discovered some problems with the database schemas[5] for registered limits and project limits and proposed that we can improve performance and reduce data duplication by doing some restructuring and adding some indexes. The migration path to the new schema is tricky[6] and we're still trying to come up with a strategy that avoids triggers[7]. > > [5] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-06-19-16.00.log.html#l-184 > [6] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-06-19.log.html#t2018-06-19T21:04:05 > [7] https://etherpad.openstack.org/p/keystone-unified-limit-migration-notepad > > ### No-nitpicking Culture > > Following the community discussion on fostering a healthier culture by avoiding needlessly nitpicking changes[8], the keystone team had a thoughtful discussion on what constitutes nitpicking and how we should be voting on changes[9]. Context is always important, and considering who the author is, how significant the imperfection is, and how much effort it will take the author to correct it should to be considered when deciding whether to ask them to change something about their patch versus proposing yor own fix in a folllowup. I've always been proud of keystone's no-nitpicking culture and it's encouraging to see continuous introspection. > > [8] https://governance.openstack.org/tc/reference/principles.html > [9] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-06-19.log.html#t2018-06-19T21:18:01 > > ## Recently Merged Changes > > Search query: https://bit.ly/2IACk3F > > We merged 16 changes this week, including client support for limits and a major bugfix for implied roles. > > ## Changes that need Attention > > Search query: https://bit.ly/2wv7QLK > > There are 57 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots, so their authors are waiting for any feedback. > > ## Bugs > > This week we opened 5 new bugs and closed 4. > > Bugs opened (5) > Bug #1777671 (keystone:Medium) opened by Morgan Fainberg https://bugs.launchpad.net/keystone/+bug/1777671 > Bug #1777892 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1777892 > Bug #1777893 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1777893 > Bug #1778023 (keystone:Undecided) opened by kirandevraaj https://bugs.launchpad.net/keystone/+bug/1778023 > Bug #1778109 (keystone:Undecided) opened by Jeremy Freudberg https://bugs.launchpad.net/keystone/+bug/1778109 > > Bugs closed (2) > Bug #1758460 (keystone:Low) https://bugs.launchpad.net/keystone/+bug/1758460 > Bug #1774654 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1774654 > > Bugs fixed (2) > Bug #1754184 (keystone:Medium) fixed by wangxiyuan https://bugs.launchpad.net/keystone/+bug/1754184 > Bug #1774229 (keystone:Medium) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1774229 > > ## Milestone Outlook > > https://releases.openstack.org/rocky/schedule.html > > This week is our feature proposal freeze deadline. 
All our major efforts seem to have at least one initial patch proposed for them. > > The keystone feature freeze is only 3 weeks away. The final release for non-client libraries is the week after that[10], so we need to ensure that all the work needed especially for keystonemiddleware is completed by them. > > [10] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131732.html > > ## Help with this newsletter > > Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter > Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From julien at danjou.info Fri Jun 22 14:51:06 2018 From: julien at danjou.info (Julien Danjou) Date: Fri, 22 Jun 2018 16:51:06 +0200 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: <0816b423428448caa36a53583f4000a1@R01UKEXCASM126.r01.fujitsu.local> (Witold Bedyk's message of "Fri, 22 Jun 2018 14:40:01 +0000") References: <4afe855aaec3468d8b81091a84e0e181@R01UKEXCASM126.r01.fujitsu.local> <0816b423428448caa36a53583f4000a1@R01UKEXCASM126.r01.fujitsu.local> Message-ID: On Fri, Jun 22 2018, Bedyk, Witold wrote: > You've said lacking manpower is currently the main issue in Ceilometer which > stops you from accepting new publishers and that you don't want to add > maintenance overhead. > > We're offering help on maintaining the project. Oh cool, I misunderstood. I though you were offering only help for maintaining your publisher. That sounds great. We never worked with each other yet, so it'd be hard for us to add you right away to the core team — but we'd be more than happy to see how you can help and bring you in ASAP! -- Julien Danjou ;; Free Software hacker ;; https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From witold.bedyk at est.fujitsu.com Fri Jun 22 14:59:01 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Fri, 22 Jun 2018 14:59:01 +0000 Subject: [openstack-dev] [telemetry][ceilometer][monasca] Monasca publisher for Ceilometer In-Reply-To: References: <4afe855aaec3468d8b81091a84e0e181@R01UKEXCASM126.r01.fujitsu.local> <0816b423428448caa36a53583f4000a1@R01UKEXCASM126.r01.fujitsu.local> Message-ID: Sounds great. Going to get a cold beer to celebrate that ;) Witek > -----Original Message----- > From: Julien Danjou > Sent: Freitag, 22. Juni 2018 16:51 > To: Bedyk, Witold > Cc: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] [telemetry][ceilometer][monasca] Monasca > publisher for Ceilometer > > On Fri, Jun 22 2018, Bedyk, Witold wrote: > > > You've said lacking manpower is currently the main issue in Ceilometer > > which stops you from accepting new publishers and that you don't want > > to add maintenance overhead. > > > > We're offering help on maintaining the project. > > Oh cool, I misunderstood. I though you were offering only help for > maintaining your publisher. That sounds great. 
> We never worked with each other yet, so it'd be hard for us to add you right > away to the core team — but we'd be more than happy to see how you can > help and bring you in ASAP! > > -- > Julien Danjou > ;; Free Software hacker > ;; https://julien.danjou.info From sundar.nadathur at intel.com Fri Jun 22 15:06:49 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Fri, 22 Jun 2018 08:06:49 -0700 Subject: [openstack-dev] [Nova] [Cyborg] [Glance] Updated spec for Cyborg-Nova-Glance interaction, including os-acc Message-ID: Hello folks, The os-acc spec [1] has been updated substantially. Please review the new version is https://review.openstack.org/#/c/577438/ . The background for the update is that several important aspects were raised as comments on the previous spec ([1], [2]). An alternative workflow for attaching accelerators to instances was proposed [3], to which I responded with [4] and [5]. Finally, with another IRC discussion [6], it was concluded that the design/flow in [4], [5] fits the bill. The new version of the os-acc spec incorporates that discussion. The main points that were raised and addressed are these: * Some architectures like Power treat devices differently. The os-acc framework must provide for plugins to handle such variation. Done. * The os-acc framework should be more closely patterned after the os-vif framework and Neutron flow. This is a bit debatable since Neutron ports and Cyborg accelerators differ in some key respects, though the os-acc library can be structured like os-vif. I have attempted to compare and contrast the os-vif and os-acc approaches. This discussion is important because we may have programmable NICs based on FPGAs. Then Cyborg, Neutron and Nova are going to get tangled in a triangle. (If you throw Glance in for FPGA images, that leads quickly to a quadrilateral. Add Cinder for storage-related FPGA devices, and we get pulled into a pentagram. Geometry is scary. Just saying. ;-} ) * Not enough detail in [1]. Mea culpa. Hopefully fixed now. [1] https://review.openstack.org/#/c/566798/ [2] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-06-14.log.html#t2018-06-14T18:38:28 [3] https://review.openstack.org/#/c/575545/1/specs/rocky/approved/nova-cyborg-flow.rst [4] https://etherpad.openstack.org/p/os-acc-discussion [5] https://docs.google.com/drawings/d/1gbfimiyA1f5sTeobN9mpavEkHT7Z_ScNUqimOkdIYGA/edit [6] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-06-18.log.html#t2018-06-18T22:07:02 Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Jun 22 15:13:05 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 22 Jun 2018 16:13:05 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-25 Message-ID: HTML: https://anticdent.org/placement-update-18-25.html This is placement update 18-25, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). This is a "contract" version, meaning that the spec and other lists do no have new additions, they are updated to remove what has been merged or abandoned. This is done to encourage people to review existing stuff before jumping on whatever the new shiny is. With this edition I'm adding a Documentation theme as it was something that's been pointed out recently as a gap. 
# Most Important Nested allocation candidates are getting very close, but remain a critical piece of functionality. After that is making sure that we are progressing on the /reshapher functionality and bringing the various virt drivers into line with all this nice new functionality (which mostly means ProviderTrees). All that nice new functionality means bugs. Experiment. Break stuff. Find bugs. Fix them. # What's Changed The optional placement database changes merged. This means that if [placement_database]/connection is set, that's the target database for placement data, instead of the nova_api database connection. Support for consumer generations in allocations has merged. The PlacementDirect non-HTTP interface to placement has merged. Most placement unit tests no longer rely on the nova base test classs. The spec for Reshape Provider Trees (the thing driving /reshaper) merged. # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 17, one more than last week. We've got some work either starting or killing these. * [In progress placement bugs](https://goo.gl/vzGGDQ) 8, -1 on last time. # Questions In [IRC yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2018-06-21.log.html#t2018-06-21T13:21:14) we had an extensive discussion about being able to set custom resource classes on the resource provider representing a compute node, outside the virt driver. At the moment the virt driver will clobber it. Is this what we always want? # Specs Total last week: 13. Now: 12 Spec-freeze has passed, so presumably exceptions will be required for these. For those that are just not going to happen in Rocky, I guess we can start pushing them into Stein. * VMware: place instances on resource pool (using update_provider_tree) * Proposes NUMA topology with RPs * Account for host agg allocation ratio in placement * Support default allocation ratios This one has two +2, but is pending some decision on spec-freeze. * Spec on preemptible servers * Standardize CPU resource tracking * Propose counting quota usage from placement * Add history behind nullable project_id and user_id * Placement: any traits in allocation_candidate query * Placement: support mixing required traits with any traits * [WIP] Support Placement in Cinder * Count quota based on resource class # Main Themes ## Documentation This is a section for reminding us to document all the fun stuff we are enabling. Open areas include: * Documenting optional placement database. A bug, [1778227](https://bugs.launchpad.net/nova/+bug/1778227) has been created to track this. * "How to deploy / model shared disk. Seems fairly straight-forward, and we could even maybe create a multi-node ceph job that does this - wouldn't that be awesome?!?!", says an enthusiastic Matt Riedemann. * The when's and where's of re-shaping and VGPUs. ## Nested providers in allocation candidates As far as I can tell the main thing left here is to turn it on in a microversion. That code is at: * There have been some tweaks to account for the behavior leakage discussed last week. ## Consumer Generations There are a couple of patches left on the consumer generation topic: * Is someone already working on code for making use of this in the resource tracker? ## Reshape Provider Trees This allows moving inventory and allocations that were on resource provider A to resource provider B in an atomic fashion. The blueprint topic is: * There are WIPs for the HTTP parts and the resource tracker parts, on that topic. 
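To make the shared-disk item under Documentation above a little more concrete, here's a rough sketch of how the modelling might look with osc-placement (names, UUIDs and sizes are invented, and some of these commands are part of the in-flight osc-placement improvements listed further down, so availability depends on how recent your osc-placement is):

    $ openstack resource provider create shared_disk_pool
    $ openstack resource provider inventory set $RP_UUID --resource DISK_GB=4096
    $ openstack resource provider trait set --trait MISC_SHARES_VIA_AGGREGATE $RP_UUID
    $ openstack resource provider aggregate set --aggregate $AGG_UUID $RP_UUID

The idea being that the compute node providers which consume the shared disk sit in the same aggregate, so allocation candidates can span the sharing provider.
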
## Mirror Host Aggregates I thought this was done but there's one thing left. A command line tool: * ## Extraction The optional placement database stuff has merged, and is running in the nova-next job. As mentioned above there are documentation tasks to do with this. A while back, Jay made a first pass at an [os-resource-classes](https://github.com/jaypipes/os-resource-classes/), which needs some additional eyes on it. I personally thought it might be heavier than required. If you have ideas please share them. An area we will need to prepare for is dealing with the various infra and co-gating issues that will come up once placement is extracted. We also need to think about how to manage the fixtures currently made available by nova that we might need or want to use in placement. Some of them might be worth sharing. How should we do that? # Other 23 entries last week. 18 now. Nice merging. But we've added quite a few, we just don't see them because this is a contract week. * Purge comp_node and res_prvdr records during deletion of cells/hosts * A huge pile of improvements to osc-placement * Get resource provider by uuid or name (osc-placement) * placement: Make API history doc more consistent * Tighten up ReportClient use of generation * Add unit test for non-placement resize * cover migration cases with functional tests * Bug fixes for sharing resource providers * Move refresh time from report client to prov tree * PCPU resource class * rework how we pass candidate request information * add root parent NULL online migration * add resource_requests field to RequestSpec * normalize_name helper (in os-traits) * Convert driver supported capabilities to compute node provider traits * Use placement.inventory.inuse in report client * ironic: Report resources as reserved when needed * Test for multiple limit/group_policy qparams # End Hi. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sundar.nadathur at intel.com Fri Jun 22 15:13:32 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Fri, 22 Jun 2018 08:13:32 -0700 Subject: [openstack-dev] [Nova] [Cyborg] [Glance] Updated spec for Cyborg-Nova-Glance interaction, including os-acc In-Reply-To: References: Message-ID: <02f058b0-46bd-c817-c4f4-29c06ab730e7@intel.com> s/review the new version is/review the new version/ Regards, Sundar On 6/22/2018 8:06 AM, Nadathur, Sundar wrote: > Hello folks, > The os-acc spec [1] has been updated substantially. Please review the > new version is https://review.openstack.org/#/c/577438/ . > > The background for the update is that several important aspects were > raised as comments on the previous spec ([1], [2]). An alternative > workflow for attaching accelerators to instances was proposed [3], to > which I responded with [4] and [5]. Finally, with another IRC > discussion [6], it was concluded that the design/flow in [4], [5] fits > the bill. The new version of the os-acc spec incorporates that discussion. > > The main points that were raised and addressed are these: > > * Some architectures like Power treat devices differently. The os-acc > framework must provide for plugins to handle such variation. Done. > > * The os-acc framework should be more closely patterned after the > os-vif framework and Neutron flow. This is a bit debatable since > Neutron ports and Cyborg accelerators differ in some key respects, > though the os-acc library can be structured like os-vif. I have > attempted to compare and contrast the os-vif and os-acc approaches. 
> > This discussion is important because we may have programmable NICs > based on FPGAs. Then Cyborg, Neutron and Nova are going to get tangled > in a triangle. (If you throw Glance in for FPGA images, that leads > quickly to a quadrilateral. Add Cinder for storage-related FPGA > devices, and we get pulled into a pentagram. Geometry is scary. Just > saying. ;-} ) > > * Not enough detail in [1]. Mea culpa. Hopefully fixed now. > > [1] https://review.openstack.org/#/c/566798/ > > [2] > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-06-14.log.html#t2018-06-14T18:38:28 > > > [3] > https://review.openstack.org/#/c/575545/1/specs/rocky/approved/nova-cyborg-flow.rst > > > [4] https://etherpad.openstack.org/p/os-acc-discussion > > [5] > https://docs.google.com/drawings/d/1gbfimiyA1f5sTeobN9mpavEkHT7Z_ScNUqimOkdIYGA/edit > > > [6] > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-06-18.log.html#t2018-06-18T22:07:02 > > > Regards, > Sundar > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Fri Jun 22 17:25:39 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 22 Jun 2018 13:25:39 -0400 Subject: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library In-Reply-To: <20180621203751.GB17928@sm-workstation> References: <1529534483-sup-9862@lrrr.local> <20180621203751.GB17928@sm-workstation> Message-ID: On 21/06/18 16:37, Sean McGinnis wrote: >> >> Apparently heat-translator has a healthy ecosystem of contributors and >> users, but not of maintainers, and it remaining a deliverable of the Heat >> project is doing nothing to alleviate the latter problem. I'd like to find >> it a home that _would_ help. >> > > I'd be interested to hear thoughts if this is somewhere where we (the TC) > should step in and make a few people cores on this project? Let's save that remedy for projects that are unresponsive. > Or are the existing > contributors a healthy amount but not involved enough to trust to be cores? heat-translator cores are aware of the problem and are theoretically on the lookout for new cores, but I presume there's nobody with the track record of reviews to nominate yet. - ZB From sean.mcginnis at gmx.com Fri Jun 22 17:52:55 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 22 Jun 2018 12:52:55 -0500 Subject: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library In-Reply-To: References: <1529534483-sup-9862@lrrr.local> <20180621203751.GB17928@sm-workstation> Message-ID: <20180622175255.GA18811@sm-workstation> > > > Apparently heat-translator has a healthy ecosystem of contributors and > > > users, but not of maintainers, and it remaining a deliverable of the Heat > > > project is doing nothing to alleviate the latter problem. I'd like to find > > > it a home that _would_ help. > > > > > > > I'd be interested to hear thoughts if this is somewhere where we (the TC) > > should step in and make a few people cores on this project? > > Let's save that remedy for projects that are unresponsive. > > > Or are the existing > > contributors a healthy amount but not involved enough to trust to be cores? 
> > heat-translator cores are aware of the problem and are > theoretically on the lookout for new cores, but I presume there's nobody > with the track record of reviews to nominate yet. > > - ZB > Great, thanks Zane. I don't think the TC should step on any toes, so I'm happy that it doesn't appear to be to the point where that would be necessary. From ed at leafe.com Fri Jun 22 20:47:41 2018 From: ed at leafe.com (Ed Leafe) Date: Fri, 22 Jun 2018 15:47:41 -0500 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> Message-ID: <348CAB95-8910-44DF-8DBE-B2CA85430589@leafe.com> On Jun 15, 2018, at 5:42 PM, Gilles Dubreuil wrote: > > This initial patch [1] allows to retrieve networks, subnets. > > This is very easy, thanks to the graphene-sqlalchemy helper. > > The edges, nodes layer might be confusing at first meanwhile they make the Schema Relay-compliant in order to offer re-fetching, pagination features out of the box. > > The next priority is to set the unit test in order to implement mutations. > > Could someone help provide a base in order to respect Neutron test requirements? Glad to see some progress! We have migrated the API-SIG from Launchpad to StoryBoard [0], specifically so that your group has a place to record stories, tasks, etc. Please feel free to use this to help coordinated your work. [0] https://storyboard.openstack.org/#!/project/1039 -- Ed Leafe From gael.therond at gmail.com Fri Jun 22 20:53:45 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Fri, 22 Jun 2018 22:53:45 +0200 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: <348CAB95-8910-44DF-8DBE-B2CA85430589@leafe.com> References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <348CAB95-8910-44DF-8DBE-B2CA85430589@leafe.com> Message-ID: Oh! That’s its truly a sweet sweet attention, that will indeed really help us to focus on what we have to do without having to goes through an extensive email back and forth :-) Thanks a lot!! Le ven. 22 juin 2018 à 22:48, Ed Leafe a écrit : > On Jun 15, 2018, at 5:42 PM, Gilles Dubreuil wrote: > > > > This initial patch [1] allows to retrieve networks, subnets. > > > > This is very easy, thanks to the graphene-sqlalchemy helper. > > > > The edges, nodes layer might be confusing at first meanwhile they make > the Schema Relay-compliant in order to offer re-fetching, pagination > features out of the box. > > > > The next priority is to set the unit test in order to implement > mutations. > > > > Could someone help provide a base in order to respect Neutron test > requirements? > > Glad to see some progress! > > We have migrated the API-SIG from Launchpad to StoryBoard [0], > specifically so that your group has a place to record stories, tasks, etc. > Please feel free to use this to help coordinated your work. > > [0] https://storyboard.openstack.org/#!/project/1039 > > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gdubreui at redhat.com Fri Jun 22 23:51:52 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Sat, 23 Jun 2018 09:51:52 +1000 Subject: [openstack-dev] [neutron][api[[graphql] A starting point In-Reply-To: References: <847cb345-1bc7-f3cf-148a-051c4a306a4b@redhat.com> <348CAB95-8910-44DF-8DBE-B2CA85430589@leafe.com> Message-ID: Great, thanks to the API SIG! On 23/06/18 06:53, Flint WALRUS wrote: > Oh! That’s its truly a sweet sweet attention, that will indeed really > help us to focus on what we have to do without having to goes through > an extensive email back and forth :-) > > Thanks a lot!! > Le ven. 22 juin 2018 à 22:48, Ed Leafe > a écrit : > > On Jun 15, 2018, at 5:42 PM, Gilles Dubreuil > wrote: > > > > This initial patch [1]  allows to retrieve networks, subnets. > > > > This is very easy, thanks to the graphene-sqlalchemy helper. > > > > The edges, nodes layer might be confusing at first meanwhile > they make the Schema Relay-compliant in order to offer > re-fetching, pagination features out of the box. > > > > The next priority is to set the unit test in order to implement > mutations. > > > > Could someone help provide a base in order to respect Neutron > test requirements? > > Glad to see some progress! > > We have migrated the API-SIG from Launchpad to StoryBoard [0], > specifically so that your group has a place to record stories, > tasks, etc. Please feel free to use this to help coordinated your > work. > > [0] https://storyboard.openstack.org/#!/project/1039 > > > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Sat Jun 23 04:29:25 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 23 Jun 2018 00:29:25 -0400 Subject: [openstack-dev] Openstack-Zun Service Appears down In-Reply-To: References: Message-ID: Hi Muhammad, I am not sure what is the exact problem, but here is the list of things you might want to check: 1. Make sure the security group is open. This document explains how to find the security group(s) of the container: https://docs.openstack.org/zun/latest/admin/security-groups.html#find-container-s-security-groups . 2. Check if you can ping the container from outside $ NET_ID=$(openstack network show private | awk '/ id /{print $4}') $ sudo ip netns | grep $NET_ID qdhcp-6d688072-a0c3-4f1c-979e-2d1882564931 $ sudo ip netns exec qdhcp-6d688072-a0c3-4f1c-979e-2d1882564931 ping 10.0.0.9 PING 10.0.0.9 (10.0.0.9) 56(84) bytes of data. 64 bytes from 10.0.0.9: icmp_seq=1 ttl=64 time=0.845 ms 64 bytes from 10.0.0.9: icmp_seq=2 ttl=64 time=0.258 ms ... 3. Check if you can ping outside from the container $ zun list ... | 2c5d01ef-11f9-46f6-8ef2-da59914b6a10 | pi-24-container | nginx | Running | None | 10.0.0.9, fd66:a11d:c60:0:f816:3eff:fe9c:8f46 | [80] | ... 
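# The next few commands map the Zun container onto the Docker container that
# backs it: `zun show | awk` pulls out the container UUID, `docker ps | grep`
# finds the Docker container Zun created for it (Zun names them zun-<uuid>),
# `docker inspect -f {{.State.Pid}}` returns its init PID, and
# `nsenter -t <pid> -n` then runs ping inside that container's network
# namespace, so egress is tested exactly as the container sees it.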
$ CONTAINER_ID=$(zun show pi-24-container | awk '/ uuid /{print $4}') $ docker ps | grep $CONTAINER_ID f9cd8aa9a911 nginx:latest "nginx -g 'daemon of…" 21 minutes ago Up 21 minutes zun-2c5d01ef-11f9-46f6-8ef2-da59914b6a10 $ docker inspect -f {{.State.Pid}} f9cd8aa9a911 15001 $ sudo nsenter -t 15001 -n ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=49 time=1.46 ms 64 bytes from 8.8.8.8: icmp_seq=2 ttl=49 time=0.957 ms 4. traceroute $ sudo nsenter -t 15001 -n traceroute 8.8.8.8 traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets 1 ip-10-0-0-1.ec2.internal (10.0.0.1) 3.173 ms 3.154 ms 3.139 ms 2 ip-172-24-4-1.ec2.internal (172.24.4.1) 3.086 ms 3.074 ms 3.064 ms ... 5. ping & tcpdump on various interfaces. This is for checking where the traffic is blocked. $ sudo ip netns exec qdhcp-6d688072-a0c3-4f1c-979e-2d1882564931 tcpdump -i tap2813b3ae-8d ... See if you can find something by performing above steps. In any case, you might consider restarting the neutron processes which might fix everything magically. If no, I would need more details about your setup. Best regards, Hongbin On Fri, Jun 22, 2018 at 5:05 AM Usman Awais wrote: > Hi Hongbin, > > Many thanks, got it running, that was awesom... :) The problem was > unsynched time. I installed chrony and it started working. > > Now I am running into another problem; the networking of the container. > The container gets started, I can shell into it through appcontainer API, > it even gets the correct IP address of my private network (named priv-net) > in openstack, through DHCP. But when I ping any address, even the address > of the priv-net's gateway, it does nothing. I have following configuration > > neutron-openvswitch-agent is running > neutron-ovs-cleanup is running > neutron-destroy-patch-ports is running > kuryr-libnetwork is running > docker is running > zun-compute is running > > The eth0 network card has standard configuration of an OVSBridge. > > When I create a new container it also creates taps and patch ports on the > compute node. Now I am going to try to use kuryr script to test what > happens with "bridged" and "host" networks. > > Muhammad Usman Awais > > > > On Thu, Jun 21, 2018 at 1:14 PM, Hongbin Lu wrote: > >> HI Muhammad, >> >> Here is the code (run in controller node) that decides whether a service >> is up >> https://github.com/openstack/zun/blob/master/zun/api/servicegroup.py . >> There are several possibilities to cause a service to be 'down': >> 1. The service was being 'force_down' via API (e.g. explicitly issued a >> command like "appcontainer service forcedown") >> 2. The zun compute process is not doing the heartbeat for a certain >> period of time (CONF.service_down_time). >> 3. The zun compute process is doing the heartbeat properly but the time >> between controller node and compute node is out of sync. >> >> In before, #3 is the common pitfall that people ran into. If it is not >> #3, you might want to check if the zun compute process is doing the >> heartbeat properly. Each zun compute process is running a periodic task to >> update its state in DB: >> https://github.com/openstack/zun/blob/master/zun/servicegroup/zun_service_periodic.py >> . The call of ' report_state_up ' will record this service is up in DB >> at current time. You might want to check if this periodic task is running >> properly, or if the current state is updated in the DB. >> >> Above is my best guess. 
Please feel free to follow it up with me or other >> team members if you need further assistant for this issue. >> >> Best regards, >> Hongbin >> >> On Wed, Jun 20, 2018 at 9:14 AM Usman Awais >> wrote: >> >>> Dear Zuners, >>> >>> I have installed RDO pike. I stopped openstack-nova-compute service on >>> one of the hosts, and installed a zun-compute service. Although all the >>> services seems to be running ok on both controller and compute but when I >>> do >>> >>> openstack appcontainer service list >>> >>> It gives me following >>> >>> >>> +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+ >>> | Id | Host | Binary | State | Disabled | Disabled Reason | >>> Updated At | Availability Zone | >>> >>> +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+ >>> | 1 | node1.os.lab | zun-compute | down | False | None | >>> 2018-06-20 13:14:31 | nova | >>> >>> +----+--------------+-------------+-------+----------+-----------------+---------------------+-------------------+ >>> >>> I have checked all ports in both directions they are open, including >>> etcd ports and others. All services are running, only docker service has >>> the warning message saying "failed to retrieve docker-runc version: exec: >>> \"docker-runc\": executable file not found in $PATH". There is also a >>> message at zun-compute >>> "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/default_comparator.py:161: >>> SAWarning: The IN-predicate on "container.uuid" was invoked with an empty >>> sequence. This results in a contradiction, which nonetheless can be >>> expensive to evaluate. Consider alternative strategies for improved >>> performance." >>> >>> Please guide... >>> >>> Regards, >>> Muhammad Usman Awais >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gong.yongsheng at 99cloud.net Sat Jun 23 05:13:59 2018 From: gong.yongsheng at 99cloud.net (=?GBK?B?uajTwMn6?=) Date: Sat, 23 Jun 2018 13:13:59 +0800 (CST) Subject: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library In-Reply-To: References: <1529534483-sup-9862@lrrr.local> <20180621203751.GB17928@sm-workstation> Message-ID: <474fef98.c18.1642b103336.Coremail.gong.yongsheng@99cloud.net> I know tacker project is using heat-translator and tosca-parser projects. Who can tell if other projects in OpenStack tents are using them? 
regards, yong sheng gong At 2018-06-23 01:25:39, "Zane Bitter" wrote: >On 21/06/18 16:37, Sean McGinnis wrote: >>> >>> Apparently heat-translator has a healthy ecosystem of contributors and >>> users, but not of maintainers, and it remaining a deliverable of the Heat >>> project is doing nothing to alleviate the latter problem. I'd like to find >>> it a home that _would_ help. >>> >> >> I'd be interested to hear thoughts if this is somewhere where we (the TC) >> should step in and make a few people cores on this project? > > Let's save that remedy for projects that are unresponsive. > >> Or are the existing >> contributors a healthy amount but not involved enough to trust to be cores? > > heat-translator cores are aware of the problem and >are theoretically on the lookout for new cores, but I presume there's >nobody with the track record of reviews to nominate yet. > >- ZB > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Jun 23 12:52:40 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 23 Jun 2018 12:52:40 +0000 Subject: [openstack-dev] [heat][heat-translator][tacker] Need new release of heat-translator library In-Reply-To: <474fef98.c18.1642b103336.Coremail.gong.yongsheng@99cloud.net> References: <1529534483-sup-9862@lrrr.local> <20180621203751.GB17928@sm-workstation> <474fef98.c18.1642b103336.Coremail.gong.yongsheng@99cloud.net> Message-ID: <20180623125239.kv3uxhfdbjzfrou3@yuggoth.org> On 2018-06-23 13:13:59 +0800 (+0800), 龚永生 wrote: > I know tacker project is using heat-translator and tosca-parser > projects. Who can tell if other projects in OpenStack tents are > using them? [...] It's not an exhaustive search (for example, it doesn't include stable or feature branches, nor open reviews) but http://codesearch.openstack.org/?q=heat.translator suggests that Murano and StarlingX may be relying on heat-translator and both of those plus Apmec show up for http://codesearch.openstack.org/?q=tosca.parser (also indications that Senlin may be interested in adding support in the future). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doka.ua at gmx.com Sat Jun 23 14:38:38 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Sat, 23 Jun 2018 17:38:38 +0300 Subject: [openstack-dev] [cinder] making volume available without stopping VM Message-ID: Dear friends, I did some tests with making volume available without stopping VM. I'm using CEPH and these steps produce the following results: 1) openstack volume set --state available [UUID] - nothing changed inside both VM (volume is still connected) and CEPH 2) openstack volume set --size [new size] --state in-use [UUID] - nothing changed inside VM (volume is still connected and has an old size) - size of CEPH volume changed to the new value 3) during these operations I was copying a lot of data from external source and all md5 sums are the same on both VM and source 4) changes on VM happens upon any kind of power-cycle (e.g. 
reboot (either soft or hard): openstack server reboot [--hard] [VM uuid] ) - note: NOT after 'reboot' from inside VM It seems, that all these manipilations with cinder just update internal parameters of cinder/CEPH subsystems, without immediate effect for VMs. Is it safe to use this mechanism in this particular environent (e.g. CEPH as backend)? From practical point of view, it's useful when somebody, for example, update project in batch mode, and will then manually reboot every VM, affected by the update, in appropriate time with minimized downtime (it's just reboot, not manual stop/update/start). Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison From jsbryant at electronicjungle.net Sat Jun 23 17:41:30 2018 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Sat, 23 Jun 2018 12:41:30 -0500 Subject: [openstack-dev] [cinder] making volume available without stopping VM In-Reply-To: References: Message-ID: On Sat, Jun 23, 2018, 9:39 AM Volodymyr Litovka wrote: > Dear friends, > > I did some tests with making volume available without stopping VM. I'm > using CEPH and these steps produce the following results: > > 1) openstack volume set --state available [UUID] > - nothing changed inside both VM (volume is still connected) and CEPH > 2) openstack volume set --size [new size] --state in-use [UUID] > - nothing changed inside VM (volume is still connected and has an old size) > - size of CEPH volume changed to the new value > 3) during these operations I was copying a lot of data from external > source and all md5 sums are the same on both VM and source > 4) changes on VM happens upon any kind of power-cycle (e.g. reboot > (either soft or hard): openstack server reboot [--hard] [VM uuid] ) > - note: NOT after 'reboot' from inside VM > > It seems, that all these manipilations with cinder just update internal > parameters of cinder/CEPH subsystems, without immediate effect for VMs. > Is it safe to use this mechanism in this particular environent (e.g. > CEPH as backend)? > > From practical point of view, it's useful when somebody, for example, > update project in batch mode, and will then manually reboot every VM, > affected by the update, in appropriate time with minimized downtime > (it's just reboot, not manual stop/update/start). > > Thank you. > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Volodymyr, We have had similar issues with extending attached volumes that are iSCSI based. In that case the VM has to be forced to rescan the scsi bus. In this case I am not sure if there needs to be a change to Libvirt or to rbd or something else. I would recommend reaching out to John Bernard for help. Jay > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Sun Jun 24 07:37:02 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Sun, 24 Jun 2018 15:37:02 +0800 Subject: [openstack-dev] [kolla] Propose move the weekly meeting one hour earlier In-Reply-To: References: Message-ID: The patch is already merged. The next weekly meeting will be at `date -d "1500 UTC"` on Wednesday. 
On Wed, Jun 13, 2018 at 3:38 PM Jeffrey Zhang wrote: > As we have more and more developer located in Asia and Europe timezone > rather > than Americas'. Current weekly meeting time is not proper. This was > discussed > at the last meeting and as a result, seems one hour earlier then now is > better > than now. > > So I propose to move the weekly meeting from UTC 16:00 to UTC 15:00 on > Wednesday. Feel free to vote on the patch[0] > > This patch will be opened until next weekly meeting, 20 June. > > [0] https://review.openstack.org/575011 > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Sun Jun 24 07:42:23 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Sun, 24 Jun 2018 02:42:23 -0500 Subject: [openstack-dev] [all][requirements] we need to talk about eventlet Message-ID: <20180624074223.jrvtjmxwuogie4pm@gentoo.org> The short of it is that we are currently using eventlet 0.20.0. The bot proposes 0.22.1 which fails updates, I think we need to start bugging projects that fail the cross test jobs. What do you think? -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ifatafekn at gmail.com Sun Jun 24 08:38:37 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Sun, 24 Jun 2018 11:38:37 +0300 Subject: [openstack-dev] [Vitrage] naming issues In-Reply-To: <6DEBC08B-6ED8-4744-956A-D8B92FD077C1@linguamatics.com> References: <6DEBC08B-6ED8-4744-956A-D8B92FD077C1@linguamatics.com> Message-ID: Hi, Can you please send me your nagios_conf file, and the logs of vitrage-collector and vitrage-graph? I’ll have a look. Thanks, Ifat On Thu, Jun 21, 2018 at 5:08 PM, Rafal Zielinski < rafal.zielinski at linguamatics.com> wrote: > Hello, > > As suggested by eyalb on irc I am posting my problem here. > > Basically I have 10 nova hosts named in nags as follows: > > nova0 > nova1 > . > . > . > nova10 > > I’ve made config file for the vitrage to map hosts to real hosts in > Openstack named like: > > nova0.domain.com > nova1.domain.com > . > . > . > nova10.domain.com > > And the issue: > When provoking nagios alert on host nova10 Vitrage is displaying error on > nova1, when provoking nagios alert on host nova1 vitrage is not showing any > alert. > > Can somebody have a look at this issue ? > > Thank you, > Rafal Zielinski > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Mon Jun 25 00:06:02 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 25 Jun 2018 10:06:02 +1000 Subject: [openstack-dev] [stable][horizon] Adding Ivan Kolodyazhny to horizon-stable-maint In-Reply-To: <20180618020352.GF18927@thor.bakeyournoodle.com> References: <20180618020352.GF18927@thor.bakeyournoodle.com> Message-ID: <20180625000601.GA21570@thor.bakeyournoodle.com> On Mon, Jun 18, 2018 at 12:03:52PM +1000, Tony Breeds wrote: > Without strong objections I'll do that on (my) Monday 25th June. Done. Yours Tony. 
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From doka.ua at gmx.com Mon Jun 25 07:11:24 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Mon, 25 Jun 2018 10:11:24 +0300 Subject: [openstack-dev] [cinder] making volume available without stopping VM In-Reply-To: References: Message-ID: <2b3b65dc-d284-2f78-3ab2-57ae0f9f5ecc@gmx.com> Hi Jay, > We have had similar issues with extending attached volumes that are > iSCSI based. In that case the VM has to be forced to rescan the scsi bus. > > In this case I am not sure if there needs to be a change to Libvirt or > to rbd or something else. > > I would recommend reaching out to John Bernard for help. In fact, I'm ok with delayed resize (upon power-cycle), and it's not an issue for me that VM don't detect changes immediately. What I want to understand is that changes to Cinder (and, thus, underlying changes to CEPH) are safe for VM while it's in active state. Hopefully, Jon will help with this question. Thank you! On 6/23/18 8:41 PM, Jay Bryant wrote: > > > On Sat, Jun 23, 2018, 9:39 AM Volodymyr Litovka > wrote: > > Dear friends, > > I did some tests with making volume available without stopping VM. > I'm > using CEPH and these steps produce the following results: > > 1) openstack volume set --state available [UUID] > - nothing changed inside both VM (volume is still connected) and CEPH > 2) openstack volume set --size [new size] --state in-use [UUID] > - nothing changed inside VM (volume is still connected and has an > old size) > - size of CEPH volume changed to the new value > 3) during these operations I was copying a lot of data from external > source and all md5 sums are the same on both VM and source > 4) changes on VM happens upon any kind of power-cycle (e.g. reboot > (either soft or hard): openstack server reboot [--hard] [VM uuid] ) > - note: NOT after 'reboot' from inside VM > > It seems, that all these manipilations with cinder just update > internal > parameters of cinder/CEPH subsystems, without immediate effect for > VMs. > Is it safe to use this mechanism in this particular environent (e.g. > CEPH as backend)? > >  From practical point of view, it's useful when somebody, for > example, > update project in batch mode, and will then manually reboot every VM, > affected by the update, in appropriate time with minimized downtime > (it's just reboot, not manual stop/update/start). > > Thank you. > > -- > Volodymyr Litovka >    "Vision without Execution is Hallucination." -- Thomas Edison > -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Mon Jun 25 07:20:18 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 25 Jun 2018 09:20:18 +0200 Subject: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC In-Reply-To: References: Message-ID: Hello again, After Thursday's meeting I want to summarize what we discussed and add some pointers. - Work on using the out-of-tree cloud provider and move to the new model of defining it https://storyboard.openstack.org/#!/story/1762743 https://review.openstack.org/#/c/577477/ - Configure kubelet and kube-proxy on master nodes This story of the master node label can be extened https://storyboard.openstack.org/#!/story/2002618 or we can add a new one - Simplify CNI configuration, we have calico and flannel. 
Ideally we should a single config script for each one. We could move flannel to the kubernetes hosted version that uses kubernetes objects for storage. (it is the recommended way by flannel and how it is done with kubeadm) - magum support in gophercloud https://github.com/gophercloud/gophercloud/issues/1003 - *needs discussion *update version of heat templates (pike or queens) This need its own tread - Post deployment scripts for clusters, I have this since some time for my but doing it in heat is slightly (not a lot) complicated. Most magnum users favor the simpler solution of passing a url of a manifest or script to the cluster (at least let's add sha512sum). - Simplify addition of custom labels/parameters. To avoid patcing magnum, it would be more ops friendly to have a generic field of custom parameters Not discussed in the last meeting but we should in the next ones: - Allow cluster scaling from different users in the same project https://storyboard.openstack.org/#!/story/2002648 - Add the option to remove node from a resource group for swarm clusters like in kubernetes https://storyboard.openstack.org/#!/story/2002677 Let's follow these up in the coming meetings, Tuesday 1000UTC and Thursday 1700UTC. You can always consult this page [1] for future meetings. Cheers, Spyros [1] https://wiki.openstack.org/wiki/Meetings/Containers On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis wrote: > Hello list, > > We are going to have a second weekly meeting for magnum for 3 weeks > as a test to reach out to contributors in the Americas. > > You can join us tomorrow (or today for some?) at 1700UTC in > #openstack-containers . > > Cheers, > Spyros > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Mon Jun 25 08:35:52 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 25 Jun 2018 11:35:52 +0300 Subject: [openstack-dev] [stable][horizon] Adding Ivan Kolodyazhny to horizon-stable-maint In-Reply-To: <20180625000601.GA21570@thor.bakeyournoodle.com> References: <20180618020352.GF18927@thor.bakeyournoodle.com> <20180625000601.GA21570@thor.bakeyournoodle.com> Message-ID: Thank you Tony and Team! Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Mon, Jun 25, 2018 at 3:06 AM, Tony Breeds wrote: > On Mon, Jun 18, 2018 at 12:03:52PM +1000, Tony Breeds wrote: > > > Without strong objections I'll do that on (my) Monday 25th June. > > Done. > > Yours Tony. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifatafekn at gmail.com Mon Jun 25 08:42:45 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Mon, 25 Jun 2018 11:42:45 +0300 Subject: [openstack-dev] [vitrage] Vitrage was migrated to StoryBoard Message-ID: Hi, During the weekend we have completed the Vitrage migration to StoryBoard. Vitrage bugs and blueprints should be handled in StoryBoard from now on. All Vitrage bugs have been migrated and have the same bug number as in launchpad. Regarding the blueprints, we migrated the ones that we are currently working on in Rocky, but not the future ones. We created an etherpad with guidelines regarding the new process [1]. Feel free to comment and ask if something is unclear. 
Special thanks to Kendall Nelson and fungi for their help. [1] https://etherpad.openstack.org/p/vitrage-storyboard-migration Ifat and Anna -------------- next part -------------- An HTML attachment was scrubbed... URL: From lennyb at mellanox.com Mon Jun 25 10:31:29 2018 From: lennyb at mellanox.com (Lenny Berkhovsky) Date: Mon, 25 Jun 2018 10:31:29 +0000 Subject: [openstack-dev] [ironic] SOFT_REBOOT powers off/on the server Message-ID: Hi, Is there a reason for powering off and on[1] instead of resetting server during SOFT_REBOOT? I have a patchset[2] that resets the server instead of power cycling it. [1] https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/ipmitool.py#L820 elif power_state == states.SOFT_REBOOT: _soft_power_off(task, driver_info, timeout=timeout) driver_utils.ensure_next_boot_device(task, driver_info) _power_on(task, driver_info, timeout=timeout) [2] https://review.openstack.org/#/c/577748/ Lenny Verkhovsky ( aka lennyb ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Jun 25 12:38:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 08:38:55 -0400 Subject: [openstack-dev] [Release-job-failures][release][horizon] Release of openstack/xstatic-angular-material failed In-Reply-To: References: Message-ID: <1529930278-sup-8192@lrrr.local> Excerpts from zuul's message of 2018-06-25 12:14:34 +0000: > Build failed. > > - xstatic-check-version http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic-check-version/a501dba/ : FAILURE in 1m 31s > - release-openstack-python release-openstack-python : SKIPPED > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > "git tag version (1.1.5.1) does not match package version (1.1.5.0)" http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic-check-version/a501dba/job-output.txt.gz#_2018-06-25_12_14_11_034599 From openstack at sheep.art.pl Mon Jun 25 12:57:17 2018 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Mon, 25 Jun 2018 14:57:17 +0200 Subject: [openstack-dev] [Release-job-failures][release][horizon] Release of openstack/xstatic-angular-material failed In-Reply-To: <1529930278-sup-8192@lrrr.local> References: <1529930278-sup-8192@lrrr.local> Message-ID: Any idea where it took the 1.1.5.0 version from? On Mon, Jun 25, 2018 at 2:38 PM, Doug Hellmann wrote: > Excerpts from zuul's message of 2018-06-25 12:14:34 +0000: > > Build failed. > > > > - xstatic-check-version http://logs.openstack.org/59/ > 592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic- > check-version/a501dba/ : FAILURE in 1m 31s > > - release-openstack-python release-openstack-python : SKIPPED > > - announce-release announce-release : SKIPPED > > - propose-update-constraints propose-update-constraints : SKIPPED > > > > "git tag version (1.1.5.1) does not match package version (1.1.5.0)" > > http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41a > c8a67d909b/release/xstatic-check-version/a501dba/job- > output.txt.gz#_2018-06-25_12_14_11_034599 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Mon Jun 25 12:59:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 08:59:23 -0400 Subject: [openstack-dev] [all][requirements] we need to talk about eventlet In-Reply-To: <20180624074223.jrvtjmxwuogie4pm@gentoo.org> References: <20180624074223.jrvtjmxwuogie4pm@gentoo.org> Message-ID: <1529931478-sup-5247@lrrr.local> Excerpts from Matthew Thode's message of 2018-06-24 02:42:23 -0500: > The short of it is that we are currently using eventlet 0.20.0. The bot > proposes 0.22.1 which fails updates, I think we need to start bugging > projects that fail the cross test jobs. What do you think? > By "bugging" do you mean we should file bugs, or something else? Which version of eventlet is actually being used in the various distros? Doug From aj at suse.com Mon Jun 25 13:07:36 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 25 Jun 2018 15:07:36 +0200 Subject: [openstack-dev] [Release-job-failures][release][horizon] Release of openstack/xstatic-angular-material failed In-Reply-To: References: <1529930278-sup-8192@lrrr.local> Message-ID: On 2018-06-25 14:57, Radomir Dopieralski wrote: > Any idea where it took the 1.1.5.0 version from? git grep 1.1.5 shows at least: setup.cfg:description = Angular-Material 1.1.5 (XStatic packaging standard) xstatic/pkg/angular_material/__init__.py:VERSION = '1.1.5' # version of the packaged files, please use the upstream Not sure whether that's the right place, I suggest you build locally and check the build tarball, Andreas > On Mon, Jun 25, 2018 at 2:38 PM, Doug Hellmann > wrote: > > Excerpts from zuul's message of 2018-06-25 12:14:34 +0000: > > Build failed. > > > > - xstatic-check-version > http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic-check-version/a501dba/ > > : FAILURE in 1m 31s > > - release-openstack-python release-openstack-python : SKIPPED > > - announce-release announce-release : SKIPPED > > - propose-update-constraints propose-update-constraints : SKIPPED > > > > "git tag version (1.1.5.1) does not match package version (1.1.5.0)" > > http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic-check-version/a501dba/job-output.txt.gz#_2018-06-25_12_14_11_034599 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From doug at doughellmann.com Mon Jun 25 13:36:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 09:36:34 -0400 Subject: [openstack-dev] [Openstack-sigs] [PTG] Updates! In-Reply-To: References: Message-ID: <1529933751-sup-1272@lrrr.local> Are we planning to have space for goal teams to answer implementation questions? 
Doug Excerpts from Kendall Nelson's message of 2018-06-20 11:24:38 -0700: > Hello Everyone! > > Wanted to give you some updates on PTG4 planning. We have finalized the > list of SIGs/ Groups/WGs/Teams that are attending. They are as follows: > > - > > Airship > - > > API SIG > - > > Barbican/Security SIG > - > > Blazar > - > > Chef OpenStack > - > > Cinder > - > > Cyborg > - > > Designate > - > > Documentation > - > > Edge Computing Group > - > > First Contact SIG > - > > Glance > - > > Heat > - > > Horizon > - > > Infrastructure > - > > Interop WG > > > - > > Ironic > - > > Kata > - > > Keystone > - > > Kolla > - > > LOCI > - > > Manila > - > > Masakari > - > > Mistral > - > > Monasca > - > > Neutron > - > > Nova > - > > Octavia > - > > OpenStack Ansible > - > > OpenStack Charms > - > > OpenStack Helm > - > > OpenStackClient > > > > - > > Operator Meetup > Puppet OpenStack > - > > QA > - > > Oslo > - > > Public Cloud WG > - > > Release Management > - > > Requirements > - > > Sahara > - > > Scientific SIG > - > > Self-Healing SIG > - > > SIG- K8s > - > > StarlingX > - > > Swift > - > > TC > - > > TripleO > - > > Upgrades SIG > - > > Watcher > - > > Zuul (pending confirmation) > > Thierry and I are working on placing them into a strawman schedule to > reduce conflicts between related or overlapping groups. We should have more > on what that will look like and a draft for you all to review in the next > few weeks. > > We also wanted to remind you all of the Travel Support Program. We are > again doing a two phase selection. The first deadline is approaching: July > 1st. At this point we have less than a dozen applicants so if you need it > or even think you need it, I urge you to apply here[1]. > > Also! Reminder that we have a finite number of rooms in the hotel block so > please book early to make sure you get the discounted rate before they run > out. You can book those rooms here[2] (pardon the ugly URL). > > Can't wait to see you all there! > > -Kendall Nelson (diablo_rojo) > > P.S. Gonna try to do a game night again since you all seemed to enjoy it so > much last time :) > > [1] > https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 > > [2] > https://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Teams%20Gathering%2C%20Openstack%5Edensa%60opnopna%7Copnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes From openstack at sheep.art.pl Mon Jun 25 13:36:30 2018 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Mon, 25 Jun 2018 15:36:30 +0200 Subject: [openstack-dev] [Release-job-failures][release][horizon] Release of openstack/xstatic-angular-material failed In-Reply-To: References: <1529930278-sup-8192@lrrr.local> Message-ID: A fix for it is in review: https://review.openstack.org/577820 On Mon, Jun 25, 2018 at 3:07 PM, Andreas Jaeger wrote: > On 2018-06-25 14:57, Radomir Dopieralski wrote: > > Any idea where it took the 1.1.5.0 version from? > > git grep 1.1.5 shows at least: > > setup.cfg:description = Angular-Material 1.1.5 (XStatic packaging standard) > xstatic/pkg/angular_material/__init__.py:VERSION = '1.1.5' # version of > the packaged files, please use the upstream > > Not sure whether that's the right place, I suggest you build locally and > check the build tarball, > > Andreas > > > On Mon, Jun 25, 2018 at 2:38 PM, Doug Hellmann > > wrote: > > > > Excerpts from zuul's message of 2018-06-25 12:14:34 +0000: > > > Build failed. 
> > > > > > - xstatic-check-version > > http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41a > c8a67d909b/release/xstatic-check-version/a501dba/ > > c8a67d909b/release/xstatic-check-version/a501dba/> > > : FAILURE in 1m 31s > > > - release-openstack-python release-openstack-python : SKIPPED > > > - announce-release announce-release : SKIPPED > > > - propose-update-constraints propose-update-constraints : SKIPPED > > > > > > > "git tag version (1.1.5.1) does not match package version (1.1.5.0)" > > > > http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41a > c8a67d909b/release/xstatic-check-version/a501dba/job- > output.txt.gz#_2018-06-25_12_14_11_034599 > > c8a67d909b/release/xstatic-check-version/a501dba/job- > output.txt.gz#_2018-06-25_12_14_11_034599> > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Mon Jun 25 13:48:53 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 25 Jun 2018 22:48:53 +0900 Subject: [openstack-dev] [Release-job-failures][release][horizon] Release of openstack/xstatic-angular-material failed In-Reply-To: References: <1529930278-sup-8192@lrrr.local> Message-ID: First of all, thanks for the release team and Radomir. Apart from the fix, is there any good way to detect this kind of errors in individual project reviews? xstatic-cores have new members and all of them are not necessarily super familiar with the xstatic process. In addition, updates in xstatic repos do not happen often, so we xstatic-cores (including horizon-cores) tend to forget minor corner cases.... I believe this failure is a symptom that we should add more checks in xstatic repo gates rather than "noop" :( Akihiro 2018年6月25日(月) 22:37 Radomir Dopieralski : > A fix for it is in review: https://review.openstack.org/577820 > > On Mon, Jun 25, 2018 at 3:07 PM, Andreas Jaeger wrote: > >> On 2018-06-25 14:57, Radomir Dopieralski wrote: >> > Any idea where it took the 1.1.5.0 version from? 
>> >> git grep 1.1.5 shows at least: >> >> setup.cfg:description = Angular-Material 1.1.5 (XStatic packaging >> standard) >> xstatic/pkg/angular_material/__init__.py:VERSION = '1.1.5' # version of >> the packaged files, please use the upstream >> >> Not sure whether that's the right place, I suggest you build locally and >> check the build tarball, >> >> Andreas >> >> > On Mon, Jun 25, 2018 at 2:38 PM, Doug Hellmann > > > wrote: >> > >> > Excerpts from zuul's message of 2018-06-25 12:14:34 +0000: >> > > Build failed. >> > > >> > > - xstatic-check-version >> > >> http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic-check-version/a501dba/ >> > < >> http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic-check-version/a501dba/ >> > >> > : FAILURE in 1m 31s >> > > - release-openstack-python release-openstack-python : SKIPPED >> > > - announce-release announce-release : SKIPPED >> > > - propose-update-constraints propose-update-constraints : SKIPPED >> > > >> > >> > "git tag version (1.1.5.1) does not match package version (1.1.5.0)" >> > >> > >> http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic-check-version/a501dba/job-output.txt.gz#_2018-06-25_12_14_11_034599 >> > < >> http://logs.openstack.org/59/592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic-check-version/a501dba/job-output.txt.gz#_2018-06-25_12_14_11_034599 >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > < >> http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > >> > >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> >> -- >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG Nürnberg) >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Jun 25 13:55:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 09:55:01 -0400 Subject: [openstack-dev] [tc] Technical Committee update for 25 June Message-ID: <1529934851-sup-728@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. 
The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == Other approved changes: * add champion section to goal template https://review.openstack.org/575934 * a "castellan-compatible key store" is now part of the base services list https://governance.openstack.org/tc/reference/base-services.html#current-list-of-base-services Office hour logs from last week: * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-19-09.00.html * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-20-01.09.html * http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-06-21-15.00.html A few folks expressed concern that using the meeting bot to record the office hours made them more like a meeting. It would be useful to have some feedback from the community about whether having the logs separate is helfpul, or if linking to the timestamp in the regular channel logs would be sufficient. == Ongoing Discussions == The Adjutant team application was updated. * https://review.openstack.org/553643 Thierry started drafting a "technical design tenets" chapter for the project team guide as a place to explain topics like base services and other things we expect to be common development patterns in the community. * https://storyboard.openstack.org/#!/story/2002611 == TC member actions/focus/discussions for the coming week(s) == Project team "health check" interviews continue. Our goal is to check in with each team between now and the PTG, and record notes in the wiki. * https://wiki.openstack.org/wiki/OpenStack_health_tracker Remember that we agreed to send status updates on initiatives separately to openstack-dev every two weeks. If you are working on something for which there has not been an update in a couple of weeks, please consider summarizing the status. == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at:https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From sean.mcginnis at gmail.com Mon Jun 25 14:13:17 2018 From: sean.mcginnis at gmail.com (Sean McGinnis) Date: Mon, 25 Jun 2018 09:13:17 -0500 Subject: [openstack-dev] [Release-job-failures] Release of openstack/xstatic-angular-material failed In-Reply-To: References: Message-ID: This release failed due to the version set in source being different than the tag being requested. 
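For anyone who wants to sanity-check an xstatic release request before proposing the tag, a rough comparison along these lines (just a sketch, not the actual release tooling) would have flagged this particular mismatch when run from the xstatic repo checkout:

    # Rough local sanity check; 'requested_tag' is whatever version is
    # about to be proposed as the new tag in openstack/releases.
    import subprocess

    requested_tag = '1.1.5.1'
    pkg_version = subprocess.check_output(
        ['python', 'setup.py', '--version']).decode().strip()
    if requested_tag != pkg_version:
        print('git tag version (%s) does not match package version (%s)'
              % (requested_tag, pkg_version))

The version an xstatic package reports comes from the VERSION and BUILD values in its __init__.py, which is why the bump has to land in the repo before the new tag can be requested.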
I've proposed https://review.openstack.org/577828 to add checks to our validation job to catch these issues. Hopefully that will prevent this from happening again. For this release though, we will need a new 1.1.5.2 release requested after the version has been updated in the repo source. On Mon, Jun 25, 2018 at 7:14 AM, wrote: > Build failed. > > - xstatic-check-version http://logs.openstack.org/59/ > 592a9d4c90f37cd33c6f861120f41ac8a67d909b/release/xstatic- > check-version/a501dba/ : FAILURE in 1m 31s > - release-openstack-python release-openstack-python : SKIPPED > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Mon Jun 25 14:35:26 2018 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 25 Jun 2018 10:35:26 -0400 Subject: [openstack-dev] [manila] Rocky Review Focus Message-ID: <20180625143526.g25vfb7vtj47tqrp@barron.net> It's less than a month till Milestone 3 so I've posted an etherpad with the new Manila driver and feature work that we've agreed to try to merge in Rocky: https://etherpad.openstack.org/p/manila-rocky-review-focus These are making good progress but in general need more review attention. Please take a look and add your name to the etherpad next to particular reviews so that we can all see where we need more reviewers. Please don't feel like you need to be a current core reviewer or established in the manila community to add your name to the etherpad or to do reviews! -- Tom Barron From prometheanfire at gentoo.org Mon Jun 25 14:38:28 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 25 Jun 2018 09:38:28 -0500 Subject: [openstack-dev] [all][requirements] we need to talk about eventlet In-Reply-To: <1529931478-sup-5247@lrrr.local> References: <20180624074223.jrvtjmxwuogie4pm@gentoo.org> <1529931478-sup-5247@lrrr.local> Message-ID: <20180625143828.j57qd5v5hjpmpnkd@gentoo.org> On 18-06-25 08:59:23, Doug Hellmann wrote: > Excerpts from Matthew Thode's message of 2018-06-24 02:42:23 -0500: > > The short of it is that we are currently using eventlet 0.20.0. The bot > > proposes 0.22.1 which fails updates, I think we need to start bugging > > projects that fail the cross test jobs. What do you think? > > > > By "bugging" do you mean we should file bugs, or something else? > Yes, to start, it'd look something like this. https://bugs.launchpad.net/openstack-requirements/+bug/1749574 > Which version of eventlet is actually being used in the various > distros? > For Gentoo it's 0.20.1 right now, but that's mainly because I haven't updated it myself (because Openstack). -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Mon Jun 25 14:58:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 10:58:26 -0400 Subject: [openstack-dev] [all][requirements] we need to talk about eventlet In-Reply-To: <20180625143828.j57qd5v5hjpmpnkd@gentoo.org> References: <20180624074223.jrvtjmxwuogie4pm@gentoo.org> <1529931478-sup-5247@lrrr.local> <20180625143828.j57qd5v5hjpmpnkd@gentoo.org> Message-ID: <1529938623-sup-784@lrrr.local> Excerpts from Matthew Thode's message of 2018-06-25 09:38:28 -0500: > On 18-06-25 08:59:23, Doug Hellmann wrote: > > Excerpts from Matthew Thode's message of 2018-06-24 02:42:23 -0500: > > > The short of it is that we are currently using eventlet 0.20.0. The bot > > > proposes 0.22.1 which fails updates, I think we need to start bugging > > > projects that fail the cross test jobs. What do you think? > > > > > > > By "bugging" do you mean we should file bugs, or something else? > > > > Yes, to start, it'd look something like this. > > https://bugs.launchpad.net/openstack-requirements/+bug/1749574 OK, tracking issues like that will work. You might also need a storyboard story for projects that have already migrated. > > > Which version of eventlet is actually being used in the various > > distros? > > > > For Gentoo it's 0.20.1 right now, but that's mainly because I haven't > updated it myself (because Openstack). > From thierry at openstack.org Mon Jun 25 15:15:44 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 25 Jun 2018 17:15:44 +0200 Subject: [openstack-dev] [Openstack-sigs] [PTG] Updates! In-Reply-To: <1529933751-sup-1272@lrrr.local> References: <1529933751-sup-1272@lrrr.local> Message-ID: <02a9e157-fd1c-d951-bfc4-471526984fb1@openstack.org> Doug Hellmann wrote: > Are we planning to have space for goal teams to answer > implementation questions? At previous PTGs the "goal rooms" ended up not being used (or very very poorly-attended), so our current plan was to not allocate specific space, but leverage the "Ask me anything" project help room to answer those questions as well. It shall be a large room, so plenty of space to do that... and probably nicer compared to waiting in a smaller, dedicated but mostly empty room. That said, if we have people signed up to run a specific room, and people interested in joining that, I'm happy allocating space on Monday or Tuesday for that... Otherwise there is always the option to schedule in reservable space using ptgbot on the fly :) -- Thierry Carrez (ttx) From prometheanfire at gentoo.org Mon Jun 25 15:19:50 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 25 Jun 2018 10:19:50 -0500 Subject: [openstack-dev] [all][requirements] we need to talk about eventlet In-Reply-To: <1529938623-sup-784@lrrr.local> References: <20180624074223.jrvtjmxwuogie4pm@gentoo.org> <1529931478-sup-5247@lrrr.local> <20180625143828.j57qd5v5hjpmpnkd@gentoo.org> <1529938623-sup-784@lrrr.local> Message-ID: <20180625151950.rmsyjtdhnu6nylyo@gentoo.org> On 18-06-25 10:58:26, Doug Hellmann wrote: > Excerpts from Matthew Thode's message of 2018-06-25 09:38:28 -0500: > > On 18-06-25 08:59:23, Doug Hellmann wrote: > > > Excerpts from Matthew Thode's message of 2018-06-24 02:42:23 -0500: > > > > The short of it is that we are currently using eventlet 0.20.0. The bot > > > > proposes 0.22.1 which fails updates, I think we need to start bugging > > > > projects that fail the cross test jobs. What do you think? 
> > > > > > > > > > By "bugging" do you mean we should file bugs, or something else? > > > > > > > Yes, to start, it'd look something like this. > > > > https://bugs.launchpad.net/openstack-requirements/+bug/1749574 > > OK, tracking issues like that will work. You might also need a > storyboard story for projects that have already migrated. > Ya, I'm not sure what to do there. Requirements hasn't migrated because other projects haven't migrated (we really need to be on both systems unfortunately). Is it possible to half migrate? -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Mon Jun 25 15:23:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 11:23:45 -0400 Subject: [openstack-dev] [oslo][python3] testing the python3-first goal steps with oslo libraries Message-ID: <1529940148-sup-7352@lrrr.local> We have already started some of the work to move Oslo to python3-first, and I would like to work through the remaining steps as a way of testing the goal documentation. Please take a look at the documentation [1] and let me know if you have any concerns or questions about us doing that. Thanks, Doug [1] https://review.openstack.org/#/c/575933/6 From doug at doughellmann.com Mon Jun 25 15:26:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 11:26:29 -0400 Subject: [openstack-dev] [Openstack-sigs] [PTG] Updates! In-Reply-To: <02a9e157-fd1c-d951-bfc4-471526984fb1@openstack.org> References: <1529933751-sup-1272@lrrr.local> <02a9e157-fd1c-d951-bfc4-471526984fb1@openstack.org> Message-ID: <1529940299-sup-4219@lrrr.local> Excerpts from Thierry Carrez's message of 2018-06-25 17:15:44 +0200: > Doug Hellmann wrote: > > Are we planning to have space for goal teams to answer > > implementation questions? > > At previous PTGs the "goal rooms" ended up not being used (or very very > poorly-attended), so our current plan was to not allocate specific > space, but leverage the "Ask me anything" project help room to answer > those questions as well. It shall be a large room, so plenty of space to > do that... and probably nicer compared to waiting in a smaller, > dedicated but mostly empty room. > > That said, if we have people signed up to run a specific room, and > people interested in joining that, I'm happy allocating space on Monday > or Tuesday for that... > > Otherwise there is always the option to schedule in reservable space > using ptgbot on the fly :) > OK. Given the complexity of the zuul configuration changes for the python3 goal, I anticipated some questions. I will be prepared to talk to people about it regardless, and maybe we won't need a separate room. 
Doug From doug at doughellmann.com Mon Jun 25 15:28:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 11:28:43 -0400 Subject: [openstack-dev] [all][requirements] we need to talk about eventlet In-Reply-To: <20180625151950.rmsyjtdhnu6nylyo@gentoo.org> References: <20180624074223.jrvtjmxwuogie4pm@gentoo.org> <1529931478-sup-5247@lrrr.local> <20180625143828.j57qd5v5hjpmpnkd@gentoo.org> <1529938623-sup-784@lrrr.local> <20180625151950.rmsyjtdhnu6nylyo@gentoo.org> Message-ID: <1529940417-sup-7620@lrrr.local> Excerpts from Matthew Thode's message of 2018-06-25 10:19:50 -0500: > On 18-06-25 10:58:26, Doug Hellmann wrote: > > Excerpts from Matthew Thode's message of 2018-06-25 09:38:28 -0500: > > > On 18-06-25 08:59:23, Doug Hellmann wrote: > > > > Excerpts from Matthew Thode's message of 2018-06-24 02:42:23 -0500: > > > > > The short of it is that we are currently using eventlet 0.20.0. The bot > > > > > proposes 0.22.1 which fails updates, I think we need to start bugging > > > > > projects that fail the cross test jobs. What do you think? > > > > > > > > > > > > > By "bugging" do you mean we should file bugs, or something else? > > > > > > > > > > Yes, to start, it'd look something like this. > > > > > > https://bugs.launchpad.net/openstack-requirements/+bug/1749574 > > > > OK, tracking issues like that will work. You might also need a > > storyboard story for projects that have already migrated. > > > > Ya, I'm not sure what to do there. Requirements hasn't migrated because > other projects haven't migrated (we really need to be on both systems > unfortunately). Is it possible to half migrate? > That's a question for the SB team, although I'm not sure I see why it's needed. If you're going to have a story and a LP bug, couldn't you just link to one from the other to avoid having to tag the requirements repo twice? Doug From fungi at yuggoth.org Mon Jun 25 16:01:11 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 25 Jun 2018 16:01:11 +0000 Subject: [openstack-dev] [Openstack-sigs] [PTG] Updates! In-Reply-To: <1529940299-sup-4219@lrrr.local> References: <1529933751-sup-1272@lrrr.local> <02a9e157-fd1c-d951-bfc4-471526984fb1@openstack.org> <1529940299-sup-4219@lrrr.local> Message-ID: <20180625160110.2l3d2t4ujcvrwj7b@yuggoth.org> On 2018-06-25 11:26:29 -0400 (-0400), Doug Hellmann wrote: > Excerpts from Thierry Carrez's message of 2018-06-25 17:15:44 +0200: > > Doug Hellmann wrote: > > > Are we planning to have space for goal teams to answer > > > implementation questions? > > > > At previous PTGs the "goal rooms" ended up not being used (or very very > > poorly-attended), so our current plan was to not allocate specific > > space, but leverage the "Ask me anything" project help room to answer > > those questions as well. It shall be a large room, so plenty of space to > > do that... and probably nicer compared to waiting in a smaller, > > dedicated but mostly empty room. > > > > That said, if we have people signed up to run a specific room, and > > people interested in joining that, I'm happy allocating space on Monday > > or Tuesday for that... > > > > Otherwise there is always the option to schedule in reservable space > > using ptgbot on the fly :) > > > > OK. Given the complexity of the zuul configuration changes for the > python3 goal, I anticipated some questions. I will be prepared to > talk to people about it regardless, and maybe we won't need a > separate room. 
I think the up side to the current plan is that there are likely to also be Zuulies in that same room answering Zuulish sorts of questions anyway, so easier to get input on goal-specific topics where there is such an overlap. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Jun 25 16:03:36 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 25 Jun 2018 16:03:36 +0000 Subject: [openstack-dev] [all][requirements] we need to talk about eventlet In-Reply-To: <20180625151950.rmsyjtdhnu6nylyo@gentoo.org> References: <20180624074223.jrvtjmxwuogie4pm@gentoo.org> <1529931478-sup-5247@lrrr.local> <20180625143828.j57qd5v5hjpmpnkd@gentoo.org> <1529938623-sup-784@lrrr.local> <20180625151950.rmsyjtdhnu6nylyo@gentoo.org> Message-ID: <20180625160336.7ohpbsiuu67gq5ki@yuggoth.org> On 2018-06-25 10:19:50 -0500 (-0500), Matthew Thode wrote: > On 18-06-25 10:58:26, Doug Hellmann wrote: > > Excerpts from Matthew Thode's message of 2018-06-25 09:38:28 -0500: > > > On 18-06-25 08:59:23, Doug Hellmann wrote: > > > > Excerpts from Matthew Thode's message of 2018-06-24 02:42:23 -0500: > > > > > The short of it is that we are currently using eventlet 0.20.0. The bot > > > > > proposes 0.22.1 which fails updates, I think we need to start bugging > > > > > projects that fail the cross test jobs. What do you think? > > > > > > > > > > > > > By "bugging" do you mean we should file bugs, or something else? > > > > > > > > > > Yes, to start, it'd look something like this. > > > > > > https://bugs.launchpad.net/openstack-requirements/+bug/1749574 > > > > OK, tracking issues like that will work. You might also need a > > storyboard story for projects that have already migrated. > > > > Ya, I'm not sure what to do there. Requirements hasn't migrated because > other projects haven't migrated (we really need to be on both systems > unfortunately). Is it possible to half migrate? The SB migration script is set up to handle incremental importing, so this could be an option in theory. In practice, we've not really used it to pull in updates for the same project over a span of more than a week, though it does basically still happen for shared bugs in LP where not all projects with bugtasks get imported in the same timeframe. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From chris.friesen at windriver.com Mon Jun 25 16:16:38 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 25 Jun 2018 10:16:38 -0600 Subject: [openstack-dev] [cinder] making volume available without stopping VM In-Reply-To: References: Message-ID: <5B311566.60104@windriver.com> On 06/23/2018 08:38 AM, Volodymyr Litovka wrote: > Dear friends, > > I did some tests with making volume available without stopping VM. 
I'm using > CEPH and these steps produce the following results: > > 1) openstack volume set --state available [UUID] > - nothing changed inside both VM (volume is still connected) and CEPH > 2) openstack volume set --size [new size] --state in-use [UUID] > - nothing changed inside VM (volume is still connected and has an old size) > - size of CEPH volume changed to the new value > 3) during these operations I was copying a lot of data from external source and > all md5 sums are the same on both VM and source > 4) changes on VM happens upon any kind of power-cycle (e.g. reboot (either soft > or hard): openstack server reboot [--hard] [VM uuid] ) > - note: NOT after 'reboot' from inside VM > > It seems, that all these manipilations with cinder just update internal > parameters of cinder/CEPH subsystems, without immediate effect for VMs. Is it > safe to use this mechanism in this particular environent (e.g. CEPH as backend)? There are a different set of instructions[1] which imply that the change should be done via the hypervisor, and that the guest will then see the changes immediately. Also, If you resize the backend in a way that bypasses nova, I think it will result in the placement data being wrong. (At least temporarily.) Chris [1] https://wiki.skytech.dk/index.php/Ceph_-_howto,_rbd,_lvm,_cluster#Online_resizing_of_KVM_images_.28rbd.29 From openstack at nemebean.com Mon Jun 25 16:50:27 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 25 Jun 2018 11:50:27 -0500 Subject: [openstack-dev] [ironic] SOFT_REBOOT powers off/on the server In-Reply-To: References: Message-ID: <08f5ef58-127c-0619-14a5-d3c284f08d09@nemebean.com> A soft reboot should allow the server to shut down gracefully. A reset wouldn't do that, at least as I understand it. A reset would be more appropriate for a hard reboot, although I see Dmitry had a couple of other concerns with implementing it as a reset on the review. -Ben On 06/25/2018 05:31 AM, Lenny Berkhovsky wrote: > Hi, > > Is there a reason for powering off and on[1] instead of resetting server > during SOFT_REBOOT? > > I have a patchset[2] that resets the server instead of power cycling it. > > [1] > https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/ipmitool.py#L820 > >   elif power_state == states.SOFT_REBOOT: > >             _soft_power_off(task, driver_info, timeout=timeout) > >             driver_utils.ensure_next_boot_device(task, driver_info) > >             _power_on(task, driver_info, timeout=timeout) > > [2] https://review.openstack.org/#/c/577748/ > > Lenny Verkhovsky > > ( aka lennyb ) > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doug at doughellmann.com Mon Jun 25 18:00:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 25 Jun 2018 14:00:09 -0400 Subject: [openstack-dev] [Openstack-sigs] [PTG] Updates! 
In-Reply-To: <20180625160110.2l3d2t4ujcvrwj7b@yuggoth.org> References: <1529933751-sup-1272@lrrr.local> <02a9e157-fd1c-d951-bfc4-471526984fb1@openstack.org> <1529940299-sup-4219@lrrr.local> <20180625160110.2l3d2t4ujcvrwj7b@yuggoth.org> Message-ID: <1529949553-sup-1225@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-06-25 16:01:11 +0000: > On 2018-06-25 11:26:29 -0400 (-0400), Doug Hellmann wrote: > > Excerpts from Thierry Carrez's message of 2018-06-25 17:15:44 +0200: > > > Doug Hellmann wrote: > > > > Are we planning to have space for goal teams to answer > > > > implementation questions? > > > > > > At previous PTGs the "goal rooms" ended up not being used (or very very > > > poorly-attended), so our current plan was to not allocate specific > > > space, but leverage the "Ask me anything" project help room to answer > > > those questions as well. It shall be a large room, so plenty of space to > > > do that... and probably nicer compared to waiting in a smaller, > > > dedicated but mostly empty room. > > > > > > That said, if we have people signed up to run a specific room, and > > > people interested in joining that, I'm happy allocating space on Monday > > > or Tuesday for that... > > > > > > Otherwise there is always the option to schedule in reservable space > > > using ptgbot on the fly :) > > > > > > > OK. Given the complexity of the zuul configuration changes for the > > python3 goal, I anticipated some questions. I will be prepared to > > talk to people about it regardless, and maybe we won't need a > > separate room. > > I think the up side to the current plan is that there are likely to > also be Zuulies in that same room answering Zuulish sorts of > questions anyway, so easier to get input on goal-specific topics > where there is such an overlap. OK, that makes a lot of sense. I didn't see anything on the list here that looked like an "ask me anything" room but maybe I missed it or this list was just project teams? Either way, as long as we have some space, it's fine. Doug From lars at redhat.com Mon Jun 25 18:06:42 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Mon, 25 Jun 2018 14:06:42 -0400 Subject: [openstack-dev] [tripleo] Referring to the --templates directory? Message-ID: <20180625180642.xcej5666d6fysqks@redhat.com> Is there a way to refer to the `--templates` directory when writing service templates? Existing service templates can use relative paths, as in: resources: ContainersCommon: type: ./containers-common.yaml But if I'm write a local service template (which I often do during testing/development), I would need to use the full path to the corresponding file: ContainersCommon: type: /usr/share/openstack-tripleo-heat-templates/docker/services/containers-common.yaml But that breaks if I use another template directory via the --templates option to the `openstack overcloud deploy` command. Is there a way to refer to "the current templates directory"? -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From ellstripes at gmail.com Mon Jun 25 18:21:19 2018 From: ellstripes at gmail.com (Ell Marquez) Date: Mon, 25 Jun 2018 13:21:19 -0500 Subject: [openstack-dev] OpenStack Mentorship Program Relaunch Message-ID: Hello all, We are happy to announce the relaunch of the OpenStack Mentoring program, and we are kicking off with some changes to the program that we hope will better serve the community. 
Previously, mentoring occurred through one-on-one partnering of a mentor and mentee; this new program will focus on providing mentorship through goal-focused cohorts of mentors. This change will allow mentoring responsibilities to be shared among each group's mentors. The initial cohorts will be:
- Get your first patch merged
- First CFP submission / Give your first talk
- Become COA certified / study for COA
- Deploy your first Cloud
If you are interested in joining as a mentor or mentee, please sign up at:
Mentor Signup: https://openstackfoundation.formstack.com/forms/mentoring_cohorts_mentors
Mentee Signup: https://openstackfoundation.formstack.com/forms/mentoring_cohorts_mentees
freenode irc room: #openstack-mentoring
Cheers, Ell Marquez and Jill Rouleau
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From bleiva.info at gmail.com Mon Jun 25 18:26:45 2018 From: bleiva.info at gmail.com (Braian Leiva) Date: Mon, 25 Jun 2018 15:26:45 -0300 Subject: [openstack-dev] [masakari] Masakarimonitors EndpointNotFound Issue Message-ID:
Hello everyone, I've configured masakari-api (controller node01) and masakari-processmonitor (node02, node03 compute nodes), but I have an issue when I try to test instance migration between nodes: the surviving node can't find the public endpoint for the instance-ha service.
Jun 25 18:23:50 node02 python2: 2018-06-25 18:23:50.380 10269 INFO masakarimonitors.hostmonitor.host_handler.handle_host [-] 'node01.xxxxxx.net' is 'online'.
Jun 25 18:23:50 node02 python2: 2018-06-25 18:23:50.380 10269 INFO masakarimonitors.hostmonitor.host_handler.handle_host [-] 'node03.xxxxxx.net' is 'online'.
Jun 25 18:23:50 node02 python2: 2018-06-25 18:23:50.381 10269 INFO masakarimonitors.ha.masakari [-] Send a notification. {'notification': {'hostname': 'node03.xxxxxx.net', 'type': 'COMPUTE_HOST', 'payload': {'host_status': 'NORMAL', 'event': 'STARTED', 'cluster_status': 'ONLINE'}, 'generated_time': datetime.datetime(2018, 6, 25, 17, 23, 50, 381284)}}
Jun 25 18:23:50 node02 python2: 2018-06-25 18:23:50.818 10269 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification. (public endpoint for instance-ha service not found): EndpointNotFound: public endpoint for instance-ha service not found
Jun 25 18:23:53 node02 python2: 2018-06-25 18:23:53.826 10269 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification. (public endpoint for instance-ha service not found): EndpointNotFound: public endpoint for instance-ha service not found
Jun 25 18:23:56 node02 python2: 2018-06-25 18:23:56.834 10269 WARNING masakarimonitors.ha.masakari [-] Retry sending a notification.
(public endpoint for instance-ha service not found): EndpointNotFound: public endpoint for instance-ha service not found Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari [-] Exception caught: public endpoint for instance-ha service not found: EndpointNotFound: public endpoint for instance-ha service not found Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari Traceback (most recent call last): Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/masakarimonitors/ha/masakari.py", line 68, in send_notification Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari payload=event['notification']['payload']) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/instance_ha/v1/_proxy.py", line 65, in create_notification Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return self._create(_notification.Notification, **attrs) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/proxy.py", line 197, in _create Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return res.create(self) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/resource.py", line 729, in create Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari json=request.body, headers=request.headers) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 310, in post Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return self.request(url, 'POST', **kwargs) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/_adapter.py", line 145, in request Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari **kwargs) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/task_manager.py", line 138, in submit_function Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return self.submit_task(task) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/task_manager.py", line 127, in submit_task Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return self.run_task(task=task) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/task_manager.py", line 159, in run_task Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return self._run_task(task) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/task_manager.py", line 179, in _run_task 
Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return task.wait() Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/task_manager.py", line 81, in wait Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari self._traceback) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/task_manager.py", line 89, in run Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari self.done(self.main()) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/openstack/task_manager.py", line 61, in main Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return self._main(*self.args, **self.kwargs) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 189, in request Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return self.session.request(url, method, **kwargs) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 695, in request Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari **endpoint_filter) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1077, in get_endpoint Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari return auth.get_endpoint(self, **kwargs) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 380, in get_endpoint Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari allow_version_hack=allow_version_hack, **kwargs) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 279, in get_endpoint_data Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari service_name=service_name) Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari File "/usr/lib/python2.7/site-packages/keystoneauth1/access/service_catalog.py", line 462, in endpoint_data_for Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari raise exceptions.EndpointNotFound(msg) *Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari EndpointNotFound: public endpoint for instance-ha service not found* *Jun 25 18:23:59 node02 python2: 2018-06-25 18:23:59.841 10269 ERROR masakarimonitors.ha.masakari* *The configuration /etc/masakarimonitors/masakarimonitors.conf is:* [DEFAULT] debug = false [api] region = RegionOne api_version = v1 # I've tried with public and I have the same error # api_interface = public api_interface = internal auth_url = http://192.168.0.254:35357 
project_name = service project_domain_id = default project_domain_name = Default username = masakari user_domain_id = default user_domain_name = Default password = masakari [callback] [cors] [healthcheck] [host] monitoring_interval = 5 api_retry_max = 3 api_retry_interval = 3 disable_ipmi_check = True stonith_wait = 15 corosync_multicast_interfaces = eno1 corosync_multicast_ports = 5405 [libvirt] connection_uri = qemu:///system [oslo_middleware] [process] *Endpoint list:* # openstack endpoint list | grep masakari | 0b18390c4b6f445486f663725a763dde | RegionOne | masakari | masakari | True | public | http://192.168.0.11:15868/v1/%(tenant_id)s | | 1d33d9aff3de420da285986da31626a8 | RegionOne | masakari | masakari | True | admin | http://192.168.0.11:15868/v1/%(tenant_id)s | | 503ebe5834074621a45db7c2da3703cc | RegionOne | masakari | masakari | True | internal | http://192.168.0.11:15868/v1/%(tenant_id)s | If someone can guide me, I will be grateful. Cheers! -- Braian F. Leiva -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Jun 25 18:42:48 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 25 Jun 2018 13:42:48 -0500 Subject: [openstack-dev] [cinder] making volume available without stopping VM In-Reply-To: <2b3b65dc-d284-2f78-3ab2-57ae0f9f5ecc@gmx.com> References: <2b3b65dc-d284-2f78-3ab2-57ae0f9f5ecc@gmx.com> Message-ID: <20180625184247.GA20692@sm-workstation> > > In fact, I'm ok with delayed resize (upon power-cycle), and it's not an > issue for me that VM don't detect changes immediately. What I want to > understand is that changes to Cinder (and, thus, underlying changes to CEPH) > are safe for VM while it's in active state. > No, this is not considered safe. You are forcing the volume state to be availabile when it is in fact not. Not sure if it's an option for you, but in the Pike release support was added to be able to extend attached volumes. There are several caveats with this feature though. I believe it only works with libvirt, and if I remember right, only newer versions of libvirt. You need to have notifications working for Nova to pick up that Cinder has extended the volume. You can get some details from the cinder spec: https://specs.openstack.org/openstack/cinder-specs/specs/pike/extend-attached-volume.html And the corresponding Nova spec: http://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/nova-support-attached-volume-extend.html You may also want to read through the mailing list thread if you want to get in to some of the nitty gritty details behind why certain design choices were made: http://lists.openstack.org/pipermail/openstack-dev/2017-April/115292.html From sean.mcginnis at gmx.com Mon Jun 25 18:48:01 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 25 Jun 2018 13:48:01 -0500 Subject: [openstack-dev] [Release-job-failures][release][horizon] Release of openstack/xstatic-angular-material failed In-Reply-To: References: <1529930278-sup-8192@lrrr.local> Message-ID: <20180625184800.GB20692@sm-workstation> > > Apart from the fix, is there any good way to detect this kind of errors in > individual project reviews? > I've added xstatic version checking to our release request validation. 
Once this merges, we should be able to automatically catch and error on these types of conditions: https://review.openstack.org/577828 From haleyb.dev at gmail.com Mon Jun 25 19:18:21 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Mon, 25 Jun 2018 15:18:21 -0400 Subject: [openstack-dev] [neutron] Bug deputy report Message-ID: Hi, I was Neutron bug deputy last week. Below is a short summary about reported bugs. Critical bugs ------------- None High bugs --------- * https://bugs.launchpad.net/neutron/+bug/1777908 - Ensure _get_changed_synthetic_fields() return updatable fields - breaks consumers - Boden proposed reverts * https://bugs.launchpad.net/neutron/+bug/1777922 - neutron is not dropping radvd privileges - Fix proposed, https://review.openstack.org/#/c/576923/ * https://bugs.launchpad.net/neutron/+bug/1777968 - Too many DBDeadlockError and IP address collision during port creating - Fix proposed, https://review.openstack.org/#/c/577739/ Medium bugs ----------- * https://bugs.launchpad.net/neutron/+bug/1778118 - missing transaction in driver_controller.py for l3 flavors - Fix proposed, https://review.openstack.org/#/c/577246/ Wishlist bugs ------------- * https://bugs.launchpad.net/neutron/+bug/1777746 - When use ‘neutron net-update’, we cannot change the 'vlan-transparent' dynamically - not a bug as per the API definition, asked if proposing extension - perhaps possible to implement in backward-compatible way Further triage required ----------------------- * https://bugs.launchpad.net/neutron/+bug/1777866 - QoS CLI – Warning in case when provided burst is lower than 80% BW - submitter wants CLI warning, not sure it's necessary as it is already mentioned in the admin guide - possible change in OSC could address * https://bugs.launchpad.net/neutron/+bug/1778407 - Quality of Service (QoS) in Neutron - Notes regarding burst (80% BW) - Closed as duplicate of 1777866 * https://bugs.launchpad.net/neutron/+bug/1777965 - Create port get quota related DBDeadLock * https://bugs.launchpad.net/neutron/+bug/1778207 - fwaas v2 add port into firewall group failed - most likely need DVR/HA check in affected code path - need to ping SridarK to take a look From bodenvmw at gmail.com Mon Jun 25 20:04:23 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Mon, 25 Jun 2018 14:04:23 -0600 Subject: [openstack-dev] [neutron] Networking projects not setup properly for Zuul v3 and local testing Message-ID: It appears a number of networking related projects aren't setup properly for Zuul v3 gate jobs, as well as for local testing when it comes to pulling in dependencies from source. Since this may not be a common concept, ask yourself: "should my project's master branch source be running with and tested against neutron's (or other projects) master branch source"? If the answer is yes, this may impact you. I started a brain-dump on etherpad [1] for this topic and best I can tell a number of networking projects are impacted. I would like to please ask networking related projects that are "staying current" to have a look. Thanks! [1] https://etherpad.openstack.org/p/neutron-sibling-setup From kennelson11 at gmail.com Mon Jun 25 20:36:32 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 25 Jun 2018 13:36:32 -0700 Subject: [openstack-dev] [Openstack-sigs] [PTG] Updates! 
In-Reply-To: <1529949553-sup-1225@lrrr.local> References: <1529933751-sup-1272@lrrr.local> <02a9e157-fd1c-d951-bfc4-471526984fb1@openstack.org> <1529940299-sup-4219@lrrr.local> <20180625160110.2l3d2t4ujcvrwj7b@yuggoth.org> <1529949553-sup-1225@lrrr.local> Message-ID: On Mon, Jun 25, 2018 at 11:00 AM Doug Hellmann wrote: > Excerpts from Jeremy Stanley's message of 2018-06-25 16:01:11 +0000: > > On 2018-06-25 11:26:29 -0400 (-0400), Doug Hellmann wrote: > > > Excerpts from Thierry Carrez's message of 2018-06-25 17:15:44 +0200: > > > > Doug Hellmann wrote: > > > > > Are we planning to have space for goal teams to answer > > > > > implementation questions? > > > > > > > > At previous PTGs the "goal rooms" ended up not being used (or very > very > > > > poorly-attended), so our current plan was to not allocate specific > > > > space, but leverage the "Ask me anything" project help room to > answer > > > > those questions as well. It shall be a large room, so plenty of > space to > > > > do that... and probably nicer compared to waiting in a smaller, > > > > dedicated but mostly empty room. > > > > > > > > That said, if we have people signed up to run a specific room, and > > > > people interested in joining that, I'm happy allocating space on > Monday > > > > or Tuesday for that... > > > > > > > > Otherwise there is always the option to schedule in reservable space > > > > using ptgbot on the fly :) > > > > > > > > > > OK. Given the complexity of the zuul configuration changes for the > > > python3 goal, I anticipated some questions. I will be prepared to > > > talk to people about it regardless, and maybe we won't need a > > > separate room. > > > > I think the up side to the current plan is that there are likely to > > also be Zuulies in that same room answering Zuulish sorts of > > questions anyway, so easier to get input on goal-specific topics > > where there is such an overlap. > > OK, that makes a lot of sense. I didn't see anything on the list > here that looked like an "ask me anything" room but maybe I missed > it or this list was just project teams? Either way, as long as we have > some space, it's fine. > > Yep, this thread was to announce the teams/groups/sigs/wgs list to help officialize who might need to attend and start booking travel. Hopefully things will be even more clear once we have an actual strawman schedule to send out. We will have the 'ask me anything' helproom and reservable space as well. -Kendall (diablo_rojo) > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Jun 25 21:17:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 25 Jun 2018 16:17:59 -0500 Subject: [openstack-dev] [nova] Need feedback on spec for handling down cells in the API In-Reply-To: References: Message-ID: <85b298dd-57d8-7ae1-8b35-121c78aacd3c@gmail.com> On 6/7/2018 9:02 AM, Matt Riedemann wrote: > We have a nova spec [1] which is at the point that it needs some API > user (and operator) feedback on what nova API should be doing when > listing servers and there are down cells (unable to reach the cell DB or > it times out). 
> > tl;dr: the spec proposes to return "shell" instances which have the > server uuid and created_at fields set, and maybe some other fields we > can set, but otherwise a bunch of fields in the server response would be > set to UNKNOWN sentinel values. This would be unversioned, and therefore > could wreak havoc on existing client side code that expects fields like > 'config_drive' and 'updated' to be of a certain format. > > There are alternatives listed in the spec so please read this over and > provide feedback since this is a pretty major UX change. > > Oh, and no pressure, but today is the spec freeze deadline for Rocky. > > [1] https://review.openstack.org/#/c/557369/ The options laid out right now are: 1. Without a new microversion, include 'shell' servers in the response when listing over down cells. These would have UNKNOWN values for the fields in the server object. gibi and I didn't like this because existing client code wouldn't know how to deal with these UNKNOWN shell instances - and not all of the server fields are simple strings, we have booleans, integers, dicts and lists, so what would those values be? 2. In a new microversion, return a new top-level parameter when listing servers which would include minimal details about servers that are in down cells (minimal like just the uuid). This was an alternative gibi and I had discussed because we didn't like the client-side impacts w/o a microversion or the full 'shell' servers in option 1. From an IRC conversation last week with mordred [1], dansmith and mordred don't care for the new top-level parameter since clients would have to merge that in to the full list of available servers. Plus, in the future, if we ever have some kind of caching mechanism in the API from which we can pull instance information if it's in a down cell, then the new top-level parameter becomes kind of pointless. 3. In a new microversion, include servers from down cells in the same top-level servers response parameter but for those in down cells, we'll just include minimal information (status=UNKNOWN and the uuid). Clients would opt-in to the new microversion when they know how to deal with what an instance in UNKNOWN status means. In the future, we could use a caching mechanism to fill in these details for instances in down cells. #3 is kind of a compromise on options 1 and 2, and I'm OK with it (barring any hairy details). In all cases, we won't include 'shell' servers in the response if the user is filtering (or paging?) because we can't be honest about the results and just have to treat the filters as if they don't apply to the instances in the down cell. If you have a server in a down cell, you can't delete it or really do anything with it because we literally can't pull the instance out of the cell database while the cell is down. You'd get a 500 or 503 in that case. Regardless of microversion, we plan on omitting instances from down cells when listing which is a backportable reliability bug fix [2] so we don't 500 the API when listing across 70 cells and 1 is down. 
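To make option 3 concrete, a listing that crosses a down cell could look
something like this (purely illustrative; the exact field set and the
microversion number are not final):

    {"servers": [
        {"id": "<uuid-1>", "name": "vm1", "status": "ACTIVE", "links": [...]},
        {"id": "<uuid-2>", "status": "UNKNOWN"}
    ]}

where the second entry is a server whose cell we couldn't reach, so the uuid
and the UNKNOWN status are all we can honestly report.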
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-06-20.log.html#t2018-06-20T16:52:27 [2] https://review.openstack.org/#/c/575734/ -- Thanks, Matt From mnaser at vexxhost.com Mon Jun 25 21:28:39 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 25 Jun 2018 17:28:39 -0400 Subject: [openstack-dev] [nova] Adding hostId to metadata Message-ID: Hi everyone: While working with the OpenStack infrastructure team, we noticed that we were having some intermittent issues where we wanted to identify a theory if all VMs with this issue were landing on the same hypervisor. However, there seems to be no way of directly accessing `hostId` from inside the virtual machine (such as using the metadata API). This is a very useful thing to expose over the metadata API as not only would it help for troubleshooting these types of scenarios however it would also help software that can manage anti-affinity simply by checking the API and taking scheduling decisions. I've proposed the following patch to add this[1], however, this is technically an API change, and the blueprints document specifies that "API changes always require a design discussion." Also, I believe that we're in a state where getting a spec would require an exception. However, this is a very trivial change. Also, according to the notes in the metadata file, it looks like there is one "bump" per OpenStack release[3] which means that this change can just be part of that release-wide version bump of the OpenStack API. Can we include this trivial patch in the upcoming Rocky release? Thanks, Mohammed [1]: https://review.openstack.org/577933 [2]: https://docs.openstack.org/nova/latest/contributor/blueprints.html [3]: http://git.openstack.org/cgit/openstack/nova/tree/nova/api/metadata/base.py#n60 From mikal at stillhq.com Mon Jun 25 22:15:57 2018 From: mikal at stillhq.com (Michael Still) Date: Tue, 26 Jun 2018 08:15:57 +1000 Subject: [openstack-dev] [nova] Adding hostId to metadata In-Reply-To: References: Message-ID: We only bump the version if something has changed IIRC. I think bumping when nothing has changed would create a burden for implementers of client software. So its not like you get a chance to sneak this in "for free". Does this information really need to be available in the host OS? Its trivial to look it up via our existing APIs outside the host, although possibly less trivial if the instance has already been deleted. Michael On Tue, Jun 26, 2018 at 7:30 AM Mohammed Naser wrote: > Hi everyone: > > While working with the OpenStack infrastructure team, we noticed that > we were having some intermittent issues where we wanted to identify a > theory if all VMs with this issue were landing on the same hypervisor. > > However, there seems to be no way of directly accessing `hostId` from > inside the virtual machine (such as using the metadata API). This is > a very useful thing to expose over the metadata API as not only would > it help for troubleshooting these types of scenarios however it would > also help software that can manage anti-affinity simply by checking > the API and taking scheduling decisions. > > I've proposed the following patch to add this[1], however, this is > technically an API change, and the blueprints document specifies that > "API changes always require a design discussion." > > Also, I believe that we're in a state where getting a spec would > require an exception. However, this is a very trivial change. 
Also, > according to the notes in the metadata file, it looks like there is > one "bump" per OpenStack release[3] which means that this change can > just be part of that release-wide version bump of the OpenStack API. > > Can we include this trivial patch in the upcoming Rocky release? > > Thanks, > Mohammed > > [1]: https://review.openstack.org/577933 > [2]: https://docs.openstack.org/nova/latest/contributor/blueprints.html > [3]: > http://git.openstack.org/cgit/openstack/nova/tree/nova/api/metadata/base.py#n60 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Did this email leave you hoping to cause me pain? Good news! Sponsor me in city2surf 2018 and I promise to suffer greatly. http://www.madebymikal.com/city2surf-2018/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Mon Jun 25 22:42:00 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 25 Jun 2018 17:42:00 -0500 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> <20180621031338.GK18927@thor.bakeyournoodle.com> Message-ID: <6ff62d7a-78b8-b9dd-d9bc-99e2f2d7cd4d@gmail.com> Keystone is hitting this, too [0]. I attempted the same solution that Tony posted, but no luck. I've even gone so far as removing every comment from the module to see if that helps narrow down the problem area, but sphinx still trips. The output from the error message isn't very descriptive either. Has anyone else had issues fixing this for python comments, not just docstrings? [0] https://bugs.launchpad.net/keystone/+bug/1778603 On 06/20/2018 11:52 PM, Takashi Yamamoto wrote: > On Thu, Jun 21, 2018 at 12:13 PM, Tony Breeds wrote: >> On Wed, Jun 20, 2018 at 08:54:56PM +0900, Takashi Yamamoto wrote: >> >>> do you have a plan to submit these changes on gerrit? >> I didn't but I have now: >> >> * https://review.openstack.org/577028 >> * https://review.openstack.org/577029 >> >> Feel free to edit/test as you like. > thank you! > >> Yours Tony. >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From tony at bakeyournoodle.com Tue Jun 26 00:27:29 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 26 Jun 2018 10:27:29 +1000 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <6ff62d7a-78b8-b9dd-d9bc-99e2f2d7cd4d@gmail.com> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> <20180621031338.GK18927@thor.bakeyournoodle.com> <6ff62d7a-78b8-b9dd-d9bc-99e2f2d7cd4d@gmail.com> Message-ID: <20180626002729.GB21570@thor.bakeyournoodle.com> On Mon, Jun 25, 2018 at 05:42:00PM -0500, Lance Bragstad wrote: > Keystone is hitting this, too [0]. I attempted the same solution that > Tony posted, but no luck. I've even gone so far as removing every > comment from the module to see if that helps narrow down the problem > area, but sphinx still trips. The output from the error message isn't > very descriptive either. Has anyone else had issues fixing this for > python comments, not just docstrings? > > [0] https://bugs.launchpad.net/keystone/+bug/1778603 I did a little digging for the keystone problem and it's due to a missing ':' in https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820 So the correct way to fix this is to correct that in oauthlib, get it released and use that. I hit additional problems in that enabling -W in oauthlib, to pevent this happening in the future, lead me down a rabbit hole I don't really have cycles to dig out of. Here's a dump of where I got to[1]. Clearly it mixes "fixes" with debugging but it isn't too hard to reproduce and someone that knows more Sphinx will be able to understand the errors better than I can. [1] http://paste.openstack.org/show/724271/ Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From feilong at catalyst.net.nz Tue Jun 26 02:45:48 2018 From: feilong at catalyst.net.nz (Fei Long Wang) Date: Tue, 26 Jun 2018 14:45:48 +1200 Subject: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC In-Reply-To: References: Message-ID: <1cdd9614-fef3-df33-f6c9-66d9d1764e5e@catalyst.net.nz> Hi Spyros, Thanks for posting the discussion output. I'm not sure I can follow the idea of simplifying CNI configuration. Though we have both calico and flannel for k8s, but if we put both of them into single one config script. The script could be very complex. That's why I think we should define some naming and logging rules/policies for those scripts for long term maintenance to make our life easier. Thoughts? On 25/06/18 19:20, Spyros Trigazis wrote: > Hello again, > > After Thursday's meeting I want to summarize what we discussed and add > some pointers. > > * Work on using the out-of-tree cloud provider and move to the new > model of defining it > https://storyboard.openstack.org/#!/story/1762743 > > https://review.openstack.org/#/c/577477/ > * Configure kubelet and kube-proxy on master nodes > This story of the master node label can be > extened https://storyboard.openstack.org/#!/story/2002618 > > or we can add a new one > * Simplify CNI configuration, we have calico and flannel. Ideally we > should a single config script for each > one. 
We could move flannel to the kubernetes hosted version that > uses kubernetes objects for storage. > (it is the recommended way by flannel and how it is done with kubeadm) > * magum support in gophercloud > https://github.com/gophercloud/gophercloud/issues/1003 > * *needs discussion *update version of heat templates (pike or > queens) This need its own tread > * Post deployment scripts for clusters, I have this since some time > for my but doing it in > heat is slightly (not a lot) complicated. Most magnum users favor  > the simpler solution > of passing a url of a manifest or script to the cluster (at least > let's add sha512sum). > * Simplify addition of custom labels/parameters. To avoid patcing > magnum, it would be > more ops friendly to have a generic field of custom parameters > > Not discussed in the last meeting but we should in the next ones: > > * Allow cluster scaling from different users in the same project > https://storyboard.openstack.org/#!/story/2002648 > > * Add the option to remove node from a resource group for swarm > clusters like > in kubernetes > https://storyboard.openstack.org/#!/story/2002677 > > > Let's follow these up in the coming meetings, Tuesday 1000UTC and > Thursday 1700UTC. > > You can always consult this page [1] for future meetings. > > Cheers, > Spyros > > [1] https://wiki.openstack.org/wiki/Meetings/Containers > > On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis > wrote: > > Hello list, > > We are going to have a second weekly meeting for magnum for 3 weeks > as a test to reach out to contributors in the Americas. > > You can join us tomorrow (or today for some?) at 1700UTC in > #openstack-containers . > > Cheers, > Spyros > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Jun 26 03:51:37 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 25 Jun 2018 22:51:37 -0500 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <20180626002729.GB21570@thor.bakeyournoodle.com> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> <20180621031338.GK18927@thor.bakeyournoodle.com> <6ff62d7a-78b8-b9dd-d9bc-99e2f2d7cd4d@gmail.com> <20180626002729.GB21570@thor.bakeyournoodle.com> Message-ID: <1b2f56b9-f2c6-31c4-9019-7d11458c900a@gmail.com> Thanks a bunch for digging into this, Tony. I'll follow up with the oauthlib maintainers and see if they'd be interested in these changes upstream. If so, I can chip away at it. For now we'll have to settle for not treating warnings as errors to unblock our documentation gate [0]. 
[0] https://review.openstack.org/#/c/577974/ On 06/25/2018 07:27 PM, Tony Breeds wrote: > On Mon, Jun 25, 2018 at 05:42:00PM -0500, Lance Bragstad wrote: >> Keystone is hitting this, too [0]. I attempted the same solution that >> Tony posted, but no luck. I've even gone so far as removing every >> comment from the module to see if that helps narrow down the problem >> area, but sphinx still trips. The output from the error message isn't >> very descriptive either. Has anyone else had issues fixing this for >> python comments, not just docstrings? >> >> [0] https://bugs.launchpad.net/keystone/+bug/1778603 > I did a little digging for the keystone problem and it's due to a > missing ':' in > https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820 > > So the correct way to fix this is to correct that in oauthlib, get it > released and use that. > > I hit additional problems in that enabling -W in oauthlib, to pevent > this happening in the future, lead me down a rabbit hole I don't really > have cycles to dig out of. > > Here's a dump of where I got to[1]. Clearly it mixes "fixes" with > debugging but it isn't too hard to reproduce and someone that knows more > Sphinx will be able to understand the errors better than I can. > > > [1] http://paste.openstack.org/show/724271/ > > Yours Tony. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From gmann at ghanshyammann.com Tue Jun 26 09:18:52 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Jun 2018 18:18:52 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins Message-ID: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> Hello Everyone, In Queens cycle, community goal to split the Tempest Plugin has been completed [1] and i think almost all the projects have separate repo for tempest plugin [2]. Which means each tempest plugins are being separated from their project release model. Few projects have started the independent release model for their plugins like kuryr-tempest-plugin, ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also planning as chatted with amotoki. There might be some changes in Tempest which might not work with older version of Tempest Plugins. For example, If I am testing any production cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i will be using Tempest and Aodh's , Congress's Tempest plugins. With Independent release model of each Tempest Plugins, there might be chance that the Aodh's or Congress's Tempest plugin versions are not compatible with latest/known Tempest versions. It will become hard to find the compatible tag/release of Tempest and Tempest Plugins or in some cases i might need to patch up the things. 
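For example, today if I want to pin a known-good combination I have to guess
something like (version number and plugin name below are only placeholders):

    pip install tempest==19.0.0
    pip install <project>-tempest-plugin==???

because nothing tells me which plugin tag goes together with which Tempest tag.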
During QA feedback sessions at Vancouver Summit, there was feedback to coordinating the release of all Tempest plugins and Tempest [4] (also amotoki talked to me on this as neutron-tempest-plugin is planning their first release). Idea is to release/tag all the Tempest plugins and Tempest together so that specific release/tag can be identified as compatible version of all the Plugins and Tempest for testing the complete stack. That way user can get to know what version of Tempest Plugins is compatible with what version of Tempest. For above use case, we need some coordinated release model among Tempest and all the Tempest Plugins. There should be single release of all Tempest Plugins with well defined tag whenever any Tempest release is happening. For Example, Tempest version 19.0.0 is to mark the "support of the Rocky release". When releasing the Tempest 19.0, we will release all the Tempest plugins also to tag the compatibility of plugins with Tempest for "support of the Rocky release". One way to make this coordinated release (just a initial thought): 1. Release Each Tempest Plugins whenever there is any major version release of Tempest (like marking the support of OpenStack release in Tempest, EOL of OpenStack release in Tempest) 1.1 Each plugin or Tempest can do their intermediate release of minor version change which are in backward compatible way. 1.2 This coordinated Release can be started from latest Tempest Version for simple reading. Like if we start this coordinated release from Tempest version 19.0.0 then, each plugins will be released as 19.0.0 and so on. Giving the above background and use case of this coordinated release, A) I would like to ask each plugins owner if you are agree on this coordinated release? If no, please give more feedback or issue we can face due to this coordinated release. If we get the agreement from all Plugins then, B) Release team or TC help to find the better model for this use case or improvement in above model. C) Once we define the release model, find out the team owning that release model (there are more than 40 Tempest plugins currently) . NOTE: Till we decide the best solution for this use case, each plugins can do/keep doing their plugin release as per independent release model. [1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html [2] https://docs.openstack.org/tempest/latest/plugin-registry.html [3] https://github.com/openstack/kuryr-tempest-plugin/releases https://github.com/openstack/ironic-tempest-plugin/releases [4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html -gmann From ltoscano at redhat.com Tue Jun 26 09:28:03 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 26 Jun 2018 11:28:03 +0200 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> Message-ID: <1725337.bCXafnShJA@whitebase.usersys.redhat.com> On Tuesday, 26 June 2018 11:18:52 CEST Ghanshyam Mann wrote: > Hello Everyone, > > In Queens cycle, community goal to split the Tempest Plugin has been > completed [1] and i think almost all the projects have separate repo for > tempest plugin [2]. Which means each tempest plugins are being separated > from their project release model. 
Few projects have started the > independent release model for their plugins like kuryr-tempest-plugin, > ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also > planning as chatted with amotoki. > > There might be some changes in Tempest which might not work with older > version of Tempest Plugins. For example, If I am testing any production > cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i > will be using Tempest and Aodh's , Congress's Tempest plugins. With > Independent release model of each Tempest Plugins, there might be chance > that the Aodh's or Congress's Tempest plugin versions are not compatible > with latest/known Tempest versions. It will become hard to find the > compatible tag/release of Tempest and Tempest Plugins or in some cases i > might need to patch up the things. > > During QA feedback sessions at Vancouver Summit, there was feedback to > coordinating the release of all Tempest plugins and Tempest [4] (also > amotoki talked to me on this as neutron-tempest-plugin is planning their > first release). Idea is to release/tag all the Tempest plugins and Tempest > together so that specific release/tag can be identified as compatible > version of all the Plugins and Tempest for testing the complete stack. That > way user can get to know what version of Tempest Plugins is compatible with > what version of Tempest. > > For above use case, we need some coordinated release model among Tempest and > all the Tempest Plugins. There should be single release of all Tempest > Plugins with well defined tag whenever any Tempest release is happening. > For Example, Tempest version 19.0.0 is to mark the "support of the Rocky > release". When releasing the Tempest 19.0, we will release all the Tempest > plugins also to tag the compatibility of plugins with Tempest for "support > of the Rocky release". > > One way to make this coordinated release (just a initial thought): > 1. Release Each Tempest Plugins whenever there is any major version release > of Tempest (like marking the support of OpenStack release in Tempest, EOL > of OpenStack release in Tempest) 1.1 Each plugin or Tempest can do their > intermediate release of minor version change which are in backward > compatible way. 1.2 This coordinated Release can be started from latest > Tempest Version for simple reading. Like if we start this coordinated > release from Tempest version 19.0.0 then, each plugins will be released as > 19.0.0 and so on. > > Giving the above background and use case of this coordinated release, > A) I would like to ask each plugins owner if you are agree on this > coordinated release? If no, please give more feedback or issue we can face > due to this coordinated release. > The Sahara PTL may disagree with me, but I disagree with forcing each team to release in a coordinate model. I already take care of releasing sahara-tests, which contains both the tempest plugin and the scenario tests, when a new major version of OpenStack is released, keeping the compatibility with the relevant versions of Tempest. tl;dr I agree with having Tempest plugins follow the same lifecycle of Tempest, but please allow me to do so manually. 
-- Luigi From dtantsur at redhat.com Tue Jun 26 09:37:42 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 26 Jun 2018 11:37:42 +0200 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> Message-ID: On 06/26/2018 11:18 AM, Ghanshyam Mann wrote: > Hello Everyone, > > In Queens cycle, community goal to split the Tempest Plugin has been completed [1] and i think almost all the projects have separate repo for tempest plugin [2]. Which means each tempest plugins are being separated from their project release model. Few projects have started the independent release model for their plugins like kuryr-tempest-plugin, ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also planning as chatted with amotoki. > > There might be some changes in Tempest which might not work with older version of Tempest Plugins. For example, If I am testing any production cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i will be using Tempest and Aodh's , Congress's Tempest plugins. With Independent release model of each Tempest Plugins, there might be chance that the Aodh's or Congress's Tempest plugin versions are not compatible with latest/known Tempest versions. It will become hard to find the compatible tag/release of Tempest and Tempest Plugins or in some cases i might need to patch up the things. FWIW this is solved by stable branches for all other projects. If we cannot keep Tempest compatible with all supported branches, we should back off our decision to make it branchless. The very nature of being branchless implies being compatible with all supported releases. > > During QA feedback sessions at Vancouver Summit, there was feedback to coordinating the release of all Tempest plugins and Tempest [4] (also amotoki talked to me on this as neutron-tempest-plugin is planning their first release). Idea is to release/tag all the Tempest plugins and Tempest together so that specific release/tag can be identified as compatible version of all the Plugins and Tempest for testing the complete stack. That way user can get to know what version of Tempest Plugins is compatible with what version of Tempest. > > For above use case, we need some coordinated release model among Tempest and all the Tempest Plugins. There should be single release of all Tempest Plugins with well defined tag whenever any Tempest release is happening. For Example, Tempest version 19.0.0 is to mark the "support of the Rocky release". When releasing the Tempest 19.0, we will release all the Tempest plugins also to tag the compatibility of plugins with Tempest for "support of the Rocky release". > > One way to make this coordinated release (just a initial thought): > 1. Release Each Tempest Plugins whenever there is any major version release of Tempest (like marking the support of OpenStack release in Tempest, EOL of OpenStack release in Tempest) > 1.1 Each plugin or Tempest can do their intermediate release of minor version change which are in backward compatible way. > 1.2 This coordinated Release can be started from latest Tempest Version for simple reading. Like if we start this coordinated release from Tempest version 19.0.0 then, > each plugins will be released as 19.0.0 and so on. 
> > Giving the above background and use case of this coordinated release, > A) I would like to ask each plugins owner if you are agree on this coordinated release? If no, please give more feedback or issue we can face due to this coordinated release. Disclaimer: I'm not the PTL. Similarly to Luigi, I don't feel well about forcing a plugin release at the same time as a tempest release, UNLESS tempest folks are going to coordinate their releases with all how-many-do-we-have plugins. What I'd like to avoid is cutting a release in the middle of a patch chain or some refactoring just because tempest happened to have a release right now. > > If we get the agreement from all Plugins then, > B) Release team or TC help to find the better model for this use case or improvement in above model. > > C) Once we define the release model, find out the team owning that release model (there are more than 40 Tempest plugins currently) . > > NOTE: Till we decide the best solution for this use case, each plugins can do/keep doing their plugin release as per independent release model. > > [1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > [2] https://docs.openstack.org/tempest/latest/plugin-registry.html > [3] https://github.com/openstack/kuryr-tempest-plugin/releases > https://github.com/openstack/ironic-tempest-plugin/releases > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html > > > -gmann > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From doka.ua at gmx.com Tue Jun 26 09:42:06 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Tue, 26 Jun 2018 12:42:06 +0300 Subject: [openstack-dev] [cinder] making volume available without stopping VM In-Reply-To: <20180625184247.GA20692@sm-workstation> References: <2b3b65dc-d284-2f78-3ab2-57ae0f9f5ecc@gmx.com> <20180625184247.GA20692@sm-workstation> Message-ID: <6415ff16-8065-56cb-36c3-de40b1dfc287@gmx.com> Hi Sean, thanks for the responce, my questions and comments below. On 6/25/18 9:42 PM, Sean McGinnis wrote: > Not sure if it's an option for you, but in the Pike release support was added > to be able to extend attached volumes. There are several caveats with this > feature though. I believe it only works with libvirt, and if I remember right, > only newer versions of libvirt. You need to have notifications working for Nova > to pick up that Cinder has extended the volume. Pike release notes states the following: "It is now possible to signal and perform an online volume size change as of the 2.51 microversion using the volume-extended external event. Nova will perform the volume extension so the host can detect its new size. It will also resize the device in QEMU so instance can detect the new disk size without rebooting. Currently only the *libvirt compute driver with iSCSI and FC volumes supports the online volume size change*." And yes, it doesn't work for me since I'm using CEPH as backend. Queens release notes says nothing on changes. Feature matrix (https://docs.openstack.org/nova/queens/user/support-matrix.html) says it's supported on libvirt/x86 without any other further details. Does anybody know whether this feature implemented in Queens for other backends except iSCSI and FC? 
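(For reference, the path I would expect to use for an online extend of an
attached volume is the microversioned one -- 3.42, if I read the Cinder
microversion history correctly -- i.e. something like:

    $ cinder --os-volume-api-version 3.42 extend <volume-id> <new-size-in-GB>

so the question above is really whether that path can work for backends other
than iSCSI and FC.)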
The specs mentioned earlier talk about how to make the result of a resize
visible to the VM immediately, without restarting the VM, while I'm not asking
for this. My question is how to resize a volume and make it available after a
restart, see below.

>> In fact, I'm ok with delayed resize (upon power-cycle), and it's not an
>> issue for me that VM don't detect changes immediately. What I want to
>> understand is that changes to Cinder (and, thus, underlying changes to CEPH)
>> are safe for VM while it's in active state.
> No, this is not considered safe. You are forcing the volume state to be
> available when it is in fact not.

In the very general case, I agree with you. For example, I can imagine that
allocation of new blocks could fail if a volume is declared as available, but
in the particular case of CEPH:

- in short:
# the status of the volume in Cinder means nothing to CEPH
- in detail:
# while Cinder does provisioning and maintenance,
# kvm/libvirt work directly with CEPH (after getting this endpoint from <-Nova<-Cinder),
# and I see no changes in CEPH's status of the volume while it is available in Cinder:

* in-use:
$ rbd info volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
rbd image 'volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb':
    size 20480 MB in 5120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.2414a7572c9f46
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Mon Jun 25 10:47:03 2018
    parent: volumes/volume-42edf442-1dbb-4b6e-8593-1fbfbc821a1a@volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb.clone_snap
    overlap: 3072 MB

* available:
$ rbd info volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
rbd image 'volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb':
    size 20480 MB in 5120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.2414a7572c9f46
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Mon Jun 25 10:47:03 2018
    parent: volumes/volume-42edf442-1dbb-4b6e-8593-1fbfbc821a1a@volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb.clone_snap
    overlap: 3072 MB

# and, during copying of data, CEPH successfully allocates additional blocks to the volume:

* before copying (volume is already available in Cinder)
$ rbd du volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
NAME                                        PROVISIONED USED
volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb      20480M 2256M

* after copying (while volume is available in Cinder)
$ rbd du volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
NAME                                        PROVISIONED USED
volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb      20480M 2560M

# which is preserved after going back to in-use:
$ rbd du volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
NAME                                        PROVISIONED USED
volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb      20480M 2560M
$ rbd info volumes/volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb
rbd image 'volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb':
    size 20480 MB in 5120 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.2414a7572c9f46
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Mon Jun 25 10:47:03 2018
    parent: volumes/volume-42edf442-1dbb-4b6e-8593-1fbfbc821a1a@volume-5474ca4f-40ad-4151-9916-d9b4e9de14eb.clone_snap
    overlap: 3072 MB

Actually, the only problem with safety I see is a possible
administrative race - since volume is available, cloud administrator or any kind of automation can break dependencies. If this is fully controlled environment (nobody else can modify it in any way or reattach it to other instance or make anything else with the volume), which other kinds of problems can appear in this case? Thank you. > You can get some details from the cinder spec: > > https://specs.openstack.org/openstack/cinder-specs/specs/pike/extend-attached-volume.html > > And the corresponding Nova spec: > > http://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/nova-support-attached-volume-extend.html > > You may also want to read through the mailing list thread if you want to get in > to some of the nitty gritty details behind why certain design choices were > made: > > http://lists.openstack.org/pipermail/openstack-dev/2017-April/115292.html -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Jun 26 09:52:53 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Jun 2018 18:52:53 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1725337.bCXafnShJA@whitebase.usersys.redhat.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1725337.bCXafnShJA@whitebase.usersys.redhat.com> Message-ID: <1643b829e6b.120486c4e2995.1663207930000915034@ghanshyammann.com> ---- On Tue, 26 Jun 2018 18:28:03 +0900 Luigi Toscano wrote ---- > On Tuesday, 26 June 2018 11:18:52 CEST Ghanshyam Mann wrote: > > Hello Everyone, > > > > In Queens cycle, community goal to split the Tempest Plugin has been > > completed [1] and i think almost all the projects have separate repo for > > tempest plugin [2]. Which means each tempest plugins are being separated > > from their project release model. Few projects have started the > > independent release model for their plugins like kuryr-tempest-plugin, > > ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also > > planning as chatted with amotoki. > > > > There might be some changes in Tempest which might not work with older > > version of Tempest Plugins. For example, If I am testing any production > > cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i > > will be using Tempest and Aodh's , Congress's Tempest plugins. With > > Independent release model of each Tempest Plugins, there might be chance > > that the Aodh's or Congress's Tempest plugin versions are not compatible > > with latest/known Tempest versions. It will become hard to find the > > compatible tag/release of Tempest and Tempest Plugins or in some cases i > > might need to patch up the things. > > > > During QA feedback sessions at Vancouver Summit, there was feedback to > > coordinating the release of all Tempest plugins and Tempest [4] (also > > amotoki talked to me on this as neutron-tempest-plugin is planning their > > first release). Idea is to release/tag all the Tempest plugins and Tempest > > together so that specific release/tag can be identified as compatible > > version of all the Plugins and Tempest for testing the complete stack. That > > way user can get to know what version of Tempest Plugins is compatible with > > what version of Tempest. > > > > For above use case, we need some coordinated release model among Tempest and > > all the Tempest Plugins. 
There should be single release of all Tempest > > Plugins with well defined tag whenever any Tempest release is happening. > > For Example, Tempest version 19.0.0 is to mark the "support of the Rocky > > release". When releasing the Tempest 19.0, we will release all the Tempest > > plugins also to tag the compatibility of plugins with Tempest for "support > > of the Rocky release". > > > > One way to make this coordinated release (just a initial thought): > > 1. Release Each Tempest Plugins whenever there is any major version release > > of Tempest (like marking the support of OpenStack release in Tempest, EOL > > of OpenStack release in Tempest) 1.1 Each plugin or Tempest can do their > > intermediate release of minor version change which are in backward > > compatible way. 1.2 This coordinated Release can be started from latest > > Tempest Version for simple reading. Like if we start this coordinated > > release from Tempest version 19.0.0 then, each plugins will be released as > > 19.0.0 and so on. > > > > Giving the above background and use case of this coordinated release, > > A) I would like to ask each plugins owner if you are agree on this > > coordinated release? If no, please give more feedback or issue we can face > > due to this coordinated release. > > > > The Sahara PTL may disagree with me, but I disagree with forcing each team to > release in a coordinate model. > > I already take care of releasing sahara-tests, which contains both the tempest > plugin and the scenario tests, when a new major version of OpenStack is > released, keeping the compatibility with the relevant versions of Tempest. > > tl;dr I agree with having Tempest plugins follow the same lifecycle of > Tempest, but please allow me to do so manually. But with coordinated release, we can make sure we have particular tags which can be used in OpenStack Complete testing. With independent release model, there is no guarantee that all tempest plugins will be compatible with Tempest versions. -gmann > > > -- > Luigi > > > From gmann at ghanshyammann.com Tue Jun 26 09:57:46 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 26 Jun 2018 18:57:46 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> Message-ID: <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> ---- On Tue, 26 Jun 2018 18:37:42 +0900 Dmitry Tantsur wrote ---- > On 06/26/2018 11:18 AM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > In Queens cycle, community goal to split the Tempest Plugin has been completed [1] and i think almost all the projects have separate repo for tempest plugin [2]. Which means each tempest plugins are being separated from their project release model. Few projects have started the independent release model for their plugins like kuryr-tempest-plugin, ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also planning as chatted with amotoki. > > > > There might be some changes in Tempest which might not work with older version of Tempest Plugins. For example, If I am testing any production cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i will be using Tempest and Aodh's , Congress's Tempest plugins. With Independent release model of each Tempest Plugins, there might be chance that the Aodh's or Congress's Tempest plugin versions are not compatible with latest/known Tempest versions. 
It will become hard to find the compatible tag/release of Tempest and Tempest Plugins or in some cases i might need to patch up the things. > > FWIW this is solved by stable branches for all other projects. If we cannot keep > Tempest compatible with all supported branches, we should back off our decision > to make it branchless. The very nature of being branchless implies being > compatible with all supported releases. > > > > > During QA feedback sessions at Vancouver Summit, there was feedback to coordinating the release of all Tempest plugins and Tempest [4] (also amotoki talked to me on this as neutron-tempest-plugin is planning their first release). Idea is to release/tag all the Tempest plugins and Tempest together so that specific release/tag can be identified as compatible version of all the Plugins and Tempest for testing the complete stack. That way user can get to know what version of Tempest Plugins is compatible with what version of Tempest. > > > > For above use case, we need some coordinated release model among Tempest and all the Tempest Plugins. There should be single release of all Tempest Plugins with well defined tag whenever any Tempest release is happening. For Example, Tempest version 19.0.0 is to mark the "support of the Rocky release". When releasing the Tempest 19.0, we will release all the Tempest plugins also to tag the compatibility of plugins with Tempest for "support of the Rocky release". > > > > One way to make this coordinated release (just a initial thought): > > 1. Release Each Tempest Plugins whenever there is any major version release of Tempest (like marking the support of OpenStack release in Tempest, EOL of OpenStack release in Tempest) > > 1.1 Each plugin or Tempest can do their intermediate release of minor version change which are in backward compatible way. > > 1.2 This coordinated Release can be started from latest Tempest Version for simple reading. Like if we start this coordinated release from Tempest version 19.0.0 then, > > each plugins will be released as 19.0.0 and so on. > > > > Giving the above background and use case of this coordinated release, > > A) I would like to ask each plugins owner if you are agree on this coordinated release? If no, please give more feedback or issue we can face due to this coordinated release. > > Disclaimer: I'm not the PTL. > > Similarly to Luigi, I don't feel well about forcing a plugin release at the same > time as a tempest release, UNLESS tempest folks are going to coordinate their > releases with all how-many-do-we-have plugins. What I'd like to avoid is cutting > a release in the middle of a patch chain or some refactoring just because > tempest happened to have a release right now. I understand your point. But we can avoid that case if we only coordinate on major version bump only. as i mentioned in 1.2 point, Tempest and Tempest plugins can do their intermediate release anytime which are nothing but backward compatible release. In this proposed model, we can do a coordinated release for major version bump only which is happening only on OpenStack release and EOL of any stable branch. Or I am all open to have another release model which can be best suited for all plugins which can address the mentioned use case of coordinated release. -gmann > > > > > If we get the agreement from all Plugins then, > > B) Release team or TC help to find the better model for this use case or improvement in above model. 
> > > > C) Once we define the release model, find out the team owning that release model (there are more than 40 Tempest plugins currently) . > > > > NOTE: Till we decide the best solution for this use case, each plugins can do/keep doing their plugin release as per independent release model. > > > > [1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > > [2] https://docs.openstack.org/tempest/latest/plugin-registry.html > > [3] https://github.com/openstack/kuryr-tempest-plugin/releases > > https://github.com/openstack/ironic-tempest-plugin/releases > > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html > > > > > > -gmann > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From ltoscano at redhat.com Tue Jun 26 10:06:07 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 26 Jun 2018 12:06:07 +0200 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643b829e6b.120486c4e2995.1663207930000915034@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1725337.bCXafnShJA@whitebase.usersys.redhat.com> <1643b829e6b.120486c4e2995.1663207930000915034@ghanshyammann.com> Message-ID: <5022648.SslV2oUudv@whitebase.usersys.redhat.com> On Tuesday, 26 June 2018 11:52:53 CEST Ghanshyam Mann wrote: > ---- On Tue, 26 Jun 2018 18:28:03 +0900 Luigi Toscano > wrote ---- > > On Tuesday, 26 June 2018 11:18:52 CEST Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > In Queens cycle, community goal to split the Tempest Plugin has been > > > completed [1] and i think almost all the projects have separate repo > > > for > > > tempest plugin [2]. Which means each tempest plugins are being > > > separated > > > from their project release model. Few projects have started the > > > independent release model for their plugins like kuryr-tempest-plugin, > > > ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also > > > planning as chatted with amotoki. > > > > > > There might be some changes in Tempest which might not work with older > > > version of Tempest Plugins. For example, If I am testing any > > > production > > > cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i > > > will be using Tempest and Aodh's , Congress's Tempest plugins. With > > > Independent release model of each Tempest Plugins, there might be > > > chance > > > that the Aodh's or Congress's Tempest plugin versions are not > > > compatible > > > with latest/known Tempest versions. It will become hard to find the > > > compatible tag/release of Tempest and Tempest Plugins or in some cases > > > i > > > might need to patch up the things. 
> > > > > > During QA feedback sessions at Vancouver Summit, there was feedback to > > > coordinating the release of all Tempest plugins and Tempest [4] (also > > > amotoki talked to me on this as neutron-tempest-plugin is planning > > > their > > > first release). Idea is to release/tag all the Tempest plugins and > > > Tempest > > > together so that specific release/tag can be identified as compatible > > > version of all the Plugins and Tempest for testing the complete stack. > > > That > > > way user can get to know what version of Tempest Plugins is compatible > > > with > > > what version of Tempest. > > > > > > For above use case, we need some coordinated release model among > > > Tempest and all the Tempest Plugins. There should be single release of > > > all Tempest Plugins with well defined tag whenever any Tempest release > > > is happening. For Example, Tempest version 19.0.0 is to mark the > > > "support of the Rocky release". When releasing the Tempest 19.0, we > > > will release all the Tempest plugins also to tag the compatibility of > > > plugins with Tempest for "support of the Rocky release". > > > > > > One way to make this coordinated release (just a initial thought): > > > 1. Release Each Tempest Plugins whenever there is any major version > > > release > > > of Tempest (like marking the support of OpenStack release in Tempest, > > > EOL > > > of OpenStack release in Tempest) 1.1 Each plugin or Tempest can do > > > their > > > intermediate release of minor version change which are in backward > > > compatible way. 1.2 This coordinated Release can be started from latest > > > Tempest Version for simple reading. Like if we start this coordinated > > > release from Tempest version 19.0.0 then, each plugins will be released > > > as > > > 19.0.0 and so on. > > > > > > Giving the above background and use case of this coordinated release, > > > A) I would like to ask each plugins owner if you are agree on this > > > coordinated release? If no, please give more feedback or issue we can > > > face > > > due to this coordinated release. > > > > The Sahara PTL may disagree with me, but I disagree with forcing each > > team to release in a coordinate model. > > > > I already take care of releasing sahara-tests, which contains both the > > tempest plugin and the scenario tests, when a new major version of > > OpenStack is released, keeping the compatibility with the relevant > > versions of Tempest. > > > > tl;dr I agree with having Tempest plugins follow the same lifecycle of > > Tempest, but please allow me to do so manually. > > But with coordinated release, we can make sure we have particular tags > which can be used in OpenStack Complete testing. With independent release > model, there is no guarantee that all tempest plugins will be compatible > with Tempest versions. With the independent release model (in general, not just for Tempest plugins) it's up to the program maintainer to ensure that things are compatible. Let me make sure that: - I agree that Tempest plugins should follow the same lifecycle than Tempest. It's the policy that I applied for the Tempest plugin part of sahara-tests since Kilo. - most of plugins maintainer don't care, can't care or shouldn't care about the release process: then coordinated release is fine (as it happen also for the general usage of coordinated release vs independent relase in OpenStack) - I still prefer to manage this myself, because the repository does not contain only the Tempest plugin. 
I may end up with the need of another release at a different time, because other things are broken. And no, I'm not going to split the repository, because the part which is not a Tempest plugin still uses tempest.lib. -- Luigi From ramishra at redhat.com Tue Jun 26 10:09:40 2018 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 26 Jun 2018 15:39:40 +0530 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> Message-ID: On Tue, Jun 26, 2018 at 2:48 PM, Ghanshyam Mann wrote: > Hello Everyone, > > In Queens cycle, community goal to split the Tempest Plugin has been > completed [1] and i think almost all the projects have separate repo for > tempest plugin [2]. Which means each tempest plugins are being separated > from their project release model. Few projects have started the > independent release model for their plugins like kuryr-tempest-plugin, > ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also > planning as chatted with amotoki. > > There might be some changes in Tempest which might not work with older > version of Tempest Plugins. I don't think that's a good premise. Isn't tempest branchless and by definition should be backward compatible with service releases? If there are changes in the plugin interface in tempest, I would also expect those to be backward compatible too. Likewise plugins should be backward compatible with their respective projects, so any kind of release model would work. Else, I think the whole branchless concept is of very little use. For example, If I am testing any production cloud which has Nova, Neutron, > Cinder, Keystone , Aodh, Congress etc i will be using Tempest and Aodh's , > Congress's Tempest plugins. With Independent release model of each Tempest > Plugins, there might be chance that the Aodh's or Congress's Tempest plugin > versions are not compatible with latest/known Tempest versions. It will > become hard to find the compatible tag/release of Tempest and Tempest > Plugins or in some cases i might need to patch up the things. > > During QA feedback sessions at Vancouver Summit, there was feedback to > coordinating the release of all Tempest plugins and Tempest [4] (also > amotoki talked to me on this as neutron-tempest-plugin is planning their > first release). Idea is to release/tag all the Tempest plugins and Tempest > together so that specific release/tag can be identified as compatible > version of all the Plugins and Tempest for testing the complete stack. That > way user can get to know what version of Tempest Plugins is compatible with > what version of Tempest. > > For above use case, we need some coordinated release model among Tempest > and all the Tempest Plugins. There should be single release of all Tempest > Plugins with well defined tag whenever any Tempest release is happening. > For Example, Tempest version 19.0.0 is to mark the "support of the Rocky > release". When releasing the Tempest 19.0, we will release all the Tempest > plugins also to tag the compatibility of plugins with Tempest for "support > of the Rocky release". > > One way to make this coordinated release (just a initial thought): > 1. 
Release Each Tempest Plugins whenever there is any major version > release of Tempest (like marking the support of OpenStack release in > Tempest, EOL of OpenStack release in Tempest) > 1.1 Each plugin or Tempest can do their intermediate release of minor > version change which are in backward compatible way. > 1.2 This coordinated Release can be started from latest Tempest > Version for simple reading. Like if we start this coordinated release from > Tempest version 19.0.0 then, > each plugins will be released as 19.0.0 and so on. > > Giving the above background and use case of this coordinated release, > A) I would like to ask each plugins owner if you are agree on this > coordinated release? If no, please give more feedback or issue we can face > due to this coordinated release. > > If we get the agreement from all Plugins then, > B) Release team or TC help to find the better model for this use case or > improvement in above model. > > C) Once we define the release model, find out the team owning that release > model (there are more than 40 Tempest plugins currently) . > > NOTE: Till we decide the best solution for this use case, each plugins can > do/keep doing their plugin release as per independent release model. > > [1] https://governance.openstack.org/tc/goals/queens/split-tempe > st-plugins.html > [2] https://docs.openstack.org/tempest/latest/plugin-registry.html > [3] https://github.com/openstack/kuryr-tempest-plugin/releases > https://github.com/openstack/ironic-tempest-plugin/releases > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-June > /131011.html > > > -gmann > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Tue Jun 26 10:12:33 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 26 Jun 2018 12:12:33 +0200 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> Message-ID: On 06/26/2018 11:57 AM, Ghanshyam Mann wrote: > > > > ---- On Tue, 26 Jun 2018 18:37:42 +0900 Dmitry Tantsur wrote ---- > > On 06/26/2018 11:18 AM, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > In Queens cycle, community goal to split the Tempest Plugin has been completed [1] and i think almost all the projects have separate repo for tempest plugin [2]. Which means each tempest plugins are being separated from their project release model. Few projects have started the independent release model for their plugins like kuryr-tempest-plugin, ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also planning as chatted with amotoki. > > > > > > There might be some changes in Tempest which might not work with older version of Tempest Plugins. For example, If I am testing any production cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i will be using Tempest and Aodh's , Congress's Tempest plugins. 
With Independent release model of each Tempest Plugins, there might be chance that the Aodh's or Congress's Tempest plugin versions are not compatible with latest/known Tempest versions. It will become hard to find the compatible tag/release of Tempest and Tempest Plugins or in some cases i might need to patch up the things. > > > > FWIW this is solved by stable branches for all other projects. If we cannot keep > > Tempest compatible with all supported branches, we should back off our decision > > to make it branchless. The very nature of being branchless implies being > > compatible with all supported releases. > > > > > > > > During QA feedback sessions at Vancouver Summit, there was feedback to coordinating the release of all Tempest plugins and Tempest [4] (also amotoki talked to me on this as neutron-tempest-plugin is planning their first release). Idea is to release/tag all the Tempest plugins and Tempest together so that specific release/tag can be identified as compatible version of all the Plugins and Tempest for testing the complete stack. That way user can get to know what version of Tempest Plugins is compatible with what version of Tempest. > > > > > > For above use case, we need some coordinated release model among Tempest and all the Tempest Plugins. There should be single release of all Tempest Plugins with well defined tag whenever any Tempest release is happening. For Example, Tempest version 19.0.0 is to mark the "support of the Rocky release". When releasing the Tempest 19.0, we will release all the Tempest plugins also to tag the compatibility of plugins with Tempest for "support of the Rocky release". > > > > > > One way to make this coordinated release (just a initial thought): > > > 1. Release Each Tempest Plugins whenever there is any major version release of Tempest (like marking the support of OpenStack release in Tempest, EOL of OpenStack release in Tempest) > > > 1.1 Each plugin or Tempest can do their intermediate release of minor version change which are in backward compatible way. > > > 1.2 This coordinated Release can be started from latest Tempest Version for simple reading. Like if we start this coordinated release from Tempest version 19.0.0 then, > > > each plugins will be released as 19.0.0 and so on. > > > > > > Giving the above background and use case of this coordinated release, > > > A) I would like to ask each plugins owner if you are agree on this coordinated release? If no, please give more feedback or issue we can face due to this coordinated release. > > > > Disclaimer: I'm not the PTL. > > > > Similarly to Luigi, I don't feel well about forcing a plugin release at the same > > time as a tempest release, UNLESS tempest folks are going to coordinate their > > releases with all how-many-do-we-have plugins. What I'd like to avoid is cutting > > a release in the middle of a patch chain or some refactoring just because > > tempest happened to have a release right now. > > I understand your point. But we can avoid that case if we only coordinate on major version bump only. as i mentioned in 1.2 point, Tempest and Tempest plugins can do their intermediate release anytime which are nothing but backward compatible release. In this proposed model, we can do a coordinated release for major version bump only which is happening only on OpenStack release and EOL of any stable branch. Even bigger concern: what if the plugin is actually not compatible yet? Say, you're releasing tempest 19.0. As the same point you're cutting ironic-tempest-plugin 19.0. 
Who guarantees that they're compatible? If we haven't had any patches for it in a month, it may well happen that it does not work. > > Or I am all open to have another release model which can be best suited for all plugins which can address the mentioned use case of coordinated release. My suggestion: tempest has to be compatible with all supported releases (of both services and plugins) OR be branched. > > -gmann > > > > > > > > If we get the agreement from all Plugins then, > > > B) Release team or TC help to find the better model for this use case or improvement in above model. > > > > > > C) Once we define the release model, find out the team owning that release model (there are more than 40 Tempest plugins currently) . > > > > > > NOTE: Till we decide the best solution for this use case, each plugins can do/keep doing their plugin release as per independent release model. > > > > > > [1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > > > [2] https://docs.openstack.org/tempest/latest/plugin-registry.html > > > [3] https://github.com/openstack/kuryr-tempest-plugin/releases > > > https://github.com/openstack/ironic-tempest-plugin/releases > > > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html > > > > > > > > > -gmann > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sileht at sileht.net Tue Jun 26 10:32:59 2018 From: sileht at sileht.net (Mehdi Abaakouk) Date: Tue, 26 Jun 2018 12:32:59 +0200 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> Message-ID: <20180626103258.vpk5462pjoujwqz5@sileht.net> Hi, I have never understood the branchless tempest thing. Making Tempest releases is great news for me. But about plugins... Tempest already provides an API for plugins. If you are going to break this API, why not use stable branches and a deprecation process like any other software? If you do that, plugins will be informed that Tempest will soon make a breaking change. They can update their plugin code and raise the minimal tempest version required to work. They can do that when they have time, and not because Tempest wants to release a version soon. Also, the stable branch/deprecation process is well known by the whole community. And this will also allow them to release a version when they want. So I support making releases of Tempest and Plugins, but do not support a coordinated release. 
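For context, the plugin API referred to here is Tempest's tempest.test_discover.plugins.TempestPlugin interface; a minimal plugin skeleton looks roughly like the following sketch (the names MyServiceTempestPlugin and my_service_tempest_plugin are made up purely for illustration, not taken from any real plugin):

    import os

    from tempest.test_discover import plugins


    class MyServiceTempestPlugin(plugins.TempestPlugin):
        # The plugin is wired up through a 'tempest.test_plugins' entry
        # point in the plugin package's setup.cfg.

        def load_tests(self):
            # Tell Tempest where this plugin's tests live.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            full_test_dir = os.path.join(
                base_path, 'my_service_tempest_plugin/tests')
            return full_test_dir, base_path

        def register_opts(self, conf):
            # Register plugin-specific config groups/options here.
            pass

        def get_opt_lists(self):
            # Return (group name, option list) pairs for config generation.
            return []

A plugin that starts depending on newer behaviour of that interface can raise the minimum tempest version in its own requirements.txt on its own schedule, which is exactly the deprecation-driven workflow described above.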
Regards, On Tue, Jun 26, 2018 at 06:18:52PM +0900, Ghanshyam Mann wrote: >Hello Everyone, > >In Queens cycle, community goal to split the Tempest Plugin has been completed [1] and i think almost all the projects have separate repo for tempest plugin [2]. Which means each tempest plugins are being separated from their project release model. Few projects have started the independent release model for their plugins like kuryr-tempest-plugin, ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also planning as chatted with amotoki. > >There might be some changes in Tempest which might not work with older version of Tempest Plugins. For example, If I am testing any production cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i will be using Tempest and Aodh's , Congress's Tempest plugins. With Independent release model of each Tempest Plugins, there might be chance that the Aodh's or Congress's Tempest plugin versions are not compatible with latest/known Tempest versions. It will become hard to find the compatible tag/release of Tempest and Tempest Plugins or in some cases i might need to patch up the things. > >During QA feedback sessions at Vancouver Summit, there was feedback to coordinating the release of all Tempest plugins and Tempest [4] (also amotoki talked to me on this as neutron-tempest-plugin is planning their first release). Idea is to release/tag all the Tempest plugins and Tempest together so that specific release/tag can be identified as compatible version of all the Plugins and Tempest for testing the complete stack. That way user can get to know what version of Tempest Plugins is compatible with what version of Tempest. > >For above use case, we need some coordinated release model among Tempest and all the Tempest Plugins. There should be single release of all Tempest Plugins with well defined tag whenever any Tempest release is happening. For Example, Tempest version 19.0.0 is to mark the "support of the Rocky release". When releasing the Tempest 19.0, we will release all the Tempest plugins also to tag the compatibility of plugins with Tempest for "support of the Rocky release". > >One way to make this coordinated release (just a initial thought): >1. Release Each Tempest Plugins whenever there is any major version release of Tempest (like marking the support of OpenStack release in Tempest, EOL of OpenStack release in Tempest) > 1.1 Each plugin or Tempest can do their intermediate release of minor version change which are in backward compatible way. > 1.2 This coordinated Release can be started from latest Tempest Version for simple reading. Like if we start this coordinated release from Tempest version 19.0.0 then, > each plugins will be released as 19.0.0 and so on. > >Giving the above background and use case of this coordinated release, >A) I would like to ask each plugins owner if you are agree on this coordinated release? If no, please give more feedback or issue we can face due to this coordinated release. >If we get the agreement from all Plugins then, >B) Release team or TC help to find the better model for this use case or improvement in above model. > >C) Once we define the release model, find out the team owning that release model (there are more than 40 Tempest plugins currently) . > >NOTE: Till we decide the best solution for this use case, each plugins can do/keep doing their plugin release as per independent release model. 
> >[1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html >[2] https://docs.openstack.org/tempest/latest/plugin-registry.html >[3] https://github.com/openstack/kuryr-tempest-plugin/releases > https://github.com/openstack/ironic-tempest-plugin/releases >[4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html > > >-gmann > > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mehdi Abaakouk mail: sileht at sileht.net irc: sileht -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 898 bytes Desc: not available URL: From thierry at openstack.org Tue Jun 26 12:08:19 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 26 Jun 2018 14:08:19 +0200 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> Message-ID: <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> Dmitry Tantsur wrote: > [...] > My suggestion: tempest has to be compatible with all supported releases > (of both services and plugins) OR be branched. > [...] I tend to agree with Dmitry... We have a model for things that need release alignment, and that's the cycle-bound series. The reason tempest is branchless was because there was no compatibility issue. If the split of tempest plugins introduces a potential incompatibility, then I would prefer aligning tempest to the existing model rather than introduce a parallel tempest-specific cycle just so that tempest can stay release-independent... I seem to remember there were drawbacks in branching tempest, though... Can someone with functioning memory brain cells summarize them again ? -- Thierry Carrez (ttx) From andrea.frittoli at gmail.com Tue Jun 26 12:35:11 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Tue, 26 Jun 2018 13:35:11 +0100 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> Message-ID: On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > Dmitry Tantsur wrote: > > [...] > > My suggestion: tempest has to be compatible with all supported releases > > (of both services and plugins) OR be branched. > > [...] > I tend to agree with Dmitry... We have a model for things that need > release alignment, and that's the cycle-bound series. The reason tempest > is branchless was because there was no compatibility issue. If the split > of tempest plugins introduces a potential incompatibility, then I would > prefer aligning tempest to the existing model rather than introduce a > parallel tempest-specific cycle just so that tempest can stay > release-independent... > > I seem to remember there were drawbacks in branching tempest, though... > Can someone with functioning memory brain cells summarize them again ? 
> Branchless Tempest enforces api stability across branches. For the same reason tempest plug ins should be branchless, which is one of the reasons that having them in the same repo as a service was an issue: services are branched but api/integration tests should be branchless. Andrea > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Jun 26 12:41:30 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 26 Jun 2018 13:41:30 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-26 Message-ID: HTML: https://anticdent.org/tc-report-18-26.html All the bits and pieces of OpenStack are interconnected and interdependent across the many groupings of technology and people. When we plan or make changes, wiggling something _here_ has consequences over _there_. Some intended, some unintended. This is such commonly accepted wisdom that to say it risks being a cliche but acting accordingly remains hard. This [morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T09:09:57) Thierry and I had a useful conversation about the [Tech Vision 2018 etherpad](https://etherpad.openstack.org/p/tech-vision-2018). One of the issues there is agreeing on what we're even talking about. How can we have a vision for a "cloud" if we don't agree what that is? There's hope that clarifying the vision will help unify and direct energy, but as the discussion and the etherpad show, there's work to do. The lack of clarity on the vision is one of the reasons why Adjutant's [application to be official](https://review.openstack.org/#/c/553643/) still has [no clear outcome](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-19.log.html#t2018-06-19T18:59:43). Meanwhile, to continue [last week's theme](/tc-report-18-25.html), the TC's role as listener, mediator, and influencer lacks definition. Zane wrote up a blog post explaining the various ways in which the OpenStack Foundation is [expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion). But this raises [questions](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-20.log.html#t2018-06-20T15:41:41) about what, if any, role the TC has in that expansion. It appears that the board has decided to not to do a [joint leadership meeting](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T16:32:17) at the PTG, which means discussions about such things will need to happen in other media, or be delayed until the next summit in Berlin. To make up for the gap, the TC is [planning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T16:54:43) to hold [a gathering](http://lists.openstack.org/pipermail/openstack-tc/2018-June/001510.html) to work on some of the much needed big-picture and shared-understanding building. While that shared understanding is critical, we have to be sure that it incorporates what we can hear from people who are not long-term members of the community. 
In a long discussion asking if [our tooling makes things harder for new contributors](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-21.log.html#t2018-06-21T15:21:24) several of us tried to make it clear that we have an incomplete understanding about the barriers people experience, that we often assume rather than verify, and that sometimes our interest in and enthusiasm for making incremental progress (because if iterating in code is good and just, perhaps it is in social groups too?) can mean that we avoid the deeper analysis required for paradigm shifts. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Tue Jun 26 12:53:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 26 Jun 2018 08:53:21 -0400 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> Message-ID: <1530017472-sup-6339@lrrr.local> Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > Dmitry Tantsur wrote: > > > [...] > > > My suggestion: tempest has to be compatible with all supported releases > > > (of both services and plugins) OR be branched. > > > [...] > > I tend to agree with Dmitry... We have a model for things that need > > release alignment, and that's the cycle-bound series. The reason tempest > > is branchless was because there was no compatibility issue. If the split > > of tempest plugins introduces a potential incompatibility, then I would > > prefer aligning tempest to the existing model rather than introduce a > > parallel tempest-specific cycle just so that tempest can stay > > release-independent... > > > > I seem to remember there were drawbacks in branching tempest, though... > > Can someone with functioning memory brain cells summarize them again ? > > > > > Branchless Tempest enforces api stability across branches. I'm sorry, but I'm having a hard time taking this statement seriously when the current source of tension is that the Tempest API itself is breaking for its plugins. Maybe rather than talking about how to release compatible things together, we should go back and talk about why Tempest's API is changing in a way that can't be made backwards-compatible. Can you give some more detail about that? Doug From doug at doughellmann.com Tue Jun 26 13:03:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 26 Jun 2018 09:03:40 -0400 Subject: [openstack-dev] [requirements][stable][docs] updating openstackdocstheme in stable branches Message-ID: <1530017858-sup-4432@lrrr.local> Requirements team, At some point in the next few months we're going to want to raise the constraint on openstackdocstheme in all of the old branches so we can take advantage of a new feature for showing the supported status of each version of a project. That feature isn't implemented yet, but I thought it would be good to discuss in advance the need to update the dependency and how to do it. The theme is released under an independent release model and does not currently have stable branches. It depends on pbr and dulwich, both of which should already be in the requirements and constraints lists (dulwich is a dependency of reno). 
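To make the mechanics concrete, the bump would amount to a one-line change to upper-constraints.txt in the openstack/requirements stable branches, along these lines (the version numbers below are placeholders, not a proposal for specific versions):

    # upper-constraints.txt on e.g. stable/queens
    -openstackdocstheme===1.18.1
    +openstackdocstheme===1.20.0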
I think that means the simplest thing to do would be to just update the constraint for the theme in the stable branches. Does that seem right? If we can make that happen before we start the zuul configuration porting work that we're going to do as part of the python3-first goal, then we can take advantage of those patches to trigger doc rebuilds in all of the projects. Doug From jaypipes at gmail.com Tue Jun 26 13:12:37 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 26 Jun 2018 09:12:37 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: References: Message-ID: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> On 06/26/2018 08:41 AM, Chris Dent wrote: > Meanwhile, to continue [last week's theme](/tc-report-18-25.html), > the TC's role as listener, mediator, and influencer lacks > definition. > > Zane wrote up a blog post explaining the various ways in which the > OpenStack Foundation is > [expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion). One has to wonder with 4 "focus areas" for the OpenStack Foundation [1] whether there is any actual expectation that there will be any focus at all any more. Are CI/CD and secure containers important? [2] Yes, absolutely. Is (one of) the problem(s) with our community that we have too small of a scope/footprint? No. Not in the slightest. IMHO, what we need is focus. And having 4 different focus areas doesn't help focus things. I keep waiting for people to say "no, that isn't part of our scope". But all I see is people saying "yes, we will expand our scope to these new sets of things (otherwise *gasp* the Linux Foundation will gobble up all the hype)". Just my two cents and sorry for being opinionated, -jay [1] https://www.openstack.org/foundation/strategic-focus-areas/ [2] I don't include "edge" in my list of things that are important considering nobody even knows what "edge" is yet. I fail to see how people can possibly "focus" on something that isn't defined. From balazs.gibizer at ericsson.com Tue Jun 26 13:17:04 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 26 Jun 2018 15:17:04 +0200 Subject: [openstack-dev] [nova]Notification update week 26 Message-ID: <1530019024.16678.2@smtp.office365.com> Hi, Here is the latest notification subteam update. Bugs ---- [Undecided] "IndexError: list index out of range" in ExceptionPayload.from_exception during resize failure https://bugs.launchpad.net/nova/+bug/1777540 I failed to reproduce it and, based on the newly provided logs in the parent bug https://bugs.launchpad.net/nova/+bug/1777157, this happens in an environment that runs heavily forked nova code. So I marked the bug invalid. [Medium] Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields https://bugs.launchpad.net/nova/+bug/1739325 This bug is still open and reportedly visible in multiple independent environments but I failed to find the root cause. So I'm wondering if we can implement a nova-manage heal-instance-flavor command for these environments. Features -------- Sending full traceback in versioned notifications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications has been implemented. \o/ Introduce Pending VM state ~~~~~~~~~~~~~~~~~~~~~~~~~~ The spec https://review.openstack.org/#/c/554212 still does not exactly define what will be in the select_destination notification payload, and it seems it is deferred to Stein. 
I've even gone so far as removing every > >> comment from the module to see if that helps narrow down the problem > >> area, but sphinx still trips. The output from the error message isn't > >> very descriptive either. Has anyone else had issues fixing this for > >> python comments, not just docstrings? > >> > >> [0] https://bugs.launchpad.net/keystone/+bug/1778603 > > I did a little digging for the keystone problem and it's due to a > > missing ':' in > > https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820 > > > > So the correct way to fix this is to correct that in oauthlib, get it > > released and use that. > > > > I hit additional problems in that enabling -W in oauthlib, to pevent > > this happening in the future, lead me down a rabbit hole I don't really > > have cycles to dig out of. > > > > Here's a dump of where I got to[1]. Clearly it mixes "fixes" with > > debugging but it isn't too hard to reproduce and someone that knows more > > Sphinx will be able to understand the errors better than I can. > > > > > > [1] http://paste.openstack.org/show/724271/ > > > > Yours Tony. > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From balazs.gibizer at ericsson.com Tue Jun 26 13:17:04 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 26 Jun 2018 15:17:04 +0200 Subject: [openstack-dev] [nova]Notification update week 26 Message-ID: <1530019024.16678.2@smtp.office365.com> Hi, Here is the latest notification subteam update. Bugs ---- [Undecided] "IndexError: list index out of range" in ExceptionPayload.from_exception during resize failure https://bugs.launchpad.net/nova/+bug/1777540 I failed to reproduce and based on the newly provided logs in the parent bug https://bugs.launchpad.net/nova/+bug/1777157 this happens in an environment that runs heavily forked nova code. So I marked the bug inva lid. [Medium] Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields https://bugs.launchpad.net/nova/+bug/1739325 This bug is still open and reportedly visible in multiple independent environment but I failed to find the root cause. So I'm wondering if we can implement a nova-manage heal-instance-flavor command for these environments. Features -------- Sending full traceback in versioned notifications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications has been implemented. \o/ Introduce Pending VM state ~~~~~~~~~~~~~~~~~~~~~~~~~~ The spec https://review.openstack.org/#/c/554212 still not exactly define what will be in the select_destination notification payload and seems it is deferred to Stein. 
Add the user id and project id of the user initiated the instance action to the notification -------------------------------------------------------------------------------------------- https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications Work progressing in https://review.openstack.org/#/c/536243 Introduce instance.lock and instance.unlock notifications --------------------------------------------------------- https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances has been implemented \o/ Weekly meeting -------------- No meeting this week. The next meeting is planned to be held on 3rd of June on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180703T170000 Cheers, gibi From mtreinish at kortar.org Tue Jun 26 13:52:09 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 26 Jun 2018 09:52:09 -0400 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1530017472-sup-6339@lrrr.local> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> Message-ID: <20180626135209.GA15436@zeong> On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > > > Dmitry Tantsur wrote: > > > > [...] > > > > My suggestion: tempest has to be compatible with all supported releases > > > > (of both services and plugins) OR be branched. > > > > [...] > > > I tend to agree with Dmitry... We have a model for things that need > > > release alignment, and that's the cycle-bound series. The reason tempest > > > is branchless was because there was no compatibility issue. If the split > > > of tempest plugins introduces a potential incompatibility, then I would > > > prefer aligning tempest to the existing model rather than introduce a > > > parallel tempest-specific cycle just so that tempest can stay > > > release-independent... > > > > > > I seem to remember there were drawbacks in branching tempest, though... > > > Can someone with functioning memory brain cells summarize them again ? > > > > > > > > > Branchless Tempest enforces api stability across branches. > > I'm sorry, but I'm having a hard time taking this statement seriously > when the current source of tension is that the Tempest API itself > is breaking for its plugins. > > Maybe rather than talking about how to release compatible things > together, we should go back and talk about why Tempest's API is changing > in a way that can't be made backwards-compatible. Can you give some more > detail about that? > Well it's not, if it did that would violate all the stability guarantees provided by Tempest's library and plugin interface. I've not ever heard of these kind of backwards incompatibilities in those interfaces and we go to all effort to make sure we don't break them. Where did the idea that backwards incompatible changes where being introduced come from? That being said things are definitely getting confused here, all andreaf was talking about the branchless nature is ensuring we run the same tests against service REST APIs, making sure we have API stability between releases in the services when we say we do. 
(to answer ttx's question) As for this whole thread I don't understand any of the points being brought up in the original post or any of the follow ons, things seem to have been confused from the start. The ask from users at the summit was simple. When a new OpenStack release is pushed we push a tempest release to mark that (the next one will be 19.0.0 to mark Rocky). Users were complaining that many plugins don't have a corresponding version to mark support for a new release. So when trying to run against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for various services at different sha1s which have to be manually looked up based on dates. All users wanted at the summit was a tag for plugins like tempest does with the first number in: https://docs.openstack.org/tempest/latest/overview.html#release-versioning which didn't seem like a bad idea to me. I'm not sure the best mechanism to accomplish this, because I agree with much of what plugin maintainers were saying on the thread about wanting to control their own releases. But the desire to make sure users have a tag they can pull for the addition or removal of a supported release makes sense as something a plugin should do. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From yamamoto at midokura.com Tue Jun 26 13:57:41 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Tue, 26 Jun 2018 22:57:41 +0900 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <1530018806-sup-7849@lrrr.local> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> <20180621031338.GK18927@thor.bakeyournoodle.com> <6ff62d7a-78b8-b9dd-d9bc-99e2f2d7cd4d@gmail.com> <20180626002729.GB21570@thor.bakeyournoodle.com> <1b2f56b9-f2c6-31c4-9019-7d11458c900a@gmail.com> <1530018806-sup-7849@lrrr.local> Message-ID: On Tue, Jun 26, 2018 at 10:13 PM, Doug Hellmann wrote: > Excerpts from Lance Bragstad's message of 2018-06-25 22:51:37 -0500: >> Thanks a bunch for digging into this, Tony. I'll follow up with the >> oauthlib maintainers and see if they'd be interested in these changes >> upstream. If so, I can chip away at it. For now we'll have to settle for >> not treating warnings as errors to unblock our documentation gate [0]. >> >> [0] https://review.openstack.org/#/c/577974/ > > How are docstrings from a third-party library making their way into the > keystone docs and breaking the build? in the same way that docstrings from os-vif affect networking-midonet docs. i.e. via class inheritance > > Doug > >> >> On 06/25/2018 07:27 PM, Tony Breeds wrote: >> > On Mon, Jun 25, 2018 at 05:42:00PM -0500, Lance Bragstad wrote: >> >> Keystone is hitting this, too [0]. I attempted the same solution that >> >> Tony posted, but no luck. I've even gone so far as removing every >> >> comment from the module to see if that helps narrow down the problem >> >> area, but sphinx still trips. The output from the error message isn't >> >> very descriptive either. Has anyone else had issues fixing this for >> >> python comments, not just docstrings? 
>> >> >> >> [0] https://bugs.launchpad.net/keystone/+bug/1778603 >> > I did a little digging for the keystone problem and it's due to a >> > missing ':' in >> > https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820 >> > >> > So the correct way to fix this is to correct that in oauthlib, get it >> > released and use that. >> > >> > I hit additional problems in that enabling -W in oauthlib, to pevent >> > this happening in the future, lead me down a rabbit hole I don't really >> > have cycles to dig out of. >> > >> > Here's a dump of where I got to[1]. Clearly it mixes "fixes" with >> > debugging but it isn't too hard to reproduce and someone that knows more >> > Sphinx will be able to understand the errors better than I can. >> > >> > >> > [1] http://paste.openstack.org/show/724271/ >> > >> > Yours Tony. >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From paul.bourke at oracle.com Tue Jun 26 14:05:34 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Tue, 26 Jun 2018 15:05:34 +0100 Subject: [openstack-dev] [kolla] Removing old / unused images Message-ID: <0bea20ac-2188-ffbb-d264-07b8203f5dbc@oracle.com> Hi all, At the weekly meeting a week or two ago, we mentioned removing some old / unused images from Kolla in the interest of keeping the gate run times down, as well as general code hygiene. The images I've determined that are either no longer relevant, or were simply never made use of in kolla-ansible are the following: * almanach * certmonger * dind * qdrouterd * rsyslog * helm-repository * kube * kubernetes-entrypoint * kubetoolbox If you still care about any of these or I've made an oversight, please have a look at the patch [0] Thanks! -Paul [0] https://review.openstack.org/#/c/578111/ From doug at doughellmann.com Tue Jun 26 14:12:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 26 Jun 2018 10:12:30 -0400 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <20180626135209.GA15436@zeong> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> Message-ID: <1530021936-sup-5714@lrrr.local> Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > > > > > Dmitry Tantsur wrote: > > > > > [...] > > > > > My suggestion: tempest has to be compatible with all supported releases > > > > > (of both services and plugins) OR be branched. > > > > > [...] > > > > I tend to agree with Dmitry... 
We have a model for things that need > > > > release alignment, and that's the cycle-bound series. The reason tempest > > > > is branchless was because there was no compatibility issue. If the split > > > > of tempest plugins introduces a potential incompatibility, then I would > > > > prefer aligning tempest to the existing model rather than introduce a > > > > parallel tempest-specific cycle just so that tempest can stay > > > > release-independent... > > > > > > > > I seem to remember there were drawbacks in branching tempest, though... > > > > Can someone with functioning memory brain cells summarize them again ? > > > > > > > > > > > > > Branchless Tempest enforces api stability across branches. > > > > I'm sorry, but I'm having a hard time taking this statement seriously > > when the current source of tension is that the Tempest API itself > > is breaking for its plugins. > > > > Maybe rather than talking about how to release compatible things > > together, we should go back and talk about why Tempest's API is changing > > in a way that can't be made backwards-compatible. Can you give some more > > detail about that? > > > > Well it's not, if it did that would violate all the stability guarantees > provided by Tempest's library and plugin interface. I've not ever heard of > these kind of backwards incompatibilities in those interfaces and we go to > all effort to make sure we don't break them. Where did the idea that > backwards incompatible changes where being introduced come from? In his original post, gmann said, "There might be some changes in Tempest which might not work with older version of Tempest Plugins." I was surprised to hear that, but I'm not sure how else to interpret that statement. > As for this whole thread I don't understand any of the points being brought up > in the original post or any of the follow ons, things seem to have been confused > from the start. The ask from users at the summit was simple. When a new OpenStack > release is pushed we push a tempest release to mark that (the next one will be > 19.0.0 to mark Rocky). Users were complaining that many plugins don't have a > corresponding version to mark support for a new release. So when trying to run > against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for > various services at different sha1s which have to be manually looked up based > on dates. All users wanted at the summit was a tag for plugins like tempest > does with the first number in: > > https://docs.openstack.org/tempest/latest/overview.html#release-versioning > > which didn't seem like a bad idea to me. I'm not sure the best mechanism to > accomplish this, because I agree with much of what plugin maintainers were > saying on the thread about wanting to control their own releases. But the > desire to make sure users have a tag they can pull for the addition or > removal of a supported release makes sense as something a plugin should do. We don't coordinate versions across projects anywhere else, for a bunch of reasons including the complexity of coordinating the details and the confusion it causes when the first version of something is 19.0.0. Instead, we list the compatible versions of everything together on a series-specific page on releases.o.o. That seems to be enough to help anyone wanting to know which versions of tools work together. The data is also available in YAML files, so it's easy enough to consume by automation. Would that work for tempest and it's plugins, too? 
Is the problem that the versions are not the same, or that some of the plugins are not being tagged at all? Doug From lbragstad at gmail.com Tue Jun 26 14:13:07 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 26 Jun 2018 09:13:07 -0500 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> <20180621031338.GK18927@thor.bakeyournoodle.com> <6ff62d7a-78b8-b9dd-d9bc-99e2f2d7cd4d@gmail.com> <20180626002729.GB21570@thor.bakeyournoodle.com> <1b2f56b9-f2c6-31c4-9019-7d11458c900a@gmail.com> <1530018806-sup-7849@lrrr.local> Message-ID: <7b98c8b8-2659-e40b-ba7c-fcdbd959247b@gmail.com> On 06/26/2018 08:57 AM, Takashi Yamamoto wrote: > On Tue, Jun 26, 2018 at 10:13 PM, Doug Hellmann wrote: >> Excerpts from Lance Bragstad's message of 2018-06-25 22:51:37 -0500: >>> Thanks a bunch for digging into this, Tony. I'll follow up with the >>> oauthlib maintainers and see if they'd be interested in these changes >>> upstream. If so, I can chip away at it. For now we'll have to settle for >>> not treating warnings as errors to unblock our documentation gate [0]. >>> >>> [0] https://review.openstack.org/#/c/577974/ >> How are docstrings from a third-party library making their way into the >> keystone docs and breaking the build? > in the same way that docstrings from os-vif affect networking-midonet docs. > i.e. via class inheritance Correct, keystone relies in an interface from that library. I've reached out to their community to see if they would be interested in the fixes upstream [0], and they were receptive. Until then we might have to override the offending documentation strings somehow (per Doug's suggestion in IRC) or disable warning as errors in our build [1]. [0] https://github.com/oauthlib/oauthlib/issues/558 [1] https://review.openstack.org/#/c/577974/ > >> Doug >> >>> On 06/25/2018 07:27 PM, Tony Breeds wrote: >>>> On Mon, Jun 25, 2018 at 05:42:00PM -0500, Lance Bragstad wrote: >>>>> Keystone is hitting this, too [0]. I attempted the same solution that >>>>> Tony posted, but no luck. I've even gone so far as removing every >>>>> comment from the module to see if that helps narrow down the problem >>>>> area, but sphinx still trips. The output from the error message isn't >>>>> very descriptive either. Has anyone else had issues fixing this for >>>>> python comments, not just docstrings? >>>>> >>>>> [0] https://bugs.launchpad.net/keystone/+bug/1778603 >>>> I did a little digging for the keystone problem and it's due to a >>>> missing ':' in >>>> https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820 >>>> >>>> So the correct way to fix this is to correct that in oauthlib, get it >>>> released and use that. >>>> >>>> I hit additional problems in that enabling -W in oauthlib, to pevent >>>> this happening in the future, lead me down a rabbit hole I don't really >>>> have cycles to dig out of. >>>> >>>> Here's a dump of where I got to[1]. Clearly it mixes "fixes" with >>>> debugging but it isn't too hard to reproduce and someone that knows more >>>> Sphinx will be able to understand the errors better than I can. >>>> >>>> >>>> [1] http://paste.openstack.org/show/724271/ >>>> >>>> Yours Tony. 
>>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From vdrok at mirantis.com Tue Jun 26 14:14:45 2018 From: vdrok at mirantis.com (Vladyslav Drok) Date: Tue, 26 Jun 2018 17:14:45 +0300 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> <20180621031338.GK18927@thor.bakeyournoodle.com> <6ff62d7a-78b8-b9dd-d9bc-99e2f2d7cd4d@gmail.com> <20180626002729.GB21570@thor.bakeyournoodle.com> <1b2f56b9-f2c6-31c4-9019-7d11458c900a@gmail.com> <1530018806-sup-7849@lrrr.local> Message-ID: On Tue, Jun 26, 2018 at 4:58 PM Takashi Yamamoto wrote: > On Tue, Jun 26, 2018 at 10:13 PM, Doug Hellmann > wrote: > > Excerpts from Lance Bragstad's message of 2018-06-25 22:51:37 -0500: > >> Thanks a bunch for digging into this, Tony. I'll follow up with the > >> oauthlib maintainers and see if they'd be interested in these changes > >> upstream. If so, I can chip away at it. For now we'll have to settle for > >> not treating warnings as errors to unblock our documentation gate [0]. > >> > >> [0] https://review.openstack.org/#/c/577974/ > > > > How are docstrings from a third-party library making their way into the > > keystone docs and breaking the build? > > in the same way that docstrings from os-vif affect networking-midonet docs. > i.e. via class inheritance > > > > > Doug > > > >> > >> On 06/25/2018 07:27 PM, Tony Breeds wrote: > >> > On Mon, Jun 25, 2018 at 05:42:00PM -0500, Lance Bragstad wrote: > >> >> Keystone is hitting this, too [0]. I attempted the same solution that > >> >> Tony posted, but no luck. I've even gone so far as removing every > >> >> comment from the module to see if that helps narrow down the problem > >> >> area, but sphinx still trips. The output from the error message isn't > >> >> very descriptive either. Has anyone else had issues fixing this for > >> >> python comments, not just docstrings? > >> >> > >> >> [0] https://bugs.launchpad.net/keystone/+bug/1778603 > >> > I did a little digging for the keystone problem and it's due to a > >> > missing ':' in > >> > > https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820 > >> > > >> > So the correct way to fix this is to correct that in oauthlib, get it > >> > released and use that. 
> >> > > >> > I hit additional problems in that enabling -W in oauthlib, to pevent > >> > this happening in the future, lead me down a rabbit hole I don't > really > >> > have cycles to dig out of. > >> > > >> > Here's a dump of where I got to[1]. Clearly it mixes "fixes" with > >> > debugging but it isn't too hard to reproduce and someone that knows > more > >> > Sphinx will be able to understand the errors better than I can. > >> > > >> > > >> > [1] http://paste.openstack.org/show/724271/ > >> > > >> > Yours Tony. > This also might be related to this thread, in ironic I can see the following while building the docs, apart from 'Bullet list ends without a blank line' issue: Warning, treated as error: /home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app.IronicCORS:1:Error in "wsme:service" directive: unknown option: "module". .. wsme:service:: None :module: ironic.api.app Bases: :class:`oslo_middleware.cors.CORS` Ironic-specific CORS class We're adding the Ironic-specific version headers to the list of simple headers in order that a request bearing those headers might be accepted by the Ironic REST API. I see that there is some code in wsmeext that should be dealing with that if I understand correctly -- https://github.com/openstack/wsme/blob/0.8.0/wsmeext/sphinxext.py#L356-L357 Sphinx version I have is 1.7.5. Several other folks indicated that they hit it with on fresh fedora. > >> > > >> > > >> > > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vdrok at mirantis.com Tue Jun 26 14:24:25 2018 From: vdrok at mirantis.com (Vladyslav Drok) Date: Tue, 26 Jun 2018 17:24:25 +0300 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> <20180517035105.GD8215@thor.bakeyournoodle.com> <20180621031338.GK18927@thor.bakeyournoodle.com> <6ff62d7a-78b8-b9dd-d9bc-99e2f2d7cd4d@gmail.com> <20180626002729.GB21570@thor.bakeyournoodle.com> <1b2f56b9-f2c6-31c4-9019-7d11458c900a@gmail.com> <1530018806-sup-7849@lrrr.local> Message-ID: On Tue, Jun 26, 2018 at 5:14 PM Vladyslav Drok wrote: > > On Tue, Jun 26, 2018 at 4:58 PM Takashi Yamamoto > wrote: > >> On Tue, Jun 26, 2018 at 10:13 PM, Doug Hellmann >> wrote: >> > Excerpts from Lance Bragstad's message of 2018-06-25 22:51:37 -0500: >> >> Thanks a bunch for digging into this, Tony. I'll follow up with the >> >> oauthlib maintainers and see if they'd be interested in these changes >> >> upstream. If so, I can chip away at it. 
For now we'll have to settle
>> for
>> >> not treating warnings as errors to unblock our documentation gate [0].
>> >>
>> >> [0] https://review.openstack.org/#/c/577974/
>> >
>> > How are docstrings from a third-party library making their way into the
>> > keystone docs and breaking the build?
>>
>> in the same way that docstrings from os-vif affect networking-midonet docs.
>> i.e. via class inheritance
>>
>> > Doug
>> >
>> >> On 06/25/2018 07:27 PM, Tony Breeds wrote:
>> >> > On Mon, Jun 25, 2018 at 05:42:00PM -0500, Lance Bragstad wrote:
>> >> >> Keystone is hitting this, too [0]. I attempted the same solution that
>> >> >> Tony posted, but no luck. I've even gone so far as removing every
>> >> >> comment from the module to see if that helps narrow down the problem
>> >> >> area, but sphinx still trips. The output from the error message isn't
>> >> >> very descriptive either. Has anyone else had issues fixing this for
>> >> >> python comments, not just docstrings?
>> >> >>
>> >> >> [0] https://bugs.launchpad.net/keystone/+bug/1778603
>> >> > I did a little digging for the keystone problem and it's due to a
>> >> > missing ':' in
>> >> > https://github.com/oauthlib/oauthlib/blob/master/oauthlib/oauth1/rfc5849/request_validator.py#L819-L820
>> >> >
>> >> > So the correct way to fix this is to correct that in oauthlib, get it
>> >> > released and use that.
>> >> >
>> >> > I hit additional problems in that enabling -W in oauthlib, to prevent
>> >> > this happening in the future, led me down a rabbit hole I don't really
>> >> > have cycles to dig out of.
>> >> >
>> >> > Here's a dump of where I got to[1]. Clearly it mixes "fixes" with
>> >> > debugging but it isn't too hard to reproduce and someone that knows more
>> >> > Sphinx will be able to understand the errors better than I can.
>> >> >
>> >> > [1] http://paste.openstack.org/show/724271/
>> >> >
>> >> > Yours Tony.
>
> This also might be related to this thread. In ironic I can see the
> following while building the docs, apart from the 'Bullet list ends
> without a blank line' issue:
>
> Warning, treated as error:
> /home/vlad/work/ironic/ironic/api/app.py:docstring of
> ironic.api.app.IronicCORS:1:Error in "wsme:service" directive:
> unknown option: "module".
>
> .. wsme:service:: None
>    :module: ironic.api.app
>
>    Bases: :class:`oslo_middleware.cors.CORS`
>
>    Ironic-specific CORS class
>
>    We're adding the Ironic-specific version headers to the list of simple
>    headers in order that a request bearing those headers might be accepted
>    by the Ironic REST API.
>
> I see that there is some code in wsmeext that should be dealing with that,
> if I understand correctly --
> https://github.com/openstack/wsme/blob/0.8.0/wsmeext/sphinxext.py#L356-L357
>
> Sphinx version I have is 1.7.5.
>
> Several other folks indicated that they hit it on fresh Fedora.

It also appears that the index of the :module: entry has changed:

-> if ':module:' in self.directive.result[-1]:
(Pdb) self.directive.result
ViewList(['', '.. py:module:: ironic.api.app', '', '', '.. wsme:service:: None', '   :module: ironic.api.app', '', '   Bases: :class:`oslo_middleware.cors.CORS`'],
         items=[('/home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app', 0), ('/home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app', 0), ('/home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app', 0), ('/home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app.IronicCORS', 0), ('/home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app.IronicCORS', 0), ('/home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app.IronicCORS', 0), ('/home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app.IronicCORS', 0), ('/home/vlad/work/ironic/ironic/api/app.py:docstring of ironic.api.app.IronicCORS', 0)])

It is self.directive.result[-3] now, so I guess it needs to be changed to
iterate on everything in that list with that check?
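As a rough, standalone illustration of the change being suggested here --
scanning every line of the generated directive block for the ":module:"
option instead of only looking at result[-1]. This is only a sketch over a
plain list of strings, not an actual patch to wsmeext.sphinxext, and the
helper name is invented for the example:

    # Illustrative only: with newer Sphinx the ':module:' option is no longer
    # guaranteed to be the last entry of the generated lines, so scan the
    # whole list. Matching with startswith() avoids also matching the
    # '.. py:module::' directive line.
    def find_module_option(result_lines):
        """Return the index of the ':module:' option line, or None."""
        for index, line in enumerate(result_lines):
            if line.lstrip().startswith(':module:'):
                return index
        return None

    # The lines from the pdb session above:
    lines = [
        '',
        '.. py:module:: ironic.api.app',
        '',
        '',
        '.. wsme:service:: None',
        '   :module: ironic.api.app',
        '',
        '   Bases: :class:`oslo_middleware.cors.CORS`',
    ]
    assert find_module_option(lines) == 5  # third from the end, as noted above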
>> >> >
>> >> > __________________________________________________________________________
>> >> > OpenStack Development Mailing List (not for usage questions)
>> >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mtreinish at kortar.org Tue Jun 26 14:37:54 2018
From: mtreinish at kortar.org (Matthew Treinish)
Date: Tue, 26 Jun 2018 10:37:54 -0400
Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins
In-Reply-To: <1530021936-sup-5714@lrrr.local>
References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local>
Message-ID: <20180626143754.GB15436@zeong>

On Tue, Jun 26, 2018 at 10:12:30AM -0400, Doug Hellmann wrote:
> Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400:
> > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote:
> > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100:
> > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote:
> > > >
> > > > > Dmitry Tantsur wrote:
> > > > > > [...]
> > > > > > My suggestion: tempest has to be compatible with all supported releases
> > > > > > (of both services and plugins) OR be branched.
> > > > > > [...]
> > > > > I tend to agree with Dmitry... We have a model for things that need
> > > > > release alignment, and that's the cycle-bound series. The reason tempest
> > > > > is branchless was because there was no compatibility issue.
If the split > > > > > of tempest plugins introduces a potential incompatibility, then I would > > > > > prefer aligning tempest to the existing model rather than introduce a > > > > > parallel tempest-specific cycle just so that tempest can stay > > > > > release-independent... > > > > > > > > > > I seem to remember there were drawbacks in branching tempest, though... > > > > > Can someone with functioning memory brain cells summarize them again ? > > > > > > > > > > > > > > > > > Branchless Tempest enforces api stability across branches. > > > > > > I'm sorry, but I'm having a hard time taking this statement seriously > > > when the current source of tension is that the Tempest API itself > > > is breaking for its plugins. > > > > > > Maybe rather than talking about how to release compatible things > > > together, we should go back and talk about why Tempest's API is changing > > > in a way that can't be made backwards-compatible. Can you give some more > > > detail about that? > > > > > > > Well it's not, if it did that would violate all the stability guarantees > > provided by Tempest's library and plugin interface. I've not ever heard of > > these kind of backwards incompatibilities in those interfaces and we go to > > all effort to make sure we don't break them. Where did the idea that > > backwards incompatible changes where being introduced come from? > > In his original post, gmann said, "There might be some changes in > Tempest which might not work with older version of Tempest Plugins." > I was surprised to hear that, but I'm not sure how else to interpret > that statement. I have no idea what he means here either. If we went off and broke plugins using a defined stable interface with changes on master we would breaking all the stability guarantees Tempest provides on those interfaces. That's not something we do, and have review processes and testing to prevent. The only thing I can think of is removal of an interface, but that is pretty rare and when we do we go through the standard deprecation procedure when we do that. > > > As for this whole thread I don't understand any of the points being brought up > > in the original post or any of the follow ons, things seem to have been confused > > from the start. The ask from users at the summit was simple. When a new OpenStack > > release is pushed we push a tempest release to mark that (the next one will be > > 19.0.0 to mark Rocky). Users were complaining that many plugins don't have a > > corresponding version to mark support for a new release. So when trying to run > > against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for > > various services at different sha1s which have to be manually looked up based > > on dates. All users wanted at the summit was a tag for plugins like tempest > > does with the first number in: > > > > https://docs.openstack.org/tempest/latest/overview.html#release-versioning > > > > which didn't seem like a bad idea to me. I'm not sure the best mechanism to > > accomplish this, because I agree with much of what plugin maintainers were > > saying on the thread about wanting to control their own releases. But the > > desire to make sure users have a tag they can pull for the addition or > > removal of a supported release makes sense as something a plugin should do. > > We don't coordinate versions across projects anywhere else, for a > bunch of reasons including the complexity of coordinating the details > and the confusion it causes when the first version of something is > 19.0.0. 
Instead, we list the compatible versions of everything > together on a series-specific page on releases.o.o. That seems to > be enough to help anyone wanting to know which versions of tools > work together. The data is also available in YAML files, so it's easy > enough to consume by automation. > > Would that work for tempest and it's plugins, too? That is exactly what I had in mind. I wasn't advocating all plugins use the same version number for releases, for the same reasons we don't do that for service projects anymore. Just that there is a release for a plugin when they add support for a new service release so that users can know which version to install. > > Is the problem that the versions are not the same, or that some of the > plugins are not being tagged at all? > While I don't pay attention to all the plugins, the impression I got was that it was the latter and some plugins aren't pushing releases at all. Or if they are there is no clear version to use for a specific openstack release. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From aschultz at redhat.com Tue Jun 26 14:51:18 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 26 Jun 2018 08:51:18 -0600 Subject: [openstack-dev] [kolla] Removing old / unused images In-Reply-To: <0bea20ac-2188-ffbb-d264-07b8203f5dbc@oracle.com> References: <0bea20ac-2188-ffbb-d264-07b8203f5dbc@oracle.com> Message-ID: On Tue, Jun 26, 2018 at 8:05 AM, Paul Bourke wrote: > Hi all, > > At the weekly meeting a week or two ago, we mentioned removing some old / > unused images from Kolla in the interest of keeping the gate run times down, > as well as general code hygiene. > > The images I've determined that are either no longer relevant, or were > simply never made use of in kolla-ansible are the following: > > * almanach > * certmonger > * dind > * qdrouterd > * rsyslog > > * helm-repository > * kube > * kubernetes-entrypoint > * kubetoolbox > > If you still care about any of these or I've made an oversight, please have > a look at the patch [0] > I have commented as tripleo is using some of these. I would say that you shouldn't just remove these and there needs to be a proper deprecation policy. Just because you aren't using them in kolla-ansible doesn't mean someone isn't actually using them. Thanks, -Alex > Thanks! 
> -Paul > > [0] https://review.openstack.org/#/c/578111/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Jun 26 15:19:05 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 26 Jun 2018 11:19:05 -0400 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <20180626143754.GB15436@zeong> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local> <20180626143754.GB15436@zeong> Message-ID: <1530025123-sup-1247@lrrr.local> Excerpts from Matthew Treinish's message of 2018-06-26 10:37:54 -0400: > On Tue, Jun 26, 2018 at 10:12:30AM -0400, Doug Hellmann wrote: > > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: > > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: > > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > > > > > > > > > Dmitry Tantsur wrote: > > > As for this whole thread I don't understand any of the points being brought up > > > in the original post or any of the follow ons, things seem to have been confused > > > from the start. The ask from users at the summit was simple. When a new OpenStack > > > release is pushed we push a tempest release to mark that (the next one will be > > > 19.0.0 to mark Rocky). Users were complaining that many plugins don't have a > > > corresponding version to mark support for a new release. So when trying to run > > > against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for > > > various services at different sha1s which have to be manually looked up based > > > on dates. All users wanted at the summit was a tag for plugins like tempest > > > does with the first number in: > > > > > > https://docs.openstack.org/tempest/latest/overview.html#release-versioning > > > > > > which didn't seem like a bad idea to me. I'm not sure the best mechanism to > > > accomplish this, because I agree with much of what plugin maintainers were > > > saying on the thread about wanting to control their own releases. But the > > > desire to make sure users have a tag they can pull for the addition or > > > removal of a supported release makes sense as something a plugin should do. > > > > We don't coordinate versions across projects anywhere else, for a > > bunch of reasons including the complexity of coordinating the details > > and the confusion it causes when the first version of something is > > 19.0.0. Instead, we list the compatible versions of everything > > together on a series-specific page on releases.o.o. That seems to > > be enough to help anyone wanting to know which versions of tools > > work together. The data is also available in YAML files, so it's easy > > enough to consume by automation. > > > > Would that work for tempest and it's plugins, too? > > That is exactly what I had in mind. 
I wasn't advocating all plugins use the same > version number for releases, for the same reasons we don't do that for service > projects anymore. Just that there is a release for a plugin when they add > support for a new service release so that users can know which version to > install. > > > Is the problem that the versions are not the same, or that some of the > > plugins are not being tagged at all? > > > > While I don't pay attention to all the plugins, the impression I got was that > it was the latter and some plugins aren't pushing releases at all. Or if they > are there is no clear version to use for a specific openstack release. OK, that should be easy enough to work out a solution to. The release team can remind project teams to tag their tempest plugin(s) when they prepare their other releases at the end of the cycle, for example. 29 out of 40 repos that have "tempest" in the name have not been tagged via the releases repo. Not all of those are plugins. Here's a list: $ for repo in $(grep openstack/ reference/projects.yaml | grep tempest | cut -f2- -d- | cut -f2 -d' ') ; do (echo -n $repo; cd ../releases/; if grep -q $repo deliverables/*/*.yaml ; then echo ' FOUND'; else echo ' NOT FOUND'; fi); done openstack/barbican-tempest-plugin NOT FOUND openstack/blazar-tempest-plugin NOT FOUND openstack/cinder-tempest-plugin NOT FOUND openstack/cloudkitty-tempest-plugin NOT FOUND openstack/congress-tempest-plugin NOT FOUND openstack/designate-tempest-plugin FOUND openstack/ec2api-tempest-plugin NOT FOUND openstack/freezer-tempest-plugin NOT FOUND openstack/heat-tempest-plugin NOT FOUND openstack/tempest-horizon NOT FOUND openstack/ironic-tempest-plugin FOUND openstack/networking-generic-switch-tempest-plugin NOT FOUND openstack/keystone-tempest-plugin NOT FOUND openstack/kuryr-tempest-plugin FOUND openstack/magnum-tempest-plugin NOT FOUND openstack/manila-tempest-plugin NOT FOUND openstack/mistral-tempest-plugin NOT FOUND openstack/monasca-tempest-plugin FOUND openstack/murano-tempest-plugin NOT FOUND openstack/neutron-tempest-plugin NOT FOUND openstack/octavia-tempest-plugin NOT FOUND openstack/charm-tempest NOT FOUND openstack/openstack-ansible-os_tempest FOUND openstack/puppet-tempest FOUND openstack/tempest FOUND openstack/tempest-plugin-cookiecutter NOT FOUND openstack/tempest-lib NOT FOUND openstack/tempest-stress NOT FOUND openstack/python-tempestconf FOUND openstack/senlin-tempest-plugin NOT FOUND openstack/solum-tempest-plugin NOT FOUND openstack/telemetry-tempest-plugin NOT FOUND openstack/tempest-tripleo-ui NOT FOUND openstack/tripleo-common-tempest-plugin NOT FOUND openstack/trove-tempest-plugin NOT FOUND openstack/vitrage-tempest-plugin FOUND openstack/watcher-tempest-plugin NOT FOUND openstack/oswin-tempest-plugin FOUND openstack/zaqar-tempest-plugin NOT FOUND openstack/zun-tempest-plugin FOUND From sean.mcginnis at gmx.com Tue Jun 26 15:35:48 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 26 Jun 2018 10:35:48 -0500 Subject: [openstack-dev] [requirements][stable][docs] updating openstackdocstheme in stable branches In-Reply-To: <1530017858-sup-4432@lrrr.local> References: <1530017858-sup-4432@lrrr.local> Message-ID: <20180626153548.GA31752@sm-workstation> > > The theme is released under an independent release model and does > not currently have stable branches. It depends on pbr and dulwich, > both of which should already be in the requirements and constraints > lists (dulwich is a dependency of reno). 
> > I think that means the simplest thing to do would be to just update > the constraint for the theme in the stable branches. Does that seem > right? > This makes sense to me. As long as their is no concern about backward compatibility with openstackdocstheme (which I would be kind of surprised if there were), I think this should be safe enough. > If we can make that happen before we start the zuul configuration > porting work that we're going to do as part of the python3-first > goal, then we can take advantage of those patches to trigger doc > rebuilds in all of the projects. > > Doug > From prometheanfire at gentoo.org Tue Jun 26 15:44:15 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Tue, 26 Jun 2018 10:44:15 -0500 Subject: [openstack-dev] [requirements][stable][docs] updating openstackdocstheme in stable branches In-Reply-To: <1530017858-sup-4432@lrrr.local> References: <1530017858-sup-4432@lrrr.local> Message-ID: <20180626154415.qnkgiwcjcgvlcr3h@gentoo.org> On 18-06-26 09:03:40, Doug Hellmann wrote: > Requirements team, > > At some point in the next few months we're going to want to raise > the constraint on openstackdocstheme in all of the old branches so > we can take advantage of a new feature for showing the supported > status of each version of a project. That feature isn't implemented > yet, but I thought it would be good to discuss in advance the need > to update the dependency and how to do it. > > The theme is released under an independent release model and does > not currently have stable branches. It depends on pbr and dulwich, > both of which should already be in the requirements and constraints > lists (dulwich is a dependency of reno). > > I think that means the simplest thing to do would be to just update > the constraint for the theme in the stable branches. Does that seem > right? > > If we can make that happen before we start the zuul configuration > porting work that we're going to do as part of the python3-first > goal, then we can take advantage of those patches to trigger doc > rebuilds in all of the projects. > Yep, talked about this in the reqs channel, seems like a good idea/plan. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ansmith at redhat.com Tue Jun 26 15:51:57 2018 From: ansmith at redhat.com (Andy Smith) Date: Tue, 26 Jun 2018 11:51:57 -0400 Subject: [openstack-dev] [kolla] Removing old / unused images In-Reply-To: References: <0bea20ac-2188-ffbb-d264-07b8203f5dbc@oracle.com> Message-ID: Also commented as tripleo is using qdrouterd. It's use in kolla-ansible https://github.com/openstack/kolla-ansible/tree/master/ansible/roles/qdrouterd and bp https://blueprints.launchpad.net/kolla/+spec/dispatch-router-messaging-component Thanks, Andy On Tue, Jun 26, 2018 at 10:52 AM Alex Schultz wrote: > On Tue, Jun 26, 2018 at 8:05 AM, Paul Bourke > wrote: > > Hi all, > > > > At the weekly meeting a week or two ago, we mentioned removing some old / > > unused images from Kolla in the interest of keeping the gate run times > down, > > as well as general code hygiene. 
> > > > The images I've determined that are either no longer relevant, or were > > simply never made use of in kolla-ansible are the following: > > > > * almanach > > * certmonger > > * dind > > * qdrouterd > > * rsyslog > > > > * helm-repository > > * kube > > * kubernetes-entrypoint > > * kubetoolbox > > > > If you still care about any of these or I've made an oversight, please > have > > a look at the patch [0] > > > > I have commented as tripleo is using some of these. I would say that > you shouldn't just remove these and there needs to be a proper > deprecation policy. Just because you aren't using them in > kolla-ansible doesn't mean someone isn't actually using them. > > Thanks, > -Alex > > > Thanks! > > -Paul > > > > [0] https://review.openstack.org/#/c/578111/ > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Tue Jun 26 15:55:45 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Tue, 26 Jun 2018 10:55:45 -0500 Subject: [openstack-dev] [neutron] neutron-lib consumption patches for the Neutron Stadium and networking projects Message-ID: Dear Neutron Community, As we are all aware, over the past few cycles we have been re-homing from Neutron to neutron-lib all the common functionality that is shared with the OpenStack Networking family of projects. In a nutshell, the process is the following: 1. Shared functionality is identified in the Neutron code-base and it is re-homed in neutron-lib 2. Patches are submitted to the OpenStack Networking projects with stable branches, to consume the newly re-homed functionality from neutron-lib. It is important to mention here that all that is required from these projects is to review and merge the patches. Boden Russel takes care of creating, submitting and amending these patches until they merge 3. Once all the stable branches projects merge the corresponding consumption patch, the shared functionality is removed from Neutron Lately, we have found that some projects are not merging consumption patches, either for lack of review and / or gate issues. This prevents the team from being able to remove re-homed functionality from Neutron. To be able to continue making progress in the neutron-lib effort, going forward we are going to adopt the following policy: 1. Projects will have two weeks to review and merge neutron-lib consumption patches 2. Boden will stop submitting consumption patches that fail the previous point. The corresponding functionality will be removed from Neutron. At that point, it will be the responsibility of the project in question to catch up with the consumption of functionality from neutron-lib Thanks for your cooperation Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Tue Jun 26 16:13:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 26 Jun 2018 12:13:11 -0400 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1530025123-sup-1247@lrrr.local> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local> <20180626143754.GB15436@zeong> <1530025123-sup-1247@lrrr.local> Message-ID: <1530029495-sup-4961@lrrr.local> Excerpts from Doug Hellmann's message of 2018-06-26 11:19:05 -0400: > 29 out of 40 repos that have "tempest" in the name have not been > tagged via the releases repo. Not all of those are plugins. Here's > a list: > > $ for repo in $(grep openstack/ reference/projects.yaml | grep tempest | > cut -f2- -d- | cut -f2 -d' ') ; do (echo -n $repo; cd ../releases/; if > grep -q $repo deliverables/*/*.yaml ; then echo ' FOUND'; else > echo ' NOT FOUND'; fi); done > > openstack/barbican-tempest-plugin NOT FOUND > openstack/blazar-tempest-plugin NOT FOUND > openstack/cinder-tempest-plugin NOT FOUND > openstack/cloudkitty-tempest-plugin NOT FOUND > openstack/congress-tempest-plugin NOT FOUND > openstack/designate-tempest-plugin FOUND > openstack/ec2api-tempest-plugin NOT FOUND > openstack/freezer-tempest-plugin NOT FOUND > openstack/heat-tempest-plugin NOT FOUND > openstack/tempest-horizon NOT FOUND > openstack/ironic-tempest-plugin FOUND > openstack/networking-generic-switch-tempest-plugin NOT FOUND > openstack/keystone-tempest-plugin NOT FOUND > openstack/kuryr-tempest-plugin FOUND > openstack/magnum-tempest-plugin NOT FOUND > openstack/manila-tempest-plugin NOT FOUND > openstack/mistral-tempest-plugin NOT FOUND > openstack/monasca-tempest-plugin FOUND > openstack/murano-tempest-plugin NOT FOUND > openstack/neutron-tempest-plugin NOT FOUND > openstack/octavia-tempest-plugin NOT FOUND > openstack/charm-tempest NOT FOUND > openstack/openstack-ansible-os_tempest FOUND > openstack/puppet-tempest FOUND > openstack/tempest FOUND > openstack/tempest-plugin-cookiecutter NOT FOUND > openstack/tempest-lib NOT FOUND > openstack/tempest-stress NOT FOUND > openstack/python-tempestconf FOUND > openstack/senlin-tempest-plugin NOT FOUND > openstack/solum-tempest-plugin NOT FOUND > openstack/telemetry-tempest-plugin NOT FOUND > openstack/tempest-tripleo-ui NOT FOUND > openstack/tripleo-common-tempest-plugin NOT FOUND > openstack/trove-tempest-plugin NOT FOUND > openstack/vitrage-tempest-plugin FOUND > openstack/watcher-tempest-plugin NOT FOUND > openstack/oswin-tempest-plugin FOUND > openstack/zaqar-tempest-plugin NOT FOUND > openstack/zun-tempest-plugin FOUND I have proposed https://review.openstack.org/578141 to update the deliverable files for rocky to include the plugins. I left out openstack/ec2api-tempest-plugin because it doesn't look like ec2-api has been tagged in quite a while. Doug From Kevin.Fox at pnnl.gov Tue Jun 26 19:19:49 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 26 Jun 2018 19:19:49 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> References: , <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C140D04@EX10MBOX03.pnnl.gov> "What is OpenStack".... 
________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Tuesday, June 26, 2018 6:12 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 On 06/26/2018 08:41 AM, Chris Dent wrote: > Meanwhile, to continue [last week's theme](/tc-report-18-25.html), > the TC's role as listener, mediator, and influencer lacks > definition. > > Zane wrote up a blog post explaining the various ways in which the > OpenStack Foundation is > [expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion). One has to wonder with 4 "focus areas" for the OpenStack Foundation [1] whether there is any actual expectation that there will be any focus at all any more. Are CI/CD and secure containers important? [2] Yes, absolutely. Is (one of) the problem(s) with our community that we have too small of a scope/footprint? No. Not in the slightest. IMHO, what we need is focus. And having 4 different focus areas doesn't help focus things. I keep waiting for people to say "no, that isn't part of our scope". But all I see is people saying "yes, we will expand our scope to these new sets of things (otherwise *gasp* the Linux Foundation will gobble up all the hype)". Just my two cents and sorry for being opinionated, -jay [1] https://www.openstack.org/foundation/strategic-focus-areas/ [2] I don't include "edge" in my list of things that are important considering nobody even knows what "edge" is yet. I fail to see how people can possibly "focus" on something that isn't defined. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Tue Jun 26 20:26:31 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 26 Jun 2018 16:26:31 -0400 Subject: [openstack-dev] [python3] building tools for the transition In-Reply-To: <1529506563-sup-8170@lrrr.local> References: <1529506563-sup-8170@lrrr.local> Message-ID: <1530044673-sup-6115@lrrr.local> Excerpts from Doug Hellmann's message of 2018-06-20 11:34:10 -0400: > I want to thank Nguyễn Trí Hải, Ma Lei, and Huang Zhiping for > agreeing to be a part of the goal champion team for the python3 > goal for Stein. > > The next step for us is to build some tools to make the process a > little easier. > > One of the aspects of this goal that makes it difficult is that we > need to change so many repositories. There are more than 480 > repositories associated with official project teams. I do not think > we want to edit their zuul configurations by hand. :-) > > It would be ideal if we had a script that could read the > openstack-infra/project-config/zuul.d/projects.yaml file to find > the project settings for a repository and copy those settings into > the right settings file within the repository. We could then review > the patch locally before proposing the change to gerrit. A second > script to remove the settings from the project-config file, and > then submit that second change as a patch that has a Depends-On > listed for the first patch would also be useful. > > Another aspect that makes it difficult is that zuul is very flexible > with how it reads its configuration files. The configuration can > be in .zuul.yaml, zuul.yaml, .zuul.d/*.yaml, or zuul.d/*.yaml. 
> Projects have not been consistent with the way they have named their > files, and that will make writing a script to automate editing them > more difficult. For example, I found automaton/.zuul.yaml, > rally/zuul.yaml, oslo.config/.zuul.d, and python-ironicclient/zuul.d. > > When I was working on adding the lower-constraints jobs, I created some > tools in https://github.com/dhellmann/openstack-requirements-stop-sync > to help create the patches, and we may be able to reuse some of that > code to make similar changes for this goal. > https://github.com/dhellmann/openstack-requirements-stop-sync/blob/master/make_patches.sh > is the script that makes the patches, and > https://github.com/dhellmann/openstack-requirements-stop-sync/blob/master/add_job.py > is the python script that edits the YAML file. > > The task for this goal is a little more complicated, since we are > not just adding 1 template to the existing project settings. We > may have to add several templates and jobs to the existing settings, > merging the two sets together, and removing branch specifiers at > the same time. And we may need to do that in several branches. > > Would a couple of you have time to work on the script to prepare > the patches? We can work in the openstack/goal-tools repository so > that we can collaborate on the code in an official OpenStack > repository (instead of using my GitHub project). > > Doug I started working on these tools today. Given the complexity, for now the two commands only print the expected settings. The 'jobs extract' command shows which job settings should be copied from project-config to the zuul settings in a given branch of a repository, and 'jobs retain' shows which jobs settings should remain in place. Please look over the changeset (https://review.openstack.org/#/c/578194/) and leave review comments if you have them. Doug From zbitter at redhat.com Tue Jun 26 21:12:02 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 26 Jun 2018 17:12:02 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> Message-ID: On 26/06/18 09:12, Jay Pipes wrote: > On 06/26/2018 08:41 AM, Chris Dent wrote: >> Meanwhile, to continue [last week's theme](/tc-report-18-25.html), >> the TC's role as listener, mediator, and influencer lacks >> definition. >> >> Zane wrote up a blog post explaining the various ways in which the >> OpenStack Foundation is >> [expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion). > > One has to wonder with 4 "focus areas" for the OpenStack Foundation [1] > whether there is any actual expectation that there will be any focus at > all any more. > > Are CI/CD and secure containers important? [2] Yes, absolutely. > > Is (one of) the problem(s) with our community that we have too small of > a scope/footprint? No. Not in the slightest. > > IMHO, what we need is focus. And having 4 different focus areas doesn't > help focus things. One of the upshots of this change is that when discussing stuff we now need to be more explicit about who 'we' are. We, the OpenStack project, will have less stuff to focus on as a result of this change (no Zuul, for example, and if that doesn't make you happy then perhaps no 'edge' stuff will ;). We, the OpenStack Foundation, will unquestionably have more stuff. > I keep waiting for people to say "no, that isn't part of our scope". 
But > all I see is people saying "yes, we will expand our scope to these new > sets of things Arguably we're saying both of these things, but for different definitions of 'our'. > (otherwise *gasp* the Linux Foundation will gobble up all > the hype)". I could also speculate on what the board was hoping to achieve when it made this move, but it would be much better if they were to communicate that clearly to the membership themselves. One thing we did at the joint leadership meeting was essentially brainstorming for a new mission statement for the Foundation, and this very much seemed like a post-hoc exercise - we (the Foundation) are operating outside the current mission of record, but nobody has yet articulated what our new mission is. > Just my two cents and sorry for being opinionated, Hey, feel free to complain to the TC on openstack-dev any time. But also be aware that if you actually want anything to happen, you also need to complain to your Individual Directors of the Foundation and/or on the foundation list. cheers, Zane. > -jay > > [1] https://www.openstack.org/foundation/strategic-focus-areas/ > > [2] I don't include "edge" in my list of things that are important > considering nobody even knows what "edge" is yet. I fail to see how > people can possibly "focus" on something that isn't defined. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cboylan at sapwetik.org Tue Jun 26 21:21:04 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 26 Jun 2018 14:21:04 -0700 Subject: [openstack-dev] [all][ci][infra] Small network interface MTUs on test nodes Message-ID: <1530048064.2740281.1421412296.3308F217@webmail.messagingengine.com> Hello everyone, We now have more than one cloud provider giving us test node resources where we can expect network interfaces to have MTUs less that 1500. This is a side effect of running Neutron with overlay networking in the cloud providing the test resources. Considering we've largely made this "problem" for ourselves we should try to accommodate this. I have pushed a documentation update to explain this [0] as well as job updates for infra managed overlays used in multinode testing [1][2]. If your jobs manage interfaces or bridges themselves you may need to make similar updates as well. (I believe that devstack + neutron already do this for you if using them). Let the infra team know if you have any questions about this. [0] https://review.openstack.org/#/c/578159/1/doc/source/testing.rst [1] https://review.openstack.org/578146 [2] https://review.openstack.org/578153 Clark From corvus at inaugust.com Wed Jun 27 00:18:24 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 26 Jun 2018 17:18:24 -0700 Subject: [openstack-dev] [infra] Behavior change in Zuul post pipeline Message-ID: <871sctjd4f.fsf@meyer.lemoncheese.net> Hi, We recently changed the behavior* of the post pipeline in Zuul to only run jobs for the most recently merged changes on each project's branches. If you were relying on the old behavior where jobs ran on every merged change, let us know, we can make a new pipeline for that. 
But for the typical case, this should result in some improvements: 1) We waste fewer build resources building intermediate build artifacts (e.g., documentation for a version which is already obsoleted by the change which landed after it). 2) Races in artifact build jobs will no longer result in old versions of documentation being published because they ran on a slightly faster node than the newer version. If you observe any unexpected behavior as the result of this change, please let us know in #openstack-infra. -Jim * The thing which implements this behavior in Zuul is the "supercedent"** pipeline manager[1]. Zuul has had, since the initial commit six years ago, a pluggable system for controlling the behavior in its pipelines. To date, we have only had two pipeline managers: "dependent" which controls the gate, and "independent" which controls everything else. [1] https://zuul-ci.org/docs/zuul/user/config.html#value-pipeline.manager.supercedent ** It may or may not be named after anyone you know. From gmann at ghanshyammann.com Wed Jun 27 01:17:33 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 Jun 2018 10:17:33 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1530021936-sup-5714@lrrr.local> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local> Message-ID: <1643ed12ccd.c62ee6db17998.6818813333945980470@ghanshyammann.com> ---- On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann wrote ---- > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > > > > > > > Dmitry Tantsur wrote: > > > > > > [...] > > > > > > My suggestion: tempest has to be compatible with all supported releases > > > > > > (of both services and plugins) OR be branched. > > > > > > [...] > > > > > I tend to agree with Dmitry... We have a model for things that need > > > > > release alignment, and that's the cycle-bound series. The reason tempest > > > > > is branchless was because there was no compatibility issue. If the split > > > > > of tempest plugins introduces a potential incompatibility, then I would > > > > > prefer aligning tempest to the existing model rather than introduce a > > > > > parallel tempest-specific cycle just so that tempest can stay > > > > > release-independent... > > > > > > > > > > I seem to remember there were drawbacks in branching tempest, though... > > > > > Can someone with functioning memory brain cells summarize them again ? > > > > > > > > > > > > > > > > > Branchless Tempest enforces api stability across branches. > > > > > > I'm sorry, but I'm having a hard time taking this statement seriously > > > when the current source of tension is that the Tempest API itself > > > is breaking for its plugins. > > > > > > Maybe rather than talking about how to release compatible things > > > together, we should go back and talk about why Tempest's API is changing > > > in a way that can't be made backwards-compatible. Can you give some more > > > detail about that? 
> > > > > > > Well it's not, if it did that would violate all the stability guarantees > > provided by Tempest's library and plugin interface. I've not ever heard of > > these kind of backwards incompatibilities in those interfaces and we go to > > all effort to make sure we don't break them. Where did the idea that > > backwards incompatible changes where being introduced come from? > > In his original post, gmann said, "There might be some changes in > Tempest which might not work with older version of Tempest Plugins." > I was surprised to hear that, but I'm not sure how else to interpret > that statement. I did not mean to say that Tempest will introduce the changes in backward incompatible way which can break plugins. That cannot happen as all plugins and tempest are branchless and they are being tested with master Tempest so if we change anything backward incompatible then it break the plugins gate. Even we have to remove any deprecated interfaces from Tempest, we fix all plugins first like - https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged) What I mean to say here is that adding new or removing deprecated interface in Tempest might not work with all released version or unreleased Plugins. That point is from point of view of using Tempest and Plugins in production cloud testing not gate(where we keep the compatibility). Production Cloud user use Tempest cycle based version. Pike based Cloud will be tested by Tempest 17.0.0 not latest version (though latest version might work). This thread is not just for gate testing point of view (which seems to be always interpreted), this is more for user using Tempest and Plugins for their cloud testing. I am looping operator mail list also which i forgot in initial post. We do not have any tag/release from plugins to know what version of plugin can work with what version of tempest. For Example If There is new interface introduced by Tempest 19.0.0 and pluginX start using it. Now it can create issues for pluginX in both release model 1. plugins with no release (I will call this PluginNR), 2. plugins with independent release (I will call it PluginIR). Users (Not Gate) will face below issues: - User cannot use PluginNR with Tempest <19.0.0 (where that new interface was not present). And there is no PluginNR release/tag as this is unreleased and not branched software. - User cannot find a PluginIR particular tag/release which can work with tempest <19.0.0 (where that new interface was not present). Only way for user to make it work is to manually find out the PluginIR tag/commit before PluginIR started consuming the new interface. Let me make it more clear via diagram: PluginNR PluginIR Tempest 19.0.0 Add NewInterface Use NewInterface Use NewInterface Tempest 18.0.0 NewInterface not present No version of PluginNR Unknown version of PluginIR GATE (No Issue as latest things always being tested live ): OK OK User issues: X (does not work) Hard to find compatible version We need a particular tag from Plugins for OpenStack release, EOL of OpenStack release like Tempest does so that user can test their old release Cloud in easy way. -gmann > > > As for this whole thread I don't understand any of the points being brought up > > in the original post or any of the follow ons, things seem to have been confused > > from the start. The ask from users at the summit was simple. When a new OpenStack > > release is pushed we push a tempest release to mark that (the next one will be > > 19.0.0 to mark Rocky). 
Users were complaining that many plugins don't have a > > corresponding version to mark support for a new release. So when trying to run > > against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for > > various services at different sha1s which have to be manually looked up based > > on dates. All users wanted at the summit was a tag for plugins like tempest > > does with the first number in: > > > > https://docs.openstack.org/tempest/latest/overview.html#release-versioning > > > > which didn't seem like a bad idea to me. I'm not sure the best mechanism to > > accomplish this, because I agree with much of what plugin maintainers were > > saying on the thread about wanting to control their own releases. But the > > desire to make sure users have a tag they can pull for the addition or > > removal of a supported release makes sense as something a plugin should do. > > We don't coordinate versions across projects anywhere else, for a > bunch of reasons including the complexity of coordinating the details > and the confusion it causes when the first version of something is > 19.0.0. Instead, we list the compatible versions of everything > together on a series-specific page on releases.o.o. That seems to > be enough to help anyone wanting to know which versions of tools > work together. The data is also available in YAML files, so it's easy > enough to consume by automation. > > Would that work for tempest and it's plugins, too? > > Is the problem that the versions are not the same, or that some of the > plugins are not being tagged at all? > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Wed Jun 27 01:19:17 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 Jun 2018 10:19:17 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643ed12ccd.c62ee6db17998.6818813333945980470@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local> <1643ed12ccd.c62ee6db17998.6818813333945980470@ghanshyammann.com> Message-ID: <1643ed2c5d2.f4af11ae18001.3627876780003826684@ghanshyammann.com> ++ operator ML ---- On Wed, 27 Jun 2018 10:17:33 +0900 Ghanshyam Mann wrote ---- > > > > ---- On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann wrote ---- > > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: > > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: > > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > > > > > > > > > Dmitry Tantsur wrote: > > > > > > > [...] > > > > > > > My suggestion: tempest has to be compatible with all supported releases > > > > > > > (of both services and plugins) OR be branched. > > > > > > > [...] > > > > > > I tend to agree with Dmitry... We have a model for things that need > > > > > > release alignment, and that's the cycle-bound series. 
The reason tempest > > > > > > is branchless was because there was no compatibility issue. If the split > > > > > > of tempest plugins introduces a potential incompatibility, then I would > > > > > > prefer aligning tempest to the existing model rather than introduce a > > > > > > parallel tempest-specific cycle just so that tempest can stay > > > > > > release-independent... > > > > > > > > > > > > I seem to remember there were drawbacks in branching tempest, though... > > > > > > Can someone with functioning memory brain cells summarize them again ? > > > > > > > > > > > > > > > > > > > > > Branchless Tempest enforces api stability across branches. > > > > > > > > I'm sorry, but I'm having a hard time taking this statement seriously > > > > when the current source of tension is that the Tempest API itself > > > > is breaking for its plugins. > > > > > > > > Maybe rather than talking about how to release compatible things > > > > together, we should go back and talk about why Tempest's API is changing > > > > in a way that can't be made backwards-compatible. Can you give some more > > > > detail about that? > > > > > > > > > > Well it's not, if it did that would violate all the stability guarantees > > > provided by Tempest's library and plugin interface. I've not ever heard of > > > these kind of backwards incompatibilities in those interfaces and we go to > > > all effort to make sure we don't break them. Where did the idea that > > > backwards incompatible changes where being introduced come from? > > > > In his original post, gmann said, "There might be some changes in > > Tempest which might not work with older version of Tempest Plugins." > > I was surprised to hear that, but I'm not sure how else to interpret > > that statement. > > I did not mean to say that Tempest will introduce the changes in backward incompatible way which can break plugins. That cannot happen as all plugins and tempest are branchless and they are being tested with master Tempest so if we change anything backward incompatible then it break the plugins gate. Even we have to remove any deprecated interfaces from Tempest, we fix all plugins first like - https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged) > > What I mean to say here is that adding new or removing deprecated interface in Tempest might not work with all released version or unreleased Plugins. That point is from point of view of using Tempest and Plugins in production cloud testing not gate(where we keep the compatibility). Production Cloud user use Tempest cycle based version. Pike based Cloud will be tested by Tempest 17.0.0 not latest version (though latest version might work). > > This thread is not just for gate testing point of view (which seems to be always interpreted), this is more for user using Tempest and Plugins for their cloud testing. I am looping operator mail list also which i forgot in initial post. > > We do not have any tag/release from plugins to know what version of plugin can work with what version of tempest. For Example If There is new interface introduced by Tempest 19.0.0 and pluginX start using it. Now it can create issues for pluginX in both release model 1. plugins with no release (I will call this PluginNR), 2. plugins with independent release (I will call it PluginIR). > > Users (Not Gate) will face below issues: > - User cannot use PluginNR with Tempest <19.0.0 (where that new interface was not present). 
And there is no PluginNR release/tag as this is unreleased and not branched software. > - User cannot find a PluginIR particular tag/release which can work with tempest <19.0.0 (where that new interface was not present). Only way for user to make it work is to manually find out the PluginIR tag/commit before PluginIR started consuming the new interface. > > Let me make it more clear via diagram: > PluginNR PluginIR > > Tempest 19.0.0 > Add NewInterface Use NewInterface Use NewInterface > > > Tempest 18.0.0 > NewInterface not present No version of PluginNR Unknown version of PluginIR > > > GATE (No Issue as latest things always being tested live ): OK OK > > User issues: X (does not work) Hard to find compatible version > > > We need a particular tag from Plugins for OpenStack release, EOL of OpenStack release like Tempest does so that user can test their old release Cloud in easy way. > > -gmann > > > > > > As for this whole thread I don't understand any of the points being brought up > > > in the original post or any of the follow ons, things seem to have been confused > > > from the start. The ask from users at the summit was simple. When a new OpenStack > > > release is pushed we push a tempest release to mark that (the next one will be > > > 19.0.0 to mark Rocky). Users were complaining that many plugins don't have a > > > corresponding version to mark support for a new release. So when trying to run > > > against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for > > > various services at different sha1s which have to be manually looked up based > > > on dates. All users wanted at the summit was a tag for plugins like tempest > > > does with the first number in: > > > > > > https://docs.openstack.org/tempest/latest/overview.html#release-versioning > > > > > > which didn't seem like a bad idea to me. I'm not sure the best mechanism to > > > accomplish this, because I agree with much of what plugin maintainers were > > > saying on the thread about wanting to control their own releases. But the > > > desire to make sure users have a tag they can pull for the addition or > > > removal of a supported release makes sense as something a plugin should do. > > > > We don't coordinate versions across projects anywhere else, for a > > bunch of reasons including the complexity of coordinating the details > > and the confusion it causes when the first version of something is > > 19.0.0. Instead, we list the compatible versions of everything > > together on a series-specific page on releases.o.o. That seems to > > be enough to help anyone wanting to know which versions of tools > > work together. The data is also available in YAML files, so it's easy > > enough to consume by automation. > > > > Would that work for tempest and it's plugins, too? > > > > Is the problem that the versions are not the same, or that some of the > > plugins are not being tagged at all? 
> > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > From gmann at ghanshyammann.com Wed Jun 27 01:31:42 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 Jun 2018 10:31:42 +0900 Subject: [openstack-dev] [Openstack-operators] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643ed2c5d2.f4af11ae18001.3627876780003826684@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local> <1643ed12ccd.c62ee6db17998.6818813333945980470@ghanshyammann.com> <1643ed2c5d2.f4af11ae18001.3627876780003826684@ghanshyammann.com> Message-ID: <1643ede22bb.c88c0cdb18029.7871042374175052950@ghanshyammann.com> ---- On Wed, 27 Jun 2018 10:19:17 +0900 Ghanshyam Mann wrote ---- > ++ operator ML > > ---- On Wed, 27 Jun 2018 10:17:33 +0900 Ghanshyam Mann wrote ---- > > > > > > > > ---- On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann wrote ---- > > > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: > > > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: > > > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > > > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > > > > > > > > > > > Dmitry Tantsur wrote: > > > > > > > > [...] > > > > > > > > My suggestion: tempest has to be compatible with all supported releases > > > > > > > > (of both services and plugins) OR be branched. > > > > > > > > [...] > > > > > > > I tend to agree with Dmitry... We have a model for things that need > > > > > > > release alignment, and that's the cycle-bound series. The reason tempest > > > > > > > is branchless was because there was no compatibility issue. If the split > > > > > > > of tempest plugins introduces a potential incompatibility, then I would > > > > > > > prefer aligning tempest to the existing model rather than introduce a > > > > > > > parallel tempest-specific cycle just so that tempest can stay > > > > > > > release-independent... > > > > > > > > > > > > > > I seem to remember there were drawbacks in branching tempest, though... > > > > > > > Can someone with functioning memory brain cells summarize them again ? > > > > > > > > > > > > > > > > > > > > > > > > > Branchless Tempest enforces api stability across branches. > > > > > > > > > > I'm sorry, but I'm having a hard time taking this statement seriously > > > > > when the current source of tension is that the Tempest API itself > > > > > is breaking for its plugins. > > > > > > > > > > Maybe rather than talking about how to release compatible things > > > > > together, we should go back and talk about why Tempest's API is changing > > > > > in a way that can't be made backwards-compatible. Can you give some more > > > > > detail about that? > > > > > > > > > > > > > Well it's not, if it did that would violate all the stability guarantees > > > > provided by Tempest's library and plugin interface. 
I've not ever heard of > > > > these kind of backwards incompatibilities in those interfaces and we go to > > > > all effort to make sure we don't break them. Where did the idea that > > > > backwards incompatible changes where being introduced come from? > > > > > > In his original post, gmann said, "There might be some changes in > > > Tempest which might not work with older version of Tempest Plugins." > > > I was surprised to hear that, but I'm not sure how else to interpret > > > that statement. > > > > I did not mean to say that Tempest will introduce the changes in backward incompatible way which can break plugins. That cannot happen as all plugins and tempest are branchless and they are being tested with master Tempest so if we change anything backward incompatible then it break the plugins gate. Even we have to remove any deprecated interfaces from Tempest, we fix all plugins first like - https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged) > > > > What I mean to say here is that adding new or removing deprecated interface in Tempest might not work with all released version or unreleased Plugins. That point is from point of view of using Tempest and Plugins in production cloud testing not gate(where we keep the compatibility). Production Cloud user use Tempest cycle based version. Pike based Cloud will be tested by Tempest 17.0.0 not latest version (though latest version might work). > > > > This thread is not just for gate testing point of view (which seems to be always interpreted), this is more for user using Tempest and Plugins for their cloud testing. I am looping operator mail list also which i forgot in initial post. > > > > We do not have any tag/release from plugins to know what version of plugin can work with what version of tempest. For Example If There is new interface introduced by Tempest 19.0.0 and pluginX start using it. Now it can create issues for pluginX in both release model 1. plugins with no release (I will call this PluginNR), 2. plugins with independent release (I will call it PluginIR). > > > > Users (Not Gate) will face below issues: > > - User cannot use PluginNR with Tempest <19.0.0 (where that new interface was not present). And there is no PluginNR release/tag as this is unreleased and not branched software. > > - User cannot find a PluginIR particular tag/release which can work with tempest <19.0.0 (where that new interface was not present). Only way for user to make it work is to manually find out the PluginIR tag/commit before PluginIR started consuming the new interface. > > > > Let me make it more clear via diagram: > > PluginNR PluginIR > > > > Tempest 19.0.0 > > Add NewInterface Use NewInterface Use NewInterface > > > > > > Tempest 18.0.0 > > NewInterface not present No version of PluginNR Unknown version of PluginIR > > > > > > GATE (No Issue as latest things always being tested live ): OK OK > > > > User issues: X (does not work) Hard to find compatible version > > > > Adding it here as formatting issue to read it- http://paste.openstack.org/show/724347/ > > We need a particular tag from Plugins for OpenStack release, EOL of OpenStack release like Tempest does so that user can test their old release Cloud in easy way. > > > > -gmann > > > > > > > > > As for this whole thread I don't understand any of the points being brought up > > > > in the original post or any of the follow ons, things seem to have been confused > > > > from the start. The ask from users at the summit was simple. 
When a new OpenStack > > > > release is pushed we push a tempest release to mark that (the next one will be > > > > 19.0.0 to mark Rocky). Users were complaining that many plugins don't have a > > > > corresponding version to mark support for a new release. So when trying to run > > > > against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for > > > > various services at different sha1s which have to be manually looked up based > > > > on dates. All users wanted at the summit was a tag for plugins like tempest > > > > does with the first number in: > > > > > > > > https://docs.openstack.org/tempest/latest/overview.html#release-versioning > > > > > > > > which didn't seem like a bad idea to me. I'm not sure the best mechanism to > > > > accomplish this, because I agree with much of what plugin maintainers were > > > > saying on the thread about wanting to control their own releases. But the > > > > desire to make sure users have a tag they can pull for the addition or > > > > removal of a supported release makes sense as something a plugin should do. > > > > > > We don't coordinate versions across projects anywhere else, for a > > > bunch of reasons including the complexity of coordinating the details > > > and the confusion it causes when the first version of something is > > > 19.0.0. Instead, we list the compatible versions of everything > > > together on a series-specific page on releases.o.o. That seems to > > > be enough to help anyone wanting to know which versions of tools > > > work together. The data is also available in YAML files, so it's easy > > > enough to consume by automation. > > > > > > Would that work for tempest and it's plugins, too? > > > > > > Is the problem that the versions are not the same, or that some of the > > > plugins are not being tagged at all? > > > > > > Doug > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From gmann at ghanshyammann.com Wed Jun 27 01:35:32 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 Jun 2018 10:35:32 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> Message-ID: <1643ee1a653.ff8869ba18037.6132831083139139335@ghanshyammann.com> ---- On Tue, 26 Jun 2018 19:12:33 +0900 Dmitry Tantsur wrote ---- > On 06/26/2018 11:57 AM, Ghanshyam Mann wrote: > > > > > > > > ---- On Tue, 26 Jun 2018 18:37:42 +0900 Dmitry Tantsur wrote ---- > > > On 06/26/2018 11:18 AM, Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > > > > > In Queens cycle, community goal to split the Tempest Plugin has been completed [1] and i think almost all the projects have separate repo for tempest plugin [2]. Which means each tempest plugins are being separated from their project release model. 
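To make the "manually looked up based on dates" workaround above concrete: with no plugin tag to install, a tester today ends up asking git for the last plugin commit before the date their cloud's release (and matching Tempest version) came out. A rough sketch of that step, with an illustrative date only:

    import subprocess

    def plugin_commit_before(repo_path, date):
        """Last commit before `date` (e.g. "2017-08-30" for a Pike-era cloud)."""
        sha = subprocess.check_output(
            ["git", "rev-list", "-n", "1", "--before=" + date, "origin/master"],
            cwd=repo_path)
        return sha.decode().strip()

    # The returned sha1 is then what gets checked out or pip-installed, which
    # is exactly the manual bookkeeping a per-release plugin tag would avoid.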
Few projects have started the independent release model for their plugins like kuryr-tempest-plugin, ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also planning as chatted with amotoki. > > > > > > > > There might be some changes in Tempest which might not work with older version of Tempest Plugins. For example, If I am testing any production cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i will be using Tempest and Aodh's , Congress's Tempest plugins. With Independent release model of each Tempest Plugins, there might be chance that the Aodh's or Congress's Tempest plugin versions are not compatible with latest/known Tempest versions. It will become hard to find the compatible tag/release of Tempest and Tempest Plugins or in some cases i might need to patch up the things. > > > > > > FWIW this is solved by stable branches for all other projects. If we cannot keep > > > Tempest compatible with all supported branches, we should back off our decision > > > to make it branchless. The very nature of being branchless implies being > > > compatible with all supported releases. > > > > > > > > > > > During QA feedback sessions at Vancouver Summit, there was feedback to coordinating the release of all Tempest plugins and Tempest [4] (also amotoki talked to me on this as neutron-tempest-plugin is planning their first release). Idea is to release/tag all the Tempest plugins and Tempest together so that specific release/tag can be identified as compatible version of all the Plugins and Tempest for testing the complete stack. That way user can get to know what version of Tempest Plugins is compatible with what version of Tempest. > > > > > > > > For above use case, we need some coordinated release model among Tempest and all the Tempest Plugins. There should be single release of all Tempest Plugins with well defined tag whenever any Tempest release is happening. For Example, Tempest version 19.0.0 is to mark the "support of the Rocky release". When releasing the Tempest 19.0, we will release all the Tempest plugins also to tag the compatibility of plugins with Tempest for "support of the Rocky release". > > > > > > > > One way to make this coordinated release (just a initial thought): > > > > 1. Release Each Tempest Plugins whenever there is any major version release of Tempest (like marking the support of OpenStack release in Tempest, EOL of OpenStack release in Tempest) > > > > 1.1 Each plugin or Tempest can do their intermediate release of minor version change which are in backward compatible way. > > > > 1.2 This coordinated Release can be started from latest Tempest Version for simple reading. Like if we start this coordinated release from Tempest version 19.0.0 then, > > > > each plugins will be released as 19.0.0 and so on. > > > > > > > > Giving the above background and use case of this coordinated release, > > > > A) I would like to ask each plugins owner if you are agree on this coordinated release? If no, please give more feedback or issue we can face due to this coordinated release. > > > > > > Disclaimer: I'm not the PTL. > > > > > > Similarly to Luigi, I don't feel well about forcing a plugin release at the same > > > time as a tempest release, UNLESS tempest folks are going to coordinate their > > > releases with all how-many-do-we-have plugins. What I'd like to avoid is cutting > > > a release in the middle of a patch chain or some refactoring just because > > > tempest happened to have a release right now. > > > > I understand your point. 
But we can avoid that case if we only coordinate on major version bump only. as i mentioned in 1.2 point, Tempest and Tempest plugins can do their intermediate release anytime which are nothing but backward compatible release. In this proposed model, we can do a coordinated release for major version bump only which is happening only on OpenStack release and EOL of any stable branch. > > Even bigger concern: what if the plugin is actually not compatible yet? Say, > you're releasing tempest 19.0. As the same point you're cutting > ironic-tempest-plugin 19.0. Who guarantees that they're compatible? If we > haven't had any patches for it in a month, it may well happen that it does not work. On gate, their are being usually tested with latest tempest until tempest is capped. There is no single place of testing all plugins which is not feasible too but each plugins running in gate will break if Tempest introduced any backward incompatible change. My proposal or user request about this coordinated release is testing cloud with older version of Tempest and plugins. For gate there is no issue but for Cloud tester it create the issue. I tried to explain it in more clear way here - http://lists.openstack.org/pipermail/openstack-dev/2018-June/131850.html -gmann > > > > > Or I am all open to have another release model which can be best suited for all plugins which can address the mentioned use case of coordinated release. > > My suggestion: tempest has to be compatible with all supported releases (of both > services and plugins) OR be branched. > > > > > -gmann > > > > > > > > > > > If we get the agreement from all Plugins then, > > > > B) Release team or TC help to find the better model for this use case or improvement in above model. > > > > > > > > C) Once we define the release model, find out the team owning that release model (there are more than 40 Tempest plugins currently) . > > > > > > > > NOTE: Till we decide the best solution for this use case, each plugins can do/keep doing their plugin release as per independent release model. 
> > > > > > > > [1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > > > > [2] https://docs.openstack.org/tempest/latest/plugin-registry.html > > > > [3] https://github.com/openstack/kuryr-tempest-plugin/releases > > > > https://github.com/openstack/ironic-tempest-plugin/releases > > > > [4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html > > > > > > > > > > > > -gmann > > > > > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gmann at ghanshyammann.com Wed Jun 27 01:43:23 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 Jun 2018 10:43:23 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <20180626103258.vpk5462pjoujwqz5@sileht.net> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <20180626103258.vpk5462pjoujwqz5@sileht.net> Message-ID: <1643ee8d2e8.ce28ce5c18064.8682327260105512968@ghanshyammann.com> ---- On Tue, 26 Jun 2018 19:32:59 +0900 Mehdi Abaakouk wrote ---- > Hi, > > I have never understood the branchless tempest thing. Making Tempest > release is a great news for me. > > But about plugins... Tempest already provides a API for plugins. If you > are going to break this API, why not using stable branches and > deprecation process like any other software ? > > If you do that, plugin will be informed that Tempest will soon do a > breaking change. Their can update their plugin code and raise the > minimal tempest version required to work. > > Their can do that when they have times, and not because Tempest want to > release a version soon. > > Also the stable branch/deprecation process is well known by the > whole community. There is no issue of backward incompatibility from Tempest and on Gate. GATE is always good as it is going with mater version or minimum supported version in plugins as you mentioned. We take care of all these things you mentioned which is our main goal also. But If we think from Cloud tester perspective where they use older version of tempest for particular OpenStack release but there is no corresponding tag/version from plugins to use them for that OpenStack release. 
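A small sketch of what that gap means for a cloud tester today: Tempest itself has a version per OpenStack release (the 17.0.0-for-Pike and 19.0.0-for-Rocky numbers come from this thread), but there is no equivalent plugin tag to pin, so the plugin entries stay unpinned. The plugin name below is only an example.

    # Tempest tags per series, as mentioned in this thread.
    SERIES_TO_TEMPEST = {"pike": "17.0.0", "rocky": "19.0.0"}

    def pip_requirements(series, plugin_names):
        """Build the pip requirement strings a tester can actually pin today."""
        reqs = ["tempest==%s" % SERIES_TO_TEMPEST[series]]
        for name in plugin_names:
            # No per-series plugin tag exists to pin here yet; with the
            # proposed tagging this could become "%s==<series tag>" % name.
            reqs.append(name)
        return reqs

    # pip_requirements("pike", ["congress-tempest-plugin"])
    #   -> ["tempest==17.0.0", "congress-tempest-plugin"]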
Idea is here to have a tag from Plugins also like Tempest does currently for each OpenStack release so that user can pickup those tag and test their Complete Cloud. -gmann > > And this will also allow them to release a version when their want. > > So I support making release of Tempest and Plugins, but do not support > a coordinated release. > > Regards, > > On Tue, Jun 26, 2018 at 06:18:52PM +0900, Ghanshyam Mann wrote: > >Hello Everyone, > > > >In Queens cycle, community goal to split the Tempest Plugin has been completed [1] and i think almost all the projects have separate repo for tempest plugin [2]. Which means each tempest plugins are being separated from their project release model. Few projects have started the independent release model for their plugins like kuryr-tempest-plugin, ironic-tempest-plugin etc [3]. I think neutron-tempest-plugin also planning as chatted with amotoki. > > > >There might be some changes in Tempest which might not work with older version of Tempest Plugins. For example, If I am testing any production cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc i will be using Tempest and Aodh's , Congress's Tempest plugins. With Independent release model of each Tempest Plugins, there might be chance that the Aodh's or Congress's Tempest plugin versions are not compatible with latest/known Tempest versions. It will become hard to find the compatible tag/release of Tempest and Tempest Plugins or in some cases i might need to patch up the things. > > > >During QA feedback sessions at Vancouver Summit, there was feedback to coordinating the release of all Tempest plugins and Tempest [4] (also amotoki talked to me on this as neutron-tempest-plugin is planning their first release). Idea is to release/tag all the Tempest plugins and Tempest together so that specific release/tag can be identified as compatible version of all the Plugins and Tempest for testing the complete stack. That way user can get to know what version of Tempest Plugins is compatible with what version of Tempest. > > > >For above use case, we need some coordinated release model among Tempest and all the Tempest Plugins. There should be single release of all Tempest Plugins with well defined tag whenever any Tempest release is happening. For Example, Tempest version 19.0.0 is to mark the "support of the Rocky release". When releasing the Tempest 19.0, we will release all the Tempest plugins also to tag the compatibility of plugins with Tempest for "support of the Rocky release". > > > >One way to make this coordinated release (just a initial thought): > >1. Release Each Tempest Plugins whenever there is any major version release of Tempest (like marking the support of OpenStack release in Tempest, EOL of OpenStack release in Tempest) > > 1.1 Each plugin or Tempest can do their intermediate release of minor version change which are in backward compatible way. > > 1.2 This coordinated Release can be started from latest Tempest Version for simple reading. Like if we start this coordinated release from Tempest version 19.0.0 then, > > each plugins will be released as 19.0.0 and so on. > > > >Giving the above background and use case of this coordinated release, > >A) I would like to ask each plugins owner if you are agree on this coordinated release? If no, please give more feedback or issue we can face due to this coordinated release. 
> > > > > >If we get the agreement from all Plugins then, > >B) Release team or TC help to find the better model for this use case or improvement in above model. > > > >C) Once we define the release model, find out the team owning that release model (there are more than 40 Tempest plugins currently) . > > > >NOTE: Till we decide the best solution for this use case, each plugins can do/keep doing their plugin release as per independent release model. > > > >[1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html > >[2] https://docs.openstack.org/tempest/latest/plugin-registry.html > >[3] https://github.com/openstack/kuryr-tempest-plugin/releases > > https://github.com/openstack/ironic-tempest-plugin/releases > >[4] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131011.html > > > > > >-gmann > > > > > >__________________________________________________________________________ > >OpenStack Development Mailing List (not for usage questions) > >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Mehdi Abaakouk > mail: sileht at sileht.net > irc: sileht > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zbitter at redhat.com Wed Jun 27 02:00:06 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 26 Jun 2018 22:00:06 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> Message-ID: On 26/06/18 09:12, Jay Pipes wrote: > Is (one of) the problem(s) with our community that we have too small of > a scope/footprint? No. Not in the slightest. Incidentally, this is an interesting/amusing example of what we talked about this morning on IRC[1]: you say your concern is that the scope of *Nova* is too big and that you'd be happy to have *more* services in OpenStack if they took the orchestration load off Nova and left it just to handle the 'plumbing' part (which I agree with, while noting that nobody knows how to get there from here); but here you're implying that Kata Containers (something that will clearly have no effect either way on the simplicity or otherwise of Nova) shouldn't be part of the Foundation because it will take focus away from Nova/OpenStack. So to answer your question: zaneb: yeah... nobody I know who argues for a small stable core (in Nova) has ever said there should be fewer higher layer services. zaneb: I'm not entirely sure where you got that idea from. I guess from all the people who keep saying it ;) Apparently somebody was saying it a year ago too :D https://twitter.com/zerobanana/status/883052105791156225 cheers, Zane. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33 From gmann at ghanshyammann.com Wed Jun 27 02:06:38 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 27 Jun 2018 11:06:38 +0900 Subject: [openstack-dev] [nova] Nova API Office Hour Message-ID: <1643efe1ce7.12759efb218134.1622541660269898395@ghanshyammann.com> Hi All, >From today, we will be hosting the office hour for Nova API discussions which will cover the Nova API priority and API Bug triage things. 
I have updated the information about agenda and time in wiki page [1]. All are welcome to join. We will continue this on every Wedneday 06.00 UTC [1] https://wiki.openstack.org/wiki/Meetings/NovaAPI -gmann From tony at bakeyournoodle.com Wed Jun 27 02:42:47 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 27 Jun 2018 12:42:47 +1000 Subject: [openstack-dev] [requirements][stable][docs] updating openstackdocstheme in stable branches In-Reply-To: <1530017858-sup-4432@lrrr.local> References: <1530017858-sup-4432@lrrr.local> Message-ID: <20180627024246.GC21570@thor.bakeyournoodle.com> On Tue, Jun 26, 2018 at 09:03:40AM -0400, Doug Hellmann wrote: > Requirements team, > > At some point in the next few months we're going to want to raise > the constraint on openstackdocstheme in all of the old branches so > we can take advantage of a new feature for showing the supported > status of each version of a project. That feature isn't implemented > yet, but I thought it would be good to discuss in advance the need > to update the dependency and how to do it. > > The theme is released under an independent release model and does > not currently have stable branches. It depends on pbr and dulwich, > both of which should already be in the requirements and constraints > lists (dulwich is a dependency of reno). The only possible gottcha is if openstackdocstheme relies on a feature in any of pbr or dulwich which isn't in the version currently in upper-constratints.txt. If that happens we can easily bump those constraints also. > I think that means the simplest thing to do would be to just update > the constraint for the theme in the stable branches. Does that seem > right? Yup that seems to be all there is to it. Once the release happens the bot will propose the constraints bump on master which someone will need to cherry-pick onto the stable branches. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From shu.mutow at gmail.com Wed Jun 27 04:17:46 2018 From: shu.mutow at gmail.com (Shu M.) Date: Wed, 27 Jun 2018 13:17:46 +0900 Subject: [openstack-dev] [zun][zun-ui] Priorities of new feature on Zun UI Message-ID: Hi folks, Could you let me know your thoughts for priorities of new features on Zun UI. Could you jump to following pad, and fill your opinions? https://etherpad.openstack.org/p/zun-ui Best regards, Shu -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Wed Jun 27 09:38:09 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Wed, 27 Jun 2018 10:38:09 +0100 Subject: [openstack-dev] [kolla] Removing old / unused images In-Reply-To: References: <0bea20ac-2188-ffbb-d264-07b8203f5dbc@oracle.com> Message-ID: Ok, thanks for the replies. Seems at least the non k8s images are in use by tripleo and so will remain untouched. caoyuan has a similar patch open to just remove the k8s related images so please have a look and vote on that instead: https://review.openstack.org/#/c/576911/ I'm not sure we need a deprecation cycle on the k8s images given they were directly related to kolla-k8s, that said, we need to remember there are other consumers outside these projects so if people feel we should keep them for a cycle please let me know. On 26/06/18 16:51, Andy Smith wrote: > Also commented as tripleo is using qdrouterd. 
> > It's use in kolla-ansible > https://github.com/openstack/kolla-ansible/tree/master/ansible/roles/qdrouterd > > and bp > https://blueprints.launchpad.net/kolla/+spec/dispatch-router-messaging-component > > Thanks, > Andy > > On Tue, Jun 26, 2018 at 10:52 AM Alex Schultz > wrote: > > On Tue, Jun 26, 2018 at 8:05 AM, Paul Bourke > wrote: > > Hi all, > > > > At the weekly meeting a week or two ago, we mentioned removing > some old / > > unused images from Kolla in the interest of keeping the gate run > times down, > > as well as general code hygiene. > > > > The images I've determined that are either no longer relevant, or > were > > simply never made use of in kolla-ansible are the following: > > > > * almanach > > * certmonger > > * dind > > * qdrouterd > > * rsyslog > > > > * helm-repository > > * kube > > * kubernetes-entrypoint > > * kubetoolbox > > > > If you still care about any of these or I've made an oversight, > please have > > a look at the patch [0] > > > > I have commented as tripleo is using some of these. I would say that > you shouldn't just remove these and there needs to be a proper > deprecation policy. Just because you aren't using them in > kolla-ansible doesn't mean someone isn't actually using them. > > Thanks, > -Alex > > > Thanks! > > -Paul > > > > [0] https://review.openstack.org/#/c/578111/ > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Wed Jun 27 11:55:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 27 Jun 2018 07:55:34 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> Message-ID: WARNING: Danger, Will Robinson! Strong opinions ahead! On 06/26/2018 10:00 PM, Zane Bitter wrote: > On 26/06/18 09:12, Jay Pipes wrote: >> Is (one of) the problem(s) with our community that we have too small >> of a scope/footprint? No. Not in the slightest. > > Incidentally, this is an interesting/amusing example of what we talked > about this morning on IRC[1]: you say your concern is that the scope of > *Nova* is too big and that you'd be happy to have *more* services in > OpenStack if they took the orchestration load off Nova and left it just > to handle the 'plumbing' part (which I agree with, while noting that > nobody knows how to get there from here); but here you're implying that > Kata Containers (something that will clearly have no effect either way > on the simplicity or otherwise of Nova) shouldn't be part of the > Foundation because it will take focus away from Nova/OpenStack. Above, I was saying that the scope of the *OpenStack* community is already too broad (IMHO). 
An example of projects that have made the *OpenStack* community too broad are purpose-built telco applications like Tacker [1] and Service Function Chaining. [2] I've also argued in the past that all distro- or vendor-specific deployment tools (Fuel, Triple-O, etc [3]) should live outside of OpenStack because these projects are more products and the relentless drive of vendor product management (rightfully) pushes the scope of these applications to gobble up more and more feature space that may or may not have anything to do with the core OpenStack mission (and have more to do with those companies' product roadmap). On the other hand, my statement that the OpenStack Foundation having 4 different focus areas leads to a lack of, well, focus, is a general statement on the OpenStack *Foundation* simultaneously expanding its sphere of influence while at the same time losing sight of OpenStack itself -- and thus the push to create an Open Infrastructure Foundation that would be able to compete with the larger mission of the Linux Foundation. [1] This is nothing against Tacker itself. I just don't believe that *applications* that are specially built for one particular industry belong in the OpenStack set of projects. I had repeatedly stated this on Tacker's application to become an OpenStack project, FWIW: https://review.openstack.org/#/c/276417/ [2] There is also nothing wrong with service function chains. I just don't believe they belong in *OpenStack*. They more appropriately belong in the (Open)NFV community because they just are not applicable outside of that community's scope and mission. [3] It's interesting to note that Airship was put into its own playground outside the bounds of the OpenStack community (but inside the bounds of the OpenStack Foundation). Airship is AT&T's specific deployment tooling for "the edge!". I actually think this was the correct move for this vendor-opinionated deployment tool. > So to answer your question: > > zaneb: yeah... nobody I know who argues for a small stable > core (in Nova) has ever said there should be fewer higher layer services. > zaneb: I'm not entirely sure where you got that idea from. Note the emphasis on *Nova* above? Also note that when I've said that *OpenStack* should have a smaller mission and scope, that doesn't mean that higher-level services aren't necessary or wanted. It's just that Nova has been a dumping ground over the past 7+ years for features that, looking back, should never have been added to Nova (or at least, never added to the Compute API) [4]. What we were discussing yesterday on IRC was this: "Which parts of the Compute API should have been implemented in other services?" What we are discussing here is this: "Which projects in the OpenStack community expanded the scope of the OpenStack mission beyond infrastructure-as-a-service?" and, following that: "What should we do about projects that expanded the scope of the OpenStack mission beyond infrastructure-as-a-service?" Note that, clearly, my opinion is that OpenStack's mission should be to provide infrastructure as a service projects (both plumbing and porcelain). This is MHO only. The actual OpenStack mission statement [5] is sufficiently vague as to provide no meaningful filtering value for determining new entrants to the project ecosystem. I *personally* believe that should change in order for the *OpenStack* community to have some meaningful definition and differentiation from the broader cloud computing, application development, and network orchestration ecosystems. 
All the best, -jay [4] ... or never brought into the Compute API to begin with. You know, vestigial tail and all that. [5] for reference: "The OpenStack Mission is to produce a ubiquitous Open Source Cloud Computing platform that is easy to use, simple to implement, interoperable between deployments, works well at all scales, and meets the needs of users and operators of both public and private clouds." > I guess from all the people who keep saying it ;) > > Apparently somebody was saying it a year ago too :D > https://twitter.com/zerobanana/status/883052105791156225 > > cheers, > Zane. > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From thierry at openstack.org Wed Jun 27 12:26:05 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 27 Jun 2018 14:26:05 +0200 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> Message-ID: Jay Pipes wrote: > [...] > I've also argued in the past that all distro- or vendor-specific > deployment tools (Fuel, Triple-O, etc [3]) should live outside of > OpenStack because these projects are more products and the relentless > drive of vendor product management (rightfully) pushes the scope of > these applications to gobble up more and more feature space that may or > may not have anything to do with the core OpenStack mission (and have > more to do with those companies' product roadmap). I totally agree on the need to distinguish between OpenStack-the-main-product (the set of user-facing API services that one assembles to build an infrastructure provider) and the tooling that helps deploy it. The map[1] that was produced last year draws that line by placing deployment and lifecycle management tooling into a separate bucket. I'm not sure of the value of preventing those interested in openly collaborating around packaging solutions from doing it as a part of OpenStack-the-community. As long as there is potential for open collaboration I think we should encourage it, as long as we make it clear where the "main product" (that deployment tooling helps deploying) is. > On the other hand, my statement that the OpenStack Foundation having 4 > different focus areas leads to a lack of, well, focus, is a general > statement on the OpenStack *Foundation* simultaneously expanding its > sphere of influence while at the same time losing sight of OpenStack > itself I understand that fear -- however it's not really a zero-sum game. In all of those "focus areas", OpenStack is a piece of the puzzle, so it's still very central to everything we do. > -- and thus the push to create an Open Infrastructure Foundation > that would be able to compete with the larger mission of the Linux > Foundation. As I explained in a talk in Vancouver[2], the strategic evolution of the Foundation is more the result of a number of parallel discussions happening in 2017 that pointed toward a similar need for a change: moving the discussions from being product-oriented to being goal-oriented, and no longer be stuck in a "everything we produce must be called OpenStack" box. It's more the result of our community's evolving needs than the need to "compete". 
[1] http://openstack.org/openstack-map [2] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20968/beyond-datacenter-cloud-the-future-of-the-openstack-foundation -- Thierry Carrez (ttx) From hongbin034 at gmail.com Wed Jun 27 12:45:28 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 27 Jun 2018 08:45:28 -0400 Subject: [openstack-dev] [zun][zun-ui] Priorities of new feature on Zun UI In-Reply-To: References: Message-ID: Hi Shu, Thanks for the raising this discussion. I have filled my opinion in the etherpad. In general, I am quite satisfied by the current feature set provided by the Zun UI. Thanks for the great work from the UI team. Best regards, Hongbin On Wed, Jun 27, 2018 at 12:18 AM Shu M. wrote: > Hi folks, > > Could you let me know your thoughts for priorities of new features on Zun > UI. > Could you jump to following pad, and fill your opinions? > https://etherpad.openstack.org/p/zun-ui > > Best regards, > Shu > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josephine.seifert at secustack.com Wed Jun 27 13:08:10 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Wed, 27 Jun 2018 15:08:10 +0200 Subject: [openstack-dev] [cursive] usage of cursive library Message-ID: <4427c92e-b52e-75d3-0b40-a99ee478af10@secustack.com> Hello, our team has implemented a prototype for an osc-included image signing. And we would like to get some of the functionality to cursive, so it can be reused for example in nova, when creating an image from a server. Would you think it is okay to extend cursive in that way? Here is link to the story of use case we have: https://storyboard.openstack.org/#!/story/2002128 Regards, Josephine From doug at doughellmann.com Wed Jun 27 13:20:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 27 Jun 2018 09:20:29 -0400 Subject: [openstack-dev] [requirements][stable][docs] updating openstackdocstheme in stable branches In-Reply-To: <20180627024246.GC21570@thor.bakeyournoodle.com> References: <1530017858-sup-4432@lrrr.local> <20180627024246.GC21570@thor.bakeyournoodle.com> Message-ID: <1530105575-sup-5283@lrrr.local> Excerpts from Tony Breeds's message of 2018-06-27 12:42:47 +1000: > On Tue, Jun 26, 2018 at 09:03:40AM -0400, Doug Hellmann wrote: > > Requirements team, > > > > At some point in the next few months we're going to want to raise > > the constraint on openstackdocstheme in all of the old branches so > > we can take advantage of a new feature for showing the supported > > status of each version of a project. That feature isn't implemented > > yet, but I thought it would be good to discuss in advance the need > > to update the dependency and how to do it. > > > > The theme is released under an independent release model and does > > not currently have stable branches. It depends on pbr and dulwich, > > both of which should already be in the requirements and constraints > > lists (dulwich is a dependency of reno). > > The only possible gottcha is if openstackdocstheme relies on a feature > in any of pbr or dulwich which isn't in the version currently in > upper-constratints.txt. If that happens we can easily bump those > constraints also. Good point. 
I don't expect either to be the case, but I'll verify that before going ahead. > > I think that means the simplest thing to do would be to just update > > the constraint for the theme in the stable branches. Does that seem > > right? > > Yup that seems to be all there is to it. Once the release happens the > bot will propose the constraints bump on master which someone will need > to cherry-pick onto the stable branches. I'm sure we can find someone from the documentation team to do that. Thanks! Doug From pkovar at redhat.com Wed Jun 27 14:14:55 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 27 Jun 2018 16:14:55 +0200 Subject: [openstack-dev] [docs] Office hours instead of regular docs project meetings? In-Reply-To: <20180620142157.6701a2de41326adba9574ea5@redhat.com> References: <20180620142157.6701a2de41326adba9574ea5@redhat.com> Message-ID: <20180627161455.6076f0abb3250571e6002fb5@redhat.com> Hi again, Haven't got much feedback so far on the meeting format change, so let's proceed with turning formal docs meetings into office hours, keeping the same time for now, until we decide to make further adjustments based on the attendance. The patch is here: https://review.openstack.org/#/c/578398/ Thanks, pk On Wed, 20 Jun 2018 14:21:57 +0200 Petr Kovar wrote: > Hi all, > > Due to low attendance in docs project meetings in recent months, I'd like > to propose turning regular docs meetings into office hours, like many other > OpenStack teams did. > > My idea is to hold office hours every Wednesday, same time we held our > docs meetings, at 16:00 UTC, in our team channel #openstack-doc where > many community members already hang out and ask their questions about > OpenStack docs. > > Objections, comments, thoughts? > > Would there be interest to also hold office hours during a more > APAC-friendly time slot? We'd need to volunteers to take care of it, please > let me know! > > Thanks, > pk > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From amy at demarco.com Wed Jun 27 14:58:10 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 27 Jun 2018 09:58:10 -0500 Subject: [openstack-dev] [openstack-community] DevStack Installation issue In-Reply-To: References: Message-ID: Abhijit, I'm forwarding your issue to the OpenStack-dev list so that the right people might see your issue and respond. Thanks, Amy (spotz) ---------- Forwarded message ---------- From: Abhijit Dutta Date: Wed, Jun 27, 2018 at 5:23 AM Subject: [openstack-community] DevStack Installation issue To: "community at lists.openstack.org" Hi, I am trying to install DevStack for the first time in a baremetal with Fedora 28 installed. While executing the stack.sh I am getting the following error: No match for argument: Django Error: Unable to find a match Can anybody in the community help me out with this problem. PS: [stack at localhost devstack]$ uname -a Linux localhost.localdomain 4.16.3-301.fc28.x86_64 #1 SMP Mon Apr 23 21:59:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux ~Thanks Abhijit _______________________________________________ Community mailing list Community at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaypipes at gmail.com Wed Jun 27 15:13:04 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 27 Jun 2018 11:13:04 -0400 Subject: [openstack-dev] [nova] Adding hostId to metadata In-Reply-To: References: Message-ID: <411cb802-2982-81ac-2918-64e27fd6e4a6@gmail.com> On 06/25/2018 05:28 PM, Mohammed Naser wrote: > Hi everyone: > > While working with the OpenStack infrastructure team, we noticed that > we were having some intermittent issues where we wanted to identify a > theory if all VMs with this issue were landing on the same hypervisor. > > However, there seems to be no way of directly accessing `hostId` from > inside the virtual machine (such as using the metadata API). Yes, that is correct. VMs should not know (or need to know) where they are hosted. > This is a very useful thing to expose over the metadata API as not > only would it help for troubleshooting these types of scenarios > however it would also help software that can manage anti-affinity > simply by checking the API and taking scheduling decisions. We try very hard to not expose administrative operational details about the underlying hardware via the metadata API. Virtual machines and the software running in them should not need to know what particular piece of hardware they are running on. VMs having knowledge of the underlying hardware and host violates the principle of least privilege and introduces attack vectors that I'm pretty sure you (as an operator) don't want to open up. There is a bright red line between the adminstrative domain and the virtual/guest domain, and presenting host identifiers over the metadata API would definitely cross that bright red line. > I've proposed the following patch to add this[1], however, this is > technically an API change, and the blueprints document specifies that > "API changes always require a design discussion." > > Also, I believe that we're in a state where getting a spec would > require an exception. However, this is a very trivial change. Also, > according to the notes in the metadata file, it looks like there is > one "bump" per OpenStack release[3] which means that this change can > just be part of that release-wide version bump of the OpenStack API. > > Can we include this trivial patch in the upcoming Rocky release? I'm -2'd the patch in question because of these concerns about crossing the line between administrative and guest/virtual domains. It may seem like a very trivial patch, but from what I can tell, it would be a very big departure from the types of information we have traditionally allowed in the metadata API. Best, -jay > Thanks, > Mohammed > > [1]: https://review.openstack.org/577933 > [2]: https://docs.openstack.org/nova/latest/contributor/blueprints.html > [3]: http://git.openstack.org/cgit/openstack/nova/tree/nova/api/metadata/base.py#n60 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zhang.lei.fly at gmail.com Wed Jun 27 15:19:09 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Wed, 27 Jun 2018 23:19:09 +0800 Subject: [openstack-dev] [kolla] Removing old / unused images In-Reply-To: References: <0bea20ac-2188-ffbb-d264-07b8203f5dbc@oracle.com> Message-ID: There are really lots of downstream consumers using kolla image. So when removing images, we need care about the downstream consumers. 
And follow our deprecation policy. i.e. deprecate at first cycle, and remove it at following cycle. If there is really some guys depends on these images, we should keep them. On Wed, Jun 27, 2018 at 5:39 PM Paul Bourke wrote: > Ok, thanks for the replies. Seems at least the non k8s images are in use > by tripleo and so will remain untouched. caoyuan has a similar patch > open to just remove the k8s related images so please have a look and > vote on that instead: https://review.openstack.org/#/c/576911/ > > I'm not sure we need a deprecation cycle on the k8s images given they > were directly related to kolla-k8s, that said, we need to remember there > are other consumers outside these projects so if people feel we should > keep them for a cycle please let me know. > > On 26/06/18 16:51, Andy Smith wrote: > > Also commented as tripleo is using qdrouterd. > > > > It's use in kolla-ansible > > > https://github.com/openstack/kolla-ansible/tree/master/ansible/roles/qdrouterd > > > > and bp > > > https://blueprints.launchpad.net/kolla/+spec/dispatch-router-messaging-component > > > > Thanks, > > Andy > > > > On Tue, Jun 26, 2018 at 10:52 AM Alex Schultz > > wrote: > > > > On Tue, Jun 26, 2018 at 8:05 AM, Paul Bourke > > wrote: > > > Hi all, > > > > > > At the weekly meeting a week or two ago, we mentioned removing > > some old / > > > unused images from Kolla in the interest of keeping the gate run > > times down, > > > as well as general code hygiene. > > > > > > The images I've determined that are either no longer relevant, or > > were > > > simply never made use of in kolla-ansible are the following: > > > > > > * almanach > > > * certmonger > > > * dind > > > * qdrouterd > > > * rsyslog > > > > > > * helm-repository > > > * kube > > > * kubernetes-entrypoint > > > * kubetoolbox > > > > > > If you still care about any of these or I've made an oversight, > > please have > > > a look at the patch [0] > > > > > > > I have commented as tripleo is using some of these. I would say that > > you shouldn't just remove these and there needs to be a proper > > deprecation policy. Just because you aren't using them in > > kolla-ansible doesn't mean someone isn't actually using them. > > > > Thanks, > > -Alex > > > > > Thanks! 
> > > -Paul > > > > > > [0] https://review.openstack.org/#/c/578111/ > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > < > http://OpenStack-dev-request at lists.openstack.org?subject:unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.harbott at x-ion.de Wed Jun 27 15:53:20 2018 From: j.harbott at x-ion.de (Dr. Jens Harbott (frickler)) Date: Wed, 27 Jun 2018 17:53:20 +0200 Subject: [openstack-dev] [openstack-community] DevStack Installation issue In-Reply-To: References: Message-ID: 2018-06-27 16:58 GMT+02:00 Amy Marrich : > Abhijit, > > I'm forwarding your issue to the OpenStack-dev list so that the right people > might see your issue and respond. > > Thanks, > > Amy (spotz) > > ---------- Forwarded message ---------- > From: Abhijit Dutta > Date: Wed, Jun 27, 2018 at 5:23 AM > Subject: [openstack-community] DevStack Installation issue > To: "community at lists.openstack.org" > > > Hi, > > > I am trying to install DevStack for the first time in a baremetal with > Fedora 28 installed. While executing the stack.sh I am getting the > following error: > > > No match for argument: Django > Error: Unable to find a match > > Can anybody in the community help me out with this problem. We are aware of some issues with deploying devstack on Fedora 28, these are being worked on, see https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:master+topic:uwsgi-f28 If you want a quick solution, you could try deploying on Fedora 27 or Centos 7 instead. From mriedemos at gmail.com Wed Jun 27 16:20:37 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 27 Jun 2018 11:20:37 -0500 Subject: [openstack-dev] [nova] Adding hostId to metadata In-Reply-To: <411cb802-2982-81ac-2918-64e27fd6e4a6@gmail.com> References: <411cb802-2982-81ac-2918-64e27fd6e4a6@gmail.com> Message-ID: <769d0e8b-f840-cf66-4248-822a1b8fcf1f@gmail.com> On 6/27/2018 10:13 AM, Jay Pipes wrote: > I'm -2'd the patch in question because of these concerns about crossing > the line between administrative and guest/virtual domains. 
It may seem > like a very trivial patch, but from what I can tell, it would be a very > big departure from the types of information we have traditionally > allowed in the metadata API. To be clear, this is exposing the exact same hashed host+project_id value via the metadata API that you can already get, as a non-admin user, from the compute REST API: https://github.com/openstack/nova/blob/c8b93fa2493dce82ef4c0b1e7a503ba9b81c2e86/nova/api/openstack/compute/views/servers.py#L135 So I don't think it's a security issue at all. The one thing I would be a bit worried about is that the value would be stale from the config drive if the instance is live migrated. We also expose the availability zone the instance is in from the config drive, but as far as I know you can't live migrate your way into another availability zone (unless of course the admin force live migrates to another host in another AZ and bypasses the scheduler). -- Thanks, Matt From fungi at yuggoth.org Wed Jun 27 16:26:59 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Jun 2018 16:26:59 +0000 Subject: [openstack-dev] [nova] Adding hostId to metadata In-Reply-To: <411cb802-2982-81ac-2918-64e27fd6e4a6@gmail.com> References: <411cb802-2982-81ac-2918-64e27fd6e4a6@gmail.com> Message-ID: <20180627162659.zipwlas32rktwa5u@yuggoth.org> On 2018-06-27 11:13:04 -0400 (-0400), Jay Pipes wrote: [...] > Virtual machines and the software running in them should not need > to know what particular piece of hardware they are running on. VMs > having knowledge of the underlying hardware and host violates the > principle of least privilege and introduces attack vectors that > I'm pretty sure you (as an operator) don't want to open up. [...] I saw similar security red flags with the proposal, but didn't weigh in at the time because I was confident Nova core reviewers would arrive at the same quite quickly on their own. While it would be "nice" to have this for the Infra team to be able to give providers a heads up when we see instances crashing consistently on a particular compute node, we're not the administrators of those compute nodes and so it is not information for which we should expect to have access. It may be a pain to collect up instance UUIDs and them pass those along to the provider so they can correlate to compute nodes in their service logs, but that's ultimately the right way to go about it so that separation of concerns is preserved. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From michael.glasgow at oracle.com Wed Jun 27 16:35:24 2018 From: michael.glasgow at oracle.com (Michael Glasgow) Date: Wed, 27 Jun 2018 11:35:24 -0500 Subject: [openstack-dev] [nova] Adding hostId to metadata In-Reply-To: <769d0e8b-f840-cf66-4248-822a1b8fcf1f@gmail.com> References: <411cb802-2982-81ac-2918-64e27fd6e4a6@gmail.com> <769d0e8b-f840-cf66-4248-822a1b8fcf1f@gmail.com> Message-ID: On 06/27/18 11:20, Matt Riedemann wrote: > To be clear, this is exposing the exact same hashed host+project_id > value via the metadata API that you can already get, as a non-admin > user, from the compute REST API: > > https://github.com/openstack/nova/blob/c8b93fa2493dce82ef4c0b1e7a503ba9b81c2e86/nova/api/openstack/compute/views/servers.py#L135 > > So I don't think it's a security issue at all. I'm not sure I understand this rationale. 
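For readers following the hostId discussion, the value being exposed is roughly the following. This is a sketch based on the servers.py view linked above, not a verified copy of the Nova code:

    import hashlib

    def host_id(project_id, host):
        """Obfuscated per-tenant host identifier, approximately as Nova builds it."""
        if not host:
            return ""
        return hashlib.sha224((project_id + host).encode("utf-8")).hexdigest()

    # A tenant who already knows or guesses a hypervisor name could confirm
    # it by comparing host_id(project_id, guess) with the server's hostId,
    # but recovering the name from the hash alone requires brute force.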
Strictly speaking, I would think that in order for this to be true, the set of authenticated control plane users would have to always include the set of users who can read the metadata from a guest. Which I'm pretty sure is not true in the general case. Am I missing something? -- Michael Glasgow From jaypipes at gmail.com Wed Jun 27 16:37:43 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 27 Jun 2018 12:37:43 -0400 Subject: [openstack-dev] [nova] Adding hostId to metadata In-Reply-To: <769d0e8b-f840-cf66-4248-822a1b8fcf1f@gmail.com> References: <411cb802-2982-81ac-2918-64e27fd6e4a6@gmail.com> <769d0e8b-f840-cf66-4248-822a1b8fcf1f@gmail.com> Message-ID: On 06/27/2018 12:20 PM, Matt Riedemann wrote: > On 6/27/2018 10:13 AM, Jay Pipes wrote: >> I'm -2'd the patch in question because of these concerns about >> crossing the line between administrative and guest/virtual domains. It >> may seem like a very trivial patch, but from what I can tell, it would >> be a very big departure from the types of information we have >> traditionally allowed in the metadata API. > > To be clear, this is exposing the exact same hashed host+project_id > value via the metadata API that you can already get, as a non-admin > user, from the compute REST API: > > https://github.com/openstack/nova/blob/c8b93fa2493dce82ef4c0b1e7a503ba9b81c2e86/nova/api/openstack/compute/views/servers.py#L135 > > So I don't think it's a security issue at all. My sincere apologies. I did not realize that the hostId was not, in fact, the host identifier, but rather a SHA-224 hash of the host and project_id. > The one thing I would be a bit worried about is that the value would be > stale from the config drive if the instance is live migrated. We also > expose the availability zone the instance is in from the config drive, > but as far as I know you can't live migrate your way into another > availability zone (unless of course the admin force live migrates to > another host in another AZ and bypasses the scheduler). OK, I'll remove my -2. Apologies! -jay From fungi at yuggoth.org Wed Jun 27 17:20:00 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Jun 2018 17:20:00 +0000 Subject: [openstack-dev] [nova] Adding hostId to metadata In-Reply-To: References: <411cb802-2982-81ac-2918-64e27fd6e4a6@gmail.com> <769d0e8b-f840-cf66-4248-822a1b8fcf1f@gmail.com> Message-ID: <20180627172000.rswjt4axeyuhjso5@yuggoth.org> On 2018-06-27 12:37:43 -0400 (-0400), Jay Pipes wrote: [...] > the hostId was not, in fact, the host identifier, but rather a > SHA-224 hash of the host and project_id. [...] Oh, that's slick. Yeah, it would basically take brute-forcing the UUID space to divine the actual host identifier from that (you could use it to confirm a known identifier, but not easily discover it). I too am not concerned about security in light of this, though it does open the door to users doing things like booting and deleting instances until they get one scheduled to a compute node they like (for whatever reason, be it affinity, anti-affinity, et cetera). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Wed Jun 27 19:08:35 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 27 Jun 2018 14:08:35 -0500 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643ee8d2e8.ce28ce5c18064.8682327260105512968@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <20180626103258.vpk5462pjoujwqz5@sileht.net> <1643ee8d2e8.ce28ce5c18064.8682327260105512968@ghanshyammann.com> Message-ID: <20180627190834.GA3924@sm-workstation> > > There is no issue of backward incompatibility from Tempest and on Gate. GATE > is always good as it is going with mater version or minimum supported version > in plugins as you mentioned. We take care of all these things you mentioned > which is our main goal also. > > But If we think from Cloud tester perspective where they use older version of > tempest for particular OpenStack release but there is no corresponding > tag/version from plugins to use them for that OpenStack release. > > Idea is here to have a tag from Plugins also like Tempest does currently for > each OpenStack release so that user can pickup those tag and test their > Complete Cloud. > Thanks for the further explanation Ghanshyam. So it's not so much that newer versions of tempest may break the current repo plugins, it's more to the fact that any random plugin that gets pulled in has no way of knowing if it can take advantage of a potentially older version of tempest that had not yet introduced something the plugin is relying on. I think it makes sense for the tempest plugins to be following the cycle-with-intermediary model. This would allow plugins to be released at any point during a given cycle and would then have a way to match up a "release" of the plugin. Release repo deliverable placeholders are being proposed for all the tempest plugin repos we could find. Thanks to Doug for pulling this all together: https://review.openstack.org/#/c/578141/ Please comment there if you see any issues. Sean From zbitter at redhat.com Wed Jun 27 20:39:58 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 27 Jun 2018 16:39:58 -0400 Subject: [openstack-dev] [barbican][heat] Identifying secrets in Barbican Message-ID: We're looking at using Barbican to implement a feature in Heat[1] and ran into some questions about how secrets are identified in the client. With most openstack clients, resources are identified by a UUID. You pass the UUID on the command line (or via the Python API or whatever) and the client combines that with the endpoint of the service obtained from the service catalog and a path to the resource type to generate the URL used to access the resource. While there appears to be no technical reason that barbicanclient couldn't also do this, instead of just the UUID it uses the full URL as the identifier for the resource. This is extremely cumbersome for the user, and invites confused-deputy attacks where if the attacker can control the URL, they can get barbicanclient to send a token to an arbitrary URL. What is the rationale for doing it this way? In a tangentially related question, since secrets are immutable once they've been uploaded, what's the best way to handle a case where you need to rotate a secret without causing a temporary condition where there is no version of the secret available? 
(The fact that there's no way to do this for Nova keypairs is a perpetual problem for people, and I'd anticipate similar use cases for Barbican.) I'm going to guess it's: * Create a new secret with the same name * GET /v1/secrets/?name=&sort=created:desc&limit=1 to find out the URL for the newest secret with that name * Use that URL when accessing the secret * Once the new secret is created, delete the old one Should this, or whatever the actual recommended way of doing it is, be baked in to the client somehow so that not every user needs to reimplement it? Bottom line: how should Heat expect/require a user to refer to a Barbican secret in a property of a Heat resource, given that: - We don't want Heat to become the deputy in "confused deputy attack". - We shouldn't do things radically differently to the way Barbican does them, because users will need to interact with Barbican first to store the secret. - Many services will likely end up implementing integration with Barbican and we'd like them all to have similar user interfaces. - Users will need to rotate credentials without downtime. cheers, Zane. BTW the user documentation for Barbican is really hard to find. Y'all might want to look in to cross-linking all of the docs you have together. e.g. there is no link from the Barbican docs to the python-barbicanclient docs or vice-versa. [1] https://storyboard.openstack.org/#!/story/2002126 From sbaker at redhat.com Wed Jun 27 20:52:27 2018 From: sbaker at redhat.com (Steve Baker) Date: Thu, 28 Jun 2018 08:52:27 +1200 Subject: [openstack-dev] [tripleo] Referring to the --templates directory? In-Reply-To: <20180625180642.xcej5666d6fysqks@redhat.com> References: <20180625180642.xcej5666d6fysqks@redhat.com> Message-ID: On 26/06/18 06:06, Lars Kellogg-Stedman wrote: > Is there a way to refer to the `--templates` directory when writing > service templates? Existing service templates can use relative paths, > as in: > > resources: > > ContainersCommon: > type: ./containers-common.yaml > > But if I'm write a local service template (which I often do during > testing/development), I would need to use the full path to the > corresponding file: > > ContainersCommon: > type: /usr/share/openstack-tripleo-heat-templates/docker/services/containers-common.yaml > > But that breaks if I use another template directory via the > --templates option to the `openstack overcloud deploy` command. Is > there a way to refer to "the current templates directory"? > You're only choice would be to either use an absolute path, or develop your local service template inside a checkout of tripleo-heat-templates, which is how other new services are developed. From zbitter at redhat.com Wed Jun 27 23:23:43 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 27 Jun 2018 19:23:43 -0400 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> Message-ID: On 27/06/18 07:55, Jay Pipes wrote: > WARNING: > > Danger, Will Robinson! Strong opinions ahead! I'd have been disappointed with anything less :) > On 06/26/2018 10:00 PM, Zane Bitter wrote: >> On 26/06/18 09:12, Jay Pipes wrote: >>> Is (one of) the problem(s) with our community that we have too small >>> of a scope/footprint? No. Not in the slightest. 
>> >> Incidentally, this is an interesting/amusing example of what we talked >> about this morning on IRC[1]: you say your concern is that the scope >> of *Nova* is too big and that you'd be happy to have *more* services >> in OpenStack if they took the orchestration load off Nova and left it >> just to handle the 'plumbing' part (which I agree with, while noting >> that nobody knows how to get there from here); but here you're >> implying that Kata Containers (something that will clearly have no >> effect either way on the simplicity or otherwise of Nova) shouldn't be >> part of the Foundation because it will take focus away from >> Nova/OpenStack. > > Above, I was saying that the scope of the *OpenStack* community is > already too broad (IMHO). An example of projects that have made the > *OpenStack* community too broad are purpose-built telco applications > like Tacker [1] and Service Function Chaining. [2] > > I've also argued in the past that all distro- or vendor-specific > deployment tools (Fuel, Triple-O, etc [3]) should live outside of > OpenStack because these projects are more products and the relentless > drive of vendor product management (rightfully) pushes the scope of > these applications to gobble up more and more feature space that may or > may not have anything to do with the core OpenStack mission (and have > more to do with those companies' product roadmap). I'm still sad that we've never managed to come up with a single way to install OpenStack. The amount of duplicated effort expended on that problem is mind-boggling. At least we tried though. Excluding those projects from the community would have just meant giving up from the beginning. I think Thierry's new map, that collects installer services in a separate bucket (that may eventually come with a separate git namespace) is a helpful way of communicating to users what's happening without forcing those projects outside of the community. > On the other hand, my statement that the OpenStack Foundation having 4 > different focus areas leads to a lack of, well, focus, is a general > statement on the OpenStack *Foundation* simultaneously expanding its > sphere of influence while at the same time losing sight of OpenStack > itself -- and thus the push to create an Open Infrastructure Foundation > that would be able to compete with the larger mission of the Linux > Foundation. > > [1] This is nothing against Tacker itself. I just don't believe that > *applications* that are specially built for one particular industry > belong in the OpenStack set of projects. I had repeatedly stated this on > Tacker's application to become an OpenStack project, FWIW: > > https://review.openstack.org/#/c/276417/ > > [2] There is also nothing wrong with service function chains. I just > don't believe they belong in *OpenStack*. They more appropriately belong > in the (Open)NFV community because they just are not applicable outside > of that community's scope and mission. > > [3] It's interesting to note that Airship was put into its own > playground outside the bounds of the OpenStack community (but inside the > bounds of the OpenStack Foundation). I wouldn't say it's inside the bounds of the Foundation, and in fact confusion about that is a large part of why I wrote the blog post. It is a 100% unofficial project that just happens to be hosted on our infra. Saying it's inside the bounds of the Foundation is like saying Kubernetes is inside the bounds of GitHub. > Airship is AT&T's specific > deployment tooling for "the edge!". 
I actually think this was the > correct move for this vendor-opinionated deployment tool. > >> So to answer your question: >> >> zaneb: yeah... nobody I know who argues for a small stable >> core (in Nova) has ever said there should be fewer higher layer services. >> zaneb: I'm not entirely sure where you got that idea from. > > Note the emphasis on *Nova* above? > > Also note that when I've said that *OpenStack* should have a smaller > mission and scope, that doesn't mean that higher-level services aren't > necessary or wanted. Thank you for saying this, and could I please ask you to repeat this disclaimer whenever you talk about a smaller scope for OpenStack. Because for those of us working on higher-level services it feels like there has been a non-stop chorus (both inside and outside the project) of people wanting to redefine OpenStack as something that doesn't include us. The reason I haven't dropped this discussion is because I really want to know if _all_ of those people were actually talking about something else (e.g. a smaller scope for Nova), or if it's just you. Because you and I are in complete agreement that Nova has grown a lot of obscure capabilities that make it fiendishly difficult to maintain, and that in many cases might never have been requested if we'd had higher-level tools that could meet the same use cases by composing simpler operations. IMHO some of the contributing factors to that were: * The aforementioned hostility from some quarters to the existence of higher-level projects in OpenStack. * The ongoing hostility of operators to deploying any projects outside of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the Barbican vs. Castellan debate, where we can't even correct one of OpenStack's original sins and bake in a secret store - something k8s managed from day one - because people don't want to install another ReST API even over a backend that they'll already have to install anyway). * The illegibility of public Nova interfaces to potential higher-level tools. > It's just that Nova has been a dumping ground over the past 7+ years for > features that, looking back, should never have been added to Nova (or at > least, never added to the Compute API) [4]. > > What we were discussing yesterday on IRC was this: > > "Which parts of the Compute API should have been implemented in other > services?" > > What we are discussing here is this: > > "Which projects in the OpenStack community expanded the scope of the > OpenStack mission beyond infrastructure-as-a-service?" > > and, following that: > > "What should we do about projects that expanded the scope of the > OpenStack mission beyond infrastructure-as-a-service?" > > Note that, clearly, my opinion is that OpenStack's mission should be to > provide infrastructure as a service projects (both plumbing and porcelain). > > This is MHO only. The actual OpenStack mission statement [5] is > sufficiently vague as to provide no meaningful filtering value for > determining new entrants to the project ecosystem. I think this is inevitable, in that if you want to define cloud computing in a single sentence it will necessarily be very vague. That's the reason for pursuing a technical vision statement (brainstorming for which is how this discussion started), so we can spell it out in a longer form. cheers, Zane. 
> I *personally* believe that should change in order for the *OpenStack* > community to have some meaningful definition and differentiation from > the broader cloud computing, application development, and network > orchestration ecosystems. > > All the best, > -jay > > [4] ... or never brought into the Compute API to begin with. You know, > vestigial tail and all that. > > [5] for reference: "The OpenStack Mission is to produce a ubiquitous > Open Source Cloud Computing platform that is easy to use, simple to > implement, interoperable between deployments, works well at all scales, > and meets the needs of users and operators of both public and private > clouds." > >> I guess from all the people who keep saying it ;) >> >> Apparently somebody was saying it a year ago too :D >> https://twitter.com/zerobanana/status/883052105791156225 >> >> cheers, >> Zane. >> >> [1] >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33 >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From akekane at redhat.com Thu Jun 28 06:03:09 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 28 Jun 2018 11:33:09 +0530 Subject: [openstack-dev] [glance][glance_store] Functional testing of multiple backend Message-ID: Hi All, In Rocky I have proposed a spec [1] for adding support for multiple backend in glance. I have completed the coding part and so far tested this feature with file, rbd and swift store. However I need support in testing this feature thoroughly. So kindly help me (or provide a way to configure cinder, sheepdog and vmware stores using devstack) in functional testing for remaining drivers. I have created one etherpad [2] with steps to configure this feature and some scenarios I have tested with file, rbd and swift drivers. Please do the needful. [1] https://review.openstack.org/562467 [2] https://etherpad.openstack.org/p/multi-store-scenarios Summary of upstream patches: https://review.openstack.org/#/q/topic:bp/multi-store+(status:open+OR+status:merged) Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From hejianle at unitedstack.com Thu Jun 28 07:00:10 2018 From: hejianle at unitedstack.com (=?utf-8?B?5L2V5YGl5LmQ?=) Date: Thu, 28 Jun 2018 15:00:10 +0800 Subject: [openstack-dev] [openstack][karbor] Can karbor restore resource's backup to other region? Message-ID: Hi All, There are two questions that are bothering me now. Firstly, can Karbor restore resource to other region? 
Secondly, I have noticed that when we create a restore with the CLI, the command will be like this: openstack data protection restore create cf56bd3e-97a7-4078-b6d5-f36246333fd9 c2ddf803-3655-4e26-8605-de36bdbeb701 --restore_target http://xx.xx.xx.xx/indentity --restore_username demo --restore_password admin --parameters resource_type=OS::Nova::Server,restore_net_id=c6b392d4-20ec-483f-9411-d188b3ba79ae,restore_name=vm_restore What are the "--restore_target", "--restore_username" and "--restore_password" parameters used for? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ozerov at selectel.com Thu Jun 28 07:11:54 2018 From: ozerov at selectel.com (Andrei Ozerov) Date: Thu, 28 Jun 2018 10:11:54 +0300 Subject: [openstack-dev] [magnum] Problems with multi-regional OpenStack installation Message-ID: Greetings. Has anyone successfully deployed Magnum in a multi-regional OpenStack installation? In my case different services (Nova, Heat) have different public endpoints in every region. I couldn't start Kube-apiserver until I added "region" to a kube_openstack_config. I created a story with a full description of that problem: https://storyboard.openstack.org/#!/story/2002728 and opened a review with a small fix: https://review.openstack.org/#/c/578356. But apart from that I have another problem with this kind of OpenStack installation. Say I have two regions. When I create a cluster in the second OpenStack region, Heat-container-engine tries to fetch Stack data from the first region. It then throws the following error: "The Stack (hame-uuid) could not be found". I can see GET requests for that stack in the logs of Heat-API in the first region but I don't see them in the second one (where that Heat stack actually exists). I'm assuming that Heat-container-engine doesn't pass "region_name" when it searches for Heat endpoints: https://github.com/openstack/magnum/blob/master/magnum/drivers/common/image/heat-container-agent/scripts/heat-config-notify#L149 . I've tried to change it but it's tricky because the Heat-container-engine is installed via a Docker system-image and it won't work after restart if it failed in the initial bootstrap (because /var/run/heat-config/heat-config is empty). Can someone help me with that? I guess it's better to create a separate story for that issue? -- Ozerov Andrei ozerov at selectel.com +7 (800) 555 06 75 -------------- next part -------------- An HTML attachment was scrubbed... 
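As a rough illustration of the region awareness being asked for here (this is not the actual heat-config-notify or Magnum agent code, just a hedged sketch of a region-aware endpoint lookup with keystoneauth1):

from keystoneauth1.identity import v3
from keystoneauth1 import session

# Placeholder credentials for the sketch only.
auth = v3.Password(auth_url='http://keystone.example.com/v3',
                   username='user', password='pass', project_name='project',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

# Without region_name the catalog lookup can return an orchestration
# endpoint from another region, which matches the symptom described
# above (stack GETs landing on Heat-API in the wrong region).
heat_endpoint = sess.get_endpoint(service_type='orchestration',
                                  interface='public',
                                  region_name='RegionTwo')  # illustrative region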
URL: From gergely.csatari at nokia.com Thu Jun 28 07:35:29 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Thu, 28 Jun 2018 07:35:29 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: References: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> Message-ID: Hi, I’ve added the following pros and cons to the different options: * One Glance with multiple backends [1] * Pros: * Relatively easy to implement based on the current Glance architecture * Cons: * Requires the same Glance backend in every edge cloud instance * Requires the same OpenStack version in every edge cloud instance (apart from during upgrade) * Sensitivity for network connection loss is not clear * Several Glances with an independent syncronisation service, sych via Glance API [2] * Pros: * Every edge cloud instance can have a different Glance backend * Can support multiple OpenStack versions in the different edge cloud instances * Can be extended to support multiple VIM types * Cons: * Needs a new synchronisation service * Several Glances with an independent syncronisation service, synch using the backend [3] * Pros: * I could not find any * Cons: * Needs a new synchronisation service * One Glance and multiple Glance API servers [4] * Pros: * Implicitly location aware * Cons: * First usage of an image always takes a long time * In case of network connection error to the central Galnce Nova will have access to the images, but will not be able to figure out if the user have rights to use the image and will not have path to the images data Are these correct? Do I miss anything? Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_with_multiple_backends [2]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_sych_via_Glance_API [3]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend [4]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_and_multiple_Glance_API_servers From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Monday, June 11, 2018 4:29 PM To: Waines, Greg ; OpenStack Development Mailing List (not for usage questions) ; edge-computing at lists.openstack.org Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Thanks for the comments. I’ve updated the wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend Br, Gerg0 From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Friday, June 8, 2018 1:46 PM To: Csatari, Gergely (Nokia - HU/Budapest) >; OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Responses in-lined below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Friday, June 8, 2018 at 3:39 AM To: Greg Waines >, "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Going inline. 
From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Thursday, June 7, 2018 2:24 PM I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): One Glance with multiple backends * In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? * i.e. GLANCE is a typical shared service in a multi-region environment ? [G0]: In my understanding yes. * If so, how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? [G0]: This is an open question for me also. Several Glances with an independent synchronization service (PUSH) * I refer to this as the PUSH model * I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images * i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs (making it independent of the particular Glance backend ... and allowing the Glance Backends at Central and Edge sites to actually be different) [G0]: Okay, I can update the wiki to reflect this. Should we keep the “synchronization by the backend” option as an other alternative? [Greg] Yeah we should keep it as an alternative. * I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization * i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites [G0]: Yes, the question is how to define these synchronization policies. [Greg] Agreed ... we’ve had some very high-level discussions with end users, but haven’t put together a proposal yet. * Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) [G0]: Yes, this is more or less my understanding. I remove the mixmatch reference from this chapter. One Glance and multiple Glance API Servers (PULL) * I refer to this as the PULL model * This is the current model supported in StarlingX’s Distributed Cloud sub-project * We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and * We have glance-api setup for caching such that only the first access to an particular image incurs the latency of the image transfer from Central to Edge [G0]: Do you do image caching in Glance API or do you rely in the image cache in Nova? In the Forum session there were some discussions about this and I think the conclusion was that using the image cache of Nova is enough. [Greg] We enabled image caching in the Glance API. I believe that Nova Image Caching caches at the compute node ... this would work ok for all-in-one edge clouds or small edge clouds. But glance-api caching caches at the edge cloud level, so works better for large edge clouds with lots of compute nodes. * * this PULL model affectively implements the location aware synchronization you talk about below, (i.e. synchronise images only to those cloud instances where they are needed)? In StarlingX Distributed Cloud, We plan on supporting both the PUSH and PULL model ... suspect there are use cases for both. [G0]: This means that you need an architecture supporting both. Just for my curiosity what is the use case for the pull model once you have the push model in place? [Greg] The PULL model certainly results in the most efficient distribution of images ... basically images are distributed ONLY to edge clouds that explicitly use the image. 
Also if the use case is NOT concerned about incurring the latency of the image transfer from Central to Edge on the FIRST use of image then the PULL model could be preferred ... TBD. Here is the updated wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [Greg] Looks good. Greg. Thanks, Gerg0 From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 7, 2018 at 6:49 AM To: "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, I did some work ont he figures and realised, that I have some questions related to the alternative options: Multiple backends option: * What is the API between Glance and the Glance backends? * How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? * Is it possible to have different OpenStack versions in the different cloud instances? * Can a cloud instance use the locally synchronised images in case of a network connection break? * Is it possible to implement this without storing database credentials ont he edge cloud instances? Independent synchronisation service: * If I understood [1] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. is this true? As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [2]. [1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Jun 28 08:47:31 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 28 Jun 2018 10:47:31 +0200 Subject: [openstack-dev] [all] [ptg] PTG high-level schedule Message-ID: Hi everyone, In the attached picture you will find the proposed schedule for the various tracks at the Denver PTG in September. We did our best to avoid the key conflicts that the track leads (PTLs, SIG leads...) mentioned in their PTG survey responses, although there was no perfect solution that would avoid all conflicts. If there is a critical conflict that was missed, please let us know, but otherwise we are not planning to change this proposal. You'll notice that: - The Ops meetup team is still evaluating what days would be best for the Ops meetup that will be co-located with the PTG. We'll communicate about it as soon as we have the information. - Keystone track is split in two: one day on Monday for cross-project discussions around identity management, and two days on Thursday/Friday for team discussions. 
- The "Ask me anything" project helproom on Monday/Tuesday is for horizontal support teams (infrastructure, release management, stable maint, requirements...) to provide support for other teams, SIGs and workgroups and answer their questions. Goal champions should also be available there to help with Stein goal completion questions. - Like in Dublin, a number of tracks do not get pre-allocated time, and will be scheduled on the spot in available rooms at the time that makes the most sense for the participants. - Every track will be able to book extra time and space in available extra rooms at the event. To find more information about the event, register or book a room at the event hotel, visit: https://www.openstack.org/ptg Note that the first round of applications for travel support to the event is closing at the end of this week ! Apply if you need financial help attending the event: https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 See you there ! -- Thierry Carrez (ttx) -------------- next part -------------- A non-text attachment was scrubbed... Name: ptg4.png Type: image/png Size: 80930 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Thu Jun 28 11:41:26 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 28 Jun 2018 19:41:26 +0800 Subject: [openstack-dev] [barbican][heat] Identifying secrets in Barbican In-Reply-To: References: Message-ID: For now we found two ways to get a secret, with secret href or with secret URI(which is `secrets/UUID`). We will turn to use secret URI for now for Heat multi cloud support, but is there any reason for Barbican client not to accept only secrets UUID (Secret incorrectly specified error will shows up when only provide UUID)? On Thu, Jun 28, 2018 at 4:40 AM Zane Bitter wrote: > We're looking at using Barbican to implement a feature in Heat[1] and > ran into some questions about how secrets are identified in the client. > > With most openstack clients, resources are identified by a UUID. You > pass the UUID on the command line (or via the Python API or whatever) > and the client combines that with the endpoint of the service obtained > from the service catalog and a path to the resource type to generate the > URL used to access the resource. > > While there appears to be no technical reason that barbicanclient > couldn't also do this, instead of just the UUID it uses the full URL as > the identifier for the resource. This is extremely cumbersome for the > user, and invites confused-deputy attacks where if the attacker can > control the URL, they can get barbicanclient to send a token to an > arbitrary URL. What is the rationale for doing it this way? > > > In a tangentially related question, since secrets are immutable once > they've been uploaded, what's the best way to handle a case where you > need to rotate a secret without causing a temporary condition where > there is no version of the secret available? (The fact that there's no > way to do this for Nova keypairs is a perpetual problem for people, and > I'd anticipate similar use cases for Barbican.) I'm going to guess it's: > > * Create a new secret with the same name > * GET /v1/secrets/?name=&sort=created:desc&limit=1 to find out the > URL for the newest secret with that name > * Use that URL when accessing the secret > * Once the new secret is created, delete the old one > > Should this, or whatever the actual recommended way of doing it is, be > baked in to the client somehow so that not every user needs to > reimplement it? 
> > > Bottom line: how should Heat expect/require a user to refer to a > Barbican secret in a property of a Heat resource, given that: > - We don't want Heat to become the deputy in "confused deputy attack". > - We shouldn't do things radically differently to the way Barbican does > them, because users will need to interact with Barbican first to store > the secret. > - Many services will likely end up implementing integration with > Barbican and we'd like them all to have similar user interfaces. > - Users will need to rotate credentials without downtime. > > cheers, > Zane. > > BTW the user documentation for Barbican is really hard to find. Y'all > might want to look in to cross-linking all of the docs you have > together. e.g. there is no link from the Barbican docs to the > python-barbicanclient docs or vice-versa. > > [1] https://storyboard.openstack.org/#!/story/2002126 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From josephine.seifert at secustack.com Thu Jun 28 12:45:12 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Thu, 28 Jun 2018 14:45:12 +0200 Subject: [openstack-dev] [osc][python-openstackclient] osc-included image signing In-Reply-To: References: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> Message-ID: Hi, > Go ahead and post WIP reviews and we can look at it further. To merge > I'll want all of the usual tests, docs, release notes, etc but don't > wait if that is not all done up front. Hier sind die zwei WIP reviews: cursive: https://review.openstack.org/#/c/578767/ osc: https://review.openstack.org/#/c/578769/ Auf unserem System funktionierte folgender Test: 1.A) Generate Private and Public Key without password openssl genrsa -out image_signing_key.pem 4096 openssl rsa -pubout -in image_signing_key.pem -out image_signing_pubkey.pem 1.B) Generate Private and Public Key with password export PASSWORD="my-little-secret" openssl genrsa -aes256 -passout pass:$PASSWORD -out image_signing_key.pem 4096 openssl rsa -pubout -in image_signing_key.pem -passin pass:$PASSWORD -out image_signing_pubkey.pem 2.) generate Public Key certificate  openssl rsa -pubout -in image_signing_key.pem -out image_signing_pubkey.pem openssl req -new -key image_signing_key.pem -out image_signing_cert_req.csr openssl x509 -req -days 365 -in image_signing_cert_req.csr -signkey image_signing_key.pem -out image_signing_cert.crt 3.) upload certificate to Barbican openstack secret store --name image-signing-cert --algorithm RSA --expiration 2020-01-01 --secret-type certificate --payload-content-type "application/octet-stream" --payload-content-encoding base64 --payload "$(base64 image_signing_cert.crt)" 4.) 
sign & upload image to Glance openstack image create --sign key-path=image_signing_key.pem,cert-id=$CERT_UUID --container-format bare --disk-format raw --file $IMAGE_FILE $IMAGE_NAME From josephine.seifert at secustack.com Thu Jun 28 13:04:55 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Thu, 28 Jun 2018 15:04:55 +0200 Subject: [openstack-dev] [osc][python-openstackclient] osc-included image signing In-Reply-To: References: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> Message-ID: <5027e00f-afdb-eaa6-7775-b161abee67d2@secustack.com> Sorry, I wrote partially german in my last mail. Here is the english version ;) > Go ahead and post WIP reviews and we can look at it further. To merge > I'll want all of the usual tests, docs, release notes, etc but don't > wait if that is not all done up front. Here are the two WIP reviews: cursive: https://review.openstack.org/#/c/578767/ osc: https://review.openstack.org/#/c/578769/ On our setup the following tests succeeded: 1.A) Generate Private and Public Key without password openssl genrsa -out image_signing_key.pem 4096 openssl rsa -pubout -in image_signing_key.pem -out image_signing_pubkey.pem 1.B) Generate Private and Public Key with password export PASSWORD="my-little-secret" openssl genrsa -aes256 -passout pass:$PASSWORD -out image_signing_key.pem 4096 openssl rsa -pubout -in image_signing_key.pem -passin pass:$PASSWORD -out image_signing_pubkey.pem 2.) generate Public Key certificate  openssl rsa -pubout -in image_signing_key.pem -out image_signing_pubkey.pem openssl req -new -key image_signing_key.pem -out image_signing_cert_req.csr openssl x509 -req -days 365 -in image_signing_cert_req.csr -signkey image_signing_key.pem -out image_signing_cert.crt 3.) upload certificate to Barbican openstack secret store --name image-signing-cert --algorithm RSA --expiration 2020-01-01 --secret-type certificate --payload-content-type "application/octet-stream" --payload-content-encoding base64 --payload "$(base64 image_signing_cert.crt)" 4.) sign & upload image to Glance openstack image create --sign key-path=image_signing_key.pem,cert-id=$CERT_UUID --container-format bare --disk-format raw --file $IMAGE_FILE $IMAGE_NAME __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Thu Jun 28 13:58:11 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 28 Jun 2018 22:58:11 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <20180627190834.GA3924@sm-workstation> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <20180626103258.vpk5462pjoujwqz5@sileht.net> <1643ee8d2e8.ce28ce5c18064.8682327260105512968@ghanshyammann.com> <20180627190834.GA3924@sm-workstation> Message-ID: <16446afeca7.107824aa047183.9149156111973063735@ghanshyammann.com> ---- On Thu, 28 Jun 2018 04:08:35 +0900 Sean McGinnis wrote ---- > > > > There is no issue of backward incompatibility from Tempest and on Gate. GATE > > is always good as it is going with mater version or minimum supported version > > in plugins as you mentioned. We take care of all these things you mentioned > > which is our main goal also. 
> > > > But If we think from Cloud tester perspective where they use older version of > > tempest for particular OpenStack release but there is no corresponding > > tag/version from plugins to use them for that OpenStack release. > > > > Idea is here to have a tag from Plugins also like Tempest does currently for > > each OpenStack release so that user can pickup those tag and test their > > Complete Cloud. > > > > Thanks for the further explanation Ghanshyam. So it's not so much that newer > versions of tempest may break the current repo plugins, it's more to the fact > that any random plugin that gets pulled in has no way of knowing if it can take > advantage of a potentially older version of tempest that had not yet introduced > something the plugin is relying on. > > I think it makes sense for the tempest plugins to be following the > cycle-with-intermediary model. This would allow plugins to be released at any > point during a given cycle and would then have a way to match up a "release" of > the plugin. > > Release repo deliverable placeholders are being proposed for all the tempest > plugin repos we could find. Thanks to Doug for pulling this all together: > > https://review.openstack.org/#/c/578141/ > > Please comment there if you see any issues. Thanks. That's correct understanding and goal of this thread which is from production cloud testing point of view not just *gate*. cycle-with-intermediary model fulfill the user's requirement which they asked in summit. Doug patch lgtm. -gmann > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dtantsur at redhat.com Thu Jun 28 15:05:09 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 28 Jun 2018 17:05:09 +0200 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <1643ed12ccd.c62ee6db17998.6818813333945980470@ghanshyammann.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local> <1643ed12ccd.c62ee6db17998.6818813333945980470@ghanshyammann.com> Message-ID: <217fcd71-dca1-5c05-4477-0631f28b0500@redhat.com> On 06/27/2018 03:17 AM, Ghanshyam Mann wrote: > > > > ---- On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann wrote ---- > > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: > > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: > > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > > > > > > > > > Dmitry Tantsur wrote: > > > > > > > [...] > > > > > > > My suggestion: tempest has to be compatible with all supported releases > > > > > > > (of both services and plugins) OR be branched. > > > > > > > [...] > > > > > > I tend to agree with Dmitry... We have a model for things that need > > > > > > release alignment, and that's the cycle-bound series. The reason tempest > > > > > > is branchless was because there was no compatibility issue. 
If the split > > > > > > of tempest plugins introduces a potential incompatibility, then I would > > > > > > prefer aligning tempest to the existing model rather than introduce a > > > > > > parallel tempest-specific cycle just so that tempest can stay > > > > > > release-independent... > > > > > > > > > > > > I seem to remember there were drawbacks in branching tempest, though... > > > > > > Can someone with functioning memory brain cells summarize them again ? > > > > > > > > > > > > > > > > > > > > > Branchless Tempest enforces api stability across branches. > > > > > > > > I'm sorry, but I'm having a hard time taking this statement seriously > > > > when the current source of tension is that the Tempest API itself > > > > is breaking for its plugins. > > > > > > > > Maybe rather than talking about how to release compatible things > > > > together, we should go back and talk about why Tempest's API is changing > > > > in a way that can't be made backwards-compatible. Can you give some more > > > > detail about that? > > > > > > > > > > Well it's not, if it did that would violate all the stability guarantees > > > provided by Tempest's library and plugin interface. I've not ever heard of > > > these kind of backwards incompatibilities in those interfaces and we go to > > > all effort to make sure we don't break them. Where did the idea that > > > backwards incompatible changes where being introduced come from? > > > > In his original post, gmann said, "There might be some changes in > > Tempest which might not work with older version of Tempest Plugins." > > I was surprised to hear that, but I'm not sure how else to interpret > > that statement. > > I did not mean to say that Tempest will introduce the changes in backward incompatible way which can break plugins. That cannot happen as all plugins and tempest are branchless and they are being tested with master Tempest so if we change anything backward incompatible then it break the plugins gate. Even we have to remove any deprecated interfaces from Tempest, we fix all plugins first like - https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged) > > What I mean to say here is that adding new or removing deprecated interface in Tempest might not work with all released version or unreleased Plugins. That point is from point of view of using Tempest and Plugins in production cloud testing not gate(where we keep the compatibility). Production Cloud user use Tempest cycle based version. Pike based Cloud will be tested by Tempest 17.0.0 not latest version (though latest version might work). > > This thread is not just for gate testing point of view (which seems to be always interpreted), this is more for user using Tempest and Plugins for their cloud testing. I am looping operator mail list also which i forgot in initial post. > > We do not have any tag/release from plugins to know what version of plugin can work with what version of tempest. For Example If There is new interface introduced by Tempest 19.0.0 and pluginX start using it. Now it can create issues for pluginX in both release model 1. plugins with no release (I will call this PluginNR), 2. plugins with independent release (I will call it PluginIR). > > Users (Not Gate) will face below issues: > - User cannot use PluginNR with Tempest <19.0.0 (where that new interface was not present). And there is no PluginNR release/tag as this is unreleased and not branched software. 
> - User cannot find a PluginIR particular tag/release which can work with tempest <19.0.0 (where that new interface was not present). Only way for user to make it work is to manually find out the PluginIR tag/commit before PluginIR started consuming the new interface. In these discussions I always think: how is it solved outside of the openstack world. And the solutions seem to be: 1. for PluginNR - do releases 2. for PluginIR - declare their minimum version of tempest in requirements.txt Why isn't it sufficient for us? Dmitry
>
> Let me make it more clear via diagram:
>
>                              PluginNR                   PluginIR
>
> Tempest 19.0.0
> Add NewInterface             Use NewInterface           Use NewInterface
>
> Tempest 18.0.0
> NewInterface not present     No version of PluginNR     Unknown version of PluginIR
>
> GATE (No Issue as latest things always being tested live ):   OK   OK
>
> User issues:                 X (does not work)          Hard to find compatible version
>
> We need a particular tag from Plugins for OpenStack release, EOL of OpenStack release like Tempest does so that user can test their old release Cloud in easy way. > > -gmann > > > > > > > As for this whole thread I don't understand any of the points being brought up > > > in the original post or any of the follow ons, things seem to have been confused > > > from the start. The ask from users at the summit was simple. When a new OpenStack > > > release is pushed we push a tempest release to mark that (the next one will be > > > 19.0.0 to mark Rocky). Users were complaining that many plugins don't have a > > > corresponding version to mark support for a new release. So when trying to run > > > against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for > > > various services at different sha1s which have to be manually looked up based > > > on dates. All users wanted at the summit was a tag for plugins like tempest > > > does with the first number in: > > > > > > https://docs.openstack.org/tempest/latest/overview.html#release-versioning > > > > > > which didn't seem like a bad idea to me. I'm not sure the best mechanism to > > > accomplish this, because I agree with much of what plugin maintainers were > > > saying on the thread about wanting to control their own releases. But the > > > desire to make sure users have a tag they can pull for the addition or > > > removal of a supported release makes sense as something a plugin should do. > > > > We don't coordinate versions across projects anywhere else, for a > > bunch of reasons including the complexity of coordinating the details > > and the confusion it causes when the first version of something is > > 19.0.0. Instead, we list the compatible versions of everything > > together on a series-specific page on releases.o.o. That seems to > > be enough to help anyone wanting to know which versions of tools > > work together. The data is also available in YAML files, so it's easy > > enough to consume by automation. > > > > Would that work for tempest and it's plugins, too? > > > > Is the problem that the versions are not the same, or that some of the > > plugins are not being tagged at all? 
> > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From derekh at redhat.com Thu Jun 28 15:05:32 2018 From: derekh at redhat.com (Derek Higgins) Date: Thu, 28 Jun 2018 16:05:32 +0100 Subject: [openstack-dev] [tripleo]Testing ironic in the overcloud In-Reply-To: References: Message-ID: On 23 February 2018 at 14:48, Derek Higgins wrote: > > > On 1 February 2018 at 16:18, Emilien Macchi wrote: > >> On Thu, Feb 1, 2018 at 8:05 AM, Derek Higgins wrote: >> [...] >> >>> o Should I create a new tempest test for baremetal as some of the >>>>> networking stuff is different? >>>>> >>>> >>>> I think we would need to run baremetal tests for this new featureset, >>>> see existing files for examples. >>>> >>> Do you mean that we should use existing tests somewhere or create new >>> ones? >>> >> >> I mean we should use existing tempest tests from ironic, etc. Maybe just >> a baremetal scenario that spawn a baremetal server and test ssh into it, >> like we already have with other jobs. >> > Done, the current set of patches sets up a new non voting job > "tripleo-ci-centos-7-scenario011-multinode-oooq-container" which setup up > ironic in the overcloud and run the ironic tempest job > "ironic_tempest_plugin.tests.scenario.test_baremetal_basic_ > ops.BaremetalBasicOps.test_baremetal_server_ops" > > its currently passing so I'd appreciate a few eyes on it before it becomes > out of date again > there are 4 patches starting here https://review.openstack. > org/#/c/509728/19 > This is now working again so If anybody has the time I'd appreciate some reviews while its still current See scenario011 on https://review.openstack.org/#/c/509728/ > > >> >> o Is running a script on the controller with NodeExtraConfigPost the best >>>>> way to set this up or should I be doing something with quickstart? I don't >>>>> think quickstart currently runs things on the controler does it? >>>>> >>>> >>>> What kind of thing do you want to run exactly? >>>> >>> The contents to this file will give you an idea, somewhere I need to >>> setup a node that ironic will control with ipmi >>> https://review.openstack.org/#/c/485261/19/ci/common/vbmc_setup.yaml >>> >> >> extraconfig works for me in that case, I guess. Since we don't productize >> this code and it's for CI only, it can live here imho. >> >> Thanks, >> -- >> Emilien Macchi >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sfinucan at redhat.com Thu Jun 28 15:18:37 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 28 Jun 2018 16:18:37 +0100 Subject: [openstack-dev] [nova] Shared envdir across tox environments Message-ID: Just a quick heads up that an upcoming change to nova's 'tox.ini' will change the behaviour of multiple environments slightly. https://review.openstack.org/#/c/534382/9 With this change applied, tox will start sharing environment directories (e.g. '.tox/py27') among environments with identical requirements and Python versions. This will mean you won't need to download dependencies for every environment, which should massively reduce the amount of time taken to (re)initialize many environments and save a bit of disk space to boot. This shouldn't affect most people but it could affect people that use some fancy tooling that depends on these directories. If this _is_ going to affect you, be sure to make your concerns known on the review sooner rather than later so we can resolve said concerns. Cheers, Stephen From jaypipes at gmail.com Thu Jun 28 15:26:59 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 28 Jun 2018 11:26:59 -0400 Subject: [openstack-dev] [nova] Shared envdir across tox environments In-Reply-To: References: Message-ID: <3d3a5fbc-05be-0536-3fd3-ddef2e6763dd@gmail.com> On 06/28/2018 11:18 AM, Stephen Finucane wrote: > Just a quick heads up that an upcoming change to nova's 'tox.ini' will > change the behaviour of multiple environments slightly. > > https://review.openstack.org/#/c/534382/9 > > With this change applied, tox will start sharing environment > directories (e.g. '.tox/py27') among environments with identical > requirements and Python versions. This will mean you won't need to > download dependencies for every environment, which should massively > reduce the amount of time taken to (re)initialize many environments and > save a bit of disk space to boot. This shouldn't affect most people > but it could affect people that use some fancy tooling that depends on > these directories. If this _is_ going to affect you, be sure to make > your concerns known on the review sooner rather than later so we can > resolve said concerns. +100 From doug at doughellmann.com Thu Jun 28 16:49:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 28 Jun 2018 12:49:49 -0400 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <217fcd71-dca1-5c05-4477-0631f28b0500@redhat.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local> <1643ed12ccd.c62ee6db17998.6818813333945980470@ghanshyammann.com> <217fcd71-dca1-5c05-4477-0631f28b0500@redhat.com> Message-ID: <1530204513-sup-9069@lrrr.local> Excerpts from Dmitry Tantsur's message of 2018-06-28 17:05:09 +0200: > On 06/27/2018 03:17 AM, Ghanshyam Mann wrote: > > Users (Not Gate) will face below issues: > > - User cannot use PluginNR with Tempest <19.0.0 (where that new interface was not present). And there is no PluginNR release/tag as this is unreleased and not branched software. > > - User cannot find a PluginIR particular tag/release which can work with tempest <19.0.0 (where that new interface was not present). 
Only way for user to make it work is to manually find out the PluginIR tag/commit before PluginIR started consuming the new interface. > > In these discussions I always think: how is it solved outside of the openstack > world. And the solutions seem to be: > 1. for PluginNR - do releases > 2. for PluginIR - declare their minimum version of tempest in requirements.txt > > Why isn't it sufficient for us? It is. We just haven't been doing it; in part I think because most developers interact with the plugins via the CI system and don't realize they are also "libraries" that need to be released so that refstack users can install them. Doug From mnaser at vexxhost.com Thu Jun 28 16:56:22 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 28 Jun 2018 12:56:22 -0400 Subject: [openstack-dev] [openstack-ansible] dropping selinux support Message-ID: Hi everyone: This email is to ask if there is anyone out there opposed to removing SELinux bits from OpenStack ansible, it's blocking some of the gates and the maintainers for them are no longer working on the project unfortunately. I'd like to propose removing any SELinux stuff from OSA based on the following: 1) We don't gate on it, we don't test it, we don't support it. If you're running OSA with SELinux enforcing, please let us know how :-) 2) It extends beyond the scope of the deployment project and there are no active maintainers with the resources to deal with them 3) With the work currently in place to let OpenStack Ansible install distro packages, we can rely on upstream `openstack-selinux` package to deliver deployments that run with SELinux on. Is there anyone opposed to removing it? If so, please let us know. :-) Thanks! Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From mnaser at vexxhost.com Thu Jun 28 17:00:03 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 28 Jun 2018 13:00:03 -0400 Subject: [openstack-dev] [openstack-ansible] dropping selinux support In-Reply-To: References: Message-ID: Also, this is the change that drops it, so feel free to vote with your opinion there too: https://review.openstack.org/578887 Drop SELinux support from os_swift On Thu, Jun 28, 2018 at 12:56 PM, Mohammed Naser wrote: > Hi everyone: > > This email is to ask if there is anyone out there opposed to removing > SELinux bits from OpenStack ansible, it's blocking some of the gates > and the maintainers for them are no longer working on the project > unfortunately. > > I'd like to propose removing any SELinux stuff from OSA based on the following: > > 1) We don't gate on it, we don't test it, we don't support it. If > you're running OSA with SELinux enforcing, please let us know how :-) > 2) It extends beyond the scope of the deployment project and there are > no active maintainers with the resources to deal with them > 3) With the work currently in place to let OpenStack Ansible install > distro packages, we can rely on upstream `openstack-selinux` package > to deliver deployments that run with SELinux on. > > Is there anyone opposed to removing it? If so, please let us know. :-) > > Thanks! > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. http://vexxhost.com -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 
200 E. mnaser at vexxhost.com W. http://vexxhost.com From sean.mcginnis at gmx.com Thu Jun 28 17:20:09 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 28 Jun 2018 12:20:09 -0500 Subject: [openstack-dev] [release] Release countdown for week R-8, July 2-6 Message-ID: <20180628172008.GA25669@sm-workstation> Your long awaited countdown email... Development Focus ----------------- Teams should be focused on implementing planned work for the cycle. It is also a good time to review those plans and reprioritize anything if needed based on the what progress has been made and what looks realistic to complete in the next few weeks. General Information ------------------- We have a few deadlines coming up as we get closer to the end of the cycle: * Non-client libraries (generally, any library that is not python-${PROJECT}client) must have a final release by July 19. Only critical bugfixes will be allowed past this point. Please make sure any important feature works has required library changes by this time. * Client libraries must have a final release by July 26. Thierry posted an initial schedule for the PTG in September. Please take a look and make sure it looks OK for your team: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131881.html Upcoming Deadlines & Dates -------------------------- Final non-client library release deadline: July 19 Final client library release deadline: July 26 Rocky-3 Milestone: July 26 -- Sean McGinnis (smcginnis) From msm at redhat.com Thu Jun 28 18:04:08 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 28 Jun 2018 14:04:08 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting covered a few topics, but was mainly focused on a few updates to the errors guideline. We began with a review of last week's actions. Ed Leafe has sent an email[7] to mailing list to let the folks working on the GraphQL experiments know that the API-SIG StoryBoard was available for them to use to track their progress. We mentioned the proposed time slot[7] for the API-SIG at the upcoming PTG, but as we are still a few months out from that event no other actions were proposed. Next we moved into a discussion of two guideline updates[8][9] that Chris Dent is proposing. The first review adds concrete examples for the error responses described in the guideline, and the second adds some clarifying language and background on the intent of error codes. During discussion among the group, a few minor areas of improvement were identified and recorded on the reviews with updates to be made by Chris. We discussed the transition to StoryBoard[10] during our bug discussion, noting the places where the workflow differs from Launchpad. Chris also showed us a Board[11] that he created to help figure out how to best use this new tool. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. 
None # Guidelines Currently Under Review [3] * Add links to errors-example.json https://review.openstack.org/#/c/578369/ * Expand error code document to expect clarity https://review.openstack.org/#/c/577118/ * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131881.html [8] https://review.openstack.org/#/c/578369/ [9] https://review.openstack.org/#/c/577118/ [10] https://storyboard.openstack.org/#!/project/1039 [11] https://storyboard.openstack.org/#!/board/91 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From dmendiza at redhat.com Thu Jun 28 19:00:16 2018 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Thu, 28 Jun 2018 14:00:16 -0500 Subject: [openstack-dev] [barbican][heat] Identifying secrets in Barbican In-Reply-To: References: Message-ID: <78c1cd708b9a9992b96dc56033dc8c5ed74fc658.camel@redhat.com> Replying inline. On Wed, 2018-06-27 at 16:39 -0400, Zane Bitter wrote: > We're looking at using Barbican to implement a feature in Heat[1] > and > ran into some questions about how secrets are identified in the > client. > > With most openstack clients, resources are identified by a UUID. You > pass the UUID on the command line (or via the Python API or > whatever) > and the client combines that with the endpoint of the service > obtained > from the service catalog and a path to the resource type to generate > the > URL used to access the resource. > > While there appears to be no technical reason that barbicanclient > couldn't also do this, instead of just the UUID it uses the full URL > as > the identifier for the resource. This is extremely cumbersome for > the > user, and invites confused-deputy attacks where if the attacker can > control the URL, they can get barbicanclient to send a token to an > arbitrary URL. What is the rationale for doing it this way? > IIRC, using URIs instead of UUIDs was a federation pre-optimization done many years ago when Barbican was brand new and we knew we wanted federation but had no idea how it would work. 
The rationale was that the URI would contain both the ID of the secret as well as the location of where it was stored. In retrospect, that was a terrible idea, and using UUIDs for consistency with the rest of OpenStack would have been a better choice. I've added a story to the python-barbicanclient storyboard to enable usage of UUIDs instead of URLs: https://storyboard.openstack.org/#!/story/2002754 I'm sure you've noticed, but the URI that identifies the secret includes the UUID that Barbican uses to identify the secret internally: http://{barbican-host}:9311/v1/secrets/{UUID} So you don't actually need to store the URI, since it can be reconstructed by just saving the UUID and then using whatever URL Barbican has in the service catalog. > > In a tangentially related question, since secrets are immutable once > they've been uploaded, what's the best way to handle a case where > you > need to rotate a secret without causing a temporary condition where > there is no version of the secret available? (The fact that there's > no > way to do this for Nova keypairs is a perpetual problem for people, > and > I'd anticipate similar use cases for Barbican.) I'm going to guess > it's: > > * Create a new secret with the same name > * GET /v1/secrets/?name=&sort=created:desc&limit=1 to find out > the > URL for the newest secret with that name > * Use that URL when accessing the secret > * Once the new secret is created, delete the old one > > Should this, or whatever the actual recommended way of doing it is, > be > baked in to the client somehow so that not every user needs to > reimplement it? > When you store a secret (e.g. using POST /v1/secrets), the response includes the URI both in the JSON body and in the Location: header. There is no need for you to mess around with searching by name, since Barbican does not use the name to identify a secret. You should just save the URI (or UUID) from the response, and then update the resource using the old secret to point to the new secret instead. > > Bottom line: how should Heat expect/require a user to refer to a > Barbican secret in a property of a Heat resource, given that: > - We don't want Heat to become the deputy in "confused deputy > attack". > - We shouldn't do things radically differently to the way Barbican > does > them, because users will need to interact with Barbican first to > store > the secret. > - Many services will likely end up implementing integration with > Barbican and we'd like them all to have similar user interfaces. > - Users will need to rotate credentials without downtime. > > cheers, > Zane. > > BTW the user documentation for Barbican is really hard to find. > Y'all > might want to look in to cross-linking all of the docs you have > together. e.g. there is no link from the Barbican docs to the > python-barbicanclient docs or vice-versa. 
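To make the rotation flow described above concrete, here is a rough sketch using python-barbicanclient (the auth details, names and payloads are placeholders only, not a verified recipe):

    # Rough sketch only -- endpoint, credentials and secret names are placeholders.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from barbicanclient import client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='demo', password='demo-password',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    barbican = client.Client(session=session.Session(auth=auth))

    old_ref = 'http://barbican:9311/v1/secrets/OLD-UUID'  # the ref currently in use

    # 1. Store the new version of the secret; store() returns the new href.
    new_secret = barbican.secrets.create(name='db-password',
                                         payload='the-new-password')
    new_ref = new_secret.store()

    # 2. Repoint whatever consumes the secret (e.g. the Heat resource
    #    property) at new_ref.

    # 3. Only after that, delete the old version.
    barbican.secrets.delete(old_ref)

Nothing in that flow resolves the secret by name, so there is no window in which a lookup can come up empty.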
> > [1] https://storyboard.openstack.org/#!/story/2002126 > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Kevin.Fox at pnnl.gov Thu Jun 28 19:09:01 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 28 Jun 2018 19:09:01 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> , Message-ID: <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> I'll weigh in a bit with my operator hat on as recent experience it pertains to the current conversation.... Kubernetes has largely succeeded in common distribution tools where OpenStack has not been able to. kubeadm was created as a way to centralize deployment best practices, config, and upgrade stuff into a common code based that other deployment tools can build on. I think this has been successful for a few reasons: * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. (Eating its own dogfood) * was willing to make their api robust enough to handle that self enhancement. (secrets are a thing, orchestration is not optional, etc) * they decided to produce a reference product (very important to adoption IMO. You don't have to "build from source" to kick the tires.) * made the barrier to testing/development as low as 'curl http://......minikube; minikube start' (this spurs adoption and contribution) * not having large silo's in deployment projects allowed better communication on common tooling. * Operator focused architecture, not project based architecture. This simplifies the deployment situation greatly. * try whenever possible to focus on just the commons and push vendor specific needs to plugins so vendors can deal with vendor issues directly and not corrupt the core. I've upgraded many OpenStacks since Essex and usually it is multiple weeks of prep, and a 1-2 day outage to perform the deed. about 50% of the upgrades, something breaks only on the production system and needs hot patching on the spot. About 10% of the time, I've had to write the patch personally. I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For comparison, what did I have to do? A couple hours of looking at release notes and trying to dig up examples of where things broke for others. Nothing popped up. Then: on the controller, I ran: yum install -y kubeadm #get the newest kubeadm kubeadm upgrade plan #check things out It told me I had 2 choices. I could: * kubeadm upgrade v1.9.8 * kubeadm upgrade v1.10.5 I ran: kubeadm upgrade v1.10.5 The control plane was down for under 60 seconds and then the cluster was upgraded. The rest of the services did a rolling upgrade live and took a few more minutes. I can take my time to upgrade kubelets as mixed kubelet versions works well. Upgrading kubelet is about as easy. Done. There's a lot of things to learn from the governance / architecture of Kubernetes.. Fundamentally, there isn't huge differences in what Kubernetes and OpenStack tries to provide users. Scheduling a VM or a Container via an api with some kind of networking and storage is the same kind of thing in either case. The how to get the software (openstack or k8s) running is about as polar opposite you can get though. 
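(To put the upgrade steps above in one place -- a sketch only; the full form of the upgrade command is 'kubeadm upgrade apply', and the target version is obviously cluster-specific:)

    # Control-plane upgrade, run on the controller.
    yum install -y kubeadm          # pull in the newest kubeadm package
    kubeadm upgrade plan            # list the versions available to upgrade to
    kubeadm upgrade apply v1.10.5   # upgrade the control plane in place
    # kubelets can then be upgraded node by node afterwards, since mixed
    # kubelet versions work fine.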
I think if OpenStack wants to gain back some of the steam it had before, it needs to adjust to the new world it is living in. This means: * Consider abolishing the project walls. They are driving bad architecture (not intentionally but as a side affect of structure) * focus on the commons first. * simplify the architecture for ops: * make as much as possible stateless and centralize remaining state. * stop moving config options around with every release. Make it promote automatically and persist it somewhere. * improve serial performance before sharding. k8s can do 5000 nodes on one control plane. No reason to do nova cells and make ops deal with it except for the most huge of clouds * consider a reference product (think Linux vanilla kernel. distro's can provide their own variants. thats ok) * come up with an architecture team for the whole, not the subsystem. The whole thing needs to work well. * encourage current OpenStack devs to test/deploy Kubernetes. It has some very good ideas that OpenStack could benefit from. If you don't know what they are, you can't adopt them. And I know its hard to talk about, but consider just adopting k8s as the commons and build on top of it. OpenStack's api's are good. The implementations right now are very very heavy for ops. You could tie in K8s's pod scheduler with vm stuff running in containers and get a vastly simpler architecture for operators to deal with. Yes, this would be a major disruptive change to OpenStack. But long term, I think it would make for a much healthier OpenStack. Thanks, Kevin ________________________________________ From: Zane Bitter [zbitter at redhat.com] Sent: Wednesday, June 27, 2018 4:23 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26 On 27/06/18 07:55, Jay Pipes wrote: > WARNING: > > Danger, Will Robinson! Strong opinions ahead! I'd have been disappointed with anything less :) > On 06/26/2018 10:00 PM, Zane Bitter wrote: >> On 26/06/18 09:12, Jay Pipes wrote: >>> Is (one of) the problem(s) with our community that we have too small >>> of a scope/footprint? No. Not in the slightest. >> >> Incidentally, this is an interesting/amusing example of what we talked >> about this morning on IRC[1]: you say your concern is that the scope >> of *Nova* is too big and that you'd be happy to have *more* services >> in OpenStack if they took the orchestration load off Nova and left it >> just to handle the 'plumbing' part (which I agree with, while noting >> that nobody knows how to get there from here); but here you're >> implying that Kata Containers (something that will clearly have no >> effect either way on the simplicity or otherwise of Nova) shouldn't be >> part of the Foundation because it will take focus away from >> Nova/OpenStack. > > Above, I was saying that the scope of the *OpenStack* community is > already too broad (IMHO). An example of projects that have made the > *OpenStack* community too broad are purpose-built telco applications > like Tacker [1] and Service Function Chaining. [2] > > I've also argued in the past that all distro- or vendor-specific > deployment tools (Fuel, Triple-O, etc [3]) should live outside of > OpenStack because these projects are more products and the relentless > drive of vendor product management (rightfully) pushes the scope of > these applications to gobble up more and more feature space that may or > may not have anything to do with the core OpenStack mission (and have > more to do with those companies' product roadmap). 
I'm still sad that we've never managed to come up with a single way to install OpenStack. The amount of duplicated effort expended on that problem is mind-boggling. At least we tried though. Excluding those projects from the community would have just meant giving up from the beginning. I think Thierry's new map, that collects installer services in a separate bucket (that may eventually come with a separate git namespace) is a helpful way of communicating to users what's happening without forcing those projects outside of the community. > On the other hand, my statement that the OpenStack Foundation having 4 > different focus areas leads to a lack of, well, focus, is a general > statement on the OpenStack *Foundation* simultaneously expanding its > sphere of influence while at the same time losing sight of OpenStack > itself -- and thus the push to create an Open Infrastructure Foundation > that would be able to compete with the larger mission of the Linux > Foundation. > > [1] This is nothing against Tacker itself. I just don't believe that > *applications* that are specially built for one particular industry > belong in the OpenStack set of projects. I had repeatedly stated this on > Tacker's application to become an OpenStack project, FWIW: > > https://review.openstack.org/#/c/276417/ > > [2] There is also nothing wrong with service function chains. I just > don't believe they belong in *OpenStack*. They more appropriately belong > in the (Open)NFV community because they just are not applicable outside > of that community's scope and mission. > > [3] It's interesting to note that Airship was put into its own > playground outside the bounds of the OpenStack community (but inside the > bounds of the OpenStack Foundation). I wouldn't say it's inside the bounds of the Foundation, and in fact confusion about that is a large part of why I wrote the blog post. It is a 100% unofficial project that just happens to be hosted on our infra. Saying it's inside the bounds of the Foundation is like saying Kubernetes is inside the bounds of GitHub. > Airship is AT&T's specific > deployment tooling for "the edge!". I actually think this was the > correct move for this vendor-opinionated deployment tool. > >> So to answer your question: >> >> zaneb: yeah... nobody I know who argues for a small stable >> core (in Nova) has ever said there should be fewer higher layer services. >> zaneb: I'm not entirely sure where you got that idea from. > > Note the emphasis on *Nova* above? > > Also note that when I've said that *OpenStack* should have a smaller > mission and scope, that doesn't mean that higher-level services aren't > necessary or wanted. Thank you for saying this, and could I please ask you to repeat this disclaimer whenever you talk about a smaller scope for OpenStack. Because for those of us working on higher-level services it feels like there has been a non-stop chorus (both inside and outside the project) of people wanting to redefine OpenStack as something that doesn't include us. The reason I haven't dropped this discussion is because I really want to know if _all_ of those people were actually talking about something else (e.g. a smaller scope for Nova), or if it's just you. Because you and I are in complete agreement that Nova has grown a lot of obscure capabilities that make it fiendishly difficult to maintain, and that in many cases might never have been requested if we'd had higher-level tools that could meet the same use cases by composing simpler operations. 
IMHO some of the contributing factors to that were: * The aforementioned hostility from some quarters to the existence of higher-level projects in OpenStack. * The ongoing hostility of operators to deploying any projects outside of Keystone/Nova/Glance/Neutron/Cinder (*still* seen playing out in the Barbican vs. Castellan debate, where we can't even correct one of OpenStack's original sins and bake in a secret store - something k8s managed from day one - because people don't want to install another ReST API even over a backend that they'll already have to install anyway). * The illegibility of public Nova interfaces to potential higher-level tools. > It's just that Nova has been a dumping ground over the past 7+ years for > features that, looking back, should never have been added to Nova (or at > least, never added to the Compute API) [4]. > > What we were discussing yesterday on IRC was this: > > "Which parts of the Compute API should have been implemented in other > services?" > > What we are discussing here is this: > > "Which projects in the OpenStack community expanded the scope of the > OpenStack mission beyond infrastructure-as-a-service?" > > and, following that: > > "What should we do about projects that expanded the scope of the > OpenStack mission beyond infrastructure-as-a-service?" > > Note that, clearly, my opinion is that OpenStack's mission should be to > provide infrastructure as a service projects (both plumbing and porcelain). > > This is MHO only. The actual OpenStack mission statement [5] is > sufficiently vague as to provide no meaningful filtering value for > determining new entrants to the project ecosystem. I think this is inevitable, in that if you want to define cloud computing in a single sentence it will necessarily be very vague. That's the reason for pursuing a technical vision statement (brainstorming for which is how this discussion started), so we can spell it out in a longer form. cheers, Zane. > I *personally* believe that should change in order for the *OpenStack* > community to have some meaningful definition and differentiation from > the broader cloud computing, application development, and network > orchestration ecosystems. > > All the best, > -jay > > [4] ... or never brought into the Compute API to begin with. You know, > vestigial tail and all that. > > [5] for reference: "The OpenStack Mission is to produce a ubiquitous > Open Source Cloud Computing platform that is easy to use, simple to > implement, interoperable between deployments, works well at all scales, > and meets the needs of users and operators of both public and private > clouds." > >> I guess from all the people who keep saying it ;) >> >> Apparently somebody was saying it a year ago too :D >> https://twitter.com/zerobanana/status/883052105791156225 >> >> cheers, >> Zane. 
>> >> [1] >> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-06-26.log.html#t2018-06-26T15:30:33 >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pabelanger at redhat.com Thu Jun 28 21:03:34 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 28 Jun 2018 17:03:34 -0400 Subject: [openstack-dev] [openstack-ansible] dropping selinux support In-Reply-To: References: Message-ID: <20180628210334.GA17798@localhost.localdomain> On Thu, Jun 28, 2018 at 12:56:22PM -0400, Mohammed Naser wrote: > Hi everyone: > > This email is to ask if there is anyone out there opposed to removing > SELinux bits from OpenStack ansible, it's blocking some of the gates > and the maintainers for them are no longer working on the project > unfortunately. > > I'd like to propose removing any SELinux stuff from OSA based on the following: > > 1) We don't gate on it, we don't test it, we don't support it. If > you're running OSA with SELinux enforcing, please let us know how :-) > 2) It extends beyond the scope of the deployment project and there are > no active maintainers with the resources to deal with them > 3) With the work currently in place to let OpenStack Ansible install > distro packages, we can rely on upstream `openstack-selinux` package > to deliver deployments that run with SELinux on. > > Is there anyone opposed to removing it? If so, please let us know. :-) > While I don't use OSA, I would be surprised to learn that selinux wouldn't be supported. I also understand it requires time and care to maintain. Have you tried reaching out to people in #RDO, IIRC all those packages should support selinux. As for gating, maybe default to selinux passive for it to report errors, but not fail. And if anybody is interested in support it, they can do so and enable enforcing again when everything is fixed. - Paul From mnaser at vexxhost.com Thu Jun 28 21:08:19 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 28 Jun 2018 17:08:19 -0400 Subject: [openstack-dev] [openstack-ansible] dropping selinux support In-Reply-To: <20180628210334.GA17798@localhost.localdomain> References: <20180628210334.GA17798@localhost.localdomain> Message-ID: Hi Paul: On Thu, Jun 28, 2018 at 5:03 PM, Paul Belanger wrote: > On Thu, Jun 28, 2018 at 12:56:22PM -0400, Mohammed Naser wrote: >> Hi everyone: >> >> This email is to ask if there is anyone out there opposed to removing >> SELinux bits from OpenStack ansible, it's blocking some of the gates >> and the maintainers for them are no longer working on the project >> unfortunately. >> >> I'd like to propose removing any SELinux stuff from OSA based on the following: >> >> 1) We don't gate on it, we don't test it, we don't support it. 
If >> you're running OSA with SELinux enforcing, please let us know how :-) >> 2) It extends beyond the scope of the deployment project and there are >> no active maintainers with the resources to deal with them >> 3) With the work currently in place to let OpenStack Ansible install >> distro packages, we can rely on upstream `openstack-selinux` package >> to deliver deployments that run with SELinux on. >> >> Is there anyone opposed to removing it? If so, please let us know. :-) >> > While I don't use OSA, I would be surprised to learn that selinux wouldn't be > supported. I also understand it requires time and care to maintain. Have you > tried reaching out to people in #RDO, IIRC all those packages should support > selinux. Indeed, the support from RDO for SELinux works very well. In this case however, OpenStack ansible deploys from source and therefore places binaries in different places than the default expected locations for the upstream `openstack-selinux`. As we work towards adding 'distro' support (which to clarify, it means install from RPMs or DEBs rather than from source), we'll be able to pull in that package and automagically get SELinux support that's supported by an upstream that tracks it. > As for gating, maybe default to selinux passive for it to report errors, but not > fail. And if anybody is interested in support it, they can do so and enable > enforcing again when everything is fixed. That's reasonable. However, right now we have bugs around the distribution of SELinux modules and how they are compiled inside the the containers, which means that we're not having problems with the rules as much as uploading the rules and getting them compiled inside the server. I hope I cleared up a bit more of our side of things, I'm actually looking forward for us being able to support upstream distro packages. > - Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com From dtroyer at gmail.com Thu Jun 28 21:18:44 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 28 Jun 2018 16:18:44 -0500 Subject: [openstack-dev] [osc][python-openstackclient] osc-included image signing In-Reply-To: <5027e00f-afdb-eaa6-7775-b161abee67d2@secustack.com> References: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> <5027e00f-afdb-eaa6-7775-b161abee67d2@secustack.com> Message-ID: On Thu, Jun 28, 2018 at 8:04 AM, Josephine Seifert wrote: >> Go ahead and post WIP reviews and we can look at it further. To merge >> I'll want all of the usual tests, docs, release notes, etc but don't >> wait if that is not all done up front. > Here are the two WIP reviews: > > cursive: https://review.openstack.org/#/c/578767/ > osc: https://review.openstack.org/#/c/578769/ So one problem I have here is the dependencies of cursive, all of which become OSC dependencies if cursive is added. It includes oslo.log which OSC does not use and doesn't want to use for $REASONS that boil down to assumptions it makes for server-side use that are not good for client-side use. 
cursive includes castellan which also includes oslo.log and oslo.context, which I must admit I don't know how it affects a CLI because we've never tried to include it before. python-barbicanclient is also included by cursive, which would make that a new permanent dependency. This may be acceptable, it is partially up to the barbican team if they want to be subject to OSC testing that they may not have now. Looking at the changes you have to cursive, if that is all you need from it those bits could easily go somewhere in osc or osc-lib if you don't also need them elsewhere. dt -- Dean Troyer dtroyer at gmail.com From feilong at catalyst.net.nz Thu Jun 28 21:19:56 2018 From: feilong at catalyst.net.nz (Fei Long Wang) Date: Fri, 29 Jun 2018 09:19:56 +1200 Subject: [openstack-dev] [magnum] Problems with multi-regional OpenStack installation In-Reply-To: References: Message-ID: Hi Andrei, Thanks for raising this issue. I'm keen to review and happy to help. I have just done a quick look at https://review.openstack.org/#/c/578356, and it looks good to me. As for the heat-container-engine issue, it's probably a bug. I will test and propose a patch, which needs to release a new image then. Will update progress here. Cheers. On 28/06/18 19:11, Andrei Ozerov wrote: > Greetings. > > Has anyone successfully deployed Magnum in the multi-regional > OpenStack installation? > In my case different services (Nova, Heat) have different public > endpoint in every region. I couldn't start Kube-apiserver until I > added "region" to a kube_openstack_config. > I created a story with full description of that problem: > https://storyboard.openstack.org/#!/story/2002728 > and opened a > review with a small fix: https://review.openstack.org/#/c/578356. > > But apart from that I have another problem with such kind of OpenStack > installation.
> Say I have two regions. When I create a cluster in the second > OpenStack region, Heat-container-engine tries to fetch Stack data from > the first region. > It then throws the following error: "The Stack (hame-uuid) could not > be found". I can see GET requests for that stack in logs of Heat-API > in the first region but I don't see them in the second one (where that > Heat stack actually exists). > > I'm assuming that Heat-container-engine doesn't pass "region_name" > when it searches for Heat endpoints: > https://github.com/openstack/magnum/blob/master/magnum/drivers/common/image/heat-container-agent/scripts/heat-config-notify#L149. > I've tried to change it but it's tricky because the > Heat-container-engine is installed via Docker system-image and it > won't work after restart if it's failed in the initial bootstrap > (because /var/run/heat-config/heat-config is empty). > Can someone help me with that? I guess it's better to create a > separate story for that issue? > > -- > Ozerov Andrei > ozerov at selectel.com > +7 (800) 555 06 75 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers & Best regards, Feilong Wang (王飞龙) -------------------------------------------------------------------------- Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu Jun 28 21:32:58 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 28 Jun 2018 17:32:58 -0400 Subject: [openstack-dev] [barbican][heat] Identifying secrets in Barbican In-Reply-To: <78c1cd708b9a9992b96dc56033dc8c5ed74fc658.camel@redhat.com> References: <78c1cd708b9a9992b96dc56033dc8c5ed74fc658.camel@redhat.com> Message-ID: On 28/06/18 15:00, Douglas Mendizabal wrote: > Replying inline. [snip] > IIRC, using URIs instead of UUIDs was a federation pre-optimization > done many years ago when Barbican was brand new and we knew we wanted > federation but had no idea how it would work. The rationale was that > the URI would contain both the ID of the secret as well as the location > of where it was stored. > > In retrospect, that was a terrible idea, and using UUIDs for > consistency with the rest of OpenStack would have been a better choice. > I've added a story to the python-barbicanclient storyboard to enable > usage of UUIDs instead of URLs: > > https://storyboard.openstack.org/#!/story/2002754 Cool, thanks for clearing that up. If UUID is going to become the/a standard way to reference stuff in the future then we'll just use the UUID for the property value. > I'm sure you've noticed, but the URI that identifies the secret > includes the UUID that Barbican uses to identify the secret internally: > > http://{barbican-host}:9311/v1/secrets/{UUID} > > So you don't actually need to store the URI, since it can be > reconstructed by just saving the UUID and then using whatever URL > Barbican has in the service catalog. 
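(i.e. something like this minimal sketch -- assuming an authenticated keystoneauth session, and that the endpoint in the catalog is unversioned like the example URL above:)

    # Illustrative only: keep just the UUID and rebuild the ref when needed.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Placeholder auth; any authenticated keystoneauth1 session will do.
    sess = session.Session(auth=v3.Password(
        auth_url='http://keystone:5000/v3',
        username='demo', password='demo-password', project_name='demo',
        user_domain_id='default', project_domain_id='default'))

    secret_ref = 'http://barbican:9311/v1/secrets/2df94ef4-4c4b-4b2e-b9f4-6a2f3a5c9d01'

    # The UUID is just the last path segment of the ref...
    secret_uuid = secret_ref.rstrip('/').split('/')[-1]

    # ...and the ref can be rebuilt from the catalogued endpoint plus the UUID.
    endpoint = sess.get_endpoint(service_type='key-manager', interface='public')
    rebuilt_ref = '%s/v1/secrets/%s' % (endpoint.rstrip('/'), secret_uuid)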
> >> >> In a tangentially related question, since secrets are immutable once >> they've been uploaded, what's the best way to handle a case where >> you >> need to rotate a secret without causing a temporary condition where >> there is no version of the secret available? (The fact that there's >> no >> way to do this for Nova keypairs is a perpetual problem for people, >> and >> I'd anticipate similar use cases for Barbican.) I'm going to guess >> it's: >> >> * Create a new secret with the same name >> * GET /v1/secrets/?name=&sort=created:desc&limit=1 to find out >> the >> URL for the newest secret with that name >> * Use that URL when accessing the secret >> * Once the new secret is created, delete the old one >> >> Should this, or whatever the actual recommended way of doing it is, >> be >> baked in to the client somehow so that not every user needs to >> reimplement it? >> > > When you store a secret (e.g. using POST /v1/secrets), the response > includes the URI both in the JSON body and in the Location: header. > > There is no need for you to mess around with searching by name, since > Barbican does not use the name to identify a secret. You should just > save the URI (or UUID) from the response, and then update the resource > using the old secret to point to the new secret instead. Sometimes user will want to be able to rotate secrets without updating all of the places that they're referenced from though. cheers, Zane. From miguel at mlavalle.com Thu Jun 28 23:31:39 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 28 Jun 2018 18:31:39 -0500 Subject: [openstack-dev] [neutron] Canceling Neutron drivers meeting on June 29th Message-ID: Dear Neutron Team, This week we don't have RFEs in the triaged stage to be discussed during our weekly drivers meeting. As a consequence, I am canceling the meeting on June 29th at 1400UTC. We have new RFEs and RFEs in the confirmed stage. I encourage the team to look and them, add your opinion and help to move them to the triaged stage Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Jun 28 23:53:46 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 Jun 2018 08:53:46 +0900 Subject: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins In-Reply-To: <217fcd71-dca1-5c05-4477-0631f28b0500@redhat.com> References: <1643b637954.12a76afca1193.5658117153151589198@ghanshyammann.com> <1643b8715e6.fca543903252.1902631162047144959@ghanshyammann.com> <0dbace6e-e3be-1c43-44bc-06f2be7bcdb0@openstack.org> <1530017472-sup-6339@lrrr.local> <20180626135209.GA15436@zeong> <1530021936-sup-5714@lrrr.local> <1643ed12ccd.c62ee6db17998.6818813333945980470@ghanshyammann.com> <217fcd71-dca1-5c05-4477-0631f28b0500@redhat.com> Message-ID: <16448d1319c.1051b3c3959748.434903455228904433@ghanshyammann.com> ---- On Fri, 29 Jun 2018 00:05:09 +0900 Dmitry Tantsur wrote ---- > On 06/27/2018 03:17 AM, Ghanshyam Mann wrote: > > > > > > > > ---- On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann wrote ---- > > > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: > > > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: > > > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: > > > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, wrote: > > > > > > > > > > > > > Dmitry Tantsur wrote: > > > > > > > > [...] 
> > > > > > > > My suggestion: tempest has to be compatible with all supported releases > > > > > > > > (of both services and plugins) OR be branched. > > > > > > > > [...] > > > > > > > I tend to agree with Dmitry... We have a model for things that need > > > > > > > release alignment, and that's the cycle-bound series. The reason tempest > > > > > > > is branchless was because there was no compatibility issue. If the split > > > > > > > of tempest plugins introduces a potential incompatibility, then I would > > > > > > > prefer aligning tempest to the existing model rather than introduce a > > > > > > > parallel tempest-specific cycle just so that tempest can stay > > > > > > > release-independent... > > > > > > > > > > > > > > I seem to remember there were drawbacks in branching tempest, though... > > > > > > > Can someone with functioning memory brain cells summarize them again ? > > > > > > > > > > > > > > > > > > > > > > > > > Branchless Tempest enforces api stability across branches. > > > > > > > > > > I'm sorry, but I'm having a hard time taking this statement seriously > > > > > when the current source of tension is that the Tempest API itself > > > > > is breaking for its plugins. > > > > > > > > > > Maybe rather than talking about how to release compatible things > > > > > together, we should go back and talk about why Tempest's API is changing > > > > > in a way that can't be made backwards-compatible. Can you give some more > > > > > detail about that? > > > > > > > > > > > > > Well it's not, if it did that would violate all the stability guarantees > > > > provided by Tempest's library and plugin interface. I've not ever heard of > > > > these kind of backwards incompatibilities in those interfaces and we go to > > > > all effort to make sure we don't break them. Where did the idea that > > > > backwards incompatible changes where being introduced come from? > > > > > > In his original post, gmann said, "There might be some changes in > > > Tempest which might not work with older version of Tempest Plugins." > > > I was surprised to hear that, but I'm not sure how else to interpret > > > that statement. > > > > I did not mean to say that Tempest will introduce the changes in backward incompatible way which can break plugins. That cannot happen as all plugins and tempest are branchless and they are being tested with master Tempest so if we change anything backward incompatible then it break the plugins gate. Even we have to remove any deprecated interfaces from Tempest, we fix all plugins first like - https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged) > > > > What I mean to say here is that adding new or removing deprecated interface in Tempest might not work with all released version or unreleased Plugins. That point is from point of view of using Tempest and Plugins in production cloud testing not gate(where we keep the compatibility). Production Cloud user use Tempest cycle based version. Pike based Cloud will be tested by Tempest 17.0.0 not latest version (though latest version might work). > > > > This thread is not just for gate testing point of view (which seems to be always interpreted), this is more for user using Tempest and Plugins for their cloud testing. I am looping operator mail list also which i forgot in initial post. > > > > We do not have any tag/release from plugins to know what version of plugin can work with what version of tempest. 
For Example If There is new interface introduced by Tempest 19.0.0 and pluginX start using it. Now it can create issues for pluginX in both release model 1. plugins with no release (I will call this PluginNR), 2. plugins with independent release (I will call it PluginIR). > > > > Users (Not Gate) will face below issues: > > - User cannot use PluginNR with Tempest <19.0.0 (where that new interface was not present). And there is no PluginNR release/tag as this is unreleased and not branched software. > > - User cannot find a PluginIR particular tag/release which can work with tempest <19.0.0 (where that new interface was not present). Only way for user to make it work is to manually find out the PluginIR tag/commit before PluginIR started consuming the new interface. > > In these discussions I always think: how is it solved outside of the openstack > world. And the solutions seem to be: > 1. for PluginNR - do releases > 2. for PluginIR - declare their minimum version of tempest in requirements.txt > > Why isn't it sufficient for us? It is sufficient for many cases (i think almost all the plugins have Tempest min version in requirements.txt) but for testing Cloud based on old release is little difficult again. For example, to test the CloudA which is based on OpenStack Ocata need to pick the Tempest version 16.0.0. with independent release model (current model)- for Ocata cycle, there is no corresponding version of plugins available, So they have to find the Plugin version manually which has their minimum Tempest version compatible with 16.0.0. With cycle-with-intermediary release model, it become easy for users to know the cycle released version of Tempest and Plugins. -gmann > > Dmitry > > > > > Let me make it more clear via diagram: > > PluginNR PluginIR > > > > Tempest 19.0.0 > > Add NewInterface Use NewInterface Use NewInterface > > > > > > Tempest 18.0.0 > > NewInterface not present No version of PluginNR Unknown version of PluginIR > > > > > > GATE (No Issue as latest things always being tested live ): OK OK > > > > User issues: X (does not work) Hard to find compatible version > > > > > > We need a particular tag from Plugins for OpenStack release, EOL of OpenStack release like Tempest does so that user can test their old release Cloud in easy way. > > > > -gmann > > > > > > > > > As for this whole thread I don't understand any of the points being brought up > > > > in the original post or any of the follow ons, things seem to have been confused > > > > from the start. The ask from users at the summit was simple. When a new OpenStack > > > > release is pushed we push a tempest release to mark that (the next one will be > > > > 19.0.0 to mark Rocky). Users were complaining that many plugins don't have a > > > > corresponding version to mark support for a new release. So when trying to run > > > > against a rocky cloud you get tempest 19.0.0 and then a bunch of plugins for > > > > various services at different sha1s which have to be manually looked up based > > > > on dates. All users wanted at the summit was a tag for plugins like tempest > > > > does with the first number in: > > > > > > > > https://docs.openstack.org/tempest/latest/overview.html#release-versioning > > > > > > > > which didn't seem like a bad idea to me. I'm not sure the best mechanism to > > > > accomplish this, because I agree with much of what plugin maintainers were > > > > saying on the thread about wanting to control their own releases. 
But the > > > > desire to make sure users have a tag they can pull for the addition or > > > > removal of a supported release makes sense as something a plugin should do. > > > > > > We don't coordinate versions across projects anywhere else, for a > > > bunch of reasons including the complexity of coordinating the details > > > and the confusion it causes when the first version of something is > > > 19.0.0. Instead, we list the compatible versions of everything > > > together on a series-specific page on releases.o.o. That seems to > > > be enough to help anyone wanting to know which versions of tools > > > work together. The data is also available in YAML files, so it's easy > > > enough to consume by automation. > > > > > > Would that work for tempest and it's plugins, too? > > > > > > Is the problem that the versions are not the same, or that some of the > > > plugins are not being tagged at all? > > > > > > Doug > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lars at redhat.com Fri Jun 29 00:04:02 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 28 Jun 2018 20:04:02 -0400 Subject: [openstack-dev] [Puppet] Requirements for running puppet unit tests? Message-ID: <20180629000402.cuf2tpdc4fsagnkk@redhat.com> Hey folks, I'm looking for some guidance on how to successfully run rspec tests for openstack puppet modules (specifically, puppet-keystone). I started with CentOS 7, but running the 'bundle install command' told me: Gem::InstallError: public_suffix requires Ruby version >= 2.1. An error occurred while installing public_suffix (3.0.2), and Bundler cannot continue. Make sure that `gem install public_suffix -v '3.0.2'` succeeds before bundling. So I tried it on my Fedora 28 system, and while the 'bundle install' completed successfully, running `bundle exec rake lint` told me: $ bundle exec rake lint /home/lars/vendor/bundle/ruby/2.4.0/gems/puppet-2.7.26/lib/puppet/util/monkey_patches.rb:93: warning: constant ::Fixnum is deprecated rake aborted! NoMethodError: undefined method `<<' for nil:NilClass ...followed by a traceback. So then I tried it on Ubuntu 18.04, and the bundle install fails with: Gem::RuntimeRequirementNotMetError: grpc requires Ruby version < 2.5, >= 2.0. The current ruby version is 2.5.0. An error occurred while installing grpc (1.7.0), and Bundler cannot continue. And finally I tried Ubuntu 17.10. The bundle install completed successfully, but the 'rake lint' failed with: $ bundle exec rake lint /home/lars/vendor/bundle/ruby/2.3.0/gems/puppet-2.7.26/lib/puppet/defaults.rb:164: warning: key :queue_type is duplicated and overwritten on line 165 rake aborted! 
can't modify frozen Symbol What is required to successfully run the rspec tests? -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From Greg.Waines at windriver.com Fri Jun 29 02:24:45 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Fri, 29 Jun 2018 02:24:45 +0000 Subject: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation In-Reply-To: References: <54898258-0FC0-46F3-9C64-FE4CEEA2B78C@windriver.com> Message-ID: <0B139046-4F69-452E-B390-C756543EA270@windriver.com> In-lined comments / questions below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 28, 2018 at 3:35 AM To: "ekuvaja at redhat.com" >, Greg Waines >, "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, I’ve added the following pros and cons to the different options: * One Glance with multiple backends [1] [Greg] I’m not sure I understand this option. Is each Glance Backend completely independent ? e.g. when I do a “glance image-create ...” am I specifying a backend and that’s where the image is to be stored ? This is what I was originally thinking. So I was thinking that synchronization of images to Edge Clouds is simply done by doing “glance image-create ...” to the appropriate backends. But then you say “The syncronisation of the image data is the responsibility of the backend (eg.: CEPH).” ... which makes it sound like my thinking above is wrong and the Backends are NOT completely independent, but instead in some sort of replication configuration ... is this leveraging ceph replication factor or something (for example) ? * Pros: * Relatively easy to implement based on the current Glance architecture * Cons: * Requires the same Glance backend in every edge cloud instance * Requires the same OpenStack version in every edge cloud instance (apart from during upgrade) * Sensitivity for network connection loss is not clear [Greg] I could be wrong, but even though the OpenStack services in the edge clouds are using the images in their glance backend with a direct URL, I think the OpenStack services (e.g. nova) still need to get the direct URL via the Glance API which is ONLY available at the central site. So don’t think this option supports autonomy of edge Subcloud when connectivity is lost to central site. * Several Glances with an independent syncronisation service, sych via Glance API [2] * Pros: * Every edge cloud instance can have a different Glance backend * Can support multiple OpenStack versions in the different edge cloud instances * Can be extended to support multiple VIM types * Cons: * Needs a new synchronisation service [Greg] Don’t believe this is a big con ... suspect we are going to need this new synchronization service for synchronizing resources of a number of other openstack services ... not just glance. * Several Glances with an independent syncronisation service, synch using the backend [3] [Greg] This option seems a little odd to me. We are synching the GLANCE DB via some new synchronization service, but synching the Images themselves via the backend ... I think that would be tricky to ensure consistency. 
* Pros: * I could not find any * Cons: * Needs a new synchronisation service * One Glance and multiple Glance API servers [4] * Pros: * Implicitly location aware * Cons: * First usage of an image always takes a long time * In case of network connection error to the central Galnce Nova will have access to the images, but will not be able to figure out if the user have rights to use the image and will not have path to the images data [Greg] Yeah we tripped over the issue that although the Glance API can cache the image itself, it does NOT cache the image meta data (which I am guessing has info like “user access” etc.) ... so this option improves latency of access to image itself but does NOT provide autonomy. We plan on looking at options to resolve this, as we like the “implicit location awareness” of this option ... and believe it is an option that some customers will like. If anyone has any ideas ? Are these correct? Do I miss anything? Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_with_multiple_backends [2]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_sych_via_Glance_API [3]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend [4]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_and_multiple_Glance_API_servers From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Monday, June 11, 2018 4:29 PM To: Waines, Greg ; OpenStack Development Mailing List (not for usage questions) ; edge-computing at lists.openstack.org Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Thanks for the comments. I’ve updated the wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#Several_Glances_with_an_independent_syncronisation_service.2C_synch_using_the_backend Br, Gerg0 From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Friday, June 8, 2018 1:46 PM To: Csatari, Gergely (Nokia - HU/Budapest) >; OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Responses in-lined below, Greg. From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Friday, June 8, 2018 at 3:39 AM To: Greg Waines >, "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: RE: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, Going inline. From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Thursday, June 7, 2018 2:24 PM I had some additional questions/comments on the Image Synchronization Options ( https://wiki.openstack.org/wiki/Image_handling_in_edge_environment ): One Glance with multiple backends * In this scenario, are all Edge Clouds simply configured with the one central glance for its GLANCE ENDPOINT ? * i.e. GLANCE is a typical shared service in a multi-region environment ? [G0]: In my understanding yes. * If so, how does this OPTION support the requirement for Edge Cloud Operation when disconnected from Central Location ? [G0]: This is an open question for me also. 
Several Glances with an independent synchronization service (PUSH) * I refer to this as the PUSH model * I don’t believe you have to ( or necessarily should) rely on the backend to do the synchronization of the images * i.e. the ‘Synch Service’ could do this strictly through Glance REST APIs (making it independent of the particular Glance backend ... and allowing the Glance Backends at Central and Edge sites to actually be different) [G0]: Okay, I can update the wiki to reflect this. Should we keep the “synchronization by the backend” option as an other alternative? [Greg] Yeah we should keep it as an alternative. * I think the ‘Synch Service’ MUST be able to support ‘selective/multicast’ distribution of Images from Central to Edge for Image Synchronization * i.e. you don’t want Central Site pushing ALL images to ALL Edge Sites ... especially for the small Edge Sites [G0]: Yes, the question is how to define these synchronization policies. [Greg] Agreed ... we’ve had some very high-level discussions with end users, but haven’t put together a proposal yet. * Not sure ... but I didn’t think this was the model being used in mixmatch ... thought mixmatch was more the PULL model (below) [G0]: Yes, this is more or less my understanding. I remove the mixmatch reference from this chapter. One Glance and multiple Glance API Servers (PULL) * I refer to this as the PULL model * This is the current model supported in StarlingX’s Distributed Cloud sub-project * We run glance-api on all Edge Clouds ... that talk to glance-registry on the Central Cloud, and * We have glance-api setup for caching such that only the first access to an particular image incurs the latency of the image transfer from Central to Edge [G0]: Do you do image caching in Glance API or do you rely in the image cache in Nova? In the Forum session there were some discussions about this and I think the conclusion was that using the image cache of Nova is enough. [Greg] We enabled image caching in the Glance API. I believe that Nova Image Caching caches at the compute node ... this would work ok for all-in-one edge clouds or small edge clouds. But glance-api caching caches at the edge cloud level, so works better for large edge clouds with lots of compute nodes. * * this PULL model affectively implements the location aware synchronization you talk about below, (i.e. synchronise images only to those cloud instances where they are needed)? In StarlingX Distributed Cloud, We plan on supporting both the PUSH and PULL model ... suspect there are use cases for both. [G0]: This means that you need an architecture supporting both. Just for my curiosity what is the use case for the pull model once you have the push model in place? [Greg] The PULL model certainly results in the most efficient distribution of images ... basically images are distributed ONLY to edge clouds that explicitly use the image. Also if the use case is NOT concerned about incurring the latency of the image transfer from Central to Edge on the FIRST use of image then the PULL model could be preferred ... TBD. Here is the updated wiki: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [Greg] Looks good. Greg. 
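For anyone who wants to experiment with the PUSH model discussed above, here is a rough sketch of a one-shot synchronisation pass done purely through the Glance API with openstacksdk. The cloud names and the "edge_sync" image property used for selective distribution are made up for illustration, and the image-proxy method names should be double-checked against the SDK release in use -- this is a sketch, not a proposed implementation.

import openstack

CENTRAL = "central"
EDGES = ["edge1", "edge2"]

def sync_images():
    central = openstack.connect(cloud=CENTRAL)
    for edge_name in EDGES:
        edge = openstack.connect(cloud=edge_name)
        existing = {img.name for img in edge.image.images()}
        for img in central.image.images():
            # Selective distribution: only push images explicitly tagged
            # for this edge; the "edge_sync" property is hypothetical.
            wanted = (getattr(img, "properties", None) or {}).get("edge_sync", "")
            if img.name in existing or edge_name not in wanted:
                continue
            data = central.image.download_image(img)
            edge.image.upload_image(
                name=img.name,
                data=data,
                disk_format=img.disk_format,
                container_format=img.container_format,
            )

if __name__ == "__main__":
    sync_images()

A real synchronisation service would still need to handle retries, deletions and the policy question of which images go where, but the point is that nothing above depends on the central and edge Glance backends being the same.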
Thanks, Gerg0 From: "Csatari, Gergely (Nokia - HU/Budapest)" > Date: Thursday, June 7, 2018 at 6:49 AM To: "openstack-dev at lists.openstack.org" >, "edge-computing at lists.openstack.org" > Subject: Re: [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation Hi, I did some work ont he figures and realised, that I have some questions related to the alternative options: Multiple backends option: - What is the API between Glance and the Glance backends? - How is it possible to implement location aware synchronisation (synchronise images only to those cloud instances where they are needed)? - Is it possible to have different OpenStack versions in the different cloud instances? - Can a cloud instance use the locally synchronised images in case of a network connection break? - Is it possible to implement this without storing database credentials ont he edge cloud instances? Independent synchronisation service: - If I understood [1] correctly mixmatch can help Nova to attach a remote volume, but it will not help in synchronizing the images. is this true? As I promised in the Edge Compute Group call I plan to organize an IRC review meeting to check the wiki. Please indicate your availability in [2]. [1]: https://mixmatch.readthedocs.io/en/latest/ [2]: https://doodle.com/poll/bddg65vyh4qwxpk5 Br, Gerg0 From: Csatari, Gergely (Nokia - HU/Budapest) Sent: Wednesday, May 23, 2018 8:59 PM To: OpenStack Development Mailing List (not for usage questions) >; edge-computing at lists.openstack.org Subject: [edge][glance]: Wiki of the possible architectures for image synchronisation Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From shu.mutow at gmail.com Fri Jun 29 02:40:53 2018 From: shu.mutow at gmail.com (Shu M.) Date: Fri, 29 Jun 2018 11:40:53 +0900 Subject: [openstack-dev] [zun][zun-ui] Priorities of new feature on Zun UI In-Reply-To: References: Message-ID: Hi Hongbin, Thank you for filling your opinion! I'd like to consider plan for Stein's Zun UI. Best regards, Shu 2018年6月27日(水) 21:45 Hongbin Lu : > Hi Shu, > > Thanks for the raising this discussion. I have filled my opinion in the > etherpad. In general, I am quite satisfied by the current feature set > provided by the Zun UI. Thanks for the great work from the UI team. > > Best regards, > Hongbin > > On Wed, Jun 27, 2018 at 12:18 AM Shu M. wrote: > >> Hi folks, >> >> Could you let me know your thoughts for priorities of new features on Zun >> UI. >> Could you jump to following pad, and fill your opinions? 
>> https://etherpad.openstack.org/p/zun-ui >> >> Best regards, >> Shu >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at redhat.com Fri Jun 29 03:08:13 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 28 Jun 2018 23:08:13 -0400 Subject: [openstack-dev] DeployArtifacts considered...complicated? In-Reply-To: References: <20180619142940.mnhp3k5of6iynhwp@redhat.com> <5469805b-bd93-9508-f4a6-fb91a22d4961@redhat.com> Message-ID: <20180629030813.hu2d7z6cvxbro3jn@redhat.com> On Tue, Jun 19, 2018 at 10:12:54AM -0600, Alex Schultz wrote: > -1 to more services. We take a Heat time penalty for each new > composable service we add and in this case I don't think this should > be a service itself. I think for this case, it would be better suited > as a host prep task than a defined service. Providing a way for users > to define external host prep tasks might make more sense. But right now, the only way to define a host_prep_task is via a service template, right? What I've done for this particular case is create a new service template that exists only to provide a set of host_prep_tasks: https://github.com/CCI-MOC/rhosp-director-config/blob/master/templates/services/patch-puppet-modules.yaml Is there a better way to do this? -- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From lars at redhat.com Fri Jun 29 03:12:12 2018 From: lars at redhat.com (Lars Kellogg-Stedman) Date: Thu, 28 Jun 2018 23:12:12 -0400 Subject: [openstack-dev] DeployArtifacts considered...complicated? In-Reply-To: <5469805b-bd93-9508-f4a6-fb91a22d4961@redhat.com> References: <20180619142940.mnhp3k5of6iynhwp@redhat.com> <5469805b-bd93-9508-f4a6-fb91a22d4961@redhat.com> Message-ID: <20180629031212.47ql6kbhepfusstj@redhat.com> On Tue, Jun 19, 2018 at 05:17:36PM +0200, Jiří Stránský wrote: > For the puppet modules specifically, we might also add another > directory+mount into the docker-puppet container, which would be blank by > default (unlike the existing, already populated /etc/puppet and > /usr/share/openstack-puppet/modules). And we'd put that directory at the > very start of modulepath. Then i *think* puppet would use a particular > module from that dir *only*, not merge the contents with the rest of > modulepath... No, you would still have the problem that types/providers from *all* available paths are activated, so if in your container you have /etc/puppet/modules/themodule/lib/puppet/provider/something/foo.rb, and you mount into the container /container/puppet/modules/themodule/lib/puppet/provider/something/bar.rb, then you end up with both foo.rb and bar.rb active and possibly conflicting. This only affects module lib directories. As Alex pointed out, puppet classes themselves behave differently and don't conflict in this fashion. 
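A quick way to check whether you have ended up in that situation is to look for provider directories that appear under more than one modulepath entry, since every .rb file in them gets activated together. Here is a rough sketch (the paths listed are only examples, adjust to your deployment):

import os
from collections import defaultdict

MODULEPATHS = [
    "/etc/puppet/modules",
    "/usr/share/openstack-puppet/modules",
    "/container/puppet/modules",  # hypothetical extra mount
]

def find_mixed_provider_dirs(paths):
    # Map a relative dir like "themodule/lib/puppet/provider/something"
    # to the modulepath entries it was found under.
    found = defaultdict(lambda: defaultdict(list))
    for base in paths:
        for root, _dirs, files in os.walk(base):
            rel = os.path.relpath(root, base)
            if "lib/puppet/" not in rel + "/":
                continue
            for name in files:
                if name.endswith(".rb"):
                    found[rel][base].append(name)
    return {rel: bases for rel, bases in found.items() if len(bases) > 1}

if __name__ == "__main__":
    for rel, bases in sorted(find_mixed_provider_dirs(MODULEPATHS).items()):
        for base, files in bases.items():
            print("%s (under %s): %s" % (rel, base, ", ".join(sorted(files))))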
-- Lars Kellogg-Stedman | larsks @ {irc,twitter,github} http://blog.oddbit.com/ | From duttaa at hotmail.com Fri Jun 29 09:49:39 2018 From: duttaa at hotmail.com (Abhijit Dutta) Date: Fri, 29 Jun 2018 09:49:39 +0000 Subject: [openstack-dev] [openstack-community] DevStack Installation issue In-Reply-To: References: , , Message-ID: Hi, As advised I installed Fedora 27 (Workstation) and tried with the latest version of devstack (pulled from git). However this time I got the following error - ./stack.sh:1313:start_placement /opt/stack/devstack/lib/placement:184:start_placement_api /opt/stack/devstack/lib/placement:179:die [ERROR] /opt/stack/devstack/lib/placement:179 placement-api did not start Error on exit World dumping... see /opt/stack/logs/worlddump-2018-06-29-071219.txt for details The local.cnf has been configured as: [[local|localrc]] FLOATING_RANGE=192.168.1.224/27 FIXED_RANGE=10.11.12.0/24 FIXED_NETWORK_SIZE=256 FLAT_INTERFACE=eth0 ADMIN_PASSWORD=supersecret DATABASE_PASSWORD=iheartdatabases RABBIT_PASSWORD=flopsymopsy SERVICE_PASSWORD=iheartksl I have configured a static IP which is 192.168.1.201 in my laptop, which has dual core and 3gigs RAM. Please let me know, what can cause this error. ~Thanx Abhijit ________________________________ From: Dr. Jens Harbott (frickler) Sent: Wednesday, June 27, 2018 3:53 PM To: OpenStack Development Mailing List (not for usage questions) Cc: Abhijit Dutta Subject: Re: [openstack-dev] [openstack-community] DevStack Installation issue 2018-06-27 16:58 GMT+02:00 Amy Marrich : > Abhijit, > > I'm forwarding your issue to the OpenStack-dev list so that the right people > might see your issue and respond. > > Thanks, > > Amy (spotz) > > ---------- Forwarded message ---------- > From: Abhijit Dutta > Date: Wed, Jun 27, 2018 at 5:23 AM > Subject: [openstack-community] DevStack Installation issue > To: "community at lists.openstack.org" > > > Hi, > > > I am trying to install DevStack for the first time in a baremetal with > Fedora 28 installed. While executing the stack.sh I am getting the > following error: > > > No match for argument: Django > Error: Unable to find a match > > Can anybody in the community help me out with this problem. We are aware of some issues with deploying devstack on Fedora 28, these are being worked on, see https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:master+topic:uwsgi-f28 If you want a quick solution, you could try deploying on Fedora 27 or Centos 7 instead. -------------- next part -------------- An HTML attachment was scrubbed... URL: From josephine.seifert at secustack.com Fri Jun 29 10:38:27 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Fri, 29 Jun 2018 12:38:27 +0200 Subject: [openstack-dev] [osc][python-openstackclient] osc-included image signing In-Reply-To: References: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> <5027e00f-afdb-eaa6-7775-b161abee67d2@secustack.com> Message-ID: <0e359b8a-669e-92c0-2d2d-3493b9898d90@secustack.com> Hello Dean, thanks for your code comments so far. > Looking at the changes you have to cursive, if that is all you need > from it those bits could easily go somewhere in osc or osc-lib if you > don't also need them elsewhere. There lies the problem, because we also want to implement signature generation in nova for the "server image create". Do you have a suggestion, where we could implement this instead of cursive? 
Regards, Josephine From jean-philippe at evrard.me Fri Jun 29 12:23:50 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 29 Jun 2018 14:23:50 +0200 Subject: [openstack-dev] [tc] [all] TC Report 18-26 In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> References: <70afeb87-37c9-1595-ffa4-aadbd1a90228@gmail.com> <1A3C52DFCD06494D8528644858247BF01C141B52@EX10MBOX03.pnnl.gov> Message-ID: My two cents: > I think if OpenStack wants to gain back some of the steam it had before, it needs to adjust to the new world it is living in. This means: > * Consider abolishing the project walls. They are driving bad architecture (not intentionally but as a side affect of structure) As long as there is no walled garden, everything should be done in a modular way. I don't think having separated nova from cinder prevented some contributions, quite the contrary. (Optionally, watch [1]). I am not familiar with the modularity and ease of contribution in k8s, so the modularity could be there in a different form. [1]: https://www.youtube.com/watch?v=xYkh1sAu0UM > * focus on the commons first. Good point. > * simplify the architecture for ops: Good point, but I don't see how code, org structure, or project classification changes things here. > * come up with an architecture team for the whole, not the subsystem. The whole thing needs to work well. Couldn't that be done with a group TC sponsored? > * encourage current OpenStack devs to test/deploy Kubernetes. It has some very good ideas that OpenStack could benefit from. If you don't know what they are, you can't adopt them. Good idea. > > And I know its hard to talk about, but consider just adopting k8s as the commons and build on top of it. OpenStack's api's are good. The implementations right now are very very heavy for ops. You could tie in K8s's pod scheduler with vm stuff running in containers and get a vastly simpler architecture for operators to deal with. Yes, this would be a major disruptive change to OpenStack. But long term, I think it would make for a much healthier OpenStack. Well, I know operators that wouldn't like k8s and openstack components on top. If you're talking about just a shim between k8s concepts and openstack apis, that sounds like a good project : p >> I've also argued in the past that all distro- or vendor-specific >> deployment tools (Fuel, Triple-O, etc [3]) should live outside of >> OpenStack because these projects are more products and the relentless >> drive of vendor product management (rightfully) pushes the scope of >> these applications to gobble up more and more feature space that may or >> may not have anything to do with the core OpenStack mission (and have >> more to do with those companies' product roadmap). > > I'm still sad that we've never managed to come up with a single way to > install OpenStack. The amount of duplicated effort expended on that > problem is mind-boggling. At least we tried though. Excluding those > projects from the community would have just meant giving up from the > beginning. Well, I think it's a blessing and a curse. Sometimes, I'd rather have only one tool, so that we all work on it, and not dilute the community into small groups. But when I started deploying OpenStack years ago, I was glad I could find a community way to deploy it using , and not . So for me, I am glad (what became) OpenStack-Ansible existed and I am glad it still exists. 
The effort your are talking about is not purely duplicated: - Example: whether openstack-ansible existed or not, people used to Ansible would still prefer deploying openstack with Ansible than with puppet or chef (because of their experience) if not relying on a vendor. In that case, they would probably create their own series of playbooks. (I've seen some). That's the real waste, IMO. - Deployments projects talk to each other. Talking about living outside OpenStack, where would, for you, OpenStack-Ansible, the puppet modules, or OpenStack-Chef be? For OSA, I consider our community now as NOT vendor specific, as many actors are now playing with it. We've spent a considerable effort in outreaching and ensuring everyone can get involved. So we should be in openstack/ right? But what about 4 years ago? Every project starts with a sponsor. I am not sure a classification (is it outside, is it inside openstack/?) matters in this case. > > I think Thierry's new map, that collects installer services in a > separate bucket (that may eventually come with a separate git namespace) > is a helpful way of communicating to users what's happening without > forcing those projects outside of the community. Side note: I'd be super happy if OpenStack-Ansible could be on that bucket! Cheers, JP (evrardjp) From jean-philippe at evrard.me Fri Jun 29 12:31:53 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Fri, 29 Jun 2018 14:31:53 +0200 Subject: [openstack-dev] [openstack-ansible] dropping selinux support In-Reply-To: References: <20180628210334.GA17798@localhost.localdomain> Message-ID: This title seems very scary. It was to be read as "... for source installs" : ) To be honest, I feel very sad about the lack of involvement in CentOS in OSA over the years. We didn't get many contributors over time for it. This has always been a labour of love, and the honeymoon seems over for many. So... Please help us if you want to keep your sourced based installs + CentOS + selinux. Else, you can still use packages! :D Thanks mnaser for starting this hard topic and community decision process. JP (evrardjp) From mordred at inaugust.com Fri Jun 29 12:42:13 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 29 Jun 2018 07:42:13 -0500 Subject: [openstack-dev] [osc][python-openstackclient] osc-included image signing In-Reply-To: <0e359b8a-669e-92c0-2d2d-3493b9898d90@secustack.com> References: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> <5027e00f-afdb-eaa6-7775-b161abee67d2@secustack.com> <0e359b8a-669e-92c0-2d2d-3493b9898d90@secustack.com> Message-ID: <981ccd87-56bc-a335-3cab-db3c79ada2ca@inaugust.com> On 06/29/2018 05:38 AM, Josephine Seifert wrote: > Hello Dean, > > thanks for your code comments so far. > >> Looking at the changes you have to cursive, if that is all you need >> from it those bits could easily go somewhere in osc or osc-lib if you >> don't also need them elsewhere. > There lies the problem, because we also want to implement signature > generation in nova for the "server image create". Do you have a > suggestion, where we could implement this instead of cursive? I was just chatting with Dean about this in IRC. I'd like to suggest putting the image signing code into openstacksdk. Users of openstacksdk would almost certainly also want to be able to sign images they're going to upload. That would take care of having it in a library and also having that library be something OSC depends on. 
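To make that a bit more concrete, the signing side could be as small as the sketch below: sign the image bytes with RSA-PSS/SHA-256 using the cryptography library and hand back the properties that Glance's signature verification feature looks for. Treat the property names and values as assumptions to verify against the Glance docs; this is only a sketch, not the cursive or openstacksdk API.

import base64

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def sign_image_data(image_data, private_key_pem, password=None):
    # Returns a base64-encoded RSA-PSS/SHA-256 signature of the raw
    # image bytes.
    key = serialization.load_pem_private_key(
        private_key_pem, password=password, backend=default_backend())
    signature = key.sign(
        image_data,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())
    return base64.b64encode(signature).decode("ascii")


def signature_properties(b64_signature, cert_uuid):
    # Image properties consumed by Glance/Nova signature verification;
    # the signing certificate itself is assumed to already live in
    # Barbican under cert_uuid.
    return {
        "img_signature": b64_signature,
        "img_signature_hash_method": "SHA-256",
        "img_signature_key_type": "RSA-PSS",
        "img_signature_certificate_uuid": cert_uuid,
    }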
We aren't using SDK in nova yet - but it shouldn't be hard to get some POC patches up to include it ... and to simplify a few other things. I'd be more than happy to work with you on getting the code in. Monty From cdent+os at anticdent.org Fri Jun 29 13:03:29 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 29 Jun 2018 14:03:29 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-26 Message-ID: HTML: https://anticdent.org/placement-update-18-26.html This is placement update 18-26, a weekly update of ongoing development related to the [OpenStack](https://www.openstack.org/) [placement service](https://developer.openstack.org/api-ref/placement/). This is an expand version. For the next few weeks the "Specs" section will not be present. When we start reviewing specs for Stein, I'll add it back in. # Most Important Nested allocation candidates are getting very close, but remain a critical piece of functionality. After that is making sure that we are progressing on the /reshapher functionality and bringing the various virt drivers into line with all this nice new functionality (which mostly means ProviderTrees). All that nice new functionality means bugs. Experiment. Break stuff. Find bugs. Fix them. Speaking of bugs: a collection of problems—not covered by tests— with consumer generations was discovered this week. Also a problem with the limit functionality on GET /allocation_candidates and how that works when force_hosts is being used. Fixes are in progress but these issues are a strong indicator of our need to make sure that we are experimenting with things: it's where features integrate with each other that problems are showing up. # What's Changed Quite a lot of bug fixes and bug demonstrations have merged in this week, but features mostly stable. # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 19, two more than last week. We've got some work either starting or killing these. * [In progress placement bugs](https://goo.gl/vzGGDQ) 10, +2 on last time. # Questions As far as I can tell there was no discussion on last week's question, so here it is again: In [IRC [last week]](http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2018-06-21.log.html#t2018-06-21T13:21:14) we had an extensive discussion about being able to set custom resource classes on the resource provider representing a compute node, outside the virt driver. At the moment the virt driver will clobber it. Is this what we always want? # Main Themes ## Documentation This is a section for reminding us to document all the fun stuff we are enabling. Open areas include: * Documenting optional placement database. A bug, [1778227](https://bugs.launchpad.net/nova/+bug/1778227) has been created to track this. This has started, for the install docs, but there are more places that need to be touched. * "How to deploy / model shared disk. Seems fairly straight-forward, and we could even maybe create a multi-node ceph job that does this - wouldn't that be awesome?!?!", says an enthusiastic Matt Riedemann. * The when's and where's of re-shaping and VGPUs. ## Nested providers in allocation candidates As far as I can tell the main thing left here is to turn it on in a microversion. 
That code is at: * ## Consumer Generations There are new patches in progress on this, related to the bugs that were discovered: * * There are a patches left on the consumer generation topic, some tidy ups, and some stuff related to healing allocations: * Is someone already working on code for making use of this in the resource tracker? ## Reshape Provider Trees This allows moving inventory and allocations that were on resource provider A to resource provider B in an atomic fashion. The blueprint topic is: * There are WIPs for the HTTP parts and the resource tracker parts, on that topic. ## Mirror Host Aggregates This needs a command line tool: * ## Extraction A while back, Jay made a first pass at an [os-resource-classes](https://github.com/jaypipes/os-resource-classes/), which needs some additional eyes on it. I personally thought it might be heavier than required. If you have ideas please share them. An area we will need to prepare for is dealing with the various infra and co-gating issues that will come up once placement is extracted. We also need to think about how to manage the fixtures currently made available by nova that we might need or want to use in placement. Some of them might be worth sharing. How should we do that? # Other 18 entries last week. 24 now. * Purge comp_node and res_prvdr records during deletion of cells/hosts * A huge pile of improvements to osc-placement * Get resource provider by uuid or name (osc-placement) * Tighten up ReportClient use of generation * Add unit test for non-placement resize * cover migration cases with functional tests * Move refresh time from report client to prov tree * PCPU resource class * rework how we pass candidate request information * add root parent NULL online migration * add resource_requests field to RequestSpec * normalize_name helper (in os-traits) * Convert driver supported capabilities to compute node provider traits * Use placement.inventory.inuse in report client * ironic: Report resources as reserved when needed * Test for multiple limit/group_policy qparams * [placement] api-ref: add traits parameter * Convert 'placement_api_docs' into a Sphinx extension * [placement] Fix bug in consumer generation handling (This is likely to be replaced by something better, but including it for reference.) * Test for multiple limit/group_policy qparams * Fix placement incompatible with webob 1.7 * Disable limits if force_hosts or force_nodes is set * Rename auth_uri to www_authenticate_uri * Blazar's work on using placement # End A butterfly just used my house as a south to north shortcut. That'll do. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From josephine.seifert at secustack.com Fri Jun 29 13:16:53 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Fri, 29 Jun 2018 15:16:53 +0200 Subject: [openstack-dev] [osc][python-openstackclient] osc-included image signing In-Reply-To: <981ccd87-56bc-a335-3cab-db3c79ada2ca@inaugust.com> References: <898fcace-cafd-bc0b-faed-7ec1b5780653@secustack.com> <5027e00f-afdb-eaa6-7775-b161abee67d2@secustack.com> <0e359b8a-669e-92c0-2d2d-3493b9898d90@secustack.com> <981ccd87-56bc-a335-3cab-db3c79ada2ca@inaugust.com> Message-ID: <6c157121-4c23-c260-153c-0c9bf912185f@secustack.com> > On 06/29/2018 05:38 AM, Josephine Seifert wrote: >> Hello Dean, >> >> thanks for your code comments so far. 
>> >>> Looking at the changes you have to cursive, if that is all you need >>> from it those bits could easily go somewhere in osc or osc-lib if you >>> don't also need them elsewhere. >> There lies the problem, because we also want to implement signature >> generation in nova for the "server image create". Do you have a >> suggestion, where we could implement this instead of cursive? > > I was just chatting with Dean about this in IRC. I'd like to suggest > putting the image signing code into openstacksdk. Users of > openstacksdk would almost certainly also want to be able to sign > images they're going to upload. That would take care of having it in a > library and also having that library be something OSC depends on. > > We aren't using SDK in nova yet - but it shouldn't be hard to get some > POC patches up to include it ... and to simplify a few other things. > > I'd be more than happy to work with you on getting the code in. > > Monty That sounds like a good idea. We will try to integrate the code from cursive into openstacksdk and update the review, story and etherpad accordingly. From skaplons at redhat.com Fri Jun 29 13:25:11 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Fri, 29 Jun 2018 15:25:11 +0200 Subject: [openstack-dev] [neutron] CI meeting 3.07.2018 Message-ID: <293B460F-BFBA-4067-8DB3-3F0A76D494A6@redhat.com> Hi, Next week I will not be able to lead CI meeting. As there is also holiday on 4th July in US, I think that it would be good to cancel this meeting. Next one will be as usually on 10.07 and Miguel Lavalle will be chair of it. I’m coming back on 17.07 — Slawek Kaplonski Senior software engineer Red Hat From pkovar at redhat.com Fri Jun 29 14:45:53 2018 From: pkovar at redhat.com (Petr Kovar) Date: Fri, 29 Jun 2018 16:45:53 +0200 Subject: [openstack-dev] [docs][all] Front page template for project team documentation Message-ID: <20180629164553.258c79a096fd7a300c31faee@redhat.com> Hi all, Feedback from the Queens PTG included requests for the Documentation Project to provide guidance and recommendations on how to structure common content typically found on the front page for project team docs, located at doc/source/index.rst in the project team repository. I've created a new docs spec, proposing a template to be used by project teams, and would like to ask the OpenStack community and, specifically, the project teams, to take a look, submit feedback on the spec, share comments, ideas, or concerns: https://review.openstack.org/#/c/579177/ The main goal of providing and using this template is to make it easier for users to find, navigate, and consume project team documentation, and for contributors to set up and maintain the project team docs. The template would also serve as the basis for one of the future governance docs tags, which is a long-term plan for the docs team. Thank you, pk From colleen at gazlene.net Fri Jun 29 15:37:27 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 29 Jun 2018 17:37:27 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 25 June 2018 Message-ID: <1530286647.1511578.1424852928.72198720@webmail.messagingengine.com> # Keystone Team Update - Week of 25 June 2018 ## News ### Policy Auditing Auditing the keystone APIs and resolving what roles they need under which scope types is the next step in implementing basic default roles. This was already done for barbican but we still need to go through the exercise for keystone[1]. 
However, the ongoing Flask work[2] will have implications for our policy handling and we may want to wait to complete that work before proceeding so that we don't end up having to do it twice[3]. [1] https://docs.google.com/spreadsheets/d/1kd3OJCLMsIkPgULN31WFw9PA9-3-X99yaDnjWDGOvm0/edit?usp=sharing [2] https://bugs.launchpad.net/keystone/+bug/1776504 [3] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-06-26-16.00.log.html#l-56 ### Flask Work The flaskification work has necessitated a new mechanism for policy enforcement[4] which will replace @protected. Take a look at the change that introduces the RBACEnforcer[5] and try to get familiar with it. [4] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-06-26-16.00.log.html#l-229 [5] https://review.openstack.org/576639 ## Recently Merged Changes Search query: https://bit.ly/2IACk3F We merged 10 changes this week. ## Changes that need Attention Search query: https://bit.ly/2wv7QLK There are 57 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 6 new bugs and closed 2. Bugs opened (6) Bug #1778603 (keystone:High) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1778603 Bug #1778945 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1778945 Bug #1778989 (keystone:Undecided) opened by Lars Kellogg-Stedman https://bugs.launchpad.net/keystone/+bug/1778989 Bug #1779286 (keystone:Undecided) opened by Dmitry https://bugs.launchpad.net/keystone/+bug/1779286 Bug #1778949 (oslo.policy:Undecided) opened by Lance Bragstad https://bugs.launchpad.net/oslo.policy/+bug/1778949 Bug #1779172 (oslo.policy:Undecided) opened by Lance Bragstad https://bugs.launchpad.net/oslo.policy/+bug/1779172 Bugs closed (0) Bugs fixed (2) Bug #1757022 (keystone:Undecided) fixed by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1757022 Bug #1778109 (keystone:Undecided) fixed by Jeremy Freudberg https://bugs.launchpad.net/keystone/+bug/1778109 This report was generated with http://paste.openstack.org/show/724598/ and https://github.com/lbragstad/launchpad-toolkit#building-bug-reports ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html Keystone's feature freeze is in just two weeks. Please help out by reviewing our major feature work: https://review.openstack.org/#/q/is:open+topic:bp/mfa-auth-receipt https://review.openstack.org/#/q/is:open+topic:bp/whitelist-extension-for-app-creds https://review.openstack.org/#/q/is:open+topic:bp/strict-two-level-model As well as the flaskification work which will have a major impact on other ongoing work: https://review.openstack.org/#/q/is:open+topic:bug/1776504 ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From fungi at yuggoth.org Fri Jun 29 15:50:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 29 Jun 2018 15:50:22 +0000 Subject: [openstack-dev] [release] [infra] Retiring the release-tools repository Message-ID: <20180629155022.23aie5vvipmtt2vj@yuggoth.org> This is just a heads up that the openstack-infra/release-tools Git repository, previously used by the Release Management team, is getting retired[*]. 
Its previous functions have been relocated into other repositories like openstack-infra/project-config and openstack/releases. If you have any questions, feel free to follow up to this notification or pop into the #openstack-release channel on the Freenode IRC network. Thanks! [*] https://review.openstack.org/579188 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mriedemos at gmail.com Fri Jun 29 16:25:09 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 29 Jun 2018 11:25:09 -0500 Subject: [openstack-dev] [nova] [placement] placement update 18-26 In-Reply-To: References: Message-ID: On 6/29/2018 8:03 AM, Chris Dent wrote: > # Questions > > As far as I can tell there was no discussion on last week's > question, so here it is again: > > In [IRC > [last > week]](http://eavesdrop.openstack.org/irclogs/%23openstack-placement/%23openstack-placement.2018-06-21.log.html#t2018-06-21T13:21:14) > > we had an extensive discussion about being able to set custom > resource classes on the resource provider representing a > compute node, outside the virt driver. At the moment the virt driver > will clobber it. Is this what we always want? > We've always said the virt driver is the "owner" of resource classes for the compute node provider right? If something external wants to put custom inventory in that tree, they'd do so with a child provider (like neutron will do with bandwidth providers). We have said that we should merge externally-defined traits with compute-defined traits, and I think that is OK. > > * "How to deploy / model shared disk. Seems fairly straight-forward, >   and we could even maybe create a multi-node ceph job that does >   this - wouldn't that be awesome?!?!", says an enthusiastic Matt >   Riedemann. > Another thing with this is move operations don't really work with shared providers yet, there are TODOs in the conductor task code for when we move the allocations from the instance on the source host to the migration record - those don't deal with shared providers. > > ## Nested providers in allocation candidates > > As far as I can tell the main thing left here is to turn it on in a > microversion. That code is at: > > * Merged. > > ## Consumer Generations > > There are new patches in progress on this, related to the bugs that > were discovered: > > * > * > > There are a patches left on the consumer generation topic, some tidy > ups, and some stuff related to healing allocations: > > * > > Is someone already working on code for making use of this in the > resource tracker? > In what way? The RT, except for I think the Ironic driver, shouldn't be dealing with allocations (PUTing them anyway). > > * >   A huge pile of improvements to osc-placement > Several of these are making progress now (getting review I mean), but as of <1 hour ago I need to redo something in the bottom change in the series. > > * >   PCPU resource class > I dropped the procedural -2 on this since the spec was never approved in time for Rocky. > > * > > >   Blazar's work on using placement Cool I was just looking at Blazar the other night for interest in the dedicated hosts feature request from the public cloud SIG and was wondering if they'd started integrating more with placement rather than the compute APIs (and notifications). Seems like a good long-term strategy on their part. 
-- Thanks, Matt From cdent+os at anticdent.org Fri Jun 29 16:34:16 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 29 Jun 2018 17:34:16 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-26 In-Reply-To: References: Message-ID: Thanks for the notes Matt, I'll try to incorporate this stuff into the next one where it makes some. A response within. On Fri, 29 Jun 2018, Matt Riedemann wrote: > On 6/29/2018 8:03 AM, Chris Dent wrote: >> There are a patches left on the consumer generation topic, some tidy >> ups, and some stuff related to healing allocations: >> >> * >> >> Is someone already working on code for making use of this in the >> resource tracker? >> > > In what way? The RT, except for I think the Ironic driver, shouldn't be > dealing with allocations (PUTing them anyway). I know such things never happen in my writing, but that's basically a typo. It should say "report client". By which I mean, is anyone working on handling a generation conflict when PUT /allocations from nova-scheduler? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From zbitter at redhat.com Fri Jun 29 18:38:05 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 29 Jun 2018 14:38:05 -0400 Subject: [openstack-dev] [service-broker] openstack-service-broker project update Message-ID: <97bbac40-27e4-234d-e1e5-91848b28273e@redhat.com> (This follows on from http://lists.openstack.org/pipermail/openstack-dev/2018-June/131183.html in case you are confused) Hi folks, Now that the project creation process is largely complete, here are some housekeeping updates: * Let's use the [service-broker] tag in emails to openstack-dev (starting with this one!) * By popular demand, I set up an IRC channel too. It's #openstack-service-broker on FreeNode. * The project repo is available here: http://git.openstack.org/cgit/openstack/openstack-service-broker Since there's no code yet, the only Zuul job is the one to build (but not publish) the docs, but it is working and patches are merging. * As a reminder, this is the current core review team: https://review.openstack.org/#/admin/groups/1925,members (and more volunteers are welcome) * The project storyboard is available here: https://storyboard.openstack.org/#!/project/1038 * I started adding some stories to the storyboard that should get us to an initial prototype, and added them to this worklist: https://storyboard.openstack.org/#!/worklist/391 Folks from the Automation Broker team volunteered to help out by writing some example playbooks that we can start from. So the most important thing I think we can work on to start with is to build the tooling that will enable us to test them, both locally for devs and folks checking out the project and in the gate. * It would probably be helpful to set up a meeting time - at least for an initial kickoff (thanks Artem for the suggestion), although I see we've managed to collect people in just about every time zone so it might be challenging. Here is a poll we can use to try to pick a time: https://framadate.org/xlKuh4vtozew8gL8 (note: assume all the times are in UTC). Everyone who is interested please respond to that in the next few days. (I chose the date for July 10th to avoid the days right before/after the July 4th holiday in the US, although I personally will be around on those days.) cheers, Zane. 
From zbitter at redhat.com Fri Jun 29 18:41:12 2018 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 29 Jun 2018 14:41:12 -0400 Subject: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker In-Reply-To: References: Message-ID: <403373aa-95a4-2407-5594-4a03afeba014@redhat.com> Now that the project is set up, let's tag future messages on this topic with [service-broker]. Here's one to start with that will help you find everything: http://lists.openstack.org/pipermail/openstack-dev/2018-June/131923.html cheers, Zane. On 05/06/18 12:19, Zane Bitter wrote: > I've been doing some investigation into the Service Catalog in > Kubernetes and how we can get OpenStack resources to show up in the > catalog for use by applications running in Kubernetes. (The Big 3 public > clouds already support this.) The short answer is via an implementation > of something called the Open Service Broker API, but there are shortcuts > available to make it easier to do. > > I'm convinced that this is readily achievable and something we ought to > do as a community. > > I've put together a (long-winded) FAQ below to answer all of your > questions about it. > > Would you be interested in working on a new project to implement this > integration? Reply to this thread and let's collect a list of volunteers > to form the initial core review team. > > cheers, > Zane. > > > What is the Open Service Broker API? > ------------------------------------ > > The Open Service Broker API[1] is a standard way to expose external > resources to applications running in a PaaS. It was originally developed > in the context of CloudFoundry, but the same standard was adopted by > Kubernetes (and hence OpenShift) in the form of the Service Catalog > extension[2]. (The Service Catalog in Kubernetes is the component that > calls out to a service broker.) So a single implementation can cover the > most popular open-source PaaS offerings. > > In many cases, the services take the form of simply a pre-packaged > application that also runs inside the PaaS. But they don't have to be - > services can be anything. Provisioning via the service broker ensures > that the services requested are tied in to the PaaS's orchestration of > the application's lifecycle. > > (This is certainly not the be-all and end-all of integration between > OpenStack and containers - we also need ways to tie PaaS-based > applications into the OpenStack's orchestration of a larger group of > resources. Some applications may even use both. But it's an important > part of the story.) > > What sorts of services would OpenStack expose? > ---------------------------------------------- > > Some example use cases might be: > > * The application needs a reliable message queue. Rather than spinning > up multiple storage-backed containers with anti-affinity policies and > dealing with the overhead of managing e.g. RabbitMQ, the application > requests a Zaqar queue from an OpenStack cloud. The overhead of running > the queueing service is amortised across all of the applications in the > cloud. The queue gets cleaned up correctly when the application is > removed, since it is tied into the application definition. > > * The application needs a database. Rather than spinning one up in a > storage-backed container and dealing with the overhead of managing it, > the application requests a Trove DB from an OpenStack cloud. > > * The application includes a service that needs to run on bare metal for > performance reasons (e.g. could also be a database). 
The application > requests a bare-metal server from Nova w/ Ironic for the purpose. (The > same applies to requesting a VM, but there are alternatives like > KubeVirt - which also operates through the Service Catalog - available > for getting a VM in Kubernetes. There are no non-proprietary > alternatives for getting a bare-metal server.) > > AWS[3], Azure[4], and GCP[5] all have service brokers available that > support these and many more services that they provide. I don't know of > any reason in principle not to expose every type of resource that > OpenStack provides via a service broker. > > How is this different from cloud-provider-openstack? > ---------------------------------------------------- > > The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself > to access features of the cloud to provide its service. For example, if > k8s needs persistent storage for a container then it can request that > from Cinder through cloud-provider-openstack[7]. It can also request a > load balancer from Octavia instead of having to start a container > running HAProxy to load balance between multiple instances of an > application container (thus enabling use of hardware load balancers via > the cloud's abstraction for them). > > In contrast, the Service Catalog interface allows the *application* > running on Kubernetes to access features of the cloud. > > What does a service broker look like? > ------------------------------------- > > A service broker provides an HTTP API with 5 actions: > > * List the services provided by the broker > * Create an instance of a resource > * Bind the resource into an instance of the application > * Unbind the resource from an instance of the application > * Delete the resource > > The binding step is used for things like providing a set of DB > credentials to a container. You can rotate credentials when replacing a > container by revoking the existing credentials on unbind and creating a > new set on bind, without replacing the entire resource. > > Is there an easier way? > ----------------------- > > Yes! Folks from OpenShift came up with a project called the Automation > Broker[8]. To add support for a service to Automation Broker you just > create a container with an Ansible playbook to handle each of the > actions (create/bind/unbind/delete). This eliminates the need to write > another implementation of the service broker API, and allows us to > simply write Ansible playbooks instead.[9] > > (Aside: Heat uses a comparable method to allow users to manage an > external resource using Mistral workflows: the > OS::Mistral::ExternalResource resource type.) > > Support for accessing AWS resources through a service broker is also > implemented using these Ansible Playbook Bundles.[3] > > Does this mean maintaining another client interface? > ---------------------------------------------------- > > Maybe not. We already have per-project Python libraries, (deprecated) > per-project CLIs, openstackclient CLIs, openstack-sdk, shade, Heat > resource plugins, and Horizon dashboards. (Mistral actions are generated > automatically from the clients.) Some consolidation is already planned, > but it would be great not to require projects to maintain yet another > interface. > > One option is to implement a tool that generates a set of playbooks for > each of the resources already exposed (via shade) in the OpenStack > Ansible modules. 
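As a very rough sketch of that generator idea (the resource names, directory layout and playbook bodies below are purely illustrative, not the output of any existing tool):

import pathlib
import textwrap

ACTIONS = ("provision", "bind", "unbind", "deprovision")
RESOURCES = ("server", "volume", "zaqar_queue", "database_instance")

PLAYBOOK = textwrap.dedent("""\
    - name: {action} an OpenStack {resource}
      hosts: localhost
      gather_facts: false
      tasks:
        - debug:
            msg: "TODO: call the appropriate OpenStack Ansible module for {resource}"
    """)

def generate(dest="apb-skeletons"):
    # One Ansible Playbook Bundle skeleton per resource type, with one
    # playbook per service-broker action.
    for resource in RESOURCES:
        for action in ACTIONS:
            path = pathlib.Path(dest, "openstack-%s-apb" % resource,
                                "playbooks", "%s.yml" % action)
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(PLAYBOOK.format(action=action, resource=resource))

if __name__ == "__main__":
    generate()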
Then in theory we'd only need to implement the common > parts once, and then every service with support in shade would get this > for free. Ideally the same broker could be used against any OpenStack > cloud (so e.g. k8s might be running in your private cloud, but you may > want its service catalog to allow you to connect to resources in one or > more public clouds) - using shade is an advantage there because it is > designed to abstract the differences between clouds. > > Another option might be to write or generate Heat templates for each > resource type we want to expose. Then we'd only need to implement a > common way of creating a Heat stack, and just have a different template > for each resource type. This is the approach taken by the AWS playbook > bundles (except with CloudFormation, obviously). An advantage is that > this allows Heat to do any checking and type conversion required on the > input parameters. Heat templates can also be made to be fairly > cloud-independent, mainly because they make it easier to be explicit > about things like ports and subnets than on the command line, where it's > more tempting to allow things to happen in a magical but cloud-specific > way. > > I'd prefer to go with the pure-Ansible autogenerated way so we can have > support for everything, but looking at the GCP[5]/Azure[4]/AWS[3] > brokers they have 10, 11 and 17 services respectively, so arguably we > could get a comparable number of features exposed without investing > crazy amounts of time if we had to write templates explicitly. > > How would authentication work? > ------------------------------ > > There are two main deployment topologies we need to consider: Kubernetes > deployed by an OpenStack tenant (Magnum-style, though not necessarily > using Magnum) and accessing resources in that tenant's project in the > local cloud, or accessing resources in some remote OpenStack cloud. > > We also need to take into account that in the second case, the > Kubernetes cluster may 'belong' to a single cloud tenant (as in the > first case) or may be shared by applications that each need to > authenticate to different OpenStack tenants. (Kubernetes has > traditionally assumed the former, but I expect it to move in the > direction of allowing the latter, and it's already fairly common for > OpenShift deployments.) > > The way e.g. the AWS broker[3] works is that you can either use the > credentials provisioned to the VM that k8s is installed on (a 'Role' in > AWS parlance - note that this is completely different to a Keystone > Role), or supply credentials to authenticate to AWS remotely. > > OpenStack doesn't yet support per-instance credentials, although we're > working on it. (One thing to keep in mind is that ideally we'll want a > way to provide different permissions to the service broker and > cloud-provider-openstack.) An option in the meantime might be to provide > a way to set up credentials as part of the k8s installation. We'd also > need to have a way to specify credentials manually. Unlike for > proprietary clouds, the credentials also need to include the Keystone > auth_url. We should try to reuse openstacksdk's clouds.yaml/secure.yaml > format[10] if possible. > > The OpenShift Ansible Broker works by starting up an Ansible container > on k8s to run a playbook from the bundle, so presumably credentials can > be passed as regular k8s secrets. > > In all cases we'll want to encourage users to authenticate using > Keystone Application Credentials[11]. > > How would network integration work? 
> ----------------------------------- > > Kuryr[12] allows us to connect application containers in Kubernetes to > Neutron networks in OpenStack. It would be desirable if, when the user > requests a VM or bare-metal server through the service broker, it were > possible to choose between attaching to the same network as Kubernetes > pods, or to a different network. > > > [1] https://www.openservicebrokerapi.org/ > [2] https://kubernetes.io/docs/concepts/service-catalog/ > [3] https://github.com/awslabs/aws-servicebroker#aws-service-broker > [4] > https://github.com/Azure/open-service-broker-azure#open-service-broker-for-azure > > [5] > https://github.com/GoogleCloudPlatform/gcp-service-broker#cloud-foundry-service-broker-for-google-cloud-platform > > [6] > https://github.com/kubernetes/community/blob/master/keps/0002-controller-manager.md#remove-cloud-provider-code-from-kubernetes-core > > [7] > https://github.com/kubernetes/cloud-provider-openstack#openstack-cloud-controller-manager > > [8] http://automationbroker.io/ > [9] https://docs.openshift.org/latest/apb_devel/index.html > [10] > https://docs.openstack.org/openstacksdk/latest/user/config/configuration.html#config-files > > [11] > https://docs.openstack.org/keystone/latest/user/application_credentials.html > > [12] > https://docs.openstack.org/kuryr/latest/devref/goals_and_use_cases.html From akekane at redhat.com Sat Jun 30 13:40:49 2018 From: akekane at redhat.com (Abhishek Kekane) Date: Sat, 30 Jun 2018 19:10:49 +0530 Subject: [openstack-dev] [glance][glance_store] Functional testing of multiple backend In-Reply-To: References: Message-ID: Hi Tomoki, Thank you for your efforts, I will check this out. Seems like with the patch you have mentioned multi store works with cinder as well. Thank you again. On Sat 30 Jun, 2018, 18:05 Tomoki Sekiyama, wrote: > Hi Abhishek, > > Thanks for your work. > I have added a way to configure cinder stores using devstack > that I have used to test the multi-backend feature in cinder stores: > > https://etherpad.openstack.org/p/multi-store-scenarios > > Please note that it currently require additional bugfix patch for > glance_store: > https://review.openstack.org/#/c/579335/ > > Thanks, > Tomoki Sekiyama > > > 2018年6月28日(木) 15:03 Abhishek Kekane : > >> Hi All, >> >> In Rocky I have proposed a spec [1] for adding support for multiple >> backend in glance. I have completed the coding part and so far tested this >> feature with file, rbd and swift store. However I need support in testing >> this feature thoroughly. So kindly help me (or provide a way to configure >> cinder, sheepdog and vmware stores using devstack) in functional testing >> for remaining drivers. >> >> I have created one etherpad [2] with steps to configure this feature and >> some scenarios I have tested with file, rbd and swift drivers. >> >> Please do the needful. >> >> [1] https://review.openstack.org/562467 >> [2] https://etherpad.openstack.org/p/multi-store-scenarios >> >> Summary of upstream patches: >> >> https://review.openstack.org/#/q/topic:bp/multi-store+(status:open+OR+status:merged) >> >> >> >> Thanks & Best Regards, >> >> Abhishek Kekane >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duttaa at hotmail.com Sat Jun 30 16:51:12 2018 From: duttaa at hotmail.com (Abhijit Dutta) Date: Sat, 30 Jun 2018 16:51:12 +0000 Subject: [openstack-dev] [openstack-community] DevStack Installation issue In-Reply-To: References: , , Message-ID: Hi All, Any help here will be appreciated. 
From duttaa at hotmail.com  Sat Jun 30 16:51:12 2018
From: duttaa at hotmail.com (Abhijit Dutta)
Date: Sat, 30 Jun 2018 16:51:12 +0000
Subject: [openstack-dev] [openstack-community] DevStack Installation issue
In-Reply-To: 
References: 
Message-ID: 

Hi All,

Any help here will be appreciated.

~Thanx
Abhijit

________________________________
From: Abhijit Dutta
Sent: Friday, June 29, 2018 8:10 AM
To: Dr. Jens Harbott (frickler); OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [openstack-community] DevStack Installation issue

Hi,

As advised I installed Fedora 27 (Workstation) and tried with the latest
version of devstack (pulled from git). However, this time I got the
following error:

./stack.sh:1313:start_placement
/opt/stack/devstack/lib/placement:184:start_placement_api
/opt/stack/devstack/lib/placement:179:die
[ERROR] /opt/stack/devstack/lib/placement:179 placement-api did not start
Error on exit
World dumping... see /opt/stack/logs/worlddump-2018-06-29-071219.txt for details (attached)

The local.conf has been configured as:

[[local|localrc]]
FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
ADMIN_PASSWORD=supersecret
DATABASE_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl

I have configured a static IP, 192.168.1.201, on my laptop, which has a
dual-core CPU and 3 GB of RAM.

Please let me know what can cause this error.

~Thanx
Abhijit

________________________________
From: Dr. Jens Harbott (frickler)
Sent: Wednesday, June 27, 2018 3:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Abhijit Dutta
Subject: Re: [openstack-dev] [openstack-community] DevStack Installation issue

2018-06-27 16:58 GMT+02:00 Amy Marrich :
> Abhijit,
>
> I'm forwarding your issue to the OpenStack-dev list so that the right people
> might see your issue and respond.
>
> Thanks,
>
> Amy (spotz)
>
> ---------- Forwarded message ----------
> From: Abhijit Dutta
> Date: Wed, Jun 27, 2018 at 5:23 AM
> Subject: [openstack-community] DevStack Installation issue
> To: "community at lists.openstack.org"
>
>
> Hi,
>
>
> I am trying to install DevStack for the first time on a bare-metal server
> with Fedora 28 installed. While executing stack.sh I am getting the
> following error:
>
>
> No match for argument: Django
> Error: Unable to find a match
>
> Can anybody in the community help me out with this problem?

We are aware of some issues with deploying devstack on Fedora 28; these
are being worked on, see
https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:master+topic:uwsgi-f28

If you want a quick solution, you could try deploying on Fedora 27 or
Centos 7 instead.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Sat Jun 30 17:57:56 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Sat, 30 Jun 2018 19:57:56 +0200
Subject: [openstack-dev] [openstack-community] DevStack Installation issue
In-Reply-To: 
References: 
Message-ID: <16C9C3E5-3EDE-4415-8EFB-CD2035A6CC0F@redhat.com>

Hi,

The error log says that the placement-api service didn't start properly.
You should then go to the placement API logs (probably /var/log/nova/ or
/opt/stack/logs/nova) and check there what went wrong.

> On 30.06.2018, at 18:51, Abhijit Dutta wrote:
>
> Hi All,
>
> Any help here will be appreciated.
>
> ~Thanx
> Abhijit
>
> From: Abhijit Dutta
> Sent: Friday, June 29, 2018 8:10 AM
> To: Dr. Jens Harbott (frickler); OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [openstack-community] DevStack Installation issue
>
> Hi,
>
> As advised I installed Fedora 27 (Workstation) and tried with the latest
> version of devstack (pulled from git). However, this time I got the
> following error:
>
> ./stack.sh:1313:start_placement
> /opt/stack/devstack/lib/placement:184:start_placement_api
> /opt/stack/devstack/lib/placement:179:die
> [ERROR] /opt/stack/devstack/lib/placement:179 placement-api did not start
> Error on exit
> World dumping... see /opt/stack/logs/worlddump-2018-06-29-071219.txt for details (attached)
>
> The local.conf has been configured as:
>
> [[local|localrc]]
> FLOATING_RANGE=192.168.1.224/27
> FIXED_RANGE=10.11.12.0/24
> FIXED_NETWORK_SIZE=256
> FLAT_INTERFACE=eth0
> ADMIN_PASSWORD=supersecret
> DATABASE_PASSWORD=iheartdatabases
> RABBIT_PASSWORD=flopsymopsy
> SERVICE_PASSWORD=iheartksl
>
> I have configured a static IP, 192.168.1.201, on my laptop, which has a
> dual-core CPU and 3 GB of RAM.
>
> Please let me know what can cause this error.
>
> ~Thanx
> Abhijit
>
>
>
>
> From: Dr. Jens Harbott (frickler)
> Sent: Wednesday, June 27, 2018 3:53 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Abhijit Dutta
> Subject: Re: [openstack-dev] [openstack-community] DevStack Installation issue
>
> 2018-06-27 16:58 GMT+02:00 Amy Marrich :
> > Abhijit,
> >
> > I'm forwarding your issue to the OpenStack-dev list so that the right people
> > might see your issue and respond.
> >
> > Thanks,
> >
> > Amy (spotz)
> >
> > ---------- Forwarded message ----------
> > From: Abhijit Dutta
> > Date: Wed, Jun 27, 2018 at 5:23 AM
> > Subject: [openstack-community] DevStack Installation issue
> > To: "community at lists.openstack.org"
> >
> >
> > Hi,
> >
> >
> > I am trying to install DevStack for the first time on a bare-metal server
> > with Fedora 28 installed. While executing stack.sh I am getting the
> > following error:
> >
> >
> > No match for argument: Django
> > Error: Unable to find a match
> >
> > Can anybody in the community help me out with this problem?
>
> We are aware of some issues with deploying devstack on Fedora 28; these
> are being worked on, see
> https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:master+topic:uwsgi-f28
>
> If you want a quick solution, you could try deploying on Fedora 27 or
> Centos 7 instead.
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat
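Following up on the hint above about where to look: on a recent
systemd-based devstack the per-service logs usually end up in the systemd
journal rather than in flat files, with each service running under a
devstack@<name> unit. A rough way to dig out the placement-api startup
traceback is sketched below; the exact unit name is an assumption and can
differ between devstack versions and WSGI setups, so list the units first:

    # list the devstack-managed units and spot the placement one
    systemctl list-units 'devstack@*' --all

    # read that unit's journal to find the actual startup traceback
    sudo journalctl -u devstack@placement-api --no-pager | tail -n 200

    # older or screen-based setups keep per-service logs here instead
    ls /opt/stack/logs/

If placement is being served through Apache on your setup, its errors may
land in the Apache error log (for example under /var/log/httpd/) rather
than in the journal.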
From ianyrchoi at gmail.com  Sat Jun 30 18:33:45 2018
From: ianyrchoi at gmail.com (Ian Y. Choi)
Date: Sun, 1 Jul 2018 03:33:45 +0900
Subject: [openstack-dev] [docs] Office hours instead of regular docs project meetings?
In-Reply-To: <20180627161455.6076f0abb3250571e6002fb5@redhat.com>
References: <20180620142157.6701a2de41326adba9574ea5@redhat.com>
	<20180627161455.6076f0abb3250571e6002fb5@redhat.com>
Message-ID: 

Hello Petr,

Thanks a lot for dealing with this - I like the Docs team office hours :)

I can adjust my availability to the current office hour slots although I
live in the APAC region, but if there is more demand for an APAC-friendly
time slot, I would happily volunteer for APAC-friendly Docs team office
hour slots.

With many thanks,

/Ian

Petr Kovar wrote on 6/27/2018 11:14 PM:
> Hi again,
>
> Haven't got much feedback so far on the meeting format change, so let's
> proceed with turning formal docs meetings into office hours, keeping the
> same time for now, until we decide to make further adjustments based on
> the attendance.
>
> The patch is here:
>
> https://review.openstack.org/#/c/578398/
>
> Thanks,
> pk
>
>
> On Wed, 20 Jun 2018 14:21:57 +0200
> Petr Kovar wrote:
>
>> Hi all,
>>
>> Due to low attendance in docs project meetings in recent months, I'd like
>> to propose turning regular docs meetings into office hours, like many other
>> OpenStack teams did.
>>
>> My idea is to hold office hours every Wednesday, same time we held our
>> docs meetings, at 16:00 UTC, in our team channel #openstack-doc where
>> many community members already hang out and ask their questions about
>> OpenStack docs.
>>
>> Objections, comments, thoughts?
>>
>> Would there be interest to also hold office hours during a more
>> APAC-friendly time slot? We'd need volunteers to take care of it, please
>> let me know!
>>
>> Thanks,
>> pk
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev