From mikal at stillhq.com Sat Dec 1 11:17:27 2018 From: mikal at stillhq.com (Michael Still) Date: Sat, 1 Dec 2018 22:17:27 +1100 Subject: [dev][nova] Yay Friday gate regressions! In-Reply-To: <07708c7b-5b34-f5ab-1f22-dfc8f1d75fc0@gmail.com> References: <07708c7b-5b34-f5ab-1f22-dfc8f1d75fc0@gmail.com> Message-ID: The first one of those was trivial (although I am confused as to why it didn't fail for the test run where the patch was proposed, I can't see an obvious race condition). I have proposed a fix at https://review.openstack.org/#/c/621346/ . Michael On Sat, Dec 1, 2018 at 8:30 AM Matt Riedemann wrote: > Just FYI there are a couple of regressions hitting nova unit and > functional test jobs right now: > > https://bugs.launchpad.net/nova/+bug/1806123 > > ^ Is likely due to mocking a global for the new I/O concurrency > semaphore stuff in the libvirt driver. I'm not sure what we should do > about that one. During the code review I suggested writing a fixture > which would essentially maintain the context manager (so we have > __enter__ and __exit__) but just make it a noop, but we still want to > make sure it's called in places where it's used. > > https://bugs.launchpad.net/nova/+bug/1806126 > > ^ Is a bit hairier since I'm seeing both weird, potentially global mock, > failures and also timeouts, also potentially because of mocking globals. > Since there is no functional code change tied to that one yet (it's > still being reviewed, this was a recreate test change only), I have > proposed a revert to try and stem the bleeding unless someone (mdbooth?) > can root cause and fix it faster. > > Happy rechecking! > > -- > > Thanks, > > Matt > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ervikrant06 at gmail.com Sun Dec 2 04:03:56 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Sun, 2 Dec 2018 09:33:56 +0530 Subject: [openstack-dev] [kuryr] can we start kuryr libnetwork in container inside the nova VM. In-Reply-To: <7b585278013291fda1d55b5d74965b26d317e637.camel@redhat.com> References: <7b585278013291fda1d55b5d74965b26d317e637.camel@redhat.com> Message-ID: Thanks Michal. Yes, my scenario is same which you mentioned. But I don't want to use COE atm. So. the OVS and neutron-agent running inside the VM will be communicating with compute node neutron agent? Thanks & Regards, Vikrant Aggarwal On Fri, Nov 30, 2018 at 9:31 PM Michał Dulko wrote: > On Fri, 2018-11-30 at 09:38 +0530, Vikrant Aggarwal wrote: > > Hello Team, > > > > I have seen the steps of starting the kuryr libnetwork container on > > compute node. But If I need to run the same container inside the VM > > running on compute node, is't possible to do that? > > > > I am not sure how can I map the /var/run/openvswitch inside the > > nested VM because this is present on compute node. > > I think that if you want to run Neutron-networked Docker containers on > an OpenStack VM, you'll need OpenvSwitch and neutron-agent installed on > that VM as well. > > A better-suited approach would be to run K8s on OpenStack and use > kuryr-kubernetes instead. That way Neutron subports are used to network > pods. We have such a K8s-on-VM use case described in the docs [1]. > > [1] > https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html > > > https://docs.openstack.org/kuryr-libnetwork/latest/readme.html > > > > Thanks & Regards, > > Vikrant Aggarwal > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juliaashleykreger at gmail.com Sun Dec 2 14:43:46 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sun, 2 Dec 2018 06:43:46 -0800 Subject: Proposing KaiFeng Wang for ironic-core Message-ID: I'd like to propose adding KaiFeng to the ironic-core reviewer group. Previously, we had granted KaiFeng rights on ironic-inspector-core and I personally think they have done a great job there. Kaifeng has also been reviewing other repositories in ironic's scope[1]. Their reviews and feedback have been insightful and meaningful. They have also been very active[2] at reviewing which is an asset for any project. I believe they will be an awesome addition to the team. -Julia [1]: http://stackalytics.com/?module=ironic-group&user_id=kaifeng [2]: http://stackalytics.com/report/contribution/ironic-group/90 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ichihara.hirofumi at gmail.com Sun Dec 2 14:08:25 2018 From: ichihara.hirofumi at gmail.com (Hirofumi Ichihara) Date: Sun, 2 Dec 2018 23:08:25 +0900 Subject: [openstack-dev] Stepping down from Neutron core team Message-ID: Hi all, I’m stepping down from the core team because my role changed and I cannot have responsibilities of neutron core. My start of neutron was 5 years ago. I had many good experiences from neutron team. Today neutron is great project. Neutron gets new reviewers, contributors and, users. Keep on being a great community. Thanks, Hirofumi -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From miguel at mlavalle.com Sun Dec 2 20:57:30 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 2 Dec 2018 14:57:30 -0600 Subject: [openstack-dev] Stepping down from Neutron core team In-Reply-To: References: Message-ID: Hi Hirofumi, Thanks for your contributions to the project over these years. You will be missed. We also wish the best in your future endeavors. Best regards Miguel On Sun, Dec 2, 2018 at 8:11 AM Hirofumi Ichihara < ichihara.hirofumi at gmail.com> wrote: > Hi all, > > > I’m stepping down from the core team because my role changed and I cannot > have responsibilities of neutron core. > > > My start of neutron was 5 years ago. I had many good experiences from > neutron team. > > Today neutron is great project. Neutron gets new reviewers, contributors > and, users. > > Keep on being a great community. > > > Thanks, > > Hirofumi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gmann at ghanshyammann.com Mon Dec 3 01:24:11 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 Dec 2018 10:24:11 +0900 Subject: [dev][qa][devstack] Release management for QA toold and plugins In-Reply-To: <6be6bbf3-a862-df85-5120-b90f5c74c1cf@openstack.org> References: <20f06939-9590-4b93-3381-02c32570b990@openstack.org> <20181129191636.GA26514@sinanju.localdomain> <167627c0fb2.fedfd67828981.6889702971134127091@ghanshyammann.com> <167638457ad.fe9b8693739.8314446462346139058@ghanshyammann.com> <6be6bbf3-a862-df85-5120-b90f5c74c1cf@openstack.org> Message-ID: <16771aa6641.f7035e7c28699.479407897586326349@ghanshyammann.com> ---- On Fri, 30 Nov 2018 19:05:36 +0900 Thierry Carrez wrote ---- > Ghanshyam Mann wrote: > > ---- On Fri, 30 Nov 2018 14:29:24 +0900 Duc Truong wrote ---- > > > On Thu, Nov 29, 2018 at 6:39 PM Ghanshyam Mann wrote: > > > > > > > > > > * devstack-vagrant: never released, no change over the past year. Is it > > > > > > meant to be released in the future (cycle-independent) or considered > > > > > > continuously published (release-management:none) or should it be retired ? > > > > > > > > > > > > > I am ok to retire this based on no one using it. > > > > > > I actually use devstack-vagrant and find it useful to setup consistent > > > devstack environments with minimal effort. > > > > Ok, then we can keep that. I will be happy to add you as maintainer of that if you would like to. I saw you have done some commit and review in Tempest and will be good to have you as devstack-vagrant . > > OK so in summary: > > eslint-config-openstack, karma-subunit-reporter, devstack-tools -> > should be considered cycle-independent (with older releases history > imported). Any future release would be done through openstack/releases > > devstack-vagrant -> does not need releases or release management, will > be marked release-management:none in governance > > devstack-plugin-ceph -> does not need releases or cycle-related > branching, so will be marked release-management:none in governance > > Other devstack-plugins maintainers should pick whether they need to be > branched every cycle or not. Oslo-maintained plugins like > devstack-plugin-zmq and devstack-plugin-pika will, for example. > > Unless someone objects, I'll push the related changes where needed. > Thanks for the clarification ! +1. Those looks good. Thanks. 
-gmann > > -- > Thierry Carrez (ttx) > > From gmann at ghanshyammann.com Mon Dec 3 01:58:51 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 Dec 2018 10:58:51 +0900 Subject: [Interop-wg] [dev] [cinder] [qa] Strict Validation for Volume API using JSON Schema In-Reply-To: <8337FB0D-81D3-4E6C-9039-47BD749C3862@vmware.com> References: <16760426d56.ef4c345622903.2195899647060980382@ghanshyammann.com> <29d271ff-d5a7-2a28-53c1-3be7b868ad20@gmail.com> <8337FB0D-81D3-4E6C-9039-47BD749C3862@vmware.com> Message-ID: <16771ca2086.c57557fb28769.9145911937981210050@ghanshyammann.com> ---- On Sat, 01 Dec 2018 02:58:45 +0900 Mark Voelker wrote ---- > > > On Nov 29, 2018, at 9:28 PM, Matt Riedemann wrote: > > > > On 11/29/2018 10:17 AM, Ghanshyam Mann wrote: > >> - To improve the volume API testing to avoid the backward compatible changes. Sometime we accidentally change the API in backward incompatible way and strict validation with JSON schema help to block those. > > > > +1 this is very useful to avoid unintentionally breaking the API. > > > >> We want to hear from cinder and interop team about any impact of this change to them. > > > > I'm mostly interested in what the interop WG would do about this given it's a potentially breaking change for interop without changes to the guidelines. Would there be some sort of grace period for clouds to conform to the changes in tempest? > > > > That’s more or less what eventually happened when we began enforcing strict validation on Nova a few years ago after considerable debate. Clouds that were compliant with the interop guidelines before the strict validation patch landed and started failing once it went in could apply for a waiver while they worked on removing or upstreaming the nonstandard stuff. For those not familiar, here’s the patch that created a waiver program: > > https://review.openstack.org/#/c/333067/ > > Note that this expired with the 2017.01 Guideline: > > https://review.openstack.org/#/c/512447/ > > While not everyone was totally happy with the solution, it seemed to work out as a middle ground solution that helped get clouds on a better path in the end. I think we’ll discuss whether we’d need to do something like this again here. I’d love to hear: > > 1.) If anyone knows of clouds/products that would be fail interop testing because of this. Not looking to name and shame, just to get an idea of whether or not we have a concrete problem and how big it is. > > 2.) Opinions on how the waiver program went last time and whether the rest of the community feels like it’s something we should consider again. > > Personally I’m supportive of the general notion of improving API interoperability here…as usual it’s figuring out the mechanics of the transition that take a little figuring. =) Thanks Mark for response. I think point 1 is important, it is good to get the list of clouds or failure due to this this strict validation change. And accordingly, we can wait on Tempest side to merge those changes for this cycle (but personally I do not want to delay that if everything is fine), so that we can avoid the immediate failure of interop program. -gmann > > At Your Service, > > Mark T. 
Voelker > > > > -- > > > > Thanks, > > > > Matt > > > > _______________________________________________ > > Interop-wg mailing list > > Interop-wg at lists.openstack.org > > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Flists.openstack.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Finterop-wg&data=02%7C01%7Cmvoelker%40vmware.com%7C82a07fe28afe488c2eea08d6566b9734%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C636791417437738014&sdata=lEx%2BbbTVzC%2FRC7ebmARDrFhfMsToM7Rwx8EKYtE7iFM%3D&reserved=0 > > From lijie at unitedstack.com Mon Dec 3 02:02:05 2018 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Mon, 3 Dec 2018 10:02:05 +0800 Subject: [openstack-dev] [nova] about notification in nova Message-ID: Hi, all: I have a question about the notification in nova, that is the actual operator is different from the operator was record in panko. Such as the delete action, we create the VM as user1, and we delete the VM as user2, but the operator is user1 who delete the VM in panko event, not the actual operator user2. Can you tell me more about this?Thank you very much. Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dangtrinhnt at gmail.com Mon Dec 3 02:06:28 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 3 Dec 2018 11:06:28 +0900 Subject: [Searchlight] Meeting today at 13:30 UTC Message-ID: Hi team, We will have the team meeting today at 13:30 UTC on #openstack-searchlight. Please join me and the others! Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Dec 3 03:24:42 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 3 Dec 2018 16:24:42 +1300 Subject: [forum][tc] Summary of the Vision for OpenStack Clouds session Message-ID: <7c223ec3-5deb-718f-573f-98903a40089c@redhat.com> During the Berlin summit there were a couple of discussions about the Vision for OpenStack Clouds, culminating in the initial version being merged. You can find it published here: https://governance.openstack.org/tc/reference/technical-vision.html During the joint leadership meeting on the Monday before the Summit, the TC presented the latest draft of the vision to the Foundation Board.[1] All of the feedback we received from board members was positive. Paper copies of the draft were available at the meeting, and it was also posted to the foundation mailing list afterwards.[2] This was followed on Thursday by a Forum session to discuss the next steps: https://etherpad.openstack.org/p/BER-cloud-vision With the review having already accumulated the required number of votes from the TC and nobody objecting, we determined that the next step should be to approve the draft and publish it. This is intended to be a living document, so in that spirit we'll continue to make ongoing tweaks based on feedback. A number of evolutionary improvements have been suggested (my summary): * (mordred) The SDK supports a use case where each Region in a cloud has its own Keystone endpoint, with the data synced behind the scenes. The definition of 'region' contained in the vision does not cover this. 
ayoung will submit a patch to update the wording. * (gmann) We could mention something about feature discovery in the section on OpenStack-specific considerations, to help ensure interoperability. * (fungi) We could also more explicitly declare that leaking implementation details into the API and beyond is something we're trying to avoid for interoperability reasons. fungi will submit a patch to do that. Doug also suggested that we should try to get this document incorporated into the main openstack.org website somehow, since the governance site is not especially visible even within the technical community and might as well not exist for those outside it. There are technical challenges with keeping a non-static document in sync to openstack.org, and publishing a forward-looking document like this verbatim to a wider audience might risk creating some confusion. However, it would be a shame to squander this opportunity for the technical community to collaborate more with the Foundation's marketing team in how we portray the goals of the OpenStack project. I will contact Lauren Sell on behalf of the TC to start a discussion on how we can make the best use of the document. [1] https://docs.google.com/presentation/d/1wcG7InY2A5y67dt5lC14CI1gQHGZ3db8-yshOu-STk8/edit?usp=sharing#slide=id.g46a6072f4a_0_276 [2] http://lists.openstack.org/pipermail/foundation/2018-November/002657.html From shiina.hironori at fujitsu.com Mon Dec 3 06:09:20 2018 From: shiina.hironori at fujitsu.com (shiina.hironori at fujitsu.com) Date: Mon, 3 Dec 2018 06:09:20 +0000 Subject: Proposing KaiFeng Wang for ironic-core In-Reply-To: References: Message-ID: +1 --- Hironori > -----Original Message----- > From: Julia Kreger [mailto:juliaashleykreger at gmail.com] > Sent: Sunday, December 2, 2018 11:44 PM > To: openstack-discuss at lists.openstack.org > Subject: Proposing KaiFeng Wang for ironic-core > > I'd like to propose adding KaiFeng to the ironic-core reviewer group. Previously, we had granted KaiFeng rights > on ironic-inspector-core and I personally think they have done a great job there. > > Kaifeng has also been reviewing other repositories in ironic's scope[1]. Their reviews and feedback have been > insightful and meaningful. They have also been very active[2] at reviewing which is an asset for any project. > > I believe they will be an awesome addition to the team. > > -Julia > > [1]: http://stackalytics.com/?module=ironic-group&user_id=kaifeng > > [2]: http://stackalytics.com/report/contribution/ironic-group/90 > From skaplons at redhat.com Mon Dec 3 08:09:48 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 3 Dec 2018 09:09:48 +0100 Subject: [openstack-dev] Stepping down from Neutron core team In-Reply-To: References: Message-ID: <1588BF61-D40E-4CD4-BB2E-BBDEEC8B5C75@redhat.com> Hi, Thanks for all Your work in Neutron and good luck in Your new role. — Slawek Kaplonski Senior software engineer Red Hat > Wiadomość napisana przez Hirofumi Ichihara w dniu 02.12.2018, o godz. 15:08: > > Hi all, > > I’m stepping down from the core team because my role changed and I cannot have responsibilities of neutron core. > > My start of neutron was 5 years ago. I had many good experiences from neutron team. > Today neutron is great project. Neutron gets new reviewers, contributors and, users. > Keep on being a great community. 
> > Thanks, > Hirofumi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From bdobreli at redhat.com Mon Dec 3 09:34:50 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 3 Dec 2018 10:34:50 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C24B507@EX10MBOX03.pnnl.gov> References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> <7d2fce52ca8bb5156b753a95cbb9e2df7ad741c8.camel@redhat.com> <1A3C52DFCD06494D8528644858247BF01C24B507@EX10MBOX03.pnnl.gov> Message-ID: Hi Kevin. Puppet not only creates config files but also executes a service dependent steps, like db sync, so neither '[base] -> [puppet]' nor '[base] -> [service]' would not be enough on its own. That requires some services specific code to be included into *config* images as well. PS. There is a related spec [0] created by Dan, please take a look and propose you feedback [0] https://review.openstack.org/620062 On 11/30/18 6:48 PM, Fox, Kevin M wrote: > Still confused by: > [base] -> [service] -> [+ puppet] > not: > [base] -> [puppet] > and > [base] -> [service] > ? > > Thanks, > Kevin > ________________________________________ > From: Bogdan Dobrelya [bdobreli at redhat.com] > Sent: Friday, November 30, 2018 5:31 AM > To: Dan Prince; openstack-dev at lists.openstack.org; openstack-discuss at lists.openstack.org > Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes > > On 11/30/18 1:52 PM, Dan Prince wrote: >> On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote: >>> On 11/29/18 6:42 PM, Jiří Stránský wrote: >>>> On 28. 11. 18 18:29, Bogdan Dobrelya wrote: >>>>> On 11/28/18 6:02 PM, Jiří Stránský wrote: >>>>>> >>>>>> >>>>>>> Reiterating again on previous points: >>>>>>> >>>>>>> -I'd be fine removing systemd. But lets do it properly and >>>>>>> not via 'rpm >>>>>>> -ev --nodeps'. >>>>>>> -Puppet and Ruby *are* required for configuration. We can >>>>>>> certainly put >>>>>>> them in a separate container outside of the runtime service >>>>>>> containers >>>>>>> but doing so would actually cost you much more >>>>>>> space/bandwidth for each >>>>>>> service container. As both of these have to get downloaded to >>>>>>> each node >>>>>>> anyway in order to generate config files with our current >>>>>>> mechanisms >>>>>>> I'm not sure this buys you anything. >>>>>> >>>>>> +1. I was actually under the impression that we concluded >>>>>> yesterday on >>>>>> IRC that this is the only thing that makes sense to seriously >>>>>> consider. 
>>>>>> But even then it's not a win-win -- we'd gain some security by >>>>>> leaner >>>>>> production images, but pay for it with space+bandwidth by >>>>>> duplicating >>>>>> image content (IOW we can help achieve one of the goals we had >>>>>> in mind >>>>>> by worsening the situation w/r/t the other goal we had in >>>>>> mind.) >>>>>> >>>>>> Personally i'm not sold yet but it's something that i'd >>>>>> consider if we >>>>>> got measurements of how much more space/bandwidth usage this >>>>>> would >>>>>> consume, and if we got some further details/examples about how >>>>>> serious >>>>>> are the security concerns if we leave config mgmt tools in >>>>>> runtime >>>>>> images. >>>>>> >>>>>> IIRC the other options (that were brought forward so far) were >>>>>> already >>>>>> dismissed in yesterday's IRC discussion and on the reviews. >>>>>> Bin/lib bind >>>>>> mounting being too hacky and fragile, and nsenter not really >>>>>> solving the >>>>>> problem (because it allows us to switch to having different >>>>>> bins/libs >>>>>> available, but it does not allow merging the availability of >>>>>> bins/libs >>>>>> from two containers into a single context). >>>>>> >>>>>>> We are going in circles here I think.... >>>>>> >>>>>> +1. I think too much of the discussion focuses on "why it's bad >>>>>> to have >>>>>> config tools in runtime images", but IMO we all sorta agree >>>>>> that it >>>>>> would be better not to have them there, if it came at no cost. >>>>>> >>>>>> I think to move forward, it would be interesting to know: if we >>>>>> do this >>>>>> (i'll borrow Dan's drawing): >>>>>> >>>>>>> base container| --> |service container| --> |service >>>>>>> container w/ >>>>>> Puppet installed| >>>>>> >>>>>> How much more space and bandwidth would this consume per node >>>>>> (e.g. >>>>>> separately per controller, per compute). This could help with >>>>>> decision >>>>>> making. >>>>> >>>>> As I've already evaluated in the related bug, that is: >>>>> >>>>> puppet-* modules and manifests ~ 16MB >>>>> puppet with dependencies ~61MB >>>>> dependencies of the seemingly largest a dependency, systemd >>>>> ~190MB >>>>> >>>>> that would be an extra layer size for each of the container >>>>> images to be >>>>> downloaded/fetched into registries. >>>> >>>> Thanks, i tried to do the math of the reduction vs. inflation in >>>> sizes >>>> as follows. I think the crucial point here is the layering. If we >>>> do >>>> this image layering: >>>> >>>>> base| --> |+ service| --> |+ Puppet| >>>> >>>> we'd drop ~267 MB from base image, but we'd be installing that to >>>> the >>>> topmost level, per-component, right? >>> >>> Given we detached systemd from puppet, cronie et al, that would be >>> 267-190MB, so the math below would be looking much better >> >> Would it be worth writing a spec that summarizes what action items are >> bing taken to optimize our base image with regards to the systemd? > > Perhaps it would be. But honestly, I see nothing biggie to require a > full blown spec. Just changing RPM deps and layers for containers > images. I'm tracking systemd changes here [0],[1],[2], btw (if accepted, > it should be working as of fedora28(or 29) I hope) > > [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659 > [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672 > > >> >> It seems like the general consenses is that cleaning up some of the RPM >> dependencies so that we don't install Systemd is the biggest win. 
>> >> What confuses me is why are there still patches posted to move Puppet >> out of the base layer when we agree moving it out of the base layer >> would actually cause our resulting container image set to be larger in >> size. >> >> Dan >> >> >>> >>>> In my basic deployment, undercloud seems to have 17 "components" >>>> (49 >>>> containers), overcloud controller 15 components (48 containers), >>>> and >>>> overcloud compute 4 components (7 containers). Accounting for >>>> overlaps, >>>> the total number of "components" used seems to be 19. (By >>>> "components" >>>> here i mean whatever uses a different ConfigImage than other >>>> services. I >>>> just eyeballed it but i think i'm not too far off the correct >>>> number.) >>>> >>>> So we'd subtract 267 MB from base image and add that to 19 leaf >>>> images >>>> used in this deployment. That means difference of +4.8 GB to the >>>> current >>>> image sizes. My /var/lib/registry dir on undercloud with all the >>>> images >>>> currently has 5.1 GB. We'd almost double that to 9.9 GB. >>>> >>>> Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the >>>> CDNs >>>> (both external and e.g. internal within OpenStack Infra CI clouds). >>>> >>>> And for internal traffic between local registry and overcloud >>>> nodes, it >>>> gives +3.7 GB per controller and +800 MB per compute. That may not >>>> be so >>>> critical but still feels like a considerable downside. >>>> >>>> Another gut feeling is that this way of image layering would take >>>> longer >>>> time to build and to run the modify-image Ansible role which we use >>>> in >>>> CI, so that could endanger how our CI jobs fit into the time limit. >>>> We >>>> could also probably measure this but i'm not sure if it's worth >>>> spending >>>> the time. >>>> >>>> All in all i'd argue we should be looking at different options >>>> still. >>>> >>>>> Given that we should decouple systemd from all/some of the >>>>> dependencies >>>>> (an example topic for RDO [0]), that could save a 190MB. But it >>>>> seems we >>>>> cannot break the love of puppet and systemd as it heavily relies >>>>> on the >>>>> latter and changing packaging like that would higly likely affect >>>>> baremetal deployments with puppet and systemd co-operating. >>>> >>>> Ack :/ >>>> >>>>> Long story short, we cannot shoot both rabbits with a single >>>>> shot, not >>>>> with puppet :) May be we could with ansible replacing puppet >>>>> fully... >>>>> So splitting config and runtime images is the only choice yet to >>>>> address >>>>> the raised security concerns. And let's forget about edge cases >>>>> for now. >>>>> Tossing around a pair of extra bytes over 40,000 WAN-distributed >>>>> computes ain't gonna be our the biggest problem for sure. 
>>>>> >>>>> [0] >>>>> https://review.rdoproject.org/r/#/q/topic:base-container-reduction >>>>> >>>>>>> Dan >>>>>>> >>>>>> >>>>>> Thanks >>>>>> >>>>>> Jirka >>>>>> >>>>>> _______________________________________________________________ >>>>>> ___________ >>>>>> >>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>> Unsubscribe: >>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> ___________________________________________________________________ >>>> _______ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu >>>> bscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Mon Dec 3 09:37:57 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 3 Dec 2018 10:37:57 +0100 Subject: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes In-Reply-To: References: <429cdff5-77b8-8417-8024-ef99ee2234dc@redhat.com> <48ec22eab4c90f248779b8bf7677b9a9a5bacc2f.camel@redhat.com> <4121346a-7341-184f-2dcb-32092409196b@redhat.com> <4efc561819a8ff90628902117e1514262f03b9cd.camel@redhat.com> <29497a3a-ce90-b7ec-d0f8-3f98c90016ce@redhat.com> <7d2fce52ca8bb5156b753a95cbb9e2df7ad741c8.camel@redhat.com> <1A3C52DFCD06494D8528644858247BF01C24B507@EX10MBOX03.pnnl.gov> Message-ID: On 12/3/18 10:34 AM, Bogdan Dobrelya wrote: > Hi Kevin. > Puppet not only creates config files but also executes a service > dependent steps, like db sync, so neither '[base] -> [puppet]' nor > '[base] -> [service]' would not be enough on its own. That requires some > services specific code to be included into *config* images as well. > > PS. There is a related spec [0] created by Dan, please take a look and > propose you feedback > > [0] https://review.openstack.org/620062 I'm terribly sorry, but that's a corrected link [0] to that spec. [0] https://review.openstack.org/620909 > > On 11/30/18 6:48 PM, Fox, Kevin M wrote: >> Still confused by: >> [base] -> [service] -> [+ puppet] >> not: >> [base] -> [puppet] >> and >> [base] -> [service] >> ? >> >> Thanks, >> Kevin >> ________________________________________ >> From: Bogdan Dobrelya [bdobreli at redhat.com] >> Sent: Friday, November 30, 2018 5:31 AM >> To: Dan Prince; openstack-dev at lists.openstack.org; >> openstack-discuss at lists.openstack.org >> Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of >> containers for security and size of images (maintenance) sakes >> >> On 11/30/18 1:52 PM, Dan Prince wrote: >>> On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote: >>>> On 11/29/18 6:42 PM, Jiří Stránský wrote: >>>>> On 28. 11. 18 18:29, Bogdan Dobrelya wrote: >>>>>> On 11/28/18 6:02 PM, Jiří Stránský wrote: >>>>>>> >>>>>>> >>>>>>>> Reiterating again on previous points: >>>>>>>> >>>>>>>> -I'd be fine removing systemd. But lets do it properly and >>>>>>>> not via 'rpm >>>>>>>> -ev --nodeps'. >>>>>>>> -Puppet and Ruby *are* required for configuration. We can >>>>>>>> certainly put >>>>>>>> them in a separate container outside of the runtime service >>>>>>>> containers >>>>>>>> but doing so would actually cost you much more >>>>>>>> space/bandwidth for each >>>>>>>> service container. 
As both of these have to get downloaded to >>>>>>>> each node >>>>>>>> anyway in order to generate config files with our current >>>>>>>> mechanisms >>>>>>>> I'm not sure this buys you anything. >>>>>>> >>>>>>> +1. I was actually under the impression that we concluded >>>>>>> yesterday on >>>>>>> IRC that this is the only thing that makes sense to seriously >>>>>>> consider. >>>>>>> But even then it's not a win-win -- we'd gain some security by >>>>>>> leaner >>>>>>> production images, but pay for it with space+bandwidth by >>>>>>> duplicating >>>>>>> image content (IOW we can help achieve one of the goals we had >>>>>>> in mind >>>>>>> by worsening the situation w/r/t the other goal we had in >>>>>>> mind.) >>>>>>> >>>>>>> Personally i'm not sold yet but it's something that i'd >>>>>>> consider if we >>>>>>> got measurements of how much more space/bandwidth usage this >>>>>>> would >>>>>>> consume, and if we got some further details/examples about how >>>>>>> serious >>>>>>> are the security concerns if we leave config mgmt tools in >>>>>>> runtime >>>>>>> images. >>>>>>> >>>>>>> IIRC the other options (that were brought forward so far) were >>>>>>> already >>>>>>> dismissed in yesterday's IRC discussion and on the reviews. >>>>>>> Bin/lib bind >>>>>>> mounting being too hacky and fragile, and nsenter not really >>>>>>> solving the >>>>>>> problem (because it allows us to switch to having different >>>>>>> bins/libs >>>>>>> available, but it does not allow merging the availability of >>>>>>> bins/libs >>>>>>> from two containers into a single context). >>>>>>> >>>>>>>> We are going in circles here I think.... >>>>>>> >>>>>>> +1. I think too much of the discussion focuses on "why it's bad >>>>>>> to have >>>>>>> config tools in runtime images", but IMO we all sorta agree >>>>>>> that it >>>>>>> would be better not to have them there, if it came at no cost. >>>>>>> >>>>>>> I think to move forward, it would be interesting to know: if we >>>>>>> do this >>>>>>> (i'll borrow Dan's drawing): >>>>>>> >>>>>>>> base container| --> |service container| --> |service >>>>>>>> container w/ >>>>>>> Puppet installed| >>>>>>> >>>>>>> How much more space and bandwidth would this consume per node >>>>>>> (e.g. >>>>>>> separately per controller, per compute). This could help with >>>>>>> decision >>>>>>> making. >>>>>> >>>>>> As I've already evaluated in the related bug, that is: >>>>>> >>>>>> puppet-* modules and manifests ~ 16MB >>>>>> puppet with dependencies ~61MB >>>>>> dependencies of the seemingly largest a dependency, systemd >>>>>> ~190MB >>>>>> >>>>>> that would be an extra layer size for each of the container >>>>>> images to be >>>>>> downloaded/fetched into registries. >>>>> >>>>> Thanks, i tried to do the math of the reduction vs. inflation in >>>>> sizes >>>>> as follows. I think the crucial point here is the layering. If we >>>>> do >>>>> this image layering: >>>>> >>>>>> base| --> |+ service| --> |+ Puppet| >>>>> >>>>> we'd drop ~267 MB from base image, but we'd be installing that to >>>>> the >>>>> topmost level, per-component, right? >>>> >>>> Given we detached systemd from puppet, cronie et al, that would be >>>> 267-190MB, so the math below would be looking much better >>> >>> Would it be worth writing a spec that summarizes what action items are >>> bing taken to optimize our base image with regards to the systemd? >> >> Perhaps it would be. But honestly, I see nothing biggie to require a >> full blown spec. Just changing RPM deps and layers for containers >> images. 
I'm tracking systemd changes here [0],[1],[2], btw (if accepted, >> it should be working as of fedora28(or 29) I hope) >> >> [0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659 >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672 >> >> >>> >>> It seems like the general consenses is that cleaning up some of the RPM >>> dependencies so that we don't install Systemd is the biggest win. >>> >>> What confuses me is why are there still patches posted to move Puppet >>> out of the base layer when we agree moving it out of the base layer >>> would actually cause our resulting container image set to be larger in >>> size. >>> >>> Dan >>> >>> >>>> >>>>> In my basic deployment, undercloud seems to have 17 "components" >>>>> (49 >>>>> containers), overcloud controller 15 components (48 containers), >>>>> and >>>>> overcloud compute 4 components (7 containers). Accounting for >>>>> overlaps, >>>>> the total number of "components" used seems to be 19. (By >>>>> "components" >>>>> here i mean whatever uses a different ConfigImage than other >>>>> services. I >>>>> just eyeballed it but i think i'm not too far off the correct >>>>> number.) >>>>> >>>>> So we'd subtract 267 MB from base image and add that to 19 leaf >>>>> images >>>>> used in this deployment. That means difference of +4.8 GB to the >>>>> current >>>>> image sizes. My /var/lib/registry dir on undercloud with all the >>>>> images >>>>> currently has 5.1 GB. We'd almost double that to 9.9 GB. >>>>> >>>>> Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the >>>>> CDNs >>>>> (both external and e.g. internal within OpenStack Infra CI clouds). >>>>> >>>>> And for internal traffic between local registry and overcloud >>>>> nodes, it >>>>> gives +3.7 GB per controller and +800 MB per compute. That may not >>>>> be so >>>>> critical but still feels like a considerable downside. >>>>> >>>>> Another gut feeling is that this way of image layering would take >>>>> longer >>>>> time to build and to run the modify-image Ansible role which we use >>>>> in >>>>> CI, so that could endanger how our CI jobs fit into the time limit. >>>>> We >>>>> could also probably measure this but i'm not sure if it's worth >>>>> spending >>>>> the time. >>>>> >>>>> All in all i'd argue we should be looking at different options >>>>> still. >>>>> >>>>>> Given that we should decouple systemd from all/some of the >>>>>> dependencies >>>>>> (an example topic for RDO [0]), that could save a 190MB. But it >>>>>> seems we >>>>>> cannot break the love of puppet and systemd as it heavily relies >>>>>> on the >>>>>> latter and changing packaging like that would higly likely affect >>>>>> baremetal deployments with puppet and systemd co-operating. >>>>> >>>>> Ack :/ >>>>> >>>>>> Long story short, we cannot shoot both rabbits with a single >>>>>> shot, not >>>>>> with puppet :) May be we could with ansible replacing puppet >>>>>> fully... >>>>>> So splitting config and runtime images is the only choice yet to >>>>>> address >>>>>> the raised security concerns. And let's forget about edge cases >>>>>> for now. >>>>>> Tossing around a pair of extra bytes over 40,000 WAN-distributed >>>>>> computes ain't gonna be our the biggest problem for sure. 
>>>>>> >>>>>> [0] >>>>>> https://review.rdoproject.org/r/#/q/topic:base-container-reduction >>>>>> >>>>>>>> Dan >>>>>>>> >>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> Jirka >>>>>>> >>>>>>> _______________________________________________________________ >>>>>>> ___________ >>>>>>> >>>>>>> OpenStack Development Mailing List (not for usage questions) >>>>>>> Unsubscribe: >>>>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> >>>>> ___________________________________________________________________ >>>>> _______ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu >>>>> bscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >> >> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando >> > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From tenobreg at redhat.com Mon Dec 3 12:06:46 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Mon, 3 Dec 2018 09:06:46 -0300 Subject: [openstack-dev][sahara] Sahara APIv2 Worklist Message-ID: Hi Saharans, One of the main focus of this cycle, and it has been for a while now, is the work on APIv2. During the last Sahara meeting we came up with a list of the reamining work left so we can release APIv2 as stable this cycle. I created a worklist[1] on storyboard so we can more easily track the progress of the few remaining tasks. Thanks all, [1] https://storyboard.openstack.org/#!/worklist/533 -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ranjankrchaubey at gmail.com Mon Dec 3 01:57:25 2018 From: ranjankrchaubey at gmail.com (Ranjan Krchaubey) Date: Mon, 3 Dec 2018 07:27:25 +0530 Subject: [openstack-dev] Stepping down from Neutron core team In-Reply-To: References: Message-ID: <1778B743-99D8-4735-B057-6C19A457BF05@gmail.com> Hi Team , I am getting error of Http 500 server not fullfill request by id please help me how to fix Thanks & Regards Ranjan Kumar Mob: 9284158762 > On 03-Dec-2018, at 2:27 AM, Miguel Lavalle wrote: > > Hi Hirofumi, > > Thanks for your contributions to the project over these years. You will be missed. We also wish the best in your future endeavors. > > Best regards > > Miguel > >> On Sun, Dec 2, 2018 at 8:11 AM Hirofumi Ichihara wrote: >> Hi all, >> >> I’m stepping down from the core team because my role changed and I cannot have responsibilities of neutron core. >> >> My start of neutron was 5 years ago. I had many good experiences from neutron team. >> Today neutron is great project. Neutron gets new reviewers, contributors and, users. >> Keep on being a great community. 
>> >> Thanks, >> Hirofumi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zhengzhenyulixi at gmail.com Mon Dec 3 02:31:02 2018 From: zhengzhenyulixi at gmail.com (Zhenyu Zheng) Date: Mon, 3 Dec 2018 10:31:02 +0800 Subject: [openstack-dev] [nova] about notification in nova In-Reply-To: References: Message-ID: Hi, Are you using versioned notification? If you are using versioned notification, you should get an ``action_initiator_user`` and an ``action_initiator_project`` indicating who initiated this action, we had them since I649d8a27baa8840bc1bb567fef027c749c663432 . If you are not using versioned notification, then versioned notification will be recommended. Thanks On Mon, Dec 3, 2018 at 10:06 AM Rambo wrote: > Hi, all: > I have a question about the notification in nova, that is the > actual operator is different from the operator was record in panko. Such > as the delete action, we create the VM as user1, and we delete the VM as > user2, but the operator is user1 who delete the VM in panko event, not the > actual operator user2. > Can you tell me more about this?Thank you very much. > > > > > > > > > Best Regards > Rambo > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ranjankrchaubey at gmail.com Mon Dec 3 02:40:06 2018 From: ranjankrchaubey at gmail.com (Ranjan Krchaubey) Date: Mon, 3 Dec 2018 08:10:06 +0530 Subject: [openstack-dev] [nova] about notification in nova In-Reply-To: References: Message-ID: <07726B1E-643C-40CF-A7E9-D906B909699F@gmail.com> Is this regarding keystone? Thanks & Regards Ranjan Kumar Mob: 9284158762 > On 03-Dec-2018, at 8:01 AM, Zhenyu Zheng wrote: > > Hi, > > Are you using versioned notification? If you are using versioned notification, you should get an ``action_initiator_user`` and an ``action_initiator_project`` > indicating who initiated this action, we had them since I649d8a27baa8840bc1bb567fef027c749c663432 . If you are not using versioned notification, then > versioned notification will be recommended. 
> > Thanks > >> On Mon, Dec 3, 2018 at 10:06 AM Rambo wrote: >> Hi, all: >> I have a question about the notification in nova, that is the actual operator is different from the operator was record in panko. Such as the delete action, we create the VM as user1, and we delete the VM as user2, but the operator is user1 who delete the VM in panko event, not the actual operator user2. >> Can you tell me more about this?Thank you very much. >> >> >> >> >> >> >> >> >> Best Regards >> Rambo >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ranjankrchaubey at gmail.com Mon Dec 3 13:06:06 2018 From: ranjankrchaubey at gmail.com (Ranjan Krchaubey) Date: Mon, 3 Dec 2018 18:36:06 +0530 Subject: [openstack-dev] Stepping down from Neutron core team In-Reply-To: <1588BF61-D40E-4CD4-BB2E-BBDEEC8B5C75@redhat.com> References: <1588BF61-D40E-4CD4-BB2E-BBDEEC8B5C75@redhat.com> Message-ID: Hi all, Can any one help me to resvolve error 111 on keystone Thanks & Regards Ranjan Kumar Mob: 9284158762 > On 03-Dec-2018, at 1:39 PM, Slawomir Kaplonski wrote: > > Hi, > > Thanks for all Your work in Neutron and good luck in Your new role. > > — > Slawek Kaplonski > Senior software engineer > Red Hat > >> Wiadomość napisana przez Hirofumi Ichihara w dniu 02.12.2018, o godz. 15:08: >> >> Hi all, >> >> I’m stepping down from the core team because my role changed and I cannot have responsibilities of neutron core. >> >> My start of neutron was 5 years ago. I had many good experiences from neutron team. >> Today neutron is great project. Neutron gets new reviewers, contributors and, users. >> Keep on being a great community. 
>> >> Thanks, >> Hirofumi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jim at jimrollenhagen.com Mon Dec 3 13:19:38 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 3 Dec 2018 08:19:38 -0500 Subject: Proposing KaiFeng Wang for ironic-core In-Reply-To: References: Message-ID: On Sun, Dec 2, 2018 at 9:46 AM Julia Kreger wrote: > I'd like to propose adding KaiFeng to the ironic-core reviewer group. > Previously, we had granted KaiFeng rights on ironic-inspector-core and I > personally think they have done a great job there. > > Kaifeng has also been reviewing other repositories in ironic's scope[1]. > Their reviews and feedback have been insightful and meaningful. They have > also been very active[2] at reviewing which is an asset for any project. > > I believe they will be an awesome addition to the team. > +2! // jim > > -Julia > > [1]: http://stackalytics.com/?module=ironic-group&user_id=kaifeng > [2]: http://stackalytics.com/report/contribution/ironic-group/90 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Mon Dec 3 13:24:28 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Mon, 3 Dec 2018 22:24:28 +0900 Subject: [Searchlight] Cancel meeting today Message-ID: Hi team, Everybody is busy today so we will cancel the meeting today. Please ping me on the channel #openstack-searchlight. Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Dec 3 13:25:51 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 3 Dec 2018 08:25:51 -0500 Subject: [openstack-ansible] Evaluating meeting schedule + method In-Reply-To: References: <1913981.aYPTVNe1F8@whitebase.usersys.redhat.com> <8c59e2dbcd07bda4d4e12952e5d5af0bf8c839f2.camel@evrard.me> Message-ID: On Wed, Nov 28, 2018 at 10:55 AM Michael McCune wrote: > On Wed, Nov 28, 2018 at 4:49 AM Jean-Philippe Evrard > wrote: > > Having an office hour there could ensure people could come to a certain > > time and we'd be there. Attendance would be identitical though, but > > it's not really an issue to me. > > > > just wanted to add another data point here. the API-SIG recently > migrated from regular meeting times to office hours. one of the big > reasons for this was that our regular weekly meeting attendance had > dropped to minimal levels. so far we've found that the office hours > have lessened the strain on the chairs and organizers while still > giving us a few very visible times for the community to interact with > us. > > the API-SIG isn't quite the same as a code central project, but i > think the pattern of meeting migration is more general in nature. 
hope > this helps =) > > peace o/ > > Unfortunately, I haven't heard a lot of the OpenStack Ansible side of things however I'm very happy to hear about what's going on with other projects and how to pick ways to manage our meetings. The first step that I'd like to maybe suggest is going to an office hours model? I'm still not sure how that will increase our engagement in meetings. -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. http://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Dec 3 13:59:13 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 3 Dec 2018 08:59:13 -0500 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: On 11/30/2018 05:52 PM, Mike Carden wrote: > > Have you set the placement_randomize_allocation_candidates CONF option > and are still seeing the packing behaviour? > > > No I haven't. Where would be the place to do that? In a nova.conf > somewhere that the nova-scheduler containers on the controller hosts > could pick it up? > > Just about to deploy for realz with about forty x86 compute nodes, so it > would be really nice to sort this first. :) Presuming you are deploying Rocky or Queens, It goes in the nova.conf file under the [placement] section: randomize_allocation_candidates = true The nova.conf file should be the one used by nova-scheduler. Best, -jay From gryf73 at gmail.com Mon Dec 3 14:08:19 2018 From: gryf73 at gmail.com (Roman Dobosz) Date: Mon, 3 Dec 2018 15:08:19 +0100 Subject: [nova][ops] Migrating instances with old extra specs info. Message-ID: Hi, We have an issue regarding live migrate old instances. On our infrastructure based on Newton, we had not well named our aggregation groups and we decided to completely change the naming conventions. We are using extra specs/metadata in flavors/aggregates and AggregateInstanceExtraSpecsFilter to select the set of hosts. After changing one of the key (lets call it 'kind') of extra spec/metadata to the new value, we are able to create new instances, but unable to live-migrate old one. This is because of stored flavor extra specs in instance_extra, which still holds the old value for the 'kind' key, and make the aggregate filter fail to match the hosts. In our opinion, it should be able to live-migrate such instance, since only the flavor metadata has changed, while other fields doesn't (vcpus, memory, disk). Is it a bug? Or maybe it is possible to change the instance flavor extra spec? -- Cheers, Roman Dobosz From nate.johnston at redhat.com Mon Dec 3 14:15:06 2018 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 3 Dec 2018 09:15:06 -0500 Subject: [openstack-dev] Stepping down from Neutron core team In-Reply-To: References: Message-ID: <20181203141506.usaxv36gz56f4vic@bishop> On Sun, Dec 02, 2018 at 11:08:25PM +0900, Hirofumi Ichihara wrote: > I’m stepping down from the core team because my role changed and I cannot > have responsibilities of neutron core. Thank you very much for all of the insightful reviews over the years. Good luck on your next adventure! 
Nate Johnston (njohnston) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Mon Dec 3 14:53:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 03 Dec 2018 09:53:59 -0500 Subject: [goal][python3] week R-18 update Message-ID: This is the weekly update for the "Run under Python 3 by default" goal (https://governance.openstack.org/tc/goals/stein/python3-first.html). I've missed sending this email for a few weeks again, due to the Summit, holiday, and then illness. == Ongoing and Completed Work == We're still making good progress, but several teams have a ways to go on the tox defaults patches and 3.6 unit tests. Please prioritize these reviews so we can move on to the more complicated parts of the goal! +---------------------+--------------+---------+----------+---------+------------+-------+--------------------+ | Team | tox defaults | Docs | 3.6 unit | Failing | Unreviewed | Total | Champion | +---------------------+--------------+---------+----------+---------+------------+-------+--------------------+ | adjutant | 1/ 1 | - | + | 0 | 1 | 2 | Doug Hellmann | | barbican | + | 1/ 3 | + | 1 | 1 | 7 | Doug Hellmann | | blazar | + | + | + | 0 | 0 | 9 | Nguyen Hai | | Chef OpenStack | + | - | - | 0 | 0 | 2 | Doug Hellmann | | cinder | + | + | + | 0 | 0 | 11 | Doug Hellmann | | cloudkitty | + | + | + | 0 | 0 | 9 | Doug Hellmann | | congress | + | + | + | 0 | 0 | 9 | Nguyen Hai | | cyborg | + | + | + | 0 | 0 | 7 | Nguyen Hai | | designate | + | + | + | 0 | 0 | 9 | Nguyen Hai | | Documentation | + | + | + | 0 | 0 | 10 | Doug Hellmann | | ec2-api | + | + | + | 0 | 0 | 7 | | | freezer | + | + | + | 0 | 0 | 11 | | | glance | + | + | + | 0 | 0 | 10 | Nguyen Hai | | heat | 2/ 8 | + | 1/ 7 | 1 | 0 | 21 | Doug Hellmann | | horizon | + | + | + | 0 | 0 | 34 | Nguyen Hai | | I18n | + | - | - | 0 | 0 | 1 | Doug Hellmann | | InteropWG | 2/ 3 | + | 1/ 3 | 1 | 1 | 9 | Doug Hellmann | | ironic | 1/ 10 | + | + | 0 | 0 | 35 | Doug Hellmann | | karbor | + | + | + | 0 | 0 | 7 | Nguyen Hai | | keystone | + | + | + | 0 | 0 | 18 | Doug Hellmann | | kolla | + | + | + | 0 | 0 | 5 | | | kuryr | + | + | + | 0 | 0 | 9 | Doug Hellmann | | magnum | 2/ 5 | + | + | 0 | 1 | 10 | | | manila | + | + | + | 0 | 0 | 13 | Goutham Pacha Ravi | | masakari | 2/ 5 | + | - | 0 | 2 | 6 | Nguyen Hai | | mistral | + | + | + | 0 | 0 | 13 | Nguyen Hai | | monasca | 1/ 17 | + | + | 0 | 1 | 34 | Doug Hellmann | | murano | + | + | + | 0 | 0 | 14 | | | neutron | 5/ 19 | 1/ 14 | 1/ 13 | 4 | 3 | 46 | Doug Hellmann | | nova | + | + | + | 0 | 0 | 14 | | | octavia | + | + | + | 0 | 0 | 12 | Nguyen Hai | | OpenStack Charms | 8/ 73 | - | - | 8 | 3 | 73 | Doug Hellmann | | OpenStack-Helm | + | + | - | 0 | 0 | 4 | | | OpenStackAnsible | + | + | - | 0 | 0 | 152 | | | OpenStackClient | + | + | + | 0 | 0 | 11 | | | OpenStackSDK | + | + | + | 0 | 0 | 10 | | | oslo | + | + | + | 0 | 0 | 63 | Doug Hellmann | | Packaging-rpm | + | + | + | 0 | 0 | 6 | Doug Hellmann | | PowerVMStackers | - | - | + | 0 | 0 | 3 | Doug Hellmann | | Puppet OpenStack | + | + | - | 0 | 0 | 44 | Doug Hellmann | | qinling | + | + | + | 0 | 0 | 6 | | | Quality Assurance | 3/ 11 | + | + | 0 | 2 | 32 | Doug Hellmann | | rally | 1/ 3 | + | - | 1 | 1 | 5 | Nguyen Hai | | Release Management | - | - | + | 0 | 0 | 1 | Doug Hellmann 
| | requirements | - | + | + | 0 | 0 | 2 | Doug Hellmann | | sahara | 1/ 6 | + | + | 0 | 0 | 13 | Doug Hellmann | | searchlight | + | + | + | 0 | 0 | 9 | Nguyen Hai | | senlin | + | + | + | 0 | 0 | 9 | Nguyen Hai | | SIGs | + | + | + | 0 | 0 | 13 | Doug Hellmann | | solum | + | + | + | 0 | 0 | 7 | Nguyen Hai | | storlets | + | + | + | 0 | 0 | 4 | | | swift | 2/ 3 | + | + | 1 | 1 | 6 | Nguyen Hai | | tacker | 1/ 3 | + | + | 0 | 1 | 8 | Nguyen Hai | | Technical Committee | + | - | + | 0 | 0 | 4 | Doug Hellmann | | Telemetry | 1/ 7 | + | + | 0 | 1 | 19 | Doug Hellmann | | tricircle | + | + | + | 0 | 0 | 5 | Nguyen Hai | | tripleo | 5/ 55 | + | + | 3 | 1 | 93 | Doug Hellmann | | trove | 1/ 5 | + | + | 0 | 0 | 11 | Doug Hellmann | | User Committee | 3/ 3 | + | - | 0 | 2 | 5 | Doug Hellmann | | vitrage | + | + | + | 0 | 0 | 9 | Nguyen Hai | | watcher | + | + | + | 0 | 0 | 10 | Nguyen Hai | | winstackers | + | + | + | 0 | 0 | 6 | | | zaqar | + | + | + | 0 | 0 | 8 | | | zun | + | + | + | 0 | 0 | 8 | Nguyen Hai | | | 43/ 61 | 55/ 57 | 52/ 55 | 20 | 22 | 1075 | | +---------------------+--------------+---------+----------+---------+------------+-------+--------------------+ == Next Steps == We need to to approve the patches proposed by the goal champions, and then to expand functional test coverage for python 3. PTLs, please document your team's status in the wiki as well: https://wiki.openstack.org/wiki/Python3 == How can you help? == 1. Choose a patch that has failing tests and help fix it. https://review.openstack.org/#/q/topic:python3-first+status:open+(+label:Verified-1+OR+label:Verified-2+) 2. Review the patches for the zuul changes. Keep in mind that some of those patches will be on the stable branches for projects. 3. Work on adding functional test jobs that run under Python 3. == How can you ask for help? == If you have any questions, please post them here to the openstack-dev list with the topic tag [python3] in the subject line. Posting questions to the mailing list will give the widest audience the chance to see the answers. We are using the #openstack-dev IRC channel for discussion as well, but I'm not sure how good our timezone coverage is so it's probably better to use the mailing list. == Reference Material == Goal description: https://governance.openstack.org/tc/goals/stein/python3-first.html Open patches needing reviews: https://review.openstack.org/#/q/topic:python3-first+is:open Storyboard: https://storyboard.openstack.org/#!/board/104 Zuul migration notes: https://etherpad.openstack.org/p/python3-first Zuul migration tracking: https://storyboard.openstack.org/#!/story/2002586 Python 3 Wiki page: https://wiki.openstack.org/wiki/Python3 -- Doug From jaypipes at gmail.com Mon Dec 3 15:07:18 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 3 Dec 2018 10:07:18 -0500 Subject: [nova][ops] Migrating instances with old extra specs info. In-Reply-To: References: Message-ID: On 12/03/2018 09:08 AM, Roman Dobosz wrote: > Hi, > > We have an issue regarding live migrate old instances. > > On our infrastructure based on Newton, we had not well named our > aggregation groups and we decided to completely change the naming > conventions. We are using extra specs/metadata in flavors/aggregates and > AggregateInstanceExtraSpecsFilter to select the set of hosts. > > After changing one of the key (lets call it 'kind') of extra > spec/metadata to the new value, we are able to create new instances, but > unable to live-migrate old one. 
> This is because of stored flavor extra specs in instance_extra, which
> still holds the old value for the 'kind' key, and make the aggregate
> filter fail to match the hosts.
>
> In our opinion, it should be able to live-migrate such instance, since
> only the flavor metadata has changed, while other fields doesn't (vcpus,
> memory, disk).
>
> Is it a bug? Or maybe it is possible to change the instance flavor
> extra spec?

Not sure it's a bug, per-se... might just be something you need to
manually change database records for, though. Sucks, since it's hacky,
but not sure if it's something we should change in the API contract.

Best,
-jay

From dangtrinhnt at gmail.com  Mon Dec  3 15:28:30 2018
From: dangtrinhnt at gmail.com (Trinh Nguyen)
Date: Tue, 4 Dec 2018 00:28:30 +0900
Subject: [Searchlight][Zuul] tox failed tests at zuul check only
Message-ID:

Hello,

Currently, [1] fails the tox py27 tests in the Zuul check even though the
patch only updates log text. The tests pass in my local dev env. Just
wondering if there has been any recent change in Zuul CI?

[1] https://review.openstack.org/#/c/619162/

Thanks,

--
*Trinh Nguyen*
*www.edlab.xyz *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ahm.jawad118 at gmail.com  Mon Dec  3 14:42:37 2018
From: ahm.jawad118 at gmail.com (Jawad Ahmed)
Date: Mon, 3 Dec 2018 15:42:37 +0100
Subject: [Openstack-operators] Spice screen in horizon
Message-ID:

Hi all,

How can I fit the Spice screen (the VM console, which shows as a black
area) to the Horizon console? Please see the attachment. Also, below the
console there are messages such as "keyboard input is insecure", etc. If
someone can suggest a workaround to remove those messages, I'll appreciate
any kind of suggestions. Thank you.

--
Greetings,
Jawad
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screen Shot 2018-12-03 at 15.37.17.png
Type: image/png
Size: 145854 bytes
Desc: not available
URL:
-------------- next part --------------
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From smooney at redhat.com  Mon Dec  3 15:34:25 2018
From: smooney at redhat.com (Sean Mooney)
Date: Mon, 03 Dec 2018 15:34:25 +0000
Subject: [nova][ops] Migrating instances with old extra specs info.
In-Reply-To: References: Message-ID:

On Mon, 2018-12-03 at 10:07 -0500, Jay Pipes wrote:
> On 12/03/2018 09:08 AM, Roman Dobosz wrote:
> > Hi,
> >
> > We have an issue regarding live migrate old instances.
> >
> > On our infrastructure based on Newton, we had not well named our
> > aggregation groups and we decided to completely change the naming
> > conventions. We are using extra specs/metadata in flavors/aggregates and
> > AggregateInstanceExtraSpecsFilter to select the set of hosts.
> >
> > After changing one of the key (lets call it 'kind') of extra
> > spec/metadata to the new value, we are able to create new instances, but
> > unable to live-migrate old one. This is because of stored flavor extra
> > specs in instance_extra, which still holds the old value for the 'kind'
> > key, and make the aggregate filter fail to match the hosts.
> >
> > In our opinion, it should be able to live-migrate such instance, since
> > only the flavor metadata has changed, while other fields doesn't (vcpus,
> > memory, disk).
> >
> > Is it a bug? Or maybe it is possible to change the instance flavor
> > extra spec?
>
> Not sure it's a bug, per-se... might just be something you need to
> manually change database records for, though. Sucks, since it's hacky,
> but not sure if it's something we should change in the API contract.

Depending on what that extra spec was, nova could not assume that changing
it would not impact that VM's runtime or how it is scheduled.

As an RFE, however, it might be worth considering whether we could/should
allow a resize to the same flavor so that we can update the embedded flavor
to pick up extra spec changes. If we allowed such a resize and the live
resize spec makes progress, then perhaps that would be a clean resolution
to this use case?

>
> Best,
> -jay
>

From gryf73 at gmail.com  Mon Dec  3 16:32:40 2018
From: gryf73 at gmail.com (Roman Dobosz)
Date: Mon, 3 Dec 2018 17:32:40 +0100
Subject: [nova][ops] Migrating instances with old extra specs info.
In-Reply-To: References: Message-ID: <20181203173240.8de68c03dfc8527fe2cefb56@gmail.com>

On Mon, 3 Dec 2018 10:07:18 -0500
Jay Pipes wrote:

> > Is it a bug? Or maybe it is possible to change the instance flavor
> > extra spec?
>
> Not sure it's a bug, per-se... might just be something you need to

At least unexpected behavior :)

> manually change database records for, though. Sucks, since it's hacky,
> but not sure if it's something we should change in the API contract.

I'd be happy with a nova-manage command for updating extra specs in
instance info, it doesn't need to be an API change :)
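For the record, the aggregate and flavor side are already aligned again
with the usual property commands -- the key and values below are only
illustrative, not our real ones:

  openstack aggregate set --property kind=new-kind agg-compute
  openstack flavor set --property kind=new-kind m1.large

New instances schedule fine after that; it is only the copy of the old
extra specs embedded in instance_extra that we cannot reach without
touching the database.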
--
Cheers,
Roman Dobosz

From gryf73 at gmail.com  Mon Dec  3 16:35:27 2018
From: gryf73 at gmail.com (Roman Dobosz)
Date: Mon, 3 Dec 2018 17:35:27 +0100
Subject: [nova][ops] Migrating instances with old extra specs info.
In-Reply-To: References: Message-ID: <20181203173527.6ad5aa3f6d749ad9f8008be2@gmail.com>

On Mon, 03 Dec 2018 15:34:25 +0000
Sean Mooney wrote:

> If we allowed such a resize and the live resize spec makes progress
> then perhaps that would be a clean resolution to this use case?

Unlike changing values during resize, overriding certain keys in extra
specs could be tricky, and a bit slippery. I'd opt for a nova-manage
command for manipulating existing instance data instead.

--
Cheers,
Roman Dobosz

From mriedemos at gmail.com  Mon Dec  3 16:38:01 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Mon, 3 Dec 2018 10:38:01 -0600
Subject: [nova] When can/should we remove old nova-status upgrade checks?
Message-ID: <41dda844-4466-84b6-6f47-3996d2b955cc@gmail.com>

Questions came up in review [1] about dropping an old "nova-status upgrade check" which relies on using the in-tree placement database models for testing the check. The check in question, "Resource Providers", compares the number of compute node resource providers in the nova_api DB against the number of compute nodes in all cells. When the check was originally written in Ocata [2] it was meant to help ease the upgrade where nova-compute needed to be configured to report compute node resource provider inventory to placement so the scheduler could use placement. It looks for things like >0 compute nodes but 0 resource providers indicating the computes aren't reporting into placement like they should be. In Ocata, if that happened, and there were older compute nodes (from Newton), then the scheduler would fallback to not use placement. That fallback code has been removed. Also in Ocata, nova-compute would fail to start if nova.conf wasn't configured for placement [3] but that has also been removed.
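To be clear, by "configured for placement" I mean the usual [placement] credentials section in nova.conf, i.e. roughly something like the following, with the values obviously being deployment-specific:

  [placement]
  auth_type = password
  auth_url = http://<keystone-host>/identity
  project_name = service
  project_domain_name = Default
  username = placement
  user_domain_name = Default
  password = <secret>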
Now if nova.conf isn't configured for placement, I think we'll just log an exception traceback but not actually fail the service startup, and the node's resources wouldn't be available to the scheduler, so you could get NoValidHost failures during scheduling and need to dig into why a given compute node isn't being used during scheduling. The question is, given this was added in Ocata to ease with the upgrade to require placement, and we're long past that now, is the check still useful? The check still has lots of newton/ocata/pike comments in it, so it's showing its age. However, one could argue it is still useful for base install verification, or for someone doing FFU. If we keep this check, the related tests will need to be re-written to use the placement REST API fixture since the in-tree nova_api db tables will eventually go away because of extracted placement. The bigger question is, what sort of criteria do we have for dropping old checks like this besides when the related code, for which the check was added, is removed? FFU kind of throws a wrench in everything, but at the same time, I believe the prescribed FFU steps are that online data migrations (and upgrade checks) are meant to be run per-release you're fast-forward upgrading through. [1] https://review.openstack.org/#/c/617941/26/nova/tests/unit/cmd/test_status.py [2] https://review.openstack.org/#/c/413250/ [3] https://github.com/openstack/nova/blob/stable/ocata/nova/compute/manager.py#L1139 -- Thanks, Matt From cboylan at sapwetik.org Mon Dec 3 16:38:07 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 03 Dec 2018 08:38:07 -0800 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: References: Message-ID: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> On Mon, Dec 3, 2018, at 7:28 AM, Trinh Nguyen wrote: > Hello, > > Currently, [1] fails tox py27 tests on Zuul check for just updating the log > text. The tests are successful at local dev env. Just wondering there is > any new change at Zuul CI? > > [1] https://review.openstack.org/#/c/619162/ > Reading the exceptions [2] and the test setup code [3] it appears that elasticsearch isn't responding on its http port and is thus treated as having not started. With the info we currently have it is hard to say why. Instead of redirecting exec logs to /dev/null [4] maybe we can capture that data? Also probably worth grabbing the elasticsearch daemon log as well. Without that information it is hard to say why this happened. I am not aware of any changes in the CI system that would cause this, but we do rebuild our test node images daily. [2] http://logs.openstack.org/62/619162/5/check/openstack-tox-py27/9ce318d/job-output.txt.gz#_2018-11-27_05_32_48_854289 [3] https://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/tests/functional/__init__.py#n868 [4] https://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/tests/functional/__init__.py#n851 Clark From dangtrinhnt at gmail.com Mon Dec 3 16:41:57 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 4 Dec 2018 01:41:57 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> Message-ID: Hi Clark, Thanks for the update. I will try doing what you suggested. 
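If I understand the suggestion correctly, the functional test setup should keep the elasticsearch exec output with the job logs instead of throwing it away, roughly like this (only a sketch -- the real change will be in the helper in searchlight/tests/functional/__init__.py, and the paths will differ):

  # instead of redirecting the exec output to /dev/null
  ... > "${LOG_DIR:-/tmp}/elasticsearch-exec.log" 2>&1
  # and also collect the elasticsearch daemon log after the run
  cp /var/log/elasticsearch/*.log "${LOG_DIR:-/tmp}/" || true

That should at least show why elasticsearch never answers on its HTTP port.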
Bests, On Tue, Dec 4, 2018 at 1:39 AM Clark Boylan wrote: > On Mon, Dec 3, 2018, at 7:28 AM, Trinh Nguyen wrote: > > Hello, > > > > Currently, [1] fails tox py27 tests on Zuul check for just updating the > log > > text. The tests are successful at local dev env. Just wondering there is > > any new change at Zuul CI? > > > > [1] https://review.openstack.org/#/c/619162/ > > > > Reading the exceptions [2] and the test setup code [3] it appears that > elasticsearch isn't responding on its http port and is thus treated as > having not started. With the info we currently have it is hard to say why. > Instead of redirecting exec logs to /dev/null [4] maybe we can capture that > data? Also probably worth grabbing the elasticsearch daemon log as well. > > Without that information it is hard to say why this happened. I am not > aware of any changes in the CI system that would cause this, but we do > rebuild our test node images daily. > > [2] > http://logs.openstack.org/62/619162/5/check/openstack-tox-py27/9ce318d/job-output.txt.gz#_2018-11-27_05_32_48_854289 > [3] > https://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/tests/functional/__init__.py#n868 > [4] > https://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/tests/functional/__init__.py#n851 > > Clark > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Dec 3 16:48:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 3 Dec 2018 11:48:34 -0500 Subject: [nova] When can/should we remove old nova-status upgrade checks? In-Reply-To: <41dda844-4466-84b6-6f47-3996d2b955cc@gmail.com> References: <41dda844-4466-84b6-6f47-3996d2b955cc@gmail.com> Message-ID: <57d607fb-ae4a-0bb7-ad4f-a70f8217735e@gmail.com> On 12/03/2018 11:38 AM, Matt Riedemann wrote: > Questions came up in review [1] about dropping an old "nova-status > upgrade check" which relies on using the in-tree placement database > models for testing the check. The check in question, "Resource > Providers", compares the number of compute node resource providers in > the nova_api DB against the number of compute nodes in all cells. When > the check was originally written in Ocata [2] it was meant to help ease > the upgrade where nova-compute needed to be configured to report compute > node resource provider inventory to placement so the scheduler could use > placement. It looks for things like >0 compute nodes but 0 resource > providers indicating the computes aren't reporting into placement like > they should be. In Ocata, if that happened, and there were older compute > nodes (from Newton), then the scheduler would fallback to not use > placement. That fallback code has been removed. Also in Ocata, > nova-compute would fail to start if nova.conf wasn't configured for > placement [3] but that has also been removed. Now if nova.conf isn't > configured for placement, I think we'll just log an exception traceback > but not actually fail the service startup, and the node's resources > wouldn't be available to the scheduler, so you could get NoValidHost > failures during scheduling and need to dig into why a given compute node > isn't being used during scheduling. > > The question is, given this was added in Ocata to ease with the upgrade > to require placement, and we're long past that now, is the check still > useful? The check still has lots of newton/ocata/pike comments in it, so > it's showing its age. 
However, one could argue it is still useful for > base install verification, or for someone doing FFU. If we keep this > check, the related tests will need to be re-written to use the placement > REST API fixture since the in-tree nova_api db tables will eventually go > away because of extracted placement. > > The bigger question is, what sort of criteria do we have for dropping > old checks like this besides when the related code, for which the check > was added, is removed? I'm not sure there is any "standard" criteria other than evaluating each migration in the way you've done above and then removing the code that is past its useful life (due to the code touching now-irrelevant parts of code as you describe above for the placement-related checks). Best, -jay From jaypipes at gmail.com Mon Dec 3 16:56:17 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 3 Dec 2018 11:56:17 -0500 Subject: [nova][ops] Migrating instances with old extra specs info. In-Reply-To: <20181203173240.8de68c03dfc8527fe2cefb56@gmail.com> References: <20181203173240.8de68c03dfc8527fe2cefb56@gmail.com> Message-ID: On 12/03/2018 11:32 AM, Roman Dobosz wrote: > On Mon, 3 Dec 2018 10:07:18 -0500 > Jay Pipes wrote: > >>> Is it a bug? Or maybe it is possible to change the instance flavor >>> extra spec? >> >> Not sure it's a bug, per-se... might just be something you need to > > At least unexpected behavior :) It's not unexpected behaviour at all. The behaviour of Nova when changing a flavor's details has been (for many releases) that the flavor's details are changed but flavor data for already-launched instances does not change. Just like you can delete a flavor but it doesn't affect already-launched instances because we store the details of the original instance. The root of the problem you are experiencing is that flavor extra-specs is a pile of half-standardized random key-value information instead of structured, standardized quantitative and qualitative information (*gasp* that's what resource classes and traits are in placement...) Which is why I say this isn't actually unexpected behaviour or really a bug. >> manually change database records for, though. Sucks, since it's hacky, >> but not sure if it's something we should change in the API contract. > > I'd be happy with nova-manage command for update extra specs in > instance info, it doesn't need to be an API change :) mysql -u$USER -p$PASSWORD -Dnova -e"UPDATE instance_extra SET flavor = REPLACE(flavor, 'old string', 'new string')" There, no need for a nova-manage command. Best, -jay p.s. test the above on a copy of your production DB before you trust it... :) From openstack at fried.cc Mon Dec 3 16:59:46 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 3 Dec 2018 10:59:46 -0600 Subject: [nova] When can/should we remove old nova-status upgrade checks? In-Reply-To: <57d607fb-ae4a-0bb7-ad4f-a70f8217735e@gmail.com> References: <41dda844-4466-84b6-6f47-3996d2b955cc@gmail.com> <57d607fb-ae4a-0bb7-ad4f-a70f8217735e@gmail.com> Message-ID: <9d50c6bd-7e9f-6ef9-7b4b-826c770b2c91@fried.cc> On 12/3/18 10:48, Jay Pipes wrote: > On 12/03/2018 11:38 AM, Matt Riedemann wrote: >> Questions came up in review [1] about dropping an old "nova-status >> upgrade check" which relies on using the in-tree placement database >> models for testing the check. The check in question, "Resource >> Providers", compares the number of compute node resource providers in >> the nova_api DB against the number of compute nodes in all cells. 
When >> the check was originally written in Ocata [2] it was meant to help >> ease the upgrade where nova-compute needed to be configured to report >> compute node resource provider inventory to placement so the scheduler >> could use placement. It looks for things like >0 compute nodes but 0 >> resource providers indicating the computes aren't reporting into >> placement like they should be. In Ocata, if that happened, and there >> were older compute nodes (from Newton), then the scheduler would >> fallback to not use placement. That fallback code has been removed. >> Also in Ocata, nova-compute would fail to start if nova.conf wasn't >> configured for placement [3] but that has also been removed. Now if >> nova.conf isn't configured for placement, I think we'll just log an >> exception traceback but not actually fail the service startup, and the >> node's resources wouldn't be available to the scheduler, so you could >> get NoValidHost failures during scheduling and need to dig into why a >> given compute node isn't being used during scheduling. I say remove the check. Its usefulness is negligible compared to the effort that would be required to maintain it. It certainly isn't worth writing a whole new placement feature to replace the db access. And using existing interfaces would be very heavy in large deployments. (Not that that's a show-stopper for running an upgrade check, but still.) -efried >> The question is, given this was added in Ocata to ease with the >> upgrade to require placement, and we're long past that now, is the >> check still useful? The check still has lots of newton/ocata/pike >> comments in it, so it's showing its age. However, one could argue it >> is still useful for base install verification, or for someone doing >> FFU. If we keep this check, the related tests will need to be >> re-written to use the placement REST API fixture since the in-tree >> nova_api db tables will eventually go away because of extracted >> placement. >> >> The bigger question is, what sort of criteria do we have for dropping >> old checks like this besides when the related code, for which the >> check was added, is removed? > > I'm not sure there is any "standard" criteria other than evaluating each > migration in the way you've done above and then removing the code that > is past its useful life (due to the code touching now-irrelevant parts > of code as you describe above for the placement-related checks). > > Best, > -jay > > From fungi at yuggoth.org Mon Dec 3 17:03:40 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 3 Dec 2018 17:03:40 +0000 Subject: [Searchlight][infra] tox failed tests at zuul check only In-Reply-To: References: Message-ID: <20181203170339.psadnws63wfywtrs@yuggoth.org> On 2018-12-04 00:28:30 +0900 (+0900), Trinh Nguyen wrote: > Currently, [1] fails tox py27 tests on Zuul check for just updating the log > text. The tests are successful at local dev env. Just wondering there is > any new change at Zuul CI? > > [1] https://review.openstack.org/#/c/619162/ I don't know of any recent changes which would result in the indicated test failures. According to the log it looks like it's a functional testsuite and the tests are failing to connect to the search API. I don't see your job collecting any service logs however, so it's unclear whether the API service is failing to start, or spontaneously crashes, or something else is going on. My first guess would be that one of your dependencies has released and brought some sort of regression. 
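(A quick way to narrow that down is to diff the installed-package lists that tox records for a known-good run and a failing run, something along the lines of:

  diff <(curl -s http://logs.openstack.org/<good-build>/tox/py27-5.log) \
       <(curl -s http://logs.openstack.org/<failing-build>/tox/py27-5.log)

with the two build paths filled in from the builds page.)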
According to http://zuul.openstack.org/builds?job_name=openstack-tox-py27&project=openstack%2Fsearchlight&branch=master the last time that job passed for your repo was 2018-11-07 with the installed package versions listed in the http://logs.openstack.org/56/616056/1/gate/openstack-tox-py27/e413441/tox/py27-5.log file, and the first failure I see matching the errors in yours ran with the versions in http://logs.openstack.org/62/619162/1/check/openstack-tox-py27/809a281/tox/py27-5.log on 2018-11-21 (it wasn't run for the intervening 2 weeks so we have a fairly large window of potential external breakage there). A diff of those suggests the following dependencies updated between them:

coverage: 4.5.1 -> 4.5.2
cryptography: 2.3.1 -> 2.4.1
httplib2: 0.11.3 -> 0.12.0
oslo.cache: 1.31.1 -> 1.31.0 (downgraded)
oslo.service: 1.32.0 -> 1.33.0
python-neutronclient: 6.10.0 -> 6.11.0
requests: 2.20.0 -> 2.20.1
WebOb: 1.8.3 -> 1.8.4

Make sure with your local attempts at reproduction you're running with these newer versions of dependencies, for example by clearing any existing tox envs with the -r flag or `git clean -dfx` so that stale versions aren't used instead.
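Concretely, something like this in your searchlight checkout should rule out stale local state:

  git clean -dfx
  tox -r -e py27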
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From mrhillsman at gmail.com  Mon Dec  3 17:22:26 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 3 Dec 2018 11:22:26 -0600
Subject: [all][uc] OpenStack UC Meeting @ 1900 UTC
Message-ID:

Hi everyone,

Just a reminder that the UC meeting will be in #openstack-uc in a little more than an hour and a half from now. Please feel empowered to add to the agenda here - https://etherpad.openstack.org/p/uc - and we hope to see you there!

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mrhillsman at gmail.com  Mon Dec  3 17:40:25 2018
From: mrhillsman at gmail.com (Melvin Hillsman)
Date: Mon, 3 Dec 2018 11:40:25 -0600
Subject: Fwd: [Action Needed] Update your Zoom clients
In-Reply-To: References: Message-ID:

Hi everyone,

An issue with Zoom was mentioned to me last week by some community members who know I use it; I keep my client updated, but maybe you do not. I got another email about it this morning, so, that being said, if you did not already know:

https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-15715

Upgrade your client: https://support.zoom.us/hc/en-us/articles/201362233-Where-Do-I-Download-The-Latest-Version-

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rico.lin.guanyu at gmail.com  Mon Dec  3 17:42:46 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Tue, 4 Dec 2018 01:42:46 +0800
Subject: [all]Forum summary: Expose SIGs and WGs
Message-ID:

Dear all,

Here is the summary for the forum session `Expose SIGs and WGs` (etherpad [1]). This concept is still under development, so this is an open discussion and we need more feedback. Here are the general agreements on actions, plus ideas that we think are worth exploring further.

*Set up guidelines for SIGs/WGs/Teams for interaction specific to this around tracking cross-project work*

We tend to agree that we lack a guideline or sample workflow for SIGs/WGs. Since each SIG/WG is formed around a different interest, we won't try to unify tools (unless that's what everyone agrees on) or rules for all groups. What we can do is give more help to groups and provide a clear way for them to set up cross-project work if they want to. Also, we can provide information on how to reach users, ops, and developers and bridge them together. We could even write a more general guideline or sample based on how other SIGs/WGs run their own workflows. For example, the Self-healing SIG collects user stories and feedback and uses them to produce better documents/guidelines for other users, and the Public Cloud WG collects issues from public cloud providers and brings those issues to the project teams. Those IMO are great examples that we should write down somewhere for other SIGs/WGs to consider. As a further idea, we can even discuss whether there is common interest in having a SIG to help SIGs.

*A workflow for tracking:*

This follows from the item above. If we're going to set up a workflow, what can we add to make people's lives easier? Nobody in the room thought this was a bad idea, so in the long term it is probably worth our time to provide clearer information on exactly what workflow we suggest everyone use.

*Discuss SIG spec repo*:

The discussion here is how we can monitor the health of SIGs/WGs and track tasks. When talking about tasks, we are not just talking about bugs, but also features that are considered essential for SIGs/WGs. We need a place to write them down in a trackable way (from a user story all the way to a patch for the implementation).

*Ask the foundation about having project updates for SIGs/WGs*:

One action we can start right away is to let SIGs/WGs present a project update (or even a shared session that gives each group 5 minutes to present). This should help the groups get more attention, and it lets them send out messages such as which features or bug fixes they most need from project teams, or which important tasks are being planned or worked on. Fortunately, we have Melvin Hillsman (UC) volunteering for this task.

We also have a real story (Luzi's story) to help people better understand what the current workflow can look like for someone who tries to help.

One thing we also wish to do is make the message clear: we think most of the tools are already there, so we shouldn't need to ask project teams to make any huge changes. Still, we found there are definitely some improvements we can make to better bridge users, ops, and developers.

You might find that some of the information here doesn't give you a clear answer; that's because this is still under open discussion. I assume we will keep finding actions from these discussions that we can take right away, and we will try to avoid having to run the exact same session with the same arguments over and over again. So please give your feedback or ideas, or give us your help, if you also care about this.

[1] https://etherpad.openstack.org/p/expose-sigs-and-wgs

--
May The Force of OpenStack Be With You,
*Rico Lin* irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sean.mcginnis at gmx.com  Mon Dec  3 17:54:41 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Mon, 3 Dec 2018 11:54:41 -0600
Subject: [ptl][release] Proposed changes for cycle-with-intermediary services releases
Message-ID: <20181203175441.GA19885@sm-workstation>

This is a continuation of the effort the release team started after the Denver PTG to simplify and better align the release models in use. We noticed a handful of projects that had, intentionally or unintentionally, only made one release during the Rocky cycle.
This result wasn't quite was intended, so we had discussed how we might encourage things so consumers of the community output get what they are expecting during the cycle. The intermediary model should be used to be able to deliver multiple releases at any point throughout the cycle. It should not be used as a way to avoid doing any releases until the very end of the cycle. To encourage (force?) this, we would require at least two releases of a non-library cycle-with-intermediary project during the cycle. If a release is not done by milestone 2, these projects would be switched to the new cycle-with-rc. This may actually be preferable for some with the reduced administrative overhead that should provide. Of course, teams are encouraged to decide and follow their preferred release model for whatever makes the most sense for their project. If you are the PTL or release liaison for one of these projects, please take some time to consider if the current cycle-with-intermediary release model is right for you, or if you should switch over to the newer cycle-with-rc model. The release team will review the current state of things after the second milestone at the beginning of January to try to help any projects that may benefit from choosing a different declared release model. Thanks! Sean McGinnis and the Release Team From jimmy at openstack.org Mon Dec 3 18:22:40 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 03 Dec 2018 12:22:40 -0600 Subject: OpenStack Summit Berlin Videos Now Available Message-ID: <5C057470.8050405@openstack.org> Thank you again for a wonderful Summit in Berlin. I'm pleased to announce the Summit Videos are now up on the openstack.org website: https://www.openstack.org/videos/summits/berlin-2018 If there was a session you missed, now is your chance to catch up! These videos will also be available in the Summit App as well as on the web under the Berlin Summit Schedule (https://www.openstack.org/summit/berlin-2018/summit-schedule/). If you have any questions or concerns about the videos, please write speakersupport at openstack.org. Cheers, Jimmy From doug at doughellmann.com Mon Dec 3 18:34:20 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 03 Dec 2018 13:34:20 -0500 Subject: [tc] agenda for Technical Committee Meeting 6 Dec 2018 @ 1400 UTC Message-ID: TC Members, Our next meeting will be this Thursday, 6 Dec at 1400 UTC in #openstack-tc. This email contains the agenda for the meeting, based on the content of the wiki [0]. If you will not be able to attend, please include your name in the "Apologies for Absence" section of the wiki page [0]. [0] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee * Follow up on past action items ** dhellmann complete liaison assignments using the random generator I have updated the team liaisons in the wiki [1]. Please review the list of projects to which you are assigned. [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams ** tc-members review the chair duties document The draft document from [2] has been merged and is now available in the governance repo as CHAIR.rst [3]. Please come prepared to discuss any remaining questions about the list of chair duties. [2] https://etherpad.openstack.org/p/tc-chair-responsibilities [3] http://git.openstack.org/cgit/openstack/governance/tree/CHAIR.rst * active initiatives ** keeping up with python 3 releases We are ready to approve Zane's resolution for a process for tracking python 3 versions [4]. 
There is one wording update [5] that we should prepare for approval as well. The next step will be to approve Sean's patch describing the runtimes supported for Stein [6]. Please come prepared to discuss any issues with those patches so we can resolve them and move forward. [4] https://review.openstack.org/613145 [5] https://review.openstack.org/#/c/621461/1 [6] https://review.openstack.org/#/c/611080/ * follow-up from Berlin Forum ** Vision for OpenStack clouds Zane has summarized the forum session on the mailing list [7], including listing several potential updates to the vision based on our discussion there. Please come prepared to discuss next steps for making those changes. [7] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000431.html ** Train cycle goals I posted my summary of the forum session [8]. Each of the candidate goals have work to be done before they could be selected, so we will need to work with the sponsors and champions to see where enough progress is made to let us choose from among the proposals. Lance has agreed to lead the selection process for the Train goals, and will be looking for someone to pair up with on that. [8] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000055.html ** Other TC outcomes from Forum We had several other forum sessions, and should make sure we have a good list of any promised actions that came from those discussions. Please come prepared to discuss any sessions you moderated -- having summaries on the mailing list before the meeting would be very helpful. -- Doug From mdulko at redhat.com Mon Dec 3 18:49:53 2018 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Mon, 03 Dec 2018 19:49:53 +0100 Subject: [openstack-dev] [kuryr] can we start kuryr libnetwork in container inside the nova VM. In-Reply-To: References: <7b585278013291fda1d55b5d74965b26d317e637.camel@redhat.com> Message-ID: <67b4d15b3c5b08d364610112e0b2c709f86febbf.camel@redhat.com> On Sun, 2018-12-02 at 09:33 +0530, Vikrant Aggarwal wrote: > Thanks Michal. Yes, my scenario is same which you mentioned. But I > don't want to use COE atm. So. the OVS and neutron-agent running > inside the VM will be communicating with compute node neutron agent? I've did some more research and seems like nested deployments actually got implemented in kuryr-libnetwork around 3 years ago. I don't know if that still works though. Also there seem to be no documentation, so unfortunately you'll need to figure it out by reading the code. See blueprint [1] for a list of related patches. Remember that this requires the cloud to support subports and trunk ports in Neutron. VMs get the trunk ports attached and the containers get the subports. This doesn't require neutron-agent running on the VMs. [1] https://blueprints.launchpad.net/kuryr/+spec/containers-in-instances > Thanks & Regards, > Vikrant Aggarwal > > > On Fri, Nov 30, 2018 at 9:31 PM Michał Dulko wrote: > > On Fri, 2018-11-30 at 09:38 +0530, Vikrant Aggarwal wrote: > > > Hello Team, > > > > > > I have seen the steps of starting the kuryr libnetwork container on > > > compute node. But If I need to run the same container inside the VM > > > running on compute node, is't possible to do that? > > > > > > I am not sure how can I map the /var/run/openvswitch inside the > > > nested VM because this is present on compute node. > > > > I think that if you want to run Neutron-networked Docker containers on > > an OpenStack VM, you'll need OpenvSwitch and neutron-agent installed on > > that VM as well. 
> > > > A better-suited approach would be to run K8s on OpenStack and use > > kuryr-kubernetes instead. That way Neutron subports are used to network > > pods. We have such a K8s-on-VM use case described in the docs [1]. > > > > [1] https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html > > > > > https://docs.openstack.org/kuryr-libnetwork/latest/readme.html > > > > > > Thanks & Regards, > > > Vikrant Aggarwal > > From jude at judecross.com Mon Dec 3 19:15:46 2018 From: jude at judecross.com (Jude Cross) Date: Mon, 3 Dec 2018 11:15:46 -0800 Subject: [senlin] New meeting time for odd weeks Message-ID: +1 -------------------------------------------------------------------------------------------------------------------- As discussed in our last Senlin meeting [1], I would like to propose holding the weekly meeting during odd weeks at a time that is more convenient for US and Europe. The proposal is to have the meeting at 19:00 UTC (11am PST) for odd weeks, while the meeting time for even weeks stays at the current time of 5:30 UTC. If accepted, the meeting times for upcoming weeks would be as follows: - Thursday, December 6, 2018 at 19:00:00 UTC - Friday, December 14, 2018 at 05:30:00 UTC - Thursday, December 20, 2018 at 19:00:00 UTC ... Please reply with any feedback. Regards, Duc (dtruong) [1] http://eavesdrop.openstack.org/meetings/senlin/2018/senlin.2018-11-30-05.30.log.html#l-44 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon Dec 3 19:40:01 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 3 Dec 2018 19:40:01 +0000 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: <5C057470.8050405@openstack.org> References: <5C057470.8050405@openstack.org> Message-ID: <1d4f5bec8f9143ba880d78df8566a757@AUSX13MPS308.AMER.DELL.COM> Thank you very much Jimmy for making it happen. -----Original Message----- From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, December 3, 2018 12:23 PM To: openstack-discuss at lists.openstack.org; community at lists.openstack.org Subject: OpenStack Summit Berlin Videos Now Available [EXTERNAL EMAIL] Thank you again for a wonderful Summit in Berlin. I'm pleased to announce the Summit Videos are now up on the openstack.org website: https://www.openstack.org/videos/summits/berlin-2018 If there was a session you missed, now is your chance to catch up! These videos will also be available in the Summit App as well as on the web under the Berlin Summit Schedule (https://www.openstack.org/summit/berlin-2018/summit-schedule/). If you have any questions or concerns about the videos, please write speakersupport at openstack.org. Cheers, Jimmy From openstack at nemebean.com Mon Dec 3 19:40:04 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 3 Dec 2018 13:40:04 -0600 Subject: [oslo] Berlin Summit Recap Message-ID: A bit late because I was on PTO the week after and it turns out I had a lot more to say than I realized. :-) There wasn't a ton of Oslo stuff going on, but there were a few interesting things that I discuss in [1]. I also ended up writing my thoughts about some non-Oslo-specific sessions I attended, including one that I split off in its own post because it got a bit long and deserved standalone attention[2]. 
1: http://blog.nemebean.com/content/berlin-summit-recap 2: http://blog.nemebean.com/content/upstream-openstack-performance-and-release-shaming -Ben From doug at doughellmann.com Mon Dec 3 20:00:22 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 03 Dec 2018 15:00:22 -0500 Subject: [ops][docs] The Contributor Guide: Ops Feedback Session Summary In-Reply-To: References: <5938a4e0-3eb1-5448-3af5-f56bf3452e9c@suse.com> Message-ID: Doug Hellmann writes: > Andreas Jaeger writes: > >> On 11/27/18 4:38 PM, Kendall Nelson wrote: >>> Hello! >>> >>> >>> For the long version, feel free to look over the etherpad[1]. >>> >>> >>> It should be noted that this session was in relation to the operator >>> section of the contributor guide, not the operations guide, though they >>> should be closely related and link to one another. >>> >>> >>> Basically the changes requested can be boiled down to two types of >>> changes: cosmetic and missing content. >>> >>> >>> Cosmetic Changes: >>> >>> * >>> >>> Timestamps so people can know when the last change was made to a >>> given doc (dhellmann volunteered to help here)[2] >>> >>> * >>> >>> Floating bug report button and some mechanism for auto populating >>> which page a bug is on so that the reader doesn’t have to know what >>> rst file in what repo has the issue to file a bug[3] >> >> This is something probably for openstackdocstheme to have it >> everywhere. > > Yes, that was the idea. We already have some code that pulls the > timestamp from git in the governance repo, so I was going to move that > over to the theme for better reuse. > > -- > Doug > The patch to add this to the theme is in https://review.openstack.org/#/c/621690/ -- Doug From mrhillsman at gmail.com Mon Dec 3 20:18:11 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 3 Dec 2018 14:18:11 -0600 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: <5C057470.8050405@openstack.org> References: <5C057470.8050405@openstack.org> Message-ID: Thanks for the update Jimmy! On Mon, Dec 3, 2018 at 12:23 PM Jimmy McArthur wrote: > Thank you again for a wonderful Summit in Berlin. I'm pleased to > announce the Summit Videos are now up on the openstack.org website: > https://www.openstack.org/videos/summits/berlin-2018 If there was a > session you missed, now is your chance to catch up! These videos will > also be available in the Summit App as well as on the web under the > Berlin Summit Schedule > (https://www.openstack.org/summit/berlin-2018/summit-schedule/). > > If you have any questions or concerns about the videos, please write > speakersupport at openstack.org. > > Cheers, > Jimmy > > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Dec 3 20:32:15 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 3 Dec 2018 14:32:15 -0600 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: <5C057470.8050405@openstack.org> References: <5C057470.8050405@openstack.org> Message-ID: <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> On 12/3/2018 12:22 PM, Jimmy McArthur wrote: > > If you have any questions or concerns about the videos, please write > speakersupport at openstack.org. So uh, I don't really want to be that guy, but I'm sure others have noticed the deal with the slides being different in the recordings from years past, in that you can't view them (hopefully people are uploading their slides). 
I'm mostly curious if there was a reason for that? Budget cuts? Technical issues? -- Thanks, Matt From mike.carden at gmail.com Mon Dec 3 20:43:04 2018 From: mike.carden at gmail.com (Mike Carden) Date: Tue, 4 Dec 2018 07:43:04 +1100 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: > > > Presuming you are deploying Rocky or Queens, > Yep, it's Queens. > > It goes in the nova.conf file under the [placement] section: > > randomize_allocation_candidates = true > In triple-o land it seems like the config may need to be somewhere like nova-scheduler.yaml and laid down via a re-deploy. Or something. The nova_scheduler runs in a container on a 'controller' host. -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Dec 3 20:46:31 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 03 Dec 2018 14:46:31 -0600 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> Message-ID: <5C059627.9040304@openstack.org> In Berlin, in rooms where we had a full view of the screen, we didn't do a second screen for slides only. As mentioned, presenters can upload their slides to help with that. For places like the Marketplace demo theater where we had a smaller screen format, we added the view of both presenter and slide: https://www.openstack.org/videos/berlin-2018/how-to-avoid-vendor-lock-in-in-a-multi-cloud-world-with-zenko We're looking at ways to improve both formats in Denver, so I'd say stand by. If there is a presentation that you feel is too difficult to follow, we can reach out to those presenters and encourage them again to upload their slides. > Matt Riedemann > December 3, 2018 at 2:32 PM > > > So uh, I don't really want to be that guy, but I'm sure others have > noticed the deal with the slides being different in the recordings > from years past, in that you can't view them (hopefully people are > uploading their slides). I'm mostly curious if there was a reason for > that? Budget cuts? Technical issues? > > Jimmy McArthur > December 3, 2018 at 12:22 PM > Thank you again for a wonderful Summit in Berlin. I'm pleased to > announce the Summit Videos are now up on the openstack.org website: > https://www.openstack.org/videos/summits/berlin-2018 If there was a > session you missed, now is your chance to catch up! These videos will > also be available in the Summit App as well as on the web under the > Berlin Summit Schedule > (https://www.openstack.org/summit/berlin-2018/summit-schedule/). > > If you have any questions or concerns about the videos, please write > speakersupport at openstack.org. > > Cheers, > Jimmy > > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From gryf73 at gmail.com Mon Dec 3 20:58:23 2018 From: gryf73 at gmail.com (Roman Dobosz) Date: Mon, 3 Dec 2018 21:58:23 +0100 Subject: [nova][ops] Migrating instances with old extra specs info. In-Reply-To: References: <20181203173240.8de68c03dfc8527fe2cefb56@gmail.com> Message-ID: <20181203215823.e41d1d0b0e33f878794a8beb@gmail.com> On Mon, 3 Dec 2018 11:56:17 -0500 Jay Pipes wrote: > >>> Is it a bug? 
Or maybe it is possible to change the instance flavor > >>> extra spec? > >> Not sure it's a bug, per-se... might just be something you need to > > At least unexpected behavior :) > It's not unexpected behaviour at all. The behaviour of Nova when > changing a flavor's details has been (for many releases) that the > flavor's details are changed but flavor data for already-launched > instances does not change. Just like you can delete a flavor but it > doesn't affect already-launched instances because we store the details > of the original instance. For resource related things - sure. As for more ephemeral things like metadata - its arguable. Although, I tend to agree, that sometimes metadata could potentially hold sensitive information, which shouldn't be changed in that case. > The root of the problem you are experiencing is that flavor extra-specs > is a pile of half-standardized random key-value information instead of > structured, standardized quantitative and qualitative information > (*gasp* that's what resource classes and traits are in placement...) Yeah, I know. But we have that issue in Newton, where traits are nonexistent yet :) > Which is why I say this isn't actually unexpected behaviour or really a bug. > > >> manually change database records for, though. Sucks, since it's hacky, > >> but not sure if it's something we should change in the API contract. > > I'd be happy with nova-manage command for update extra specs in > > instance info, it doesn't need to be an API change :) > mysql -u$USER -p$PASSWORD -Dnova -e"UPDATE instance_extra SET flavor = > REPLACE(flavor, 'old string', 'new string')" > > There, no need for a nova-manage command. Well, thank you, kind sir! > p.s. test the above on a copy of your production DB before you trust > it... :) You bet :) -- Cheers, Roman Dobosz From mriedemos at gmail.com Mon Dec 3 21:01:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 3 Dec 2018 15:01:58 -0600 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: <5C059627.9040304@openstack.org> References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> Message-ID: On 12/3/2018 2:46 PM, Jimmy McArthur wrote: > We're looking at ways to improve both formats in Denver, so I'd say > stand by.  If there is a presentation that you feel is too difficult to > follow, we can reach out to those presenters and encourage them again to > upload their slides. Here is a good example of what I'm talking about: https://youtu.be/J9K-x0yVZ4U?t=425 There is full view of the slides, but my eyes can't read most of that text. Compare that to YVR: https://youtu.be/U5V_2CUj-6A?t=576 And it's night and day. -- Thanks, Matt From jimmy at openstack.org Mon Dec 3 21:05:26 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 03 Dec 2018 15:05:26 -0600 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> Message-ID: <5C059A96.8030504@openstack.org> Yeah, that makes sense. We had multiple complaints coming out of YVR that the size of the speaker was too small. So we're trying to work out a happy medium that can still work within our budget. 
Appreciate the feedback :) > Matt Riedemann > December 3, 2018 at 3:01 PM > > > Here is a good example of what I'm talking about: > > https://youtu.be/J9K-x0yVZ4U?t=425 > > There is full view of the slides, but my eyes can't read most of that > text. Compare that to YVR: > > https://youtu.be/U5V_2CUj-6A?t=576 > > And it's night and day. > > Jimmy McArthur > December 3, 2018 at 2:46 PM > In Berlin, in rooms where we had a full view of the screen, we didn't > do a second screen for slides only. As mentioned, presenters can > upload their slides to help with that. > > For places like the Marketplace demo theater where we had a smaller > screen format, we added the view of both presenter and slide: > https://www.openstack.org/videos/berlin-2018/how-to-avoid-vendor-lock-in-in-a-multi-cloud-world-with-zenko > > We're looking at ways to improve both formats in Denver, so I'd say > stand by. If there is a presentation that you feel is too difficult > to follow, we can reach out to those presenters and encourage them > again to upload their slides. > > > > Matt Riedemann > December 3, 2018 at 2:32 PM > > > So uh, I don't really want to be that guy, but I'm sure others have > noticed the deal with the slides being different in the recordings > from years past, in that you can't view them (hopefully people are > uploading their slides). I'm mostly curious if there was a reason for > that? Budget cuts? Technical issues? > > Jimmy McArthur > December 3, 2018 at 12:22 PM > Thank you again for a wonderful Summit in Berlin. I'm pleased to > announce the Summit Videos are now up on the openstack.org website: > https://www.openstack.org/videos/summits/berlin-2018 If there was a > session you missed, now is your chance to catch up! These videos will > also be available in the Summit App as well as on the web under the > Berlin Summit Schedule > (https://www.openstack.org/summit/berlin-2018/summit-schedule/). > > If you have any questions or concerns about the videos, please write > speakersupport at openstack.org. > > Cheers, > Jimmy > > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Mon Dec 3 21:16:11 2018 From: openstack at medberry.net (David Medberry) Date: Mon, 3 Dec 2018 14:16:11 -0700 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: <5C059A96.8030504@openstack.org> References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> <5C059A96.8030504@openstack.org> Message-ID: Hmmm, if you can HEAR the speaker and SEE the slides (as in YVR), seems like that should be sufficient. There's usually a photo of the speaker on their bio page if you need more "presence". -dave On Mon, Dec 3, 2018 at 2:05 PM Jimmy McArthur wrote: > Yeah, that makes sense. We had multiple complaints coming out of YVR that > the size of the speaker was too small. So we're trying to work out a happy > medium that can still work within our budget. > > Appreciate the feedback :) > > Matt Riedemann > December 3, 2018 at 3:01 PM > > > Here is a good example of what I'm talking about: > > https://youtu.be/J9K-x0yVZ4U?t=425 > > There is full view of the slides, but my eyes can't read most of that > text. Compare that to YVR: > > https://youtu.be/U5V_2CUj-6A?t=576 > > And it's night and day. 
> > Jimmy McArthur > December 3, 2018 at 2:46 PM > In Berlin, in rooms where we had a full view of the screen, we didn't do a > second screen for slides only. As mentioned, presenters can upload their > slides to help with that. > > For places like the Marketplace demo theater where we had a smaller screen > format, we added the view of both presenter and slide: > https://www.openstack.org/videos/berlin-2018/how-to-avoid-vendor-lock-in-in-a-multi-cloud-world-with-zenko > > We're looking at ways to improve both formats in Denver, so I'd say stand > by. If there is a presentation that you feel is too difficult to follow, > we can reach out to those presenters and encourage them again to upload > their slides. > > > > Matt Riedemann > December 3, 2018 at 2:32 PM > > > So uh, I don't really want to be that guy, but I'm sure others have > noticed the deal with the slides being different in the recordings from > years past, in that you can't view them (hopefully people are uploading > their slides). I'm mostly curious if there was a reason for that? Budget > cuts? Technical issues? > > Jimmy McArthur > December 3, 2018 at 12:22 PM > Thank you again for a wonderful Summit in Berlin. I'm pleased to announce > the Summit Videos are now up on the openstack.org website: > https://www.openstack.org/videos/summits/berlin-2018 If there was a > session you missed, now is your chance to catch up! These videos will also > be available in the Summit App as well as on the web under the Berlin > Summit Schedule ( > https://www.openstack.org/summit/berlin-2018/summit-schedule/). > > If you have any questions or concerns about the videos, please write > speakersupport at openstack.org. > > Cheers, > Jimmy > > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Mon Dec 3 21:18:00 2018 From: openstack at medberry.net (David Medberry) Date: Mon, 3 Dec 2018 14:18:00 -0700 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> <5C059A96.8030504@openstack.org> Message-ID: and so sorry for the TOFU reply all. I'm beating myself up over it. From corvus at inaugust.com Mon Dec 3 21:30:32 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 03 Dec 2018 13:30:32 -0800 Subject: [infra] A change to Zuul's queuing behavior Message-ID: <87bm62z4av.fsf@meyer.lemoncheese.net> Hi, We recently made a change to how Zuul and Nodepool prioritize node requests. Cloud resources are the major constraint in how long it takes Zuul to run test jobs on proposed changes. Because we're using more resources than ever before (but not necessarily because we're doing more work -- Clark has been helping to identify inefficiencies in other mailing list threads), the amount of time it takes to receive results on a change has been increasing. Since some larger projects consume the bulk of cloud resources in our system, this can be especially frustrating for smaller projects. To be sure, it impacts everyone, but while larger projects receive a continuous stream of results (even if delayed) smaller projects may wait hours before seeing results on a single change. 
In order to help all projects maintain a minimal velocity, we've begun dynamically prioritizing node requests based on the number of changes a project has in a given pipeline. This means that the first change for every project in the check pipeline has the same priority. The same is true for the second change of each project in the pipeline. The result is that if a project has 50 changes in check, and another project has a single change in check, the second project won't have to wait for all 50 changes ahead before it gets nodes allocated. As conditions change (requests are fulfilled, changes are added and removed) the priorities of any unfulfilled requests are adjusted accordingly. In the gate pipeline, the grouping is by shared change queue. But the gate pipeline still has a higher overall precedence than check. We hope that this will make for a significant improvement in the experience for smaller projects without causing undue hardship for larger ones. We will be closely observing the new behavior and make any necessary tuning adjustments over the next few weeks. Please let us know if you see any adverse impacts, but don't be surprised if you notice node requests being filled "out of order". -Jim From chris.friesen at windriver.com Mon Dec 3 21:33:08 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 3 Dec 2018 15:33:08 -0600 Subject: [infra] A change to Zuul's queuing behavior In-Reply-To: <87bm62z4av.fsf@meyer.lemoncheese.net> References: <87bm62z4av.fsf@meyer.lemoncheese.net> Message-ID: On 12/3/2018 3:30 PM, James E. Blair wrote: > In order to help all projects maintain a minimal velocity, we've begun > dynamically prioritizing node requests based on the number of changes a > project has in a given pipeline. This sounds great, thanks. Chris From miguel at mlavalle.com Mon Dec 3 21:38:21 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 3 Dec 2018 15:38:21 -0600 Subject: [openstack-dev] [Neutron] Propose Nate Johnston for Neutron core Message-ID: Hi Stackers, I want to nominate Nate Johnston (irc:njohnston) as a member of the Neutron core team. Nate started contributing to Neutron back in the Liberty cycle. One of the highlight contributions of that early period is his collaboration with others to implement DSCP QoS rules ( https://review.openstack.org/#/c/251738/). After a hiatus of a few cycles, we were lucky to have Nate come back to the community during the Rocky cycle. Since then, he has been a driving force in the adoption in Neutron of Oslo Versioned Objects, the "Run under Python 3 by default" community wide initiative and the optimization of ports creation in bulk to better support containerized workloads. He is a man with a wide range of interests, who is not afraid of expressing his opinions in any of them. The quality and number of his code reviews during the Stein cycle is on par with the leading members of the core team: http://stackalytics.com/?module=neutron-group. I especially admire his ability to forcefully handle disagreement in a friendly and easy going manner. On top of all that, he graciously endured me as his mentor over the past few months. For all these reasons, I think he is ready to join the team and we will be very lucky to have him as a fully voting core. I will keep this nomination open for a week as customary. Thank you Miguel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From duc.openstack at gmail.com Mon Dec 3 21:53:45 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 3 Dec 2018 13:53:45 -0800 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> <5C059A96.8030504@openstack.org> Message-ID: Quick question, how do we upload the slides for our presentation? From miguel at mlavalle.com Mon Dec 3 22:14:02 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 3 Dec 2018 16:14:02 -0600 Subject: [openstack-dev] [Neutron] Propose Hongbin Lu for Neutron core Message-ID: Hi Stackers, I want to nominate Hongbin Lu (irc: hongbin) as a member of the Neutron core team. Hongbin started contributing to the OpenStack community in the Liberty cycle. Over time, he made great contributions in helping the community to better support containers by being core team member and / or PTL in projects such as Zun and Magnum. An then, fortune played in our favor and Hongbin joined the Neutron team in the Queens cycle. Since then, he has made great contributions such as filters validation in the ReST API, PF status propagation to to VFs (ports) in SR-IOV environments and leading the forking of RYU into the os-ken OpenStack project, which provides key foundational functionality for openflow. He is not a man who wastes words, but when he speaks up, his opinions are full of insight. This is reflected in the quality of his code reviews, which in number are on par with the leading members of the core team: http://stackalytics.com/?module=neutron-group. Even though Hongbin leaves in Toronto, he speaks Mandarin Chinese and was born and raised in China. This is a big asset in helping the Neutron team to incorporate use cases from that part of the world. Hongbin spent the past few months being mentored by Slawek Kaplonski, who has reported that Hongbin is ready for the challenge of being a core team member. I (and other core team members) concur. I will keep this nomination open for a week as customary. Thank you Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Dec 3 22:16:29 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 03 Dec 2018 16:16:29 -0600 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> <5C059A96.8030504@openstack.org> Message-ID: <5C05AB3D.8080908@openstack.org> Hi Duc, Please send a link to your slide deck (or the deck itself) to speakersupport at openstack.org, and I'll be happy to get it up for you. Thank you, Jimmy > Duc Truong > December 3, 2018 at 3:53 PM > Quick question, how do we upload the slides for our presentation? > David Medberry > December 3, 2018 at 3:18 PM > and so sorry for the TOFU reply all. I'm beating myself up over it. > David Medberry > December 3, 2018 at 3:16 PM > Hmmm, if you can HEAR the speaker and SEE the slides (as in YVR), > seems like that should be sufficient. There's usually a photo of the > speaker on their bio page if you need more "presence". > > -dave > > Jimmy McArthur > December 3, 2018 at 3:05 PM > Yeah, that makes sense. We had multiple complaints coming out of YVR > that the size of the speaker was too small. So we're trying to work > out a happy medium that can still work within our budget. 
> > Appreciate the feedback :) > > > Matt Riedemann > December 3, 2018 at 3:01 PM > > > Here is a good example of what I'm talking about: > > https://youtu.be/J9K-x0yVZ4U?t=425 > > There is full view of the slides, but my eyes can't read most of that > text. Compare that to YVR: > > https://youtu.be/U5V_2CUj-6A?t=576 > > And it's night and day. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Mon Dec 3 22:24:48 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 3 Dec 2018 23:24:48 +0100 Subject: [openstack-dev] [magnum] kubernetes images for magnum rocky Message-ID: Hello all, Following the vulnerability [0], with magnum rocky and the kubernetes driver on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To upgrade the apiserver in existing clusters, on the master node(s) you can run: sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver You can upgrade the other k8s components with similar commands. I'll share instructions for magnum queens tomorrow morning CET time. Cheers, Spyros [0] https://github.com/kubernetes/kubernetes/issues/71411 [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From strigazi at gmail.com Mon Dec 3 22:24:48 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 3 Dec 2018 23:24:48 +0100 Subject: [Openstack-operators] [openstack-dev][magnum] kubernetes images for magnum rocky Message-ID: Hello all, Following the vulnerability [0], with magnum rocky and the kubernetes driver on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To upgrade the apiserver in existing clusters, on the master node(s) you can run: sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver You can upgrade the other k8s components with similar commands. I'll share instructions for magnum queens tomorrow morning CET time. Cheers, Spyros [0] https://github.com/kubernetes/kubernetes/issues/71411 [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From skaplons at redhat.com Mon Dec 3 22:41:33 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 3 Dec 2018 23:41:33 +0100 Subject: [openstack-dev] [Neutron] Propose Hongbin Lu for Neutron core In-Reply-To: References: Message-ID: <493CB94D-19C3-4B61-8577-6BD98006B7F5@redhat.com> Definitely big +1 from me! — Slawek Kaplonski Senior software engineer Red Hat > Wiadomość napisana przez Miguel Lavalle w dniu 03.12.2018, o godz. 
23:14: > > Hi Stackers, > > I want to nominate Hongbin Lu (irc: hongbin) as a member of the Neutron core team. Hongbin started contributing to the OpenStack community in the Liberty cycle. Over time, he made great contributions in helping the community to better support containers by being core team member and / or PTL in projects such as Zun and Magnum. An then, fortune played in our favor and Hongbin joined the Neutron team in the Queens cycle. Since then, he has made great contributions such as filters validation in the ReST API, PF status propagation to to VFs (ports) in SR-IOV environments and leading the forking of RYU into the os-ken OpenStack project, which provides key foundational functionality for openflow. He is not a man who wastes words, but when he speaks up, his opinions are full of insight. This is reflected in the quality of his code reviews, which in number are on par with the leading members of the core team: http://stackalytics.com/?module=neutron-group. Even though Hongbin leaves in Toronto, he speaks Mandarin Chinese and was born and raised in China. This is a big asset in helping the Neutron team to incorporate use cases from that part of the world. > > Hongbin spent the past few months being mentored by Slawek Kaplonski, who has reported that Hongbin is ready for the challenge of being a core team member. I (and other core team members) concur. > > I will keep this nomination open for a week as customary. > > Thank you > > Miguel From skaplons at redhat.com Mon Dec 3 22:43:24 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 3 Dec 2018 23:43:24 +0100 Subject: [openstack-dev] [Neutron] Propose Nate Johnston for Neutron core In-Reply-To: References: Message-ID: Big +1 from me for Nate also :) — Slawek Kaplonski Senior software engineer Red Hat > Wiadomość napisana przez Miguel Lavalle w dniu 03.12.2018, o godz. 22:38: > > Hi Stackers, > > I want to nominate Nate Johnston (irc:njohnston) as a member of the Neutron core team. Nate started contributing to Neutron back in the Liberty cycle. One of the highlight contributions of that early period is his collaboration with others to implement DSCP QoS rules (https://review.openstack.org/#/c/251738/). After a hiatus of a few cycles, we were lucky to have Nate come back to the community during the Rocky cycle. Since then, he has been a driving force in the adoption in Neutron of Oslo Versioned Objects, the "Run under Python 3 by default" community wide initiative and the optimization of ports creation in bulk to better support containerized workloads. He is a man with a wide range of interests, who is not afraid of expressing his opinions in any of them. The quality and number of his code reviews during the Stein cycle is on par with the leading members of the core team: http://stackalytics.com/?module=neutron-group. I especially admire his ability to forcefully handle disagreement in a friendly and easy going manner. > > On top of all that, he graciously endured me as his mentor over the past few months. For all these reasons, I think he is ready to join the team and we will be very lucky to have him as a fully voting core. > > I will keep this nomination open for a week as customary. 
> > Thank you > > Miguel > > From openstack at nemebean.com Mon Dec 3 22:48:28 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 3 Dec 2018 16:48:28 -0600 Subject: [all] Etcd as DLM Message-ID: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> Hi, I wanted to revisit this topic because it has come up in some downstream discussions around Cinder A/A HA and the last time we talked about it upstream was a year and a half ago[1]. There have certainly been changes since then so I think it's worth another look. For context, the conclusion of that session was: "Let's use etcd 3.x in the devstack CI, projects that are eventlet based an use the etcd v3 http experimental API and those that don't can use the etcd v3 gRPC API. Dims will submit a patch to tooz for the new driver with v3 http experimental API. Projects should feel free to use the DLM based on tooz+etcd3 from now on. Others projects can figure out other use cases for etcd3." The main question that has come up is whether this is still the best practice or if we should revisit the preferred drivers for etcd. Gorka has gotten the grpc-based driver working in a Cinder driver that needs etcd[2], so there's a question as to whether we still need the HTTP etcd-gateway or if everything should use grpc. I will admit I'm nervous about trying to juggle eventlet and grpc, but if it works then my only argument is general misgivings about doing anything clever that involves eventlet. :-) It looks like the HTTP API for etcd has moved out of experimental status[3] at this point, so that's no longer an issue. There was some vague concern from a downstream packaging perspective that the grpc library might use a funky build system, whereas the etcd3-gateway library only depends on existing OpenStack requirements. On the other hand, I don't know how much of a hassle it is to deploy and manage a grpc-gateway. I'm kind of hoping someone has already been down this road and can advise about what they found. Thanks. -Ben 1: https://etherpad.openstack.org/p/BOS-etcd-base-service 2: https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111 3: https://github.com/grpc-ecosystem/grpc-gateway From mrhillsman at gmail.com Mon Dec 3 23:13:42 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 3 Dec 2018 17:13:42 -0600 Subject: [sigs] Monthly Update Message-ID: Hi everyone, During the Forum we discussed one simple way we could move forward to hopefully get more visibility and activity within SIGs. Here is a proposal for such a step. Send out a monthly email to openstack-discuss with the following information from each SIG captured via etherpad [0] 1. What success(es) have you had this month as a SIG? 2. What should we know about the SIG for the next month? 3. What would you like help (hands) or feedback (eyes) on? Besides the ML, other places this could be re-used in whole or part is on social media, SU Blog, etc. Thoughts? [0] https://etherpad.openstack.org/p/sig-updates -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Mon Dec 3 23:30:22 2018 From: smooney at redhat.com (Sean Mooney) Date: Mon, 03 Dec 2018 23:30:22 +0000 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> <5C059A96.8030504@openstack.org> Message-ID: On Mon, 2018-12-03 at 14:16 -0700, David Medberry wrote: > Hmmm, if you can HEAR the speaker and SEE the slides (as in YVR), seems like that should be sufficient. that would be my peference also. it is nice to be able to see presenteres but their slides/laptop screen are the more important content that said im glad that they are available. > There's usually a photo of the speaker on their bio page if you need more "presence". > > -dave > > On Mon, Dec 3, 2018 at 2:05 PM Jimmy McArthur wrote: > > Yeah, that makes sense. We had multiple complaints coming out of YVR that the size of the speaker was too small. So > > we're trying to work out a happy medium that can still work within our budget. > > > > Appreciate the feedback :) > > > > > Matt Riedemann December 3, 2018 at 3:01 PM > > > > > > > > > Here is a good example of what I'm talking about: > > > > > > https://youtu.be/J9K-x0yVZ4U?t=425 > > > > > > There is full view of the slides, but my eyes can't read most of that text. Compare that to YVR: > > > > > > https://youtu.be/U5V_2CUj-6A?t=576 > > > > > > And it's night and day. one factor in that may be that the videos are uploaded in 720p at 50 rather then 1080p at 60 as they had been in vancouver. so far i have been able to view most of videos ok. the room where the project updates were is proably the one with the worst lighting. the room were the lights at the front were dimmed are quite good. the shorter videos/ market place demos like https://www.youtube.com/watch?v=hzcrjk-tgLk are certenly eaiser to view the presented content but most of the larger rooms are visable. > > > > > > Jimmy McArthur December 3, 2018 at 2:46 PM > > > In Berlin, in rooms where we had a full view of the screen, we didn't do a second screen for slides only. As > > > mentioned, presenters can upload their slides to help with that. > > > > > > For places like the Marketplace demo theater where we had a smaller screen format, we added the view of both > > > presenter and slide: https://www.openstack.org/videos/berlin-2018/how-to-avoid-vendor-lock-in-in-a-multi-cloud-wor > > > ld-with-zenko > > > > > > We're looking at ways to improve both formats in Denver, so I'd say stand by. If there is a presentation that you > > > feel is too difficult to follow, we can reach out to those presenters and encourage them again to upload their > > > slides. > > > > > > > > > > > > Matt Riedemann December 3, 2018 at 2:32 PM > > > > > > > > > So uh, I don't really want to be that guy, but I'm sure others have noticed the deal with the slides being > > > different in the recordings from years past, in that you can't view them (hopefully people are uploading their > > > slides). I'm mostly curious if there was a reason for that? Budget cuts? Technical issues? > > > > > > Jimmy McArthur December 3, 2018 at 12:22 PM > > > Thank you again for a wonderful Summit in Berlin. I'm pleased to announce the Summit Videos are now up on the > > > openstack.org website: https://www.openstack.org/videos/summits/berlin-2018 If there was a session you missed, > > > now is your chance to catch up! 
These videos will also be available in the Summit App as well as on the web under > > > the Berlin Summit Schedule (https://www.openstack.org/summit/berlin-2018/summit-schedule/). > > > > > > If you have any questions or concerns about the videos, please write speakersupport at openstack.org. > > > > > > Cheers, > > > Jimmy > > > > > > _______________________________________________ > > > Staff mailing list > > > Staff at lists.openstack.org > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff > > From smooney at redhat.com Mon Dec 3 23:42:19 2018 From: smooney at redhat.com (Sean Mooney) Date: Mon, 03 Dec 2018 23:42:19 +0000 Subject: [infra] A change to Zuul's queuing behavior In-Reply-To: <87bm62z4av.fsf@meyer.lemoncheese.net> References: <87bm62z4av.fsf@meyer.lemoncheese.net> Message-ID: On Mon, 2018-12-03 at 13:30 -0800, James E. Blair wrote: > Hi, > > We recently made a change to how Zuul and Nodepool prioritize node > requests. Cloud resources are the major constraint in how long it takes > Zuul to run test jobs on proposed changes. Because we're using more > resources than ever before (but not necessarily because we're doing more > work -- Clark has been helping to identify inefficiencies in other > mailing list threads), the amount of time it takes to receive results on > a change has been increasing. > > Since some larger projects consume the bulk of cloud resources in our > system, this can be especially frustrating for smaller projects. To be > sure, it impacts everyone, but while larger projects receive a > continuous stream of results (even if delayed) smaller projects may wait > hours before seeing results on a single change. > > In order to help all projects maintain a minimal velocity, we've begun > dynamically prioritizing node requests based on the number of changes a > project has in a given pipeline. > > This means that the first change for every project in the check pipeline > has the same priority. The same is true for the second change of each > project in the pipeline. The result is that if a project has 50 changes > in check, and another project has a single change in check, the second > project won't have to wait for all 50 changes ahead before it gets nodes > allocated. i could be imagineing this but is this not how zuul v2 used to work or rather how the gate was configured a few cycles ago. i remember in the past when i was working on smaller projects it was often quicker to submit patches to those instead fo nova for example. in partacal i remember working on both os-vif and nova in the past and finding my os-vif jobs would often get started much quicker then nova. anyway i think this is hopefully a good change for the majority of projects but it triggered of feeling of deja vu, was this how the gates used to run? > As conditions change (requests are fulfilled, changes are > added and removed) the priorities of any unfulfilled requests are > adjusted accordingly. > > In the gate pipeline, the grouping is by shared change queue. But the > gate pipeline still has a higher overall precedence than check. > > We hope that this will make for a significant improvement in the > experience for smaller projects without causing undue hardship for > larger ones. > We will be closely observing the new behavior and make any > necessary tuning adjustments over the next few weeks. Please let us > know if you see any adverse impacts, but don't be surprised if you > notice node requests being filled "out of order". 
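For anyone trying to picture the behaviour described above, a rough, hypothetical sketch of the per-project "relative priority" idea (an illustration only, not Zuul's or Nodepool's actual code; the function and data shapes are invented for the example):

from collections import defaultdict

def prioritize(requests):
    # Order node requests by (per-project position, arrival order).
    # `requests` is an iterable of (arrival_seq, project) tuples.
    per_project_count = defaultdict(int)
    keyed = []
    for arrival_seq, project in requests:
        # A project's first request gets relative priority 0, its
        # second request gets 1, and so on.
        relative_priority = per_project_count[project]
        per_project_count[project] += 1
        keyed.append((relative_priority, arrival_seq, project))
    return sorted(keyed)

# Three nova changes queued ahead of a single os-vif change: the os-vif
# request sorts alongside nova's first request instead of behind all three.
print(prioritize([(1, 'nova'), (2, 'nova'), (3, 'nova'), (4, 'os-vif')]))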
> > -Jim > From strigazi at gmail.com Mon Dec 3 23:13:52 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 4 Dec 2018 00:13:52 +0100 Subject: [openstack-dev] [magnum] kubernetes images for magnum rocky In-Reply-To: References: Message-ID: Magnum queens, uses kubernetes 1.9.3 by default. You can upgrade to v1.10.11-1. From a quick test v1.11.5-1 is also compatible with 1.9.x. We are working to make this painless, sorry you have to ssh to the nodes for now. Cheers, Spyros On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis wrote: > Hello all, > > Following the vulnerability [0], with magnum rocky and the kubernetes > driver > on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To > upgrade > the apiserver in existing clusters, on the master node(s) you can run: > sudo atomic pull --storage ostree > docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 > sudo atomic containers update --rebase > docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver > > You can upgrade the other k8s components with similar commands. > > I'll share instructions for magnum queens tomorrow morning CET time. > > Cheers, > Spyros > > [0] https://github.com/kubernetes/kubernetes/issues/71411 > [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From strigazi at gmail.com Mon Dec 3 23:13:52 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 4 Dec 2018 00:13:52 +0100 Subject: [Openstack-operators] [openstack-dev][magnum] kubernetes images for magnum rocky In-Reply-To: References: Message-ID: Magnum queens, uses kubernetes 1.9.3 by default. You can upgrade to v1.10.11-1. From a quick test v1.11.5-1 is also compatible with 1.9.x. We are working to make this painless, sorry you have to ssh to the nodes for now. Cheers, Spyros On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis wrote: > Hello all, > > Following the vulnerability [0], with magnum rocky and the kubernetes > driver > on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To > upgrade > the apiserver in existing clusters, on the master node(s) you can run: > sudo atomic pull --storage ostree > docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 > sudo atomic containers update --rebase > docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver > > You can upgrade the other k8s components with similar commands. > > I'll share instructions for magnum queens tomorrow morning CET time. > > Cheers, > Spyros > > [0] https://github.com/kubernetes/kubernetes/issues/71411 > [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/ > -------------- next part -------------- An HTML attachment was scrubbed... 
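As a hedged sketch of automating the manual atomic commands quoted above, roughly the same rebase could be driven across the master nodes with Ansible's atomic_container module (documented for Ansible 2.5); the host group, container name and image tag below are assumptions, so adjust them to the cluster:

# Hedged sketch, not from the original thread; host group, container
# name and image tag are assumptions.
- hosts: k8s_masters
  become: true
  tasks:
    - name: Update kube-apiserver system container to the patched tag
      atomic_container:
        name: kube-apiserver
        image: docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
        backend: ostree
        mode: system
        state: latest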
URL: -------------- next part -------------- _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From juliaashleykreger at gmail.com Mon Dec 3 23:53:45 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 3 Dec 2018 15:53:45 -0800 Subject: [all] Etcd as DLM In-Reply-To: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> Message-ID: I would like to slightly interrupt this train of thought for an unscheduled vision of the future! What if we could allow a component to store data in etcd3's key value store like how we presently use oslo_db/sqlalchemy? While I personally hope to have etcd3 as a DLM for ironic one day, review bandwidth permitting, it occurs to me that etcd3 could be leveraged for more than just DLM. If we have a common vision to enable data storage, I suspect it might help provide overall guidance as to how we want to interact with the service moving forward. -Julia On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec wrote: > Hi, > > I wanted to revisit this topic because it has come up in some downstream > discussions around Cinder A/A HA and the last time we talked about it > upstream was a year and a half ago[1]. There have certainly been changes > since then so I think it's worth another look. For context, the > conclusion of that session was: > > "Let's use etcd 3.x in the devstack CI, projects that are eventlet based > an use the etcd v3 http experimental API and those that don't can use > the etcd v3 gRPC API. Dims will submit a patch to tooz for the new > driver with v3 http experimental API. Projects should feel free to use > the DLM based on tooz+etcd3 from now on. Others projects can figure out > other use cases for etcd3." > > The main question that has come up is whether this is still the best > practice or if we should revisit the preferred drivers for etcd. Gorka > has gotten the grpc-based driver working in a Cinder driver that needs > etcd[2], so there's a question as to whether we still need the HTTP > etcd-gateway or if everything should use grpc. I will admit I'm nervous > about trying to juggle eventlet and grpc, but if it works then my only > argument is general misgivings about doing anything clever that involves > eventlet. :-) > > It looks like the HTTP API for etcd has moved out of experimental > status[3] at this point, so that's no longer an issue. There was some > vague concern from a downstream packaging perspective that the grpc > library might use a funky build system, whereas the etcd3-gateway > library only depends on existing OpenStack requirements. > > On the other hand, I don't know how much of a hassle it is to deploy and > manage a grpc-gateway. I'm kind of hoping someone has already been down > this road and can advise about what they found. > > Thanks. > > -Ben > > 1: https://etherpad.openstack.org/p/BOS-etcd-base-service > 2: > > https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111 > 3: https://github.com/grpc-ecosystem/grpc-gateway > > -------------- next part -------------- An HTML attachment was scrubbed... 
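For readers who have not used the tooz drivers being discussed, a minimal sketch of taking a distributed lock over etcd3 looks roughly like this (illustrative only; the endpoint, member id and lock name are assumptions, and the backend URL is what selects the gRPC driver, etcd3://, versus the etcd3gw HTTP-gateway driver, etcd3+http://):

from tooz import coordination

# Assumed local etcd endpoint; swap the scheme for etcd3+http:// to use
# the HTTP-gateway driver instead of the gRPC-based one.
coordinator = coordination.get_coordinator(
    'etcd3://127.0.0.1:2379', b'cinder-volume-host-a')
coordinator.start()

lock = coordinator.get_lock(b'volume-0001')
with lock:
    # Critical section: only one member of the coordination group
    # holds this lock at a time.
    pass

coordinator.stop()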
URL: From dangtrinhnt at gmail.com Tue Dec 4 00:09:32 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 4 Dec 2018 09:09:32 +0900 Subject: [Searchlight][infra] tox failed tests at zuul check only In-Reply-To: <20181203170339.psadnws63wfywtrs@yuggoth.org> References: <20181203170339.psadnws63wfywtrs@yuggoth.org> Message-ID: Thank Jeremy for the clear instructions. On Tue, Dec 4, 2018 at 2:05 AM Jeremy Stanley wrote: > On 2018-12-04 00:28:30 +0900 (+0900), Trinh Nguyen wrote: > > Currently, [1] fails tox py27 tests on Zuul check for just updating the > log > > text. The tests are successful at local dev env. Just wondering there is > > any new change at Zuul CI? > > > > [1] https://review.openstack.org/#/c/619162/ > > I don't know of any recent changes which would result in the > indicated test failures. According to the log it looks like it's a > functional testsuite and the tests are failing to connect to the > search API. I don't see your job collecting any service logs > however, so it's unclear whether the API service is failing to > start, or spontaneously crashes, or something else is going on. My > first guess would be that one of your dependencies has released and > brought some sort of regression. > > According to > > http://zuul.openstack.org/builds?job_name=openstack-tox-py27&project=openstack%2Fsearchlight&branch=master > the last time that job passed for your repo was 2018-11-07 with the > installed package versions listed in the > > http://logs.openstack.org/56/616056/1/gate/openstack-tox-py27/e413441/tox/py27-5.log > file, and the first failure I see matching the errors in yours ran > with the versions in > > http://logs.openstack.org/62/619162/1/check/openstack-tox-py27/809a281/tox/py27-5.log > on 2018-11-21 (it wasn't run for the intervening 2 weeks so we have > a fairly large window of potential external breakage there). A diff > of those suggests the following dependencies updated between them: > > coverage: 4.5.1 -> 4.5.2 > cryptography: 2.3.1 -> 2.4.1 > httplib2: 0.11.3 -> 0.12.0 > oslo.cache: 1.31.1 -> 1.31.0 (downgraded) > oslo.service: 1.32.0 -> 1.33.0 > python-neutronclient: 6.10.0 -> 6.11.0 > requests: 2.20.0 -> 2.20.1 > WebOb: 1.8.3 -> 1.8.4 > > Make sure with your local attempts at reproduction you're running > with these newer versions of dependencies, for example by clearing > any existing tox envs with the -r flag or `git clean -dfx` so that > stale versions aren't used instead. > -- > Jeremy Stanley > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Dec 4 00:05:28 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 Dec 2018 00:05:28 +0000 Subject: [all] The old mailing lists are now retired In-Reply-To: <20181119000426.hpgshg6ln5652pvt@yuggoth.org> References: <20181109181447.qhutsauxl4fuinnh@yuggoth.org> <20181119000426.hpgshg6ln5652pvt@yuggoth.org> Message-ID: <20181204000527.mdeesmrtlloej3ye@yuggoth.org> The openstack, openstack-dev, openstack-sigs and openstack-operators mailing lists were replaced by openstack-discuss. Starting now the old lists are no longer receiving new posts, but their addresses are aliased to the new list for the convenience of anyone replying to an earlier message. If you happen to notice one of the old list addresses in a reply you're writing, please take a moment to update it to the correct one. Here follow some brief notes on the nature of this new list: 1. We have documented[*] subject tags we want to use. 
Most of our current needs are probably already covered there, but if you notice something missing you can correct it by posting a change for code review to the doc/source/open-community.rst file in the openstack/project-team-guide Git repository. Because we're interested in moving to Mailman 3 as our listserv in the not-too-distant future and the upstream Mailman maintainers decided to drop support for server-side topic filters, you'll need to set up client-side mail filtering if you're only interested in seeing messages for a subset of them. 2. Remember there is no Subject header mangling on this list, so if you want to perform client-side matching it's strongly recommended you use the List-Id header to identify it. You'll likely also notice there is no list footer appended to every message. Further, the From and Reply-To headers from the original message are unaltered by the listserv. These changes in behavior from our previous lists are part of an effort to avoid creating unnecessary DMARC/DKIM violations. A mailreader with reply-to-list functionality is strongly recommended. If you're unfortunate enough not to be using such a client, you may be able to get away with using reply-to-all and then removing any irrelevant addresses (making sure to still include the mailing list's posting address). 3. Please take a moment to review our netiquette guide[**] for participating in OpenStack mailing lists. [*] https://docs.openstack.org/project-team-guide/open-community.html#mailing-lists [**] https://wiki.openstack.org/wiki/MailingListEtiquette -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From strigazi at gmail.com Tue Dec 4 00:24:21 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 4 Dec 2018 01:24:21 +0100 Subject: [openstack-dev][magnum] kubernetes images for magnum rocky In-Reply-To: References: Message-ID: Hello again, address space issues: # uninstall the running container sudo atomic uninstall kube-apiserver # delete the old image, check which one you use sudo atomic images delete --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.9.3 # prune to actually claim the space back sudo atomic images prune # install the new image sudo atomic install --system --storage ostree --name kube-apiserver docker.io/openstackmagnum/kubernetes-apiserver:v1.10.11-1 # start the service sudo systemctl start kube-apiserver I haven't used it, but there is an ansible module [0]. Cheers, Spyros [0] https://docs.ansible.com/ansible/2.5/modules/atomic_container_module.html On Tue, 4 Dec 2018 at 00:13, Spyros Trigazis wrote: > Magnum queens, uses kubernetes 1.9.3 by default. > You can upgrade to v1.10.11-1. From a quick test > v1.11.5-1 is also compatible with 1.9.x. > > We are working to make this painless, sorry you > have to ssh to the nodes for now. > > Cheers, > Spyros > > On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis wrote: > >> Hello all, >> >> Following the vulnerability [0], with magnum rocky and the kubernetes >> driver >> on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. 
>> To upgrade >> the apiserver in existing clusters, on the master node(s) you can run: >> sudo atomic pull --storage ostree >> docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 >> sudo atomic containers update --rebase >> docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver >> >> You can upgrade the other k8s components with similar commands. >> >> I'll share instructions for magnum queens tomorrow morning CET time. >> >> Cheers, >> Spyros >> >> [0] https://github.com/kubernetes/kubernetes/issues/71411 >> [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Tue Dec 4 00:55:31 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 4 Dec 2018 00:55:31 +0000 Subject: [all] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> It is a full base service already: https://governance.openstack.org/tc/reference/base-services.html Projects have been free to use it for quite some time. I'm not sure if any actually are yet though. It was decided not to put an abstraction layer on top as its pretty simple and commonly deployed. Thanks, Kevin ________________________________ From: Julia Kreger [juliaashleykreger at gmail.com] Sent: Monday, December 03, 2018 3:53 PM To: Ben Nemec Cc: Davanum Srinivas; geguileo at redhat.com; openstack-discuss at lists.openstack.org Subject: Re: [all] Etcd as DLM I would like to slightly interrupt this train of thought for an unscheduled vision of the future! What if we could allow a component to store data in etcd3's key value store like how we presently use oslo_db/sqlalchemy? While I personally hope to have etcd3 as a DLM for ironic one day, review bandwidth permitting, it occurs to me that etcd3 could be leveraged for more than just DLM. If we have a common vision to enable data storage, I suspect it might help provide overall guidance as to how we want to interact with the service moving forward. -Julia On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec > wrote: Hi, I wanted to revisit this topic because it has come up in some downstream discussions around Cinder A/A HA and the last time we talked about it upstream was a year and a half ago[1]. There have certainly been changes since then so I think it's worth another look. For context, the conclusion of that session was: "Let's use etcd 3.x in the devstack CI, projects that are eventlet based an use the etcd v3 http experimental API and those that don't can use the etcd v3 gRPC API. Dims will submit a patch to tooz for the new driver with v3 http experimental API. Projects should feel free to use the DLM based on tooz+etcd3 from now on. Others projects can figure out other use cases for etcd3." The main question that has come up is whether this is still the best practice or if we should revisit the preferred drivers for etcd. Gorka has gotten the grpc-based driver working in a Cinder driver that needs etcd[2], so there's a question as to whether we still need the HTTP etcd-gateway or if everything should use grpc. I will admit I'm nervous about trying to juggle eventlet and grpc, but if it works then my only argument is general misgivings about doing anything clever that involves eventlet. :-) It looks like the HTTP API for etcd has moved out of experimental status[3] at this point, so that's no longer an issue. 
There was some vague concern from a downstream packaging perspective that the grpc library might use a funky build system, whereas the etcd3-gateway library only depends on existing OpenStack requirements. On the other hand, I don't know how much of a hassle it is to deploy and manage a grpc-gateway. I'm kind of hoping someone has already been down this road and can advise about what they found. Thanks. -Ben 1: https://etherpad.openstack.org/p/BOS-etcd-base-service 2: https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111 3: https://github.com/grpc-ecosystem/grpc-gateway -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.carden at gmail.com Tue Dec 4 02:04:02 2018 From: mike.carden at gmail.com (Mike Carden) Date: Tue, 4 Dec 2018 13:04:02 +1100 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: Having found the nice docs at: https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/tips_tricks.html I have divined that I can ssh to each controller node and: sudo docker exec -u root nova_scheduler crudini --set /etc/nova/nova.conf placement randomize_allocation_candidates true sudo docker kill -s SIGHUP nova_scheduler ...and indeed the /etc/nova/nova.conf in each nova_scheduler container is updated accordingly. Unfortunately, instances are all still launched on compute-0. -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuyulong.xa at gmail.com Tue Dec 4 02:04:35 2018 From: liuyulong.xa at gmail.com (LIU Yulong) Date: Tue, 4 Dec 2018 10:04:35 +0800 Subject: [openstack-dev] [Neutron] Propose Hongbin Lu for Neutron core In-Reply-To: <493CB94D-19C3-4B61-8577-6BD98006B7F5@redhat.com> References: <493CB94D-19C3-4B61-8577-6BD98006B7F5@redhat.com> Message-ID: +1 On Tue, Dec 4, 2018 at 6:43 AM Slawomir Kaplonski wrote: > Definitely big +1 from me! > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > Wiadomość napisana przez Miguel Lavalle w dniu > 03.12.2018, o godz. 23:14: > > > > Hi Stackers, > > > > I want to nominate Hongbin Lu (irc: hongbin) as a member of the Neutron > core team. Hongbin started contributing to the OpenStack community in the > Liberty cycle. Over time, he made great contributions in helping the > community to better support containers by being core team member and / or > PTL in projects such as Zun and Magnum. An then, fortune played in our > favor and Hongbin joined the Neutron team in the Queens cycle. Since then, > he has made great contributions such as filters validation in the ReST API, > PF status propagation to to VFs (ports) in SR-IOV environments and leading > the forking of RYU into the os-ken OpenStack project, which provides key > foundational functionality for openflow. He is not a man who wastes words, > but when he speaks up, his opinions are full of insight. This is reflected > in the quality of his code reviews, which in number are on par with the > leading members of the core team: > http://stackalytics.com/?module=neutron-group. Even though Hongbin leaves > in Toronto, he speaks Mandarin Chinese and was born and raised in China. > This is a big asset in helping the Neutron team to incorporate use cases > from that part of the world. 
> > > > Hongbin spent the past few months being mentored by Slawek Kaplonski, > who has reported that Hongbin is ready for the challenge of being a core > team member. I (and other core team members) concur. > > > > I will keep this nomination open for a week as customary. > > > > Thank you > > > > Miguel > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuyulong.xa at gmail.com Tue Dec 4 02:04:51 2018 From: liuyulong.xa at gmail.com (LIU Yulong) Date: Tue, 4 Dec 2018 10:04:51 +0800 Subject: [openstack-dev] [Neutron] Propose Nate Johnston for Neutron core In-Reply-To: References: Message-ID: +1 On Tue, Dec 4, 2018 at 6:44 AM Slawomir Kaplonski wrote: > Big +1 from me for Nate also :) > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > Wiadomość napisana przez Miguel Lavalle w dniu > 03.12.2018, o godz. 22:38: > > > > Hi Stackers, > > > > I want to nominate Nate Johnston (irc:njohnston) as a member of the > Neutron core team. Nate started contributing to Neutron back in the Liberty > cycle. One of the highlight contributions of that early period is his > collaboration with others to implement DSCP QoS rules ( > https://review.openstack.org/#/c/251738/). After a hiatus of a few > cycles, we were lucky to have Nate come back to the community during the > Rocky cycle. Since then, he has been a driving force in the adoption in > Neutron of Oslo Versioned Objects, the "Run under Python 3 by default" > community wide initiative and the optimization of ports creation in bulk to > better support containerized workloads. He is a man with a wide range of > interests, who is not afraid of expressing his opinions in any of them. > The quality and number of his code reviews during the Stein cycle is on par > with the leading members of the core team: > http://stackalytics.com/?module=neutron-group. I especially admire his > ability to forcefully handle disagreement in a friendly and easy going > manner. > > > > On top of all that, he graciously endured me as his mentor over the past > few months. For all these reasons, I think he is ready to join the team and > we will be very lucky to have him as a fully voting core. > > > > I will keep this nomination open for a week as customary. > > > > Thank you > > > > Miguel > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Dec 4 02:47:25 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 3 Dec 2018 18:47:25 -0800 Subject: [all] Etcd as DLM In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> Message-ID: Indeed it is a considered a base service, but I'm unaware of why it was decided to not have any abstraction layer on top. That sort of defeats the adoption of tooz as a standard in the community. Plus with the rest of our code bases, we have a number of similar or identical patterns and it would be ideal to have a single library providing the overall interface for the purposes of consistency. Could you provide some more background on that decision? I guess what I'd really like to see is an oslo.db interface into etcd3. -Julia On Mon, Dec 3, 2018 at 4:55 PM Fox, Kevin M wrote: > It is a full base service already: > https://governance.openstack.org/tc/reference/base-services.html > > Projects have been free to use it for quite some time. 
I'm not sure if any > actually are yet though. > > It was decided not to put an abstraction layer on top as its pretty simple > and commonly deployed. > > Thanks, > Kevin > ------------------------------ > *From:* Julia Kreger [juliaashleykreger at gmail.com] > *Sent:* Monday, December 03, 2018 3:53 PM > *To:* Ben Nemec > *Cc:* Davanum Srinivas; geguileo at redhat.com; > openstack-discuss at lists.openstack.org > *Subject:* Re: [all] Etcd as DLM > > I would like to slightly interrupt this train of thought for an > unscheduled vision of the future! > > What if we could allow a component to store data in etcd3's key value > store like how we presently use oslo_db/sqlalchemy? > > While I personally hope to have etcd3 as a DLM for ironic one day, review > bandwidth permitting, it occurs to me that etcd3 could be leveraged for > more than just DLM. If we have a common vision to enable data storage, I > suspect it might help provide overall guidance as to how we want to > interact with the service moving forward. > > -Julia > > On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec wrote: > >> Hi, >> >> I wanted to revisit this topic because it has come up in some downstream >> discussions around Cinder A/A HA and the last time we talked about it >> upstream was a year and a half ago[1]. There have certainly been changes >> since then so I think it's worth another look. For context, the >> conclusion of that session was: >> >> "Let's use etcd 3.x in the devstack CI, projects that are eventlet based >> an use the etcd v3 http experimental API and those that don't can use >> the etcd v3 gRPC API. Dims will submit a patch to tooz for the new >> driver with v3 http experimental API. Projects should feel free to use >> the DLM based on tooz+etcd3 from now on. Others projects can figure out >> other use cases for etcd3." >> >> The main question that has come up is whether this is still the best >> practice or if we should revisit the preferred drivers for etcd. Gorka >> has gotten the grpc-based driver working in a Cinder driver that needs >> etcd[2], so there's a question as to whether we still need the HTTP >> etcd-gateway or if everything should use grpc. I will admit I'm nervous >> about trying to juggle eventlet and grpc, but if it works then my only >> argument is general misgivings about doing anything clever that involves >> eventlet. :-) >> >> It looks like the HTTP API for etcd has moved out of experimental >> status[3] at this point, so that's no longer an issue. There was some >> vague concern from a downstream packaging perspective that the grpc >> library might use a funky build system, whereas the etcd3-gateway >> library only depends on existing OpenStack requirements. >> >> On the other hand, I don't know how much of a hassle it is to deploy and >> manage a grpc-gateway. I'm kind of hoping someone has already been down >> this road and can advise about what they found. >> >> Thanks. >> >> -Ben >> >> 1: https://etherpad.openstack.org/p/BOS-etcd-base-service >> 2: >> >> https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111 >> 3: https://github.com/grpc-ecosystem/grpc-gateway >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
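To make the "etcd3 as a data store" musing above a bit more concrete, a hedged sketch of raw key/value usage through the python-etcd3 client (the endpoint and key layout are invented for the example and nothing like this is an existing OpenStack interface):

import json

import etcd3

client = etcd3.client(host='127.0.0.1', port=2379)  # assumed endpoint

# Store a record as a JSON blob under a hierarchical key, loosely the
# way a table row is stored through oslo.db today.
client.put('/ironic/nodes/node-0001',
           json.dumps({'power_state': 'power on'}))

value, metadata = client.get('/ironic/nodes/node-0001')
record = json.loads(value)
print(record['power_state'])

# A prefix read is a cheap analogue of "SELECT * FROM nodes".
for value, metadata in client.get_prefix('/ironic/nodes/'):
    print(metadata.key, json.loads(value))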
URL: From aschultz at redhat.com Tue Dec 4 03:46:05 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 3 Dec 2018 20:46:05 -0700 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: On Mon, Dec 3, 2018 at 7:06 PM Mike Carden wrote: > > > Having found the nice docs at: > https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/tips_tricks.html > > I have divined that I can ssh to each controller node and: > sudo docker exec -u root nova_scheduler crudini --set /etc/nova/nova.conf placement randomize_allocation_candidates true > sudo docker kill -s SIGHUP nova_scheduler > FYI protip, you can add the following to a custom environment file to configure this value (as we don't expose the config by default) parameters_defaults: ControllerExtraConfig: nova::config::nova_config: placement/randomize_allocation_candidates: value: true And then do a deployment. This will persist it and ensure future scaling/management updates won't remove this configuration. > ...and indeed the /etc/nova/nova.conf in each nova_scheduler container is updated accordingly. > > Unfortunately, instances are all still launched on compute-0. > > -- > MC > > From aschultz at redhat.com Tue Dec 4 03:46:05 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 3 Dec 2018 20:46:05 -0700 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: On Mon, Dec 3, 2018 at 7:06 PM Mike Carden wrote: > > > Having found the nice docs at: > https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/tips_tricks.html > > I have divined that I can ssh to each controller node and: > sudo docker exec -u root nova_scheduler crudini --set /etc/nova/nova.conf placement randomize_allocation_candidates true > sudo docker kill -s SIGHUP nova_scheduler > FYI protip, you can add the following to a custom environment file to configure this value (as we don't expose the config by default) parameters_defaults: ControllerExtraConfig: nova::config::nova_config: placement/randomize_allocation_candidates: value: true And then do a deployment. This will persist it and ensure future scaling/management updates won't remove this configuration. > ...and indeed the /etc/nova/nova.conf in each nova_scheduler container is updated accordingly. > > Unfortunately, instances are all still launched on compute-0. > > -- > MC > > From miguel at mlavalle.com Tue Dec 4 04:00:18 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 3 Dec 2018 22:00:18 -0600 Subject: [openstack-dev] [Neutron] Bugs deputy report week of November 26th Message-ID: Hi Neutron team, I was the bugs deputy the week of November 26th. It was a relatively quite week. These are the bugs reported during the period: *Medium priority* - https://bugs.launchpad.net/neutron/+bug/1805824, The dhcp port's address may be messed when the port's network has multiple subnets. Fix proposed here: https://review.openstack.org/#/c/620900/. It undoes a previous fix, https://bugs.launchpad.net/neutron/+bug/1581918, so another alternative has to be found - https://bugs.launchpad.net/neutron/+bug/1805808, Openvswitch firewall driver don't reinitialize after ovs-vswitchd restart. 
Fix proposed here: https://review.openstack.org/620886 - https://bugs.launchpad.net/neutron/+bug/1805456, [DVR] Neutron doesn't configure multiple external subnets for one network properly. Owner assigned *Low priority* - https://bugs.launchpad.net/neutron/+bug/1805844*, *When debugging fullstack tests, "cmd" module is incorrectly imported*. *Fix already in progress: https://review.openstack.org/620916 - https://bugs.launchpad.net/neutron/+bug/1805126, confusing wording in config-dns-int.rst. Fixed proposed: https://review.openstack.org/620017 *Incomplete* - https://bugs.launchpad.net/neutron/+bug/1805769, maximum RPC response timeout is not reasonable. Requested clarification from submitter - https://bugs.launchpad.net/neutron/+bug/1805356, Convert instance method to staticmethod in linux bridge agent. Requested clarification from submitter *Needs further investigation* - https://bugs.launchpad.net/neutron/+bug/1805132, bulk creation of security group rules fails StaleDataError. Couldn't reproduce -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.carden at gmail.com Tue Dec 4 04:10:36 2018 From: mike.carden at gmail.com (Mike Carden) Date: Tue, 4 Dec 2018 15:10:36 +1100 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: On Tue, Dec 4, 2018 at 2:46 PM Alex Schultz wrote: > > > parameters_defaults: > ControllerExtraConfig: > nova::config::nova_config: > placement/randomize_allocation_candidates: > value: true Thanks for that Alex. I'll roll that into our next over-the-top deploy update. I won't hold my breath for it actually getting our scheduling sorted out though, since it made no difference when I manually updated all three controllers with that config. -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Tue Dec 4 04:12:19 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 3 Dec 2018 22:12:19 -0600 Subject: [openlab] Weekly Meeting Message-ID: Hi everyone, Please see the following links for full log of today’s meeting and a more concise summary: Full Log: https://meetings.openlabtesting.org/devops/2018/devops.2018-12-04-02.00.log.html Summary (Minutes): https://meetings.openlabtesting.org/devops/2018/devops.2018-12-04-02.00.html Help Wanted: Setup os_loganalyze – https://github.com/theopenlab/openlab/issues/137 There is always an opportunity to grab items off the project board – https://github.com/orgs/theopenlab/projects/1 If you have any questions regarding the meeting please send email response to this email for follow-up. -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Tue Dec 4 08:38:08 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Tue, 4 Dec 2018 09:38:08 +0100 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: <0331f7bb-9e6d-5f75-5649-5bc5285585c8@binero.se> Started a patch to include that option in puppet-nova as well based on this thread which perhaps can help the TripleO based world as well. 
https://review.openstack.org/#/c/621593/ Best regards On 12/04/2018 04:56 AM, Alex Schultz wrote: > On Mon, Dec 3, 2018 at 7:06 PM Mike Carden wrote: >> >> Having found the nice docs at: >> https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/tips_tricks.html >> >> I have divined that I can ssh to each controller node and: >> sudo docker exec -u root nova_scheduler crudini --set /etc/nova/nova.conf placement randomize_allocation_candidates true >> sudo docker kill -s SIGHUP nova_scheduler >> > FYI protip, you can add the following to a custom environment file to > configure this value (as we don't expose the config by default) > > parameters_defaults: > ControllerExtraConfig: > nova::config::nova_config: > placement/randomize_allocation_candidates: > value: true > > > And then do a deployment. This will persist it and ensure future > scaling/management updates won't remove this configuration. > > > >> ...and indeed the /etc/nova/nova.conf in each nova_scheduler container is updated accordingly. >> >> Unfortunately, instances are all still launched on compute-0. >> >> -- >> MC >> >> > From balazs.gibizer at ericsson.com Tue Dec 4 09:00:43 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Tue, 4 Dec 2018 09:00:43 +0000 Subject: [nova] When can/should we remove old nova-status upgrade checks? In-Reply-To: <41dda844-4466-84b6-6f47-3996d2b955cc@gmail.com> References: <41dda844-4466-84b6-6f47-3996d2b955cc@gmail.com> Message-ID: <1543914038.32372.1@smtp.office365.com> On Mon, Dec 3, 2018 at 5:38 PM, Matt Riedemann wrote: > Questions came up in review [1] about dropping an old "nova-status > upgrade check" which relies on using the in-tree placement database > models for testing the check. The check in question, "Resource > Providers", compares the number of compute node resource providers in > the nova_api DB against the number of compute nodes in all cells. > When the check was originally written in Ocata [2] it was meant to > help ease the upgrade where nova-compute needed to be configured to > report compute node resource provider inventory to placement so the > scheduler could use placement. It looks for things like >0 compute > nodes but 0 resource providers indicating the computes aren't > reporting into placement like they should be. In Ocata, if that > happened, and there were older compute nodes (from Newton), then the > scheduler would fallback to not use placement. That fallback code has > been removed. Also in Ocata, nova-compute would fail to start if > nova.conf wasn't configured for placement [3] but that has also been > removed. Now if nova.conf isn't configured for placement, I think > we'll just log an exception traceback but not actually fail the > service startup, and the node's resources wouldn't be available to > the scheduler, so you could get NoValidHost failures during > scheduling and need to dig into why a given compute node isn't being > used during scheduling. > > The question is, given this was added in Ocata to ease with the > upgrade to require placement, and we're long past that now, is the > check still useful? The check still has lots of newton/ocata/pike > comments in it, so it's showing its age. However, one could argue it > is still useful for base install verification, or for someone doing > FFU. If we keep this check, the related tests will need to be > re-written to use the placement REST API fixture since the in-tree > nova_api db tables will eventually go away because of extracted > placement. 
I'm OK to remove the check, as during FFU one can install the Rocky version of
nova to run the check if needed.

Anyhow, if there is a need to keep the check, then I think we can change the
implementation to read the hostname of each compute from the HostMapping,
query the placement API with that hostname as the RP name, and then check
that there is at least VCPU inventory on that RP.

Cheers,
gibi

>
> The bigger question is, what sort of criteria do we have for dropping
> old checks like this besides when the related code, for which the
> check was added, is removed? FFU kind of throws a wrench in
> everything, but at the same time, I believe the prescribed FFU steps
> are that online data migrations (and upgrade checks) are meant to be
> run per-release you're fast-forward upgrading through.
>
> [1]
> https://review.openstack.org/#/c/617941/26/nova/tests/unit/cmd/test_status.py
> [2] https://review.openstack.org/#/c/413250/
> [3]
> https://github.com/openstack/nova/blob/stable/ocata/nova/compute/manager.py#L1139
>
> --
>
> Thanks,
>
> Matt
>

From gmann at ghanshyammann.com Tue Dec 4 09:01:04 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 04 Dec 2018 18:01:04 +0900
Subject: [dev] [tc][all] TC office hours is started now on #openstack-tc
Message-ID: <1677873098d.e1e2934268954.3300616982788651524@ghanshyammann.com>

Hello everyone,

The TC office hour has started on the #openstack-tc channel. Feel free to reach out to us for anything you want to discuss or need input/feedback/help on from the TC.

- gmann & TC

From gmann at ghanshyammann.com Tue Dec 4 10:16:49 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 04 Dec 2018 19:16:49 +0900
Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest
Message-ID: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com>

Hi All,

As you all know, placement tests (in the nova repo or in the new placement repo) are implemented as gabbi functional tests [1]. There is a patch from amodi about adding placement tests in Tempest [2]. Currently, Tempest does not support the placement endpoints; supporting the placement API in Tempest does not need much work (I have listed the todo items in the gerrit patch).

Before we start or proceed with the discussion in QA, I would like to get the nova (placement) team's opinion on adding placement support in Tempest. Obviously, we should not duplicate the testing effort between what the existing gabbi tests cover and what is going to be added in Tempest, which we can take care of while adding the new tests.
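Just to give an idea of the size of that work, a first read-only placement service client in Tempest could be roughly as small as the sketch below (purely illustrative and not the code proposed in [2]; the class and method names are made up), plus the usual service client registration and a catalog/config entry for placement:

from oslo_serialization import jsonutils
from tempest.lib.common import rest_client


class PlacementClient(rest_client.RestClient):
    """Illustrative minimal client: list resource providers."""

    def list_resource_providers(self):
        resp, body = self.get('/resource_providers')
        self.expected_success(200, resp.status)
        return rest_client.ResponseBody(resp, jsonutils.loads(body))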
[1] https://github.com/openstack/nova/tree/5f648dda49a6d5fe5ecfd7dddcb5f7dc3d6b51a6/nova/tests/functional/api/openstack/placement https://github.com/openstack/placement/tree/34f03c297e28ae2c46ab11debdb4bbe64df1acdc/placement/tests/functional [2] https://review.openstack.org/#/c/621645/2 -gmann From cdent+os at anticdent.org Tue Dec 4 10:55:08 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Dec 2018 10:55:08 +0000 (GMT) Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: On Tue, 4 Dec 2018, Mike Carden wrote: > Having found the nice docs at: > https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/tips_tricks.html > > I have divined that I can ssh to each controller node and: > sudo docker exec -u root nova_scheduler crudini --set /etc/nova/nova.conf > placement randomize_allocation_candidates true > sudo docker kill -s SIGHUP nova_scheduler > > ...and indeed the /etc/nova/nova.conf in each nova_scheduler container is > updated accordingly. > > Unfortunately, instances are all still launched on compute-0. Sorry this has been such a pain for you. There are a couple of issues/other things to try: * The 'randomize_allocation_candidates' config setting is used by the placement-api process (probably called nova-placement-api in queens), not the nova-scheduler process, so you need to update the config (in the placement section) for the former and restart it. * If that still doesn't fix it then it would be helpful to see the logs from both the placement-api and nova-scheduler process from around the time you try to launch some instances, as that will help show if there's some other factor at play that is changing the number of available target hosts, causing attempts on the other two hosts to not land. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From flavio at redhat.com Tue Dec 4 11:08:08 2018 From: flavio at redhat.com (Flavio Percoco) Date: Tue, 4 Dec 2018 12:08:08 +0100 Subject: [Nova] Increase size limits for user data Message-ID: Greetings, I've been working on a tool that requires creating CoreOS nodes on OpenStack. Sadly, I've hit the user data limit, which has blocked the work I'm doing. One big difference between CoreOS images and other cloud images out there is that CoreOS images don't use cloud-init but a different tool called Ignition[0], which uses JSON as its serialization format. The size of the configs that I need to pass to the instance is bigger than the limit imposed by Nova. I've worked on reducing the size as much as possible and even generating a compressed version of it but the data is still bigger than the limit (144 kb vs 65kb). I'd like to understand better what the nature of the limit is (is it because of the field size in the database? Is it purely an API limit? Is it because it causes problems depending on the vendor? As far as I can tell the limit is just being enforced by the API schema[1] and not the DB as it uses a MEDIUMTEXT field. I realize this has been asked before but I wanted to get a feeling of the current opinion about this. Would the Nova team consider increasing the limit of the API considering that more use cases like this may be more common these days? 
[0] https://coreos.com/ignition/docs/latest/ [1] https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L212-L215 Thanks, Flavio -- @flaper87 Flavio Percoco From cdent+os at anticdent.org Tue Dec 4 11:13:44 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Dec 2018 11:13:44 +0000 (GMT) Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest In-Reply-To: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> References: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> Message-ID: On Tue, 4 Dec 2018, Ghanshyam Mann wrote: > Before we start or proceed with the discussion in QA, i would like to get the nova(placement) team opinion on adding the placement support in Tempest. Obviously, we should not duplicate the testing effort between what existing gabbi tests cover or what going to be added in Tempest which we can take care while adding the new tests. My feeling on this is that what should be showing up in tempest with regard to placement tests are things that demonstrate and prove end to end scenarios in which placement is involved as a critical part, but is in the background. For example, things like the emerging minimal bandwidth functionality that involves all three of nova, placement and neutron. I don't think we need extensive testing in Tempest of the placement API itself, as that's already well covered by the existing functional tests, nor do I think it makes much sense to cover the common scheduling scenarios between nova and placement as those are also well covered and will continue to be covered even with placement extracted [1]. Existing Tempests tests that do things like launching, resizing, migrating servers already touch placement so may be sufficient. If we wanted to make these more complete adding verification of resource providers and their inventories before and after the tests might be useful. [1] https://review.openstack.org/#/c/617941/ -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From lajos.katona at ericsson.com Tue Dec 4 11:49:53 2018 From: lajos.katona at ericsson.com (Lajos Katona) Date: Tue, 4 Dec 2018 11:49:53 +0000 Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest In-Reply-To: References: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> Message-ID: <231b7347-1165-3917-80a1-88cf33c96de1@ericsson.com> Hi, I started to add a scenario test to neutron-tempest-plugin for bandwidth and for that some placement client to tempest. I plan to upload (possibly as WIP) first the client part and after that the scenario for bandwidth and for routed networks as that one uses placement as well. Regards Lajos On 2018. 12. 04. 12:13, Chris Dent wrote: > On Tue, 4 Dec 2018, Ghanshyam Mann wrote: > >> Before we start or proceed with the discussion in QA, i would like to >> get the nova(placement) team opinion on adding the placement support >> in Tempest. Obviously, we should not duplicate the testing effort >> between what existing gabbi tests cover or what going to be added in >> Tempest which we can take care while adding the new tests. > > My feeling on this is that what should be showing up in tempest with > regard to placement tests are things that demonstrate and prove end > to end scenarios in which placement is involved as a critical part, > but is in the background. 
For example, things like the emerging minimal > bandwidth functionality that involves all three of nova, placement > and neutron. > > I don't think we need extensive testing in Tempest of the placement > API itself, as that's already well covered by the existing > functional tests, nor do I think it makes much sense to cover the > common scheduling scenarios between nova and placement as those are > also well covered and will continue to be covered even with > placement extracted [1]. > > Existing Tempests tests that do things like launching, resizing, > migrating servers already touch placement so may be sufficient. If > we wanted to make these more complete adding verification of > resource providers and their inventories before and after the tests > might be useful. > > > [1] https://review.openstack.org/#/c/617941/ > From zufar at onf-ambassador.org Tue Dec 4 08:59:13 2018 From: zufar at onf-ambassador.org (Zufar Dhiyaulhaq) Date: Tue, 4 Dec 2018 15:59:13 +0700 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: Hi all, I am facing this issue again, I try to add this configuration but still, the node is going to compute1. [scheduler] driver = filter_scheduler host_manager = host_manager [filter_scheduler] available_filters=nova.scheduler.filters.all_filters enabled_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,CoreFilter use_baremetal_filters=False weight_classes=nova.scheduler.weights.all_weighers [placement] randomize_allocation_candidates = true thank you. Best Regards, Zufar Dhiyaulhaq On Tue, Dec 4, 2018 at 3:55 AM Mike Carden wrote: > >> Presuming you are deploying Rocky or Queens, >> > > Yep, it's Queens. > > >> >> It goes in the nova.conf file under the [placement] section: >> >> randomize_allocation_candidates = true >> > > In triple-o land it seems like the config may need to be somewhere like > nova-scheduler.yaml and laid down via a re-deploy. > > Or something. > > The nova_scheduler runs in a container on a 'controller' host. > > -- > MC > > _______________________________________________ > Mailing list: > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > Post to : openstack at lists.openstack.org > Unsubscribe : > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Tue Dec 4 10:08:12 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 4 Dec 2018 11:08:12 +0100 Subject: [all] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> Message-ID: <20181204100812.og33xegl2fxmoo6g@localhost> On 03/12, Julia Kreger wrote: > Indeed it is a considered a base service, but I'm unaware of why it was > decided to not have any abstraction layer on top. That sort of defeats the > adoption of tooz as a standard in the community. Plus with the rest of our > code bases, we have a number of similar or identical patterns and it would > be ideal to have a single library providing the overall interface for the > purposes of consistency. Could you provide some more background on that > decision? > > I guess what I'd really like to see is an oslo.db interface into etcd3. 
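(For anyone who has not used it directly, this is roughly what talking to etcd3 as a key/value store looks like today with the python-etcd3 client, and how far its compare-and-swap style transactions go; an illustrative sketch only, the key names and values below are invented:)

import json

import etcd3

client = etcd3.client(host='127.0.0.1', port=2379)

# Plain key/value storage: the part that maps naturally onto etcd.
client.put('/cinder/volumes/vol-1', json.dumps({'status': 'available'}))

# The closest thing to a conditional update is a transaction guarded by
# per-key value comparisons: succeed only if the stored value still matches.
# That is much more limited than the multi-row, multi-column conditional
# UPDATEs done in SQL today.
ok, _ = client.transaction(
    compare=[
        client.transactions.value('/cinder/volumes/vol-1') ==
        json.dumps({'status': 'available'}),
    ],
    success=[
        client.transactions.put('/cinder/volumes/vol-1',
                                json.dumps({'status': 'deleting'})),
    ],
    failure=[],
)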
> > -Julia Hi, I think that some projects won't bother with the etcd interface since it would require some major rework of the whole service to get it working. Take Cinder for example. We do complex conditional updates that, as far as I know, cannot be satisfied with etcd's Compare-and-Swap functionality. We could modify all our code to make it support both relational databases and key-value stores, but I'm not convinced it would be worthwhile considering the huge effort it would require. I believe there are other OpenStack projects that have procedural code stored on the database, which would probably be hard to make compatible with key-value stores. Cheers, Gorka. > > On Mon, Dec 3, 2018 at 4:55 PM Fox, Kevin M wrote: > > > It is a full base service already: > > https://governance.openstack.org/tc/reference/base-services.html > > > > Projects have been free to use it for quite some time. I'm not sure if any > > actually are yet though. > > > > It was decided not to put an abstraction layer on top as its pretty simple > > and commonly deployed. > > > > Thanks, > > Kevin > > ------------------------------ > > *From:* Julia Kreger [juliaashleykreger at gmail.com] > > *Sent:* Monday, December 03, 2018 3:53 PM > > *To:* Ben Nemec > > *Cc:* Davanum Srinivas; geguileo at redhat.com; > > openstack-discuss at lists.openstack.org > > *Subject:* Re: [all] Etcd as DLM > > > > I would like to slightly interrupt this train of thought for an > > unscheduled vision of the future! > > > > What if we could allow a component to store data in etcd3's key value > > store like how we presently use oslo_db/sqlalchemy? > > > > While I personally hope to have etcd3 as a DLM for ironic one day, review > > bandwidth permitting, it occurs to me that etcd3 could be leveraged for > > more than just DLM. If we have a common vision to enable data storage, I > > suspect it might help provide overall guidance as to how we want to > > interact with the service moving forward. > > > > -Julia > > > > On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec wrote: > > > >> Hi, > >> > >> I wanted to revisit this topic because it has come up in some downstream > >> discussions around Cinder A/A HA and the last time we talked about it > >> upstream was a year and a half ago[1]. There have certainly been changes > >> since then so I think it's worth another look. For context, the > >> conclusion of that session was: > >> > >> "Let's use etcd 3.x in the devstack CI, projects that are eventlet based > >> an use the etcd v3 http experimental API and those that don't can use > >> the etcd v3 gRPC API. Dims will submit a patch to tooz for the new > >> driver with v3 http experimental API. Projects should feel free to use > >> the DLM based on tooz+etcd3 from now on. Others projects can figure out > >> other use cases for etcd3." > >> > >> The main question that has come up is whether this is still the best > >> practice or if we should revisit the preferred drivers for etcd. Gorka > >> has gotten the grpc-based driver working in a Cinder driver that needs > >> etcd[2], so there's a question as to whether we still need the HTTP > >> etcd-gateway or if everything should use grpc. I will admit I'm nervous > >> about trying to juggle eventlet and grpc, but if it works then my only > >> argument is general misgivings about doing anything clever that involves > >> eventlet. :-) > >> > >> It looks like the HTTP API for etcd has moved out of experimental > >> status[3] at this point, so that's no longer an issue. 
There was some > >> vague concern from a downstream packaging perspective that the grpc > >> library might use a funky build system, whereas the etcd3-gateway > >> library only depends on existing OpenStack requirements. > >> > >> On the other hand, I don't know how much of a hassle it is to deploy and > >> manage a grpc-gateway. I'm kind of hoping someone has already been down > >> this road and can advise about what they found. > >> > >> Thanks. > >> > >> -Ben > >> > >> 1: https://etherpad.openstack.org/p/BOS-etcd-base-service > >> 2: > >> > >> https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111 > >> 3: https://github.com/grpc-ecosystem/grpc-gateway > >> > >> From ervikrant06 at gmail.com Tue Dec 4 11:28:50 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Tue, 4 Dec 2018 16:58:50 +0530 Subject: [neutron] [octavia] Manual deployment step for octavia on packstack Message-ID: Hello Team, Do we have the steps documented somewhere to install octavia manually like we have for zun [1]? I have done the openstack deployment using packstack and now I want to install the octavia manually on it. I have done the following steps: # groupadd --system octavia # useradd --home-dir "/var/lib/octavia" --create-home --system --shell /bin/false -g octavia octavia # cd /var/lib/octavia/ # git clone https://github.com/openstack/octavia.git # chown -R octavia:octavia * # pip install -r requirements.txt # python setup.py install # openstack user create --domain default --password-prompt octavia # openstack role add --project service --user octavia admin # openstack service create --name octavia --description "Octavia Service" "Octavia Load Balancing Servic" # openstack endpoint create --region RegionOne "Octavia Load Balancing Servic" public http://10.121.19.50:9876/v1 # openstack endpoint create --region RegionOne "Octavia Load Balancing Servic" admin http://10.121.19.50:9876/v1 # openstack endpoint create --region RegionOne "Octavia Load Balancing Servic" internal http://10.121.19.50:9876/v1 Made the following changes in the configuration file. [root at packstack1 octavia(keystone_admin)]# diff etc/octavia.conf /etc/octavia/octavia.conf 20,21c20,21 < # bind_host = 127.0.0.1 < # bind_port = 9876 --- > bind_host = 10.121.19.50 > bind_port = 9876 38c38 < # api_v2_enabled = True --- > # api_v2_enabled = False 64c64 < # connection = mysql+pymysql:// --- > connection = mysql+pymysql://octavia:octavia at 10.121.19.50/octavia 109c109 < # www_authenticate_uri = https://localhost:5000/v3 --- > www_authenticate_uri = https://10.121.19.50:5000/v3 111,114c111,114 < # auth_url = https://localhost:5000/v3 < # username = octavia < # password = password < # project_name = service --- > auth_url = https://10.121.19.50:35357/v3 > username = octavia > password = octavia > project_name = service 117,118c117,118 < # project_domain_name = Default < # user_domain_name = Default --- > project_domain_name = default > user_domain_name = default Generated the certificates using the script and copy the following certificates in octavia: [root at packstack1 octavia(keystone_admin)]# cd /etc/octavia/ [root at packstack1 octavia(keystone_admin)]# ls -lhrt total 28K -rw-r--r--. 1 octavia octavia 14K Dec 4 05:50 octavia.conf -rw-r--r--. 1 octavia octavia 1.7K Dec 4 05:55 client.key -rw-r--r--. 1 octavia octavia 989 Dec 4 05:55 client.csr -rw-r--r--. 1 octavia octavia 1.7K Dec 4 05:55 client.pem Can anyone please guide me about the further configuration? 
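In case it helps whoever answers: one quick, purely illustrative way to sanity-check that the octavia keystone user and the values from the diff above are usable is a small keystoneauth snippet like this (values copied from my config; adjust as needed):

from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(auth_url='https://10.121.19.50:35357/v3',
                   username='octavia', password='octavia',
                   project_name='service',
                   user_domain_name='default',
                   project_domain_name='default')
# verify=False may be needed if keystone is behind a self-signed
# certificate; drop it otherwise.
sess = session.Session(auth=auth, verify=False)
print(bool(sess.get_token()))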
[1] https://docs.openstack.org/zun/latest/install/controller-install.html Thanks & Regards, Vikrant Aggarwal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Dec 4 12:52:10 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Dec 2018 07:52:10 -0500 Subject: [all] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> Message-ID: <208e0023-cb4d-1b7f-9f31-b186a9c45f4c@gmail.com> On 12/03/2018 06:53 PM, Julia Kreger wrote: > I would like to slightly interrupt this train of thought for an > unscheduled vision of the future! > > What if we could allow a component to store data in etcd3's key value > store like how we presently use oslo_db/sqlalchemy? > > While I personally hope to have etcd3 as a DLM for ironic one day, > review bandwidth permitting, it occurs to me that etcd3 could be > leveraged for more than just DLM. If we have a common vision to enable > data storage, I suspect it might help provide overall guidance as to how > we want to interact with the service moving forward. Considering Ironic doesn't have a database schema that really uses the relational database properly, I think this is an excellent idea. [1] Ironic's database schema is mostly a bunch of giant JSON BLOB fields that are (ab)used by callers to add unstructured data pointing at a node's UUID. Which is pretty much what a KVS like etcd was made for, so I say, go for it. Best, -jay [1] The same can be said for quite a few tables in Nova's cell DB, namely compute_nodes, instance_info_caches, instance_metadata, instance_system_metadata, instance_extra, instance_actions, instance_action_events and pci_devices. And Nova's API DB has the aggregate_metadata, flavor_extra_specs, request_specs, build_requests and key_pairs tables, all of which are good candidates for non-relational storage. From jaypipes at gmail.com Tue Dec 4 13:00:53 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Dec 2018 08:00:53 -0500 Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest In-Reply-To: References: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> Message-ID: <52060b98-74f5-3a3a-1b51-8cba8aa7b00c@gmail.com> On 12/04/2018 06:13 AM, Chris Dent wrote: > On Tue, 4 Dec 2018, Ghanshyam Mann wrote: > >> Before we start or proceed with the discussion in QA, i would like to >> get the nova(placement) team opinion on adding the placement support >> in Tempest. Obviously, we should not duplicate the testing effort >> between what existing gabbi tests cover or what going to be added in >> Tempest which we can take care while adding the new tests. > > My feeling on this is that what should be showing up in tempest with > regard to placement tests are things that demonstrate and prove end > to end scenarios in which placement is involved as a critical part, > but is in the background. For example, things like the emerging minimal > bandwidth functionality that involves all three of nova, placement > and neutron. > > I don't think we need extensive testing in Tempest of the placement > API itself, as that's already well covered by the existing > functional tests, nor do I think it makes much sense to cover the > common scheduling scenarios between nova and placement as those are > also well covered and will continue to be covered even with > placement extracted [1]. > > Existing Tempests tests that do things like launching, resizing, > migrating servers already touch placement so may be sufficient. 
If > we wanted to make these more complete adding verification of > resource providers and their inventories before and after the tests > might be useful. Fully agree with Chris' assessment on this. Best, -jay From thierry at openstack.org Tue Dec 4 13:15:30 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 4 Dec 2018 14:15:30 +0100 Subject: [all] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> Message-ID: <3ebe82c8-ea86-566b-faff-b9f2fd22009a@openstack.org> Julia Kreger wrote: > Indeed it is a considered a base service, but I'm unaware of why it was > decided to not have any abstraction layer on top. That sort of defeats > the adoption of tooz as a standard in the community. Plus with the rest > of our code bases, we have a number of similar or identical patterns and > it would be ideal to have a single library providing the overall > interface for the purposes of consistency. Could you provide some more > background on that decision? Dims can probably summarize it better than I can do. When we were discussing adding a DLM as a base service, we had a lot of discussion at several events and on several threads weighing that option (a "tooz-compatible DLM" vs. "etcd"). IIRC the final decision had to do with leveraging specific etcd features vs. using the smallest common denominator, while we expect everyone to be deploying etcd. > I guess what I'd really like to see is an oslo.db interface into etcd3. Not sure that is what you're looking for, but the concept of an oslo.db interface to a key-value store was explored by a research team and the FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of distributing Nova data around. Their ROME oslo.db driver PoC was using Redis, but I think it could be adapted to use etcd quite easily. Some pointers: https://github.com/beyondtheclouds/rome https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revising-openstack-internals-to-operate-massively-distributed-clouds -- Thierry Carrez (ttx) From dprince at redhat.com Tue Dec 4 13:16:30 2018 From: dprince at redhat.com (Dan Prince) Date: Tue, 04 Dec 2018 08:16:30 -0500 Subject: [Nova] Increase size limits for user data In-Reply-To: References: Message-ID: <7a06df3739d66083a5042ad6346f77e1b8081f65.camel@redhat.com> On Tue, 2018-12-04 at 12:08 +0100, Flavio Percoco wrote: > Greetings, > > I've been working on a tool that requires creating CoreOS nodes on > OpenStack. Sadly, I've hit the user data limit, which has blocked the > work I'm doing. > > One big difference between CoreOS images and other cloud images out > there is that CoreOS images don't use cloud-init but a different tool > called Ignition[0], which uses JSON as its serialization format. > > The size of the configs that I need to pass to the instance is bigger > than the limit imposed by Nova. I've worked on reducing the size as > much as possible and even generating a compressed version of it but > the data is still bigger than the limit (144 kb vs 65kb). > > I'd like to understand better what the nature of the limit is (is it > because of the field size in the database? Is it purely an API limit? > Is it because it causes problems depending on the vendor? As far as I > can tell the limit is just being enforced by the API schema[1] and > not > the DB as it uses a MEDIUMTEXT field. > > I realize this has been asked before but I wanted to get a feeling of > the current opinion about this. 
Would the Nova team consider > increasing the limit of the API considering that more use cases like > this may be more common these days? I think EC2 only gives you 1/4 of what Nova does (16KB or so). So it would seem Nova is already being somewhat generous here. I don't see any harm in increasing it so long as the DB supports it (no DB schema change would be required). I wonder if pairing userdata with a token that allowed you to download the information from another (much larger) data source would be a better pattern here though. Then you could make it as large as you needed. Dan > > [0] https://coreos.com/ignition/docs/latest/ > [1] > https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L212-L215 > > Thanks, > Flavio > From ervikrant06 at gmail.com Tue Dec 4 13:19:41 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Tue, 4 Dec 2018 18:49:41 +0530 Subject: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed In-Reply-To: References: Message-ID: Hello Team, Any help on this issue? Thanks & Regards, Vikrant Aggarwal On Fri, Nov 30, 2018 at 9:13 AM Vikrant Aggarwal wrote: > Hi Feilong, > > Thanks for your reply. > > Kindly find the below outputs. > > [root at packstack1 ~]# rpm -qa | grep -i magnum > python-magnum-7.0.1-1.el7.noarch > openstack-magnum-conductor-7.0.1-1.el7.noarch > openstack-magnum-ui-5.0.1-1.el7.noarch > openstack-magnum-api-7.0.1-1.el7.noarch > puppet-magnum-13.3.1-1.el7.noarch > python2-magnumclient-2.10.0-1.el7.noarch > openstack-magnum-common-7.0.1-1.el7.noarch > > [root at packstack1 ~]# rpm -qa | grep -i heat > openstack-heat-ui-1.4.0-1.el7.noarch > openstack-heat-api-cfn-11.0.0-1.el7.noarch > openstack-heat-engine-11.0.0-1.el7.noarch > puppet-heat-13.3.1-1.el7.noarch > python2-heatclient-1.16.1-1.el7.noarch > openstack-heat-api-11.0.0-1.el7.noarch > openstack-heat-common-11.0.0-1.el7.noarch > > Thanks & Regards, > Vikrant Aggarwal > > > On Fri, Nov 30, 2018 at 2:44 AM Feilong Wang > wrote: > >> Hi Vikrant, >> >> Before we dig more, it would be nice if you can let us know the version >> of your Magnum and Heat. Cheers. >> >> >> On 30/11/18 12:12 AM, Vikrant Aggarwal wrote: >> >> Hello Team, >> >> Trying to deploy on K8 on fedora atomic. >> >> Here is the output of cluster template: >> ~~~ >> [root at packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum >> cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57 >> WARNING: The magnum client is deprecated and will be removed in a future >> release. >> Use the OpenStack client to avoid seeing this message. 
>> +-----------------------+--------------------------------------+
>> | Property              | Value                                |
>> +-----------------------+--------------------------------------+
>> | insecure_registry     | -                                    |
>> | labels                | {}                                   |
>> | updated_at            | -                                    |
>> | floating_ip_enabled   | True                                 |
>> | fixed_subnet          | -                                    |
>> | master_flavor_id      | -                                    |
>> | user_id               | 203617849df9490084dde1897b28eb53     |
>> | uuid                  | 16eb91f7-18fe-4ce3-98db-c732603f2e57 |
>> | no_proxy              | -                                    |
>> | https_proxy           | -                                    |
>> | tls_disabled          | False                                |
>> | keypair_id            | kubernetes                           |
>> | project_id            | 45a6706c831c42d5bf2da928573382b1     |
>> | public                | False                                |
>> | http_proxy            | -                                    |
>> | docker_volume_size    | 10                                   |
>> | server_type           | vm                                   |
>> | external_network_id   | external1                            |
>> | cluster_distro        | fedora-atomic                        |
>> | image_id              | f5954340-f042-4de3-819e-a3b359591770 |
>> | volume_driver         | -                                    |
>> | registry_enabled      | False                                |
>> | docker_storage_driver | devicemapper                         |
>> | apiserver_port        | -                                    |
>> | name                  | coe-k8s-template                     |
>> | created_at            | 2018-11-28T12:58:21+00:00            |
>> | network_driver        | flannel                              |
>> | fixed_network         | -                                    |
>> | coe                   | kubernetes                           |
>> | flavor_id             | m1.small                             |
>> | master_lb_enabled     | False                                |
>> | dns_nameserver        | 8.8.8.8                              |
>> +-----------------------+--------------------------------------+
>> ~~~
>> Found a couple of issues in the logs of the VM started by magnum.
>>
>> - etcd was not getting started because of incorrect permissions on the file
>>   "/etc/etcd/certs/server.key". This file is owned by root and has 0440
>>   permissions by default. Changed the permissions to 0444 so that etcd can
>>   read the file. After that etcd started successfully.
>>
>> - etcd DB doesn't contain anything:
>>
>> [root at kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r
>> [root at kube-cluster1-qobaagdob75g-master-0 ~]#
>>
>> - Flanneld is stuck in activating status.
>> ~~~ >> [root at kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld >> ● flanneld.service - Flanneld overlay address etcd agent >> Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; >> vendor preset: disabled) >> Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago >> Main PID: 6491 (flanneld) >> Tasks: 6 (limit: 4915) >> Memory: 4.7M >> CPU: 53ms >> CGroup: /system.slice/flanneld.service >> └─6491 /usr/bin/flanneld -etcd-endpoints=http://127.0.0.1:2379 >> -etcd-prefix=/atomic.io/network >> >> Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:44.569376 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:45.584532 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:46 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:46.646255 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:47 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:47.673062 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:48 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:48.686919 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:49 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:49.709136 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:50 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:50.729548 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:51 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:51.749425 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:52 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:52.776612 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> Nov 29 11:05:53 kube-cluster1-qobaagdob75g-master-0.novalocal >> flanneld[6491]: E1129 11:05:53.790418 6491 network.go:102] failed to >> retrieve network config: 100: Key not found (/atomic.io) [3] >> ~~~ >> >> - Continuously in the jouralctl logs following messages are printed. 
>> >> ~~~ >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >> kube-apiserver[6888]: F1129 11:06:39.338416 6888 server.go:269] Invalid >> Authorization Config: Unknown authorization mode Node specified >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: >> kube-apiserver.service: Main process exited, code=exited, status=255/n/a >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >> kube-scheduler[2540]: E1129 11:06:39.408272 2540 reflector.go:199] >> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463: Failed to >> list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: >> dial tcp 127.0.0.1:8080: getsockopt: connection refused >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >> kube-scheduler[2540]: E1129 11:06:39.444737 2540 reflector.go:199] >> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460: Failed to >> list *api.Pod: Get >> http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: >> dial tcp 127.0.0.1:8080: getsockopt: connection refused >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >> kube-scheduler[2540]: E1129 11:06:39.445793 2540 reflector.go:199] >> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to >> list *api.PersistentVolume: Get >> http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial >> tcp 127.0.0.1:8080: getsockopt: connection refused >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal audit[1]: >> SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 >> subj=system_u:system_r:init_t:s0 msg='unit=kube-apiserver comm="systemd" >> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: >> Failed to start Kubernetes API Server. >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: >> kube-apiserver.service: Unit entered failed state. >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal systemd[1]: >> kube-apiserver.service: Failed with result 'exit-code'. >> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >> kube-scheduler[2540]: E1129 11:06:39.611699 2540 reflector.go:199] >> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to >> list *extensions.ReplicaSet: Get >> http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: >> dial tcp 127.0.0.1:8080: getsockopt: connection refused >> ~~~ >> >> Any help on above issue is highly appreciated. 
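(One extra data point on the flanneld part above: the "Key not found (/atomic.io)" errors are consistent with the empty etcd shown earlier; flanneld is polling for a network config key that nothing has written yet. Normally the cluster bootstrap scripts populate it. Conceptually it is just something like the following, where the subnet and backend values are only made-up examples:)

import etcd  # python-etcd, for the etcd v2 API these flanneld errors (code 100) come from

client = etcd.Client(host='127.0.0.1', port=2379)
# Write the network config flanneld expects under its -etcd-prefix.
client.write('/atomic.io/network/config',
             '{"Network": "10.100.0.0/16", "Backend": {"Type": "vxlan"}}')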
>> >> Thanks & Regards, >> Vikrant Aggarwal >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> Cheers & Best regards, >> Feilong Wang (王飞龙) >> -------------------------------------------------------------------------- >> Senior Cloud Software Engineer >> Tel: +64-48032246 >> Email: flwang at catalyst.net.nz >> Catalyst IT Limited >> Level 6, Catalyst House, 150 Willis Street, Wellington >> -------------------------------------------------------------------------- >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Dec 4 13:47:37 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Dec 2018 08:47:37 -0500 Subject: [all] Etcd as DLM In-Reply-To: <3ebe82c8-ea86-566b-faff-b9f2fd22009a@openstack.org> References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <3ebe82c8-ea86-566b-faff-b9f2fd22009a@openstack.org> Message-ID: <0c5a4a87-2ee8-f6d4-8de0-f693d70df7ee@gmail.com> On 12/04/2018 08:15 AM, Thierry Carrez wrote: > Julia Kreger wrote: >> Indeed it is a considered a base service, but I'm unaware of why it >> was decided to not have any abstraction layer on top. That sort of >> defeats the adoption of tooz as a standard in the community. Plus with >> the rest of our code bases, we have a number of similar or identical >> patterns and it would be ideal to have a single library providing the >> overall interface for the purposes of consistency. Could you provide >> some more background on that decision? > > Dims can probably summarize it better than I can do. > > When we were discussing adding a DLM as a base service, we had a lot of > discussion at several events and on several threads weighing that option > (a "tooz-compatible DLM" vs. "etcd"). IIRC the final decision had to do > with leveraging specific etcd features vs. using the smallest common > denominator, while we expect everyone to be deploying etcd. > >> I guess what I'd really like to see is an oslo.db interface into etcd3. > > Not sure that is what you're looking for, but the concept of an oslo.db > interface to a key-value store was explored by a research team and the > FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of > distributing Nova data around. Their ROME oslo.db driver PoC was using > Redis, but I think it could be adapted to use etcd quite easily. Note that it's not appropriate to replace *all* use of an RDBMS in OpenStack-land with etcd. I hope I wasn't misunderstood in my statement earlier. Just *some* use cases are better served by a key/value store, and etcd3's transactions and watches are a great tool for solving *some* use cases -- but definitely not all :) Anyway, just making sure nobody's going to accuse me of saying OpenStack should abandon all RDBMS use for a KVS. 
:) Best, -jay > Some pointers: > > https://github.com/beyondtheclouds/rome > > https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revising-openstack-internals-to-operate-massively-distributed-clouds > > From bcafarel at redhat.com Tue Dec 4 14:01:17 2018 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Tue, 4 Dec 2018 15:01:17 +0100 Subject: [openstack-dev] [Neutron] Propose Hongbin Lu for Neutron core In-Reply-To: References: <493CB94D-19C3-4B61-8577-6BD98006B7F5@redhat.com> Message-ID: Not a core, but definitely +1 from me, I think most Neutron reviews I went through had comments or feedback from Hongbin Lu! On Tue, 4 Dec 2018 at 03:05, LIU Yulong wrote: > +1 > > On Tue, Dec 4, 2018 at 6:43 AM Slawomir Kaplonski > wrote: > >> Definitely big +1 from me! >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> >> > Wiadomość napisana przez Miguel Lavalle w dniu >> 03.12.2018, o godz. 23:14: >> > >> > Hi Stackers, >> > >> > I want to nominate Hongbin Lu (irc: hongbin) as a member of the Neutron >> core team. Hongbin started contributing to the OpenStack community in the >> Liberty cycle. Over time, he made great contributions in helping the >> community to better support containers by being core team member and / or >> PTL in projects such as Zun and Magnum. An then, fortune played in our >> favor and Hongbin joined the Neutron team in the Queens cycle. Since then, >> he has made great contributions such as filters validation in the ReST API, >> PF status propagation to to VFs (ports) in SR-IOV environments and leading >> the forking of RYU into the os-ken OpenStack project, which provides key >> foundational functionality for openflow. He is not a man who wastes words, >> but when he speaks up, his opinions are full of insight. This is reflected >> in the quality of his code reviews, which in number are on par with the >> leading members of the core team: >> http://stackalytics.com/?module=neutron-group. Even though Hongbin >> leaves in Toronto, he speaks Mandarin Chinese and was born and raised in >> China. This is a big asset in helping the Neutron team to incorporate use >> cases from that part of the world. >> > >> > Hongbin spent the past few months being mentored by Slawek Kaplonski, >> who has reported that Hongbin is ready for the challenge of being a core >> team member. I (and other core team members) concur. >> > >> > I will keep this nomination open for a week as customary. >> > >> > Thank you >> > >> > Miguel >> >> >> -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Dec 4 14:12:24 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 4 Dec 2018 09:12:24 -0500 Subject: [ops] last week's Ops Meetups team minutes Message-ID: Meeting ended Tue Nov 27 15:56:56 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:57 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-11-27-15.00.html 10:57 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-11-27-15.00.txt 10:57 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-11-27-15.00.log.html Next meeting in just under an hour from now on #openstack-operators Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From flavio at redhat.com Tue Dec 4 14:14:20 2018 From: flavio at redhat.com (Flavio Percoco) Date: Tue, 4 Dec 2018 15:14:20 +0100 Subject: [Nova] Increase size limits for user data In-Reply-To: <7a06df3739d66083a5042ad6346f77e1b8081f65.camel@redhat.com> References: <7a06df3739d66083a5042ad6346f77e1b8081f65.camel@redhat.com> Message-ID: On Tue, Dec 4, 2018 at 2:16 PM Dan Prince wrote: > > On Tue, 2018-12-04 at 12:08 +0100, Flavio Percoco wrote: > > Greetings, > > > > I've been working on a tool that requires creating CoreOS nodes on > > OpenStack. Sadly, I've hit the user data limit, which has blocked the > > work I'm doing. > > > > One big difference between CoreOS images and other cloud images out > > there is that CoreOS images don't use cloud-init but a different tool > > called Ignition[0], which uses JSON as its serialization format. > > > > The size of the configs that I need to pass to the instance is bigger > > than the limit imposed by Nova. I've worked on reducing the size as > > much as possible and even generating a compressed version of it but > > the data is still bigger than the limit (144 kb vs 65kb). > > > > I'd like to understand better what the nature of the limit is (is it > > because of the field size in the database? Is it purely an API limit? > > Is it because it causes problems depending on the vendor? As far as I > > can tell the limit is just being enforced by the API schema[1] and > > not > > the DB as it uses a MEDIUMTEXT field. > > > > I realize this has been asked before but I wanted to get a feeling of > > the current opinion about this. Would the Nova team consider > > increasing the limit of the API considering that more use cases like > > this may be more common these days? > > I think EC2 only gives you 1/4 of what Nova does (16KB or so). So it > would seem Nova is already being somewhat generous here. > Yeah, I checked before sending this and I thought that regardless of what EC2 is doing, I think it'd be nice for us to consider the use case. > I don't see any harm in increasing it so long as the DB supports it (no > DB schema change would be required). > > I wonder if pairing userdata with a token that allowed you to download > the information from another (much larger) data source would be a > better pattern here though. Then you could make it as large as you > needed. This is the current solution, which has allowed me to move forward with the work I'm doing. Regardless, I would like us to discuss this. I'd rather have the limit in Nova increased than adding a dependency on another service that would, very likely, only be used for this specific use case. Flavio From mihalis68 at gmail.com Tue Dec 4 14:17:07 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 4 Dec 2018 09:17:07 -0500 Subject: [ops] Ops Meetup proposal, Berlin, early 2019 Message-ID: Hello Everyone, Deutsche Telekom has kindly offered to host an OpenStack Operators Meetup early next year see https://etherpad.openstack.org/p/ops-meetup-venue-discuss-1st-2019-berlin This is the only current offer so far for the first Ops Meetup of 2019 so is very likely to be approved. If you have feedback on this please join us at our weekly IRC meeting or forward your feedback to an Ops Meetups Team member (see https://wiki.openstack.org/wiki/Ops_Meetups_Team for details of the current team). This proposal meets the consensus goal of Europe as the region for the 1st meetup, making it likely that the following one later in 2019 would be in North America. 
Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Tue Dec 4 14:29:14 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 4 Dec 2018 15:29:14 +0100 Subject: [all][FEMDC] Etcd as DLM In-Reply-To: <3ebe82c8-ea86-566b-faff-b9f2fd22009a@openstack.org> References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <3ebe82c8-ea86-566b-faff-b9f2fd22009a@openstack.org> Message-ID: On 12/4/18 2:15 PM, Thierry Carrez wrote: > Not sure that is what you're looking for, but the concept of an oslo.db > interface to a key-value store was explored by a research team and the > FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of > distributing Nova data around. Their ROME oslo.db driver PoC was using > Redis, but I think it could be adapted to use etcd quite easily. > > Some pointers: > > https://github.com/beyondtheclouds/rome > > https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revising-openstack-internals-to-operate-massively-distributed-clouds That's interesting, thank you! I'd like to remind though that Edge/Fog cases assume high latency, which is not the best fit for strongly consistent oslo.db data backends, like Etcd or Galera. Technically, it had been proved in the past a few years that only causal consistency, which is like eventual consistency but works much better for end users [0], is a way to go for Edge clouds. Except that there is *yet* a decent implementation exists of a causal consistent KVS! So my take is, if we'd ever want to redesign ORM transactions et al to CAS operations and KVS, it should be done not for Etcd in mind, but a future causal consistent solution. [0] https://www.usenix.org/system/files/login/articles/08_lloyd_41-43_online.pdf -- Best regards, Bogdan Dobrelya, Irc #bogdando From haleyb.dev at gmail.com Tue Dec 4 14:57:43 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 4 Dec 2018 09:57:43 -0500 Subject: [openstack-dev] [Neutron] Propose Nate Johnston for Neutron core In-Reply-To: References: Message-ID: Big +1 from me, keep up the great work Nate! -Brian On 12/3/18 4:38 PM, Miguel Lavalle wrote: > Hi Stackers, > > I want to nominate Nate Johnston (irc:njohnston) as a member of the > Neutron core team. Nate started contributing to Neutron back in the > Liberty cycle. One of the highlight contributions of that early period > is his collaboration with others to implement DSCP QoS rules > (https://review.openstack.org/#/c/251738/). After a hiatus of a few > cycles, we were lucky to have Nate come back to the community during the > Rocky cycle. Since then, he has been a driving force in the adoption in > Neutron of Oslo Versioned Objects, the "Run under Python 3 by default" > community wide initiative and the optimization of ports creation in bulk > to better support containerized workloads. He is a man with a wide range > of interests, who is not afraid of expressing his opinions in any of > them.  The quality and number of his code reviews during the Stein cycle > is on par with the leading members of the core team: > http://stackalytics.com/?module=neutron-group.  I especially admire his > ability to forcefully handle disagreement in a friendly and easy going > manner. > > On top of all that, he graciously endured me as his mentor over the past > few months. 
For all these reasons, I think he is ready to join the team > and we will be very lucky to have him as a fully voting core. > > I will keep this nomination open for a week as customary. > > Thank you > > Miguel > > From haleyb.dev at gmail.com Tue Dec 4 14:57:55 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 4 Dec 2018 09:57:55 -0500 Subject: [openstack-dev] [Neutron] Propose Hongbin Lu for Neutron core In-Reply-To: References: Message-ID: <726003a1-da0b-f542-00b2-a7900cc308d9@gmail.com> Big +1 from me as well! -Brian On 12/3/18 5:14 PM, Miguel Lavalle wrote: > Hi Stackers, > > I want to nominate Hongbin Lu (irc: hongbin) as a member of the Neutron > core team. Hongbin started contributing to the OpenStack community in > the Liberty cycle. Over time, he made great contributions in helping the > community to better support containers by being core team member and / > or PTL in projects such as Zun and Magnum. An then, fortune played in > our favor and Hongbin joined the Neutron team in the Queens cycle. Since > then, he has made great contributions such as filters validation in the > ReST API, PF status propagation to to VFs (ports) in SR-IOV environments > and leading the forking of RYU into the os-ken OpenStack project, which > provides key foundational functionality for openflow. He is not a man > who wastes words, but when he speaks up, his opinions are full of > insight. This is reflected in the quality of his code reviews, which in > number are on par with the leading members of the core team: > http://stackalytics.com/?module=neutron-group. Even though Hongbin > leaves in Toronto, he speaks Mandarin Chinese and was born and raised in > China. This is a big asset in helping the Neutron team to incorporate > use cases from that part of the world. > > Hongbin spent the past few months being mentored by Slawek Kaplonski, > who has reported that Hongbin is ready for the challenge of being a core > team member. I (and other core team members) concur. > > I will keep this nomination open for a week as customary. > > Thank you > > Miguel From mriedemos at gmail.com Tue Dec 4 15:09:30 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 4 Dec 2018 09:09:30 -0600 Subject: [Nova] Increase size limits for user data In-Reply-To: References: <7a06df3739d66083a5042ad6346f77e1b8081f65.camel@redhat.com> Message-ID: On 12/4/2018 8:14 AM, Flavio Percoco wrote: > This is the current solution, which has allowed me to move forward > with the work I'm doing. Regardless, I would like us to discuss this. > I'd rather have the limit in Nova increased than adding a dependency > on another service that would, very likely, only be used for this > specific use case. As far as the DB limit, it's not just the actual instances.user_data table that matters [1] it's also the build_requests.instance column [2] and the latter is the bigger issue since it's an entire Instance object, including the user_data plus whatever else (like the flavor, metadata and system_metadata) serialized into that single MEDIUMTEXT field. That's what worries me about blowing up that field if we increase the API limit on user_data. As for passing a handle to a thing in another service, we have talked a few times about integrating with a service like Glare to allow users to pass a baremetal node RAID config handle when creating a server and then nova would pull the spec down from Glare and pass it on to the virt driver. We could do the same thing here I would think. Glare is just a generic artifact repository right? 
I think that would be a better long-term solution for these problems rather than trying to make nova a blob store. [1] https://github.com/openstack/nova/blob/5f648dda49a6d5fe5ecfd7dddcb5f7dc3d6b51a6/nova/db/sqlalchemy/models.py#L288 [2] https://github.com/openstack/nova/blob/5f648dda49a6d5fe5ecfd7dddcb5f7dc3d6b51a6/nova/db/sqlalchemy/api_models.py#L250 -- Thanks, Matt From chkumar246 at gmail.com Tue Dec 4 15:10:42 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Tue, 4 Dec 2018 20:40:42 +0530 Subject: [tripleo][openstack-ansible] collaboration on os_tempest role update II Message-ID: Hello, It's more than 2 weeks, here is the another updates on what we have done till today i.e. on Dec 4th, 2018 on collaborating on os_tempest role [1]. * Add centos-7 job with support to python-tempestconf - https://review.openstack.org/#/c/619021/ * Enable support to pass override option to tempestconf - https://review.openstack.org/#/c/619986/ * Added task to list tempest tests - https://review.openstack.org/619024 Work still going on: * Use tempest run for generating subunit results - https://review.openstack.org/621584 * python-tempestconf extra cli support in os_tempest - https://review.openstack.org/620800 * Better blacklist and whitelist tests management - https://review.openstack.org/621605 In upcoming two weeks, we are planning to finish * complete python-tempestconf named_arguments support in os_tempest * Better blacklist and whitelist tests management. Here is the first update [2]. Have queries, Feel free to ping us on #tripleo or #openstack-ansible channel. Links: [1.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest [2.] http://lists.openstack.org/pipermail/openstack-dev/2018-November/136452.html Thanks, Chandan Kumar From jaypipes at gmail.com Tue Dec 4 15:27:23 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 4 Dec 2018 10:27:23 -0500 Subject: [Nova] Increase size limits for user data In-Reply-To: References: <7a06df3739d66083a5042ad6346f77e1b8081f65.camel@redhat.com> Message-ID: On 12/04/2018 10:09 AM, Matt Riedemann wrote: > On 12/4/2018 8:14 AM, Flavio Percoco wrote: >> This is the current solution, which has allowed me to move forward >> with the work I'm doing. Regardless, I would like us to discuss this. >> I'd rather have the limit in Nova increased than adding a dependency >> on another service that would, very likely, only be used for this >> specific use case. > > As far as the DB limit, it's not just the actual instances.user_data > table that matters [1] it's also the build_requests.instance column [2] > and the latter is the bigger issue since it's an entire Instance object, > including the user_data plus whatever else (like the flavor, metadata > and system_metadata) serialized into that single MEDIUMTEXT field. > That's what worries me about blowing up that field if we increase the > API limit on user_data. How prescient. :) http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000523.html Best, -jay From dmellado at redhat.com Tue Dec 4 15:27:26 2018 From: dmellado at redhat.com (Daniel Mellado) Date: Tue, 4 Dec 2018 16:27:26 +0100 Subject: [openstack-dev] [kuryr] kuryr libnetwork installation inside the vm is failing In-Reply-To: References: Message-ID: <70b7da13-282c-9b48-68b2-24a59b718ae3@redhat.com> If you check here our issue lies on subprocess32 dependencies, not really on anything kuryr related. Please make sure you're able to install this package and that you match all deps here before retrying! Thanks! 
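For illustration, a minimal sketch of the kind of dependency fix being suggested here: subprocess32 builds a C extension, so a compiler and the Python 2 headers usually have to be present before pip can install it. The package names below assume a CentOS/RHEL 7 guest and are not taken from this thread; adjust them for your distribution.

    # install the build prerequisites for the C extension
    sudo yum install -y gcc python-devel
    # then retry the dependency that failed
    pip install subprocess32
    # and re-run the kuryr-libnetwork installation afterwards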
On 30/11/18 5:04, Vikrant Aggarwal wrote: >   Running setup.py install for subprocess32 ... error -------------- next part -------------- A non-text attachment was scrubbed... Name: 0x13DDF774E05F5B85.asc Type: application/pgp-keys Size: 2208 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From msm at redhat.com Tue Dec 4 15:34:08 2018 From: msm at redhat.com (Michael McCune) Date: Tue, 4 Dec 2018 10:34:08 -0500 Subject: [sigs] Monthly Update In-Reply-To: References: Message-ID: hi Melvin, i think this is a nice idea to help increase communications but i would like to share a few things i observed over the time we published the API-SIG newsletter. 1. sometimes not much changes. we noticed this trend as the SIG become more and more mature, basically there were fewer and fewer weekly items to report. perhaps a monthly cadence will be better for collecting input from the various SIGs, but just a heads up that it is possible for some groups to have slow movement between updates. 2. tough to gauge readership. this was always something i was curious about, we never had many (or really any) responses to our newsletters. i did receive a few personal notes of appreciation, which were tremendous boosts, but aside from that it was difficult to know if our message was getting across. 3. are we talking to ourselves? this kinda follows point 2, depending on how the newsletter is received it is entirely possible that we were just publishing our newsletter for ourselves. i like to think this wasn't the case, but i think it's something to be aware of as we start to ask SIGs to take time to contribute. mind you, i don't think any of these points should dissuade the community from publishing a SIG newsletter. i wanted to share my experiences just to help build awareness as we launch this new effort. peace o/ On Mon, Dec 3, 2018 at 6:16 PM Melvin Hillsman wrote: > > Hi everyone, > > During the Forum we discussed one simple way we could move forward to hopefully get more visibility and activity within SIGs. Here is a proposal for such a step. Send out a monthly email to openstack-discuss with the following information from each SIG captured via etherpad [0] > > 1. What success(es) have you had this month as a SIG? > 2. What should we know about the SIG for the next month? > 3. What would you like help (hands) or feedback (eyes) on? > > Besides the ML, other places this could be re-used in whole or part is on social media, SU Blog, etc. Thoughts? > > [0] https://etherpad.openstack.org/p/sig-updates > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 From bodenvmw at gmail.com Tue Dec 4 15:38:29 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Tue, 4 Dec 2018 08:38:29 -0700 Subject: [dev][neutron] neutron-lib work items Message-ID: Awhile back we asked for volunteers willing to help with the neutron-lib effort [1]. Now that we have some volunteers [2], I've gone ahead and added some high-level work items to that list [2]. For those who choose to drive one of the items, please put your name next to it so that we don't step on each others toes during the effort. Feel free to reach out to me for questions/details. Also a friendly reminder to all; please help out this effort by reviewing neutron-lib patches as they arrive in your project's review queue. 
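To make the ask concrete, here is a hedged sketch of what a typical neutron-lib consumption patch tends to look like in a consuming project: imports of rehomed code move from neutron to neutron_lib, and old NeutronManager lookups move to the plugins directory. The exact modules touched depend on the punch-list item; this is only the general shape, not a specific work item from the etherpad.

    # Before: importing from neutron itself
    #   from neutron.common import exceptions
    #   from neutron.plugins.common import constants
    #   plugin = manager.NeutronManager.get_plugin()

    # After: importing the rehomed code from neutron-lib
    from neutron_lib import constants
    from neutron_lib import exceptions
    from neutron_lib.plugins import directory

    plugin = directory.get_plugin()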
Thanks [1] http://lists.openstack.org/pipermail/openstack-dev/2018-September/135279.html [2] https://etherpad.openstack.org/p/neutron-lib-volunteers-and-punch-list From thierry at openstack.org Tue Dec 4 15:45:20 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 4 Dec 2018 16:45:20 +0100 Subject: [tc] Adapting office hours schedule to demand Message-ID: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> Hi, A while ago, the Technical Committee designated specific hours in the week where members would make extra effort to be around on #openstack-tc on IRC, so that community members looking for answers to their questions or wanting to engage can find a time convenient for them and a critical mass of TC members around. We currently have 3 weekly spots: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays But after a few months it appears that: 1/ nobody really comes on channel at office hour time to ask questions. We had a questions on the #openstack-tc IRC channel, but I wouldn't say people take benefit of the synced time 2/ some office hours (most notably the 01:00 UTC on Wednesdays, but also to a lesser extent the 09:00 UTC on Tuesdays) end up just being a couple of TC members present So the schedule is definitely not reaching its objectives, and as such may be a bit overkill. I was also wondering if this is not a case where the offer is hurting the demand -- by having so many office hour spots around, nobody considers them special. Should we: - Reduce office hours to one or two per week, possibly rotating times - Dump the whole idea and just encourage people to ask questions at any time on #openstack-tc, and get asynchronous answers - Keep it as-is, it still has the side benefit of triggering spikes of TC member activity Thoughts ? -- Thierry Carrez (ttx) From doug at doughellmann.com Tue Dec 4 16:04:13 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 4 Dec 2018 11:04:13 -0500 Subject: [Release-job-failures][release][tripleo] Tag of openstack/tripleo-heat-templates failed In-Reply-To: References: Message-ID: > On Dec 4, 2018, at 11:00 AM, zuul at openstack.org wrote: > > Build failed. > > - publish-openstack-releasenotes-python3 http://logs.openstack.org/7b/7baea84f1168d72b0eb8901da47d5f4efbaccff8/tag/publish-openstack-releasenotes-python3/2d5a462/ : POST_FAILURE in 5m 02s > - publish-openstack-releasenotes http://logs.openstack.org/7b/7baea84f1168d72b0eb8901da47d5f4efbaccff8/tag/publish-openstack-releasenotes/6088133/ : SUCCESS in 5m 04s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures It looks like tripleo-heat-templates still has both release notes jobs configured, so there was a race condition in publishing. Doug From zufar at onf-ambassador.org Tue Dec 4 16:10:58 2018 From: zufar at onf-ambassador.org (Zufar Dhiyaulhaq) Date: Tue, 4 Dec 2018 23:10:58 +0700 Subject: Octavia Production Deployment Confused Message-ID: Hi, I want to implement Octavia service in OpenStack Queens. I am stuck on two-step : 1. Create Octavia User I am trying to create Octavia user with this command, is this the right way? 
openstack user create octavia --domain default --password octavia openstack role add --user octavia --project services admin openstack service create --name octavia --description "OpenStack Octavia" load-balancer openstack endpoint create --region RegionOne octavia public http://10.60.60.10:9876 openstack endpoint create --region RegionOne octavia internal http://10.60.60.10:9876 openstack endpoint create --region RegionOne octavia admin http://10.60.60.10:9876 2. Load Balancer Network Configuration "Add appropriate routing to/from the ‘lb-mgmt-net’ such that egress is allowed, and the controller (to be created later) can talk to hosts on this network." I don't know how to route from controller host into a private network, is any specific command for doing that? following tutorial from https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#running-octavia-in-production . Thank You Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Tue Dec 4 16:15:46 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Tue, 4 Dec 2018 23:15:46 +0700 Subject: Octavia Production Deployment Confused Message-ID: Hi, I want to implement Octavia service in OpenStack Queens. I am stuck on two-step : 1. Create Octavia User I am trying to create Octavia user with this command, is this the right way? openstack user create octavia --domain default --password octavia openstack role add --user octavia --project services admin openstack service create --name octavia --description "OpenStack Octavia" load-balancer openstack endpoint create --region RegionOne octavia public http://10.60.60.10:9876 openstack endpoint create --region RegionOne octavia internal http://10.60.60.10:9876 openstack endpoint create --region RegionOne octavia admin http://10.60.60.10:9876 2. Load Balancer Network Configuration "Add appropriate routing to/from the ‘lb-mgmt-net’ such that egress is allowed, and the controller (to be created later) can talk to hosts on this network." I don't know how to route from controller host into a private network, is any specific command for doing that? following tutorial from https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#running-octavia-in-production . Thank You Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Dec 4 16:23:44 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 4 Dec 2018 11:23:44 -0500 Subject: [Release-job-failures][release][tripleo] Tag of openstack/tripleo-heat-templates failed In-Reply-To: References: Message-ID: I'm looking into it now. On Tue, Dec 4, 2018 at 11:11 AM Doug Hellmann wrote: > > > > On Dec 4, 2018, at 11:00 AM, zuul at openstack.org wrote: > > > > Build failed. 
> > > > - publish-openstack-releasenotes-python3 > http://logs.openstack.org/7b/7baea84f1168d72b0eb8901da47d5f4efbaccff8/tag/publish-openstack-releasenotes-python3/2d5a462/ > : POST_FAILURE in 5m 02s > > - publish-openstack-releasenotes > http://logs.openstack.org/7b/7baea84f1168d72b0eb8901da47d5f4efbaccff8/tag/publish-openstack-releasenotes/6088133/ > : SUCCESS in 5m 04s > > > > _______________________________________________ > > Release-job-failures mailing list > > Release-job-failures at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > > It looks like tripleo-heat-templates still has both release notes jobs > configured, so there was a race condition in publishing. > > Doug > > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Dec 4 16:42:05 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 4 Dec 2018 10:42:05 -0600 Subject: [all] Etcd as DLM In-Reply-To: <20181204100812.og33xegl2fxmoo6g@localhost> References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <20181204100812.og33xegl2fxmoo6g@localhost> Message-ID: Copying Mike Bayer since he's our resident DB expert. One more comment inline. On 12/4/18 4:08 AM, Gorka Eguileor wrote: > On 03/12, Julia Kreger wrote: >> Indeed it is a considered a base service, but I'm unaware of why it was >> decided to not have any abstraction layer on top. That sort of defeats the >> adoption of tooz as a standard in the community. Plus with the rest of our >> code bases, we have a number of similar or identical patterns and it would >> be ideal to have a single library providing the overall interface for the >> purposes of consistency. Could you provide some more background on that >> decision? >> >> I guess what I'd really like to see is an oslo.db interface into etcd3. >> >> -Julia > > Hi, > > I think that some projects won't bother with the etcd interface since it > would require some major rework of the whole service to get it working. I don't think Julia was suggesting that every project move to etcd, just that we make it available for projects that want to use it this way. > > Take Cinder for example. We do complex conditional updates that, as far > as I know, cannot be satisfied with etcd's Compare-and-Swap > functionality. We could modify all our code to make it support both > relational databases and key-value stores, but I'm not convinced it > would be worthwhile considering the huge effort it would require. > > I believe there are other OpenStack projects that have procedural code > stored on the database, which would probably be hard to make compatible > with key-value stores. > > Cheers, > Gorka. > >> >> On Mon, Dec 3, 2018 at 4:55 PM Fox, Kevin M wrote: >> >>> It is a full base service already: >>> https://governance.openstack.org/tc/reference/base-services.html >>> >>> Projects have been free to use it for quite some time. I'm not sure if any >>> actually are yet though. >>> >>> It was decided not to put an abstraction layer on top as its pretty simple >>> and commonly deployed. 
>>> >>> Thanks, >>> Kevin >>> ------------------------------ >>> *From:* Julia Kreger [juliaashleykreger at gmail.com] >>> *Sent:* Monday, December 03, 2018 3:53 PM >>> *To:* Ben Nemec >>> *Cc:* Davanum Srinivas; geguileo at redhat.com; >>> openstack-discuss at lists.openstack.org >>> *Subject:* Re: [all] Etcd as DLM >>> >>> I would like to slightly interrupt this train of thought for an >>> unscheduled vision of the future! >>> >>> What if we could allow a component to store data in etcd3's key value >>> store like how we presently use oslo_db/sqlalchemy? >>> >>> While I personally hope to have etcd3 as a DLM for ironic one day, review >>> bandwidth permitting, it occurs to me that etcd3 could be leveraged for >>> more than just DLM. If we have a common vision to enable data storage, I >>> suspect it might help provide overall guidance as to how we want to >>> interact with the service moving forward. >>> >>> -Julia >>> >>> On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec wrote: >>> >>>> Hi, >>>> >>>> I wanted to revisit this topic because it has come up in some downstream >>>> discussions around Cinder A/A HA and the last time we talked about it >>>> upstream was a year and a half ago[1]. There have certainly been changes >>>> since then so I think it's worth another look. For context, the >>>> conclusion of that session was: >>>> >>>> "Let's use etcd 3.x in the devstack CI, projects that are eventlet based >>>> an use the etcd v3 http experimental API and those that don't can use >>>> the etcd v3 gRPC API. Dims will submit a patch to tooz for the new >>>> driver with v3 http experimental API. Projects should feel free to use >>>> the DLM based on tooz+etcd3 from now on. Others projects can figure out >>>> other use cases for etcd3." >>>> >>>> The main question that has come up is whether this is still the best >>>> practice or if we should revisit the preferred drivers for etcd. Gorka >>>> has gotten the grpc-based driver working in a Cinder driver that needs >>>> etcd[2], so there's a question as to whether we still need the HTTP >>>> etcd-gateway or if everything should use grpc. I will admit I'm nervous >>>> about trying to juggle eventlet and grpc, but if it works then my only >>>> argument is general misgivings about doing anything clever that involves >>>> eventlet. :-) >>>> >>>> It looks like the HTTP API for etcd has moved out of experimental >>>> status[3] at this point, so that's no longer an issue. There was some >>>> vague concern from a downstream packaging perspective that the grpc >>>> library might use a funky build system, whereas the etcd3-gateway >>>> library only depends on existing OpenStack requirements. >>>> >>>> On the other hand, I don't know how much of a hassle it is to deploy and >>>> manage a grpc-gateway. I'm kind of hoping someone has already been down >>>> this road and can advise about what they found. >>>> >>>> Thanks. 
>>>> >>>> -Ben >>>> >>>> 1: https://etherpad.openstack.org/p/BOS-etcd-base-service >>>> 2: >>>> >>>> https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111 >>>> 3: https://github.com/grpc-ecosystem/grpc-gateway >>>> >>>> From Kevin.Fox at pnnl.gov Tue Dec 4 16:49:27 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 4 Dec 2018 16:49:27 +0000 Subject: [all][FEMDC] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <3ebe82c8-ea86-566b-faff-b9f2fd22009a@openstack.org>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C24CFD6@EX10MBOX03.pnnl.gov> This thread is shifting a bit... I'd like to throw in another related idea we're talking about it... There is storing data in key/value stores and there is also storing data in document stores. Kubernetes uses a key/value store and builds a document store out of it. All its api then runs through a document store, not a key/value store. This model has proven to be quite powerful. I wonder if an abstraction over document stores would be useful? wrapping around k8s crds would be interesting. A lightweight openstack without mysql would have some interesting benifits. Thanks, Kevin ________________________________________ From: Bogdan Dobrelya [bdobreli at redhat.com] Sent: Tuesday, December 04, 2018 6:29 AM To: openstack-discuss at lists.openstack.org Subject: Re: [all][FEMDC] Etcd as DLM On 12/4/18 2:15 PM, Thierry Carrez wrote: > Not sure that is what you're looking for, but the concept of an oslo.db > interface to a key-value store was explored by a research team and the > FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of > distributing Nova data around. Their ROME oslo.db driver PoC was using > Redis, but I think it could be adapted to use etcd quite easily. > > Some pointers: > > https://github.com/beyondtheclouds/rome > > https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revising-openstack-internals-to-operate-massively-distributed-clouds That's interesting, thank you! I'd like to remind though that Edge/Fog cases assume high latency, which is not the best fit for strongly consistent oslo.db data backends, like Etcd or Galera. Technically, it had been proved in the past a few years that only causal consistency, which is like eventual consistency but works much better for end users [0], is a way to go for Edge clouds. Except that there is *yet* a decent implementation exists of a causal consistent KVS! So my take is, if we'd ever want to redesign ORM transactions et al to CAS operations and KVS, it should be done not for Etcd in mind, but a future causal consistent solution. [0] https://www.usenix.org/system/files/login/articles/08_lloyd_41-43_online.pdf -- Best regards, Bogdan Dobrelya, Irc #bogdando From zufardhiyaulhaq at gmail.com Tue Dec 4 16:00:23 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Tue, 4 Dec 2018 23:00:23 +0700 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: Hi all, I am facing this issue again, I try to add this configuration but still, the node is going to compute1. 
[scheduler] driver = filter_scheduler host_manager = host_manager [filter_scheduler] available_filters=nova.scheduler.filters.all_filters enabled_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,CoreFilter use_baremetal_filters=False weight_classes=nova.scheduler.weights.all_weighers [placement] randomize_allocation_candidates = true thank you. Best Regards, Zufar Dhiyaulhaq On Tue, Dec 4, 2018 at 3:59 PM Zufar Dhiyaulhaq wrote: > Hi all, I am facing this issue again, > > I try to add this configuration but still, the node is going to compute1. > > [scheduler] > driver = filter_scheduler > host_manager = host_manager > > [filter_scheduler] > available_filters=nova.scheduler.filters.all_filters > > enabled_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,CoreFilter > use_baremetal_filters=False > weight_classes=nova.scheduler.weights.all_weighers > > [placement] > randomize_allocation_candidates = true > > thank you. > > > Best Regards, > Zufar Dhiyaulhaq > > > On Tue, Dec 4, 2018 at 3:55 AM Mike Carden wrote: > >> >>> Presuming you are deploying Rocky or Queens, >>> >> >> Yep, it's Queens. >> >> >>> >>> It goes in the nova.conf file under the [placement] section: >>> >>> randomize_allocation_candidates = true >>> >> >> In triple-o land it seems like the config may need to be somewhere like >> nova-scheduler.yaml and laid down via a re-deploy. >> >> Or something. >> >> The nova_scheduler runs in a container on a 'controller' host. >> >> -- >> MC >> >> _______________________________________________ >> Mailing list: >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack >> Post to : openstack at lists.openstack.org >> Unsubscribe : >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Tue Dec 4 16:07:21 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Tue, 4 Dec 2018 23:07:21 +0700 Subject: Octavia Production Deployment Confused Message-ID: Hi, I want to implement Octavia service in OpenStack Queens. I am stuck on two-step : 1. Create Octavia User I am trying to create Octavia user with this command, is this the right way? openstack user create octavia --domain default --password octavia openstack role add --user octavia --project services admin openstack service create --name octavia --description "OpenStack Octavia" load-balancer openstack endpoint create --region RegionOne octavia public http://10.60.60.10:9876 openstack endpoint create --region RegionOne octavia internal http://10.60.60.10:9876 openstack endpoint create --region RegionOne octavia admin http://10.60.60.10:9876 2. Load Balancer Network Configuration "Add appropriate routing to/from the ‘lb-mgmt-net’ such that egress is allowed, and the controller (to be created later) can talk to hosts on this network." I don't know how to route from controller host into a private network, is any specific command for doing that? following tutorial from https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#running-octavia-in-production . Thank You Best Regards, Zufar Dhiyaulhaq -------------- next part -------------- An HTML attachment was scrubbed... 
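For readers hitting the same lb-mgmt-net question, a rough sketch of one common way to make that network reachable from the controller, modeled on what the Octavia DevStack plugin does with its o-hm0 port. All names, IDs and addresses below are placeholders/assumptions rather than values from this thread; a routed provider network or VLAN that the controllers can already reach works as well and avoids the manual plumbing.

    # create a port on lb-mgmt-net for the controller / health manager
    openstack port create --network lb-mgmt-net \
      --fixed-ip subnet=lb-mgmt-subnet,ip-address=172.16.0.2 \
      octavia-health-manager-listen-port

    # plug an internal OVS port on br-int bound to that Neutron port
    sudo ovs-vsctl -- --may-exist add-port br-int o-hm0 \
      -- set Interface o-hm0 type=internal \
      -- set Interface o-hm0 external-ids:iface-status=active \
      -- set Interface o-hm0 external-ids:attached-mac=<port MAC> \
      -- set Interface o-hm0 external-ids:iface-id=<port UUID>

    # give the host interface the port's address so the controller can reach the amphorae
    sudo ip addr add 172.16.0.2/24 dev o-hm0
    sudo ip link set o-hm0 up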
URL: From dms at danplanet.com Tue Dec 4 16:55:34 2018 From: dms at danplanet.com (Dan Smith) Date: Tue, 04 Dec 2018 08:55:34 -0800 Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest In-Reply-To: <52060b98-74f5-3a3a-1b51-8cba8aa7b00c@gmail.com> (Jay Pipes's message of "Tue, 4 Dec 2018 08:00:53 -0500") References: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> <52060b98-74f5-3a3a-1b51-8cba8aa7b00c@gmail.com> Message-ID: > On 12/04/2018 06:13 AM, Chris Dent wrote: >> On Tue, 4 Dec 2018, Ghanshyam Mann wrote: >> >>> Before we start or proceed with the discussion in QA, i would like >>> to get the nova(placement) team opinion on adding the placement >>> support in Tempest. Obviously, we should not duplicate the testing >>> effort between what existing gabbi tests cover or what going to be >>> added in Tempest which we can take care while adding the new tests. >> >> My feeling on this is that what should be showing up in tempest with >> regard to placement tests are things that demonstrate and prove end >> to end scenarios in which placement is involved as a critical part, >> but is in the background. For example, things like the emerging minimal >> bandwidth functionality that involves all three of nova, placement >> and neutron. >> >> I don't think we need extensive testing in Tempest of the placement >> API itself, as that's already well covered by the existing >> functional tests, nor do I think it makes much sense to cover the >> common scheduling scenarios between nova and placement as those are >> also well covered and will continue to be covered even with >> placement extracted [1]. >> >> Existing Tempests tests that do things like launching, resizing, >> migrating servers already touch placement so may be sufficient. If >> we wanted to make these more complete adding verification of >> resource providers and their inventories before and after the tests >> might be useful. > > Fully agree with Chris' assessment on this. I don't disagree either. However, I do think that there are cases where it may make sense to be _able_ to hit the placement endpoint from tempest in order to verify that certain things are happening, even in a scenario that involves other services. For example, if we're testing nova's request filter stuff, we may very well need to hit the placement endpoint to validate that aggregate information is being mirrored, and/or that adding a trait to a provider properly results in some scheduling behavior. So, if the question is "should a tempest test be able to hit the placement endpoint?" I would say "yes". If the question is "should tempest have tests that only hit placement to validate proper behavior", I'd agree that functional tests in placement probably cover that sufficiently. I *think* that gmann's question in the email was actually about placement endpoint support, which is the former, and I think is probably legit. --Dan From doug at doughellmann.com Tue Dec 4 16:56:37 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Dec 2018 11:56:37 -0500 Subject: [Release-job-failures] Release of openstack/networking-ansible failed In-Reply-To: References: Message-ID: zuul at openstack.org writes: > Build failed. 
> > - release-openstack-python http://logs.openstack.org/62/62d263ef737957dce0e83526268f20ae4bdd3b21/release/release-openstack-python/dd28025/ : SUCCESS in 4m 09s > - announce-release http://logs.openstack.org/62/62d263ef737957dce0e83526268f20ae4bdd3b21/release/announce-release/44a3623/ : SUCCESS in 4m 16s > - propose-update-constraints http://logs.openstack.org/62/62d263ef737957dce0e83526268f20ae4bdd3b21/release/propose-update-constraints/3b72b02/ : SUCCESS in 3m 49s > - trigger-readthedocs-webhook http://logs.openstack.org/62/62d263ef737957dce0e83526268f20ae4bdd3b21/release/trigger-readthedocs-webhook/c912b3a/ : FAILURE in 1m 57s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures It looks like the networking-ansible project needs to do some work on the readthedocs integration. -- Doug From Kevin.Fox at pnnl.gov Tue Dec 4 16:57:36 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 4 Dec 2018 16:57:36 +0000 Subject: [all] Etcd as DLM In-Reply-To: <208e0023-cb4d-1b7f-9f31-b186a9c45f4c@gmail.com> References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> , <208e0023-cb4d-1b7f-9f31-b186a9c45f4c@gmail.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C24D014@EX10MBOX03.pnnl.gov> I've asked for it a while ago, but will ask again since the subject came back up. :) If ironic could target k8s crds for storage, it would be significantly easier to deploy an under cloud. Between k8s api, the k8s cluster api and ironic's api, a truly self hosting k8s could be possible. Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Tuesday, December 04, 2018 4:52 AM To: openstack-discuss at lists.openstack.org Subject: Re: [all] Etcd as DLM On 12/03/2018 06:53 PM, Julia Kreger wrote: > I would like to slightly interrupt this train of thought for an > unscheduled vision of the future! > > What if we could allow a component to store data in etcd3's key value > store like how we presently use oslo_db/sqlalchemy? > > While I personally hope to have etcd3 as a DLM for ironic one day, > review bandwidth permitting, it occurs to me that etcd3 could be > leveraged for more than just DLM. If we have a common vision to enable > data storage, I suspect it might help provide overall guidance as to how > we want to interact with the service moving forward. Considering Ironic doesn't have a database schema that really uses the relational database properly, I think this is an excellent idea. [1] Ironic's database schema is mostly a bunch of giant JSON BLOB fields that are (ab)used by callers to add unstructured data pointing at a node's UUID. Which is pretty much what a KVS like etcd was made for, so I say, go for it. Best, -jay [1] The same can be said for quite a few tables in Nova's cell DB, namely compute_nodes, instance_info_caches, instance_metadata, instance_system_metadata, instance_extra, instance_actions, instance_action_events and pci_devices. And Nova's API DB has the aggregate_metadata, flavor_extra_specs, request_specs, build_requests and key_pairs tables, all of which are good candidates for non-relational storage. 
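As a concrete reference for the "tooz+etcd3" option being debated in this thread, here is a minimal sketch of taking a distributed lock through tooz with an etcd3 backend. The endpoint URL, member name and lock name are assumptions; whether the gRPC driver (etcd3) or the HTTP gateway driver (etcd3+http) is the better choice is exactly the open question above.

    from tooz import coordination

    # 'etcd3+http' selects the etcd3-gateway driver; plain 'etcd3' uses gRPC.
    coordinator = coordination.get_coordinator(
        'etcd3+http://127.0.0.1:2379', b'conductor-1')
    coordinator.start(start_heart=True)

    lock = coordinator.get_lock(b'node-provisioning-lock')
    with lock:
        # critical section: only one member holds the lock at a time
        pass

    coordinator.stop()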
From doug at doughellmann.com Tue Dec 4 17:06:58 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 04 Dec 2018 12:06:58 -0500 Subject: [Release-job-failures][release][tripleo] Tag of openstack/tripleo-heat-templates failed In-Reply-To: References: Message-ID: Emilien Macchi writes: > I'm looking into it now. > > On Tue, Dec 4, 2018 at 11:11 AM Doug Hellmann wrote: > >> >> >> > On Dec 4, 2018, at 11:00 AM, zuul at openstack.org wrote: >> > >> > Build failed. >> > >> > - publish-openstack-releasenotes-python3 >> http://logs.openstack.org/7b/7baea84f1168d72b0eb8901da47d5f4efbaccff8/tag/publish-openstack-releasenotes-python3/2d5a462/ >> : POST_FAILURE in 5m 02s >> > - publish-openstack-releasenotes >> http://logs.openstack.org/7b/7baea84f1168d72b0eb8901da47d5f4efbaccff8/tag/publish-openstack-releasenotes/6088133/ >> : SUCCESS in 5m 04s >> > >> > _______________________________________________ >> > Release-job-failures mailing list >> > Release-job-failures at lists.openstack.org >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures >> >> It looks like tripleo-heat-templates still has both release notes jobs >> configured, so there was a race condition in publishing. >> >> Doug >> >> >> > > -- > Emilien Macchi The fix for this is to stop running the python2 version of the publish job on the tag event by removing that job from the project-template. See https://review.openstack.org/622430 -- Doug From emilien at redhat.com Tue Dec 4 17:09:05 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 4 Dec 2018 12:09:05 -0500 Subject: [Release-job-failures][release][tripleo] Tag of openstack/tripleo-heat-templates failed In-Reply-To: References: Message-ID: On Tue, Dec 4, 2018 at 12:07 PM Doug Hellmann wrote: > The fix for this is to stop running the python2 version of the publish > job on the tag event by removing that job from the project-template. See > https://review.openstack.org/622430 > Thanks for your help Doug! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mranga at gmail.com Tue Dec 4 17:18:17 2018 From: mranga at gmail.com (M. Ranganathan) Date: Tue, 4 Dec 2018 12:18:17 -0500 Subject: Octavia Production Deployment Confused In-Reply-To: References: Message-ID: I did this manually. -- Create an ovs port on br-int -- Create a neutron port using the ovs port that you just created. -- Assign the ip address of the neutron port to the ovs port -- Use ip netns exec to assign a route in the router namespace of the LoadBalancer network. If there's somebody who has a better way to do this, please share. Ranga On Tue, Dec 4, 2018 at 11:16 AM Zufar Dhiyaulhaq wrote: > Hi, I want to implement Octavia service in OpenStack Queens. > > I am stuck on two-step : > 1. Create Octavia User > > I am trying to create Octavia user with this command, is this the right > way? > > openstack user create octavia --domain default --password octavia > openstack role add --user octavia --project services admin > > openstack service create --name octavia --description "OpenStack Octavia" > load-balancer > openstack endpoint create --region RegionOne octavia public > http://10.60.60.10:9876 > openstack endpoint create --region RegionOne octavia internal > http://10.60.60.10:9876 > openstack endpoint create --region RegionOne octavia admin > http://10.60.60.10:9876 > > 2. 
Load Balancer Network Configuration > "Add appropriate routing to/from the ‘lb-mgmt-net’ such that egress is > allowed, and the controller (to be created later) can talk to hosts on this > network." > > I don't know how to route from controller host into a private network, is > any specific command for doing that? > > following tutorial from > https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#running-octavia-in-production > . > > Thank You > > Best Regards, > Zufar Dhiyaulhaq > -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Tue Dec 4 17:25:21 2018 From: gael.therond at gmail.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Tue, 4 Dec 2018 18:25:21 +0100 Subject: Octavia Production Deployment Confused In-Reply-To: References: Message-ID: You can do it with any routed network that you’ll load as a provider network too. Way more simpler, no need for ovs manipulation, just get your network team to give you a vlan both available from computer node and controller plan. It can be a network subnet and vlan completely unknown from you controller as long as you get an intermediary equipment that route your traffic or that you add the proper route on your controllers. Le mar. 4 déc. 2018 à 18:21, M. Ranganathan a écrit : > I did this manually. > > -- Create an ovs port on br-int > -- Create a neutron port using the ovs port that you just created. > -- Assign the ip address of the neutron port to the ovs port > -- Use ip netns exec to assign a route in the router namespace of the > LoadBalancer network. > > If there's somebody who has a better way to do this, please share. > > Ranga > > On Tue, Dec 4, 2018 at 11:16 AM Zufar Dhiyaulhaq > wrote: > >> Hi, I want to implement Octavia service in OpenStack Queens. >> >> I am stuck on two-step : >> 1. Create Octavia User >> >> I am trying to create Octavia user with this command, is this the right >> way? >> >> openstack user create octavia --domain default --password octavia >> openstack role add --user octavia --project services admin >> >> openstack service create --name octavia --description "OpenStack Octavia" >> load-balancer >> openstack endpoint create --region RegionOne octavia public >> http://10.60.60.10:9876 >> openstack endpoint create --region RegionOne octavia internal >> http://10.60.60.10:9876 >> openstack endpoint create --region RegionOne octavia admin >> http://10.60.60.10:9876 >> >> 2. Load Balancer Network Configuration >> "Add appropriate routing to/from the ‘lb-mgmt-net’ such that egress is >> allowed, and the controller (to be created later) can talk to hosts on this >> network." >> >> I don't know how to route from controller host into a private network, is >> any specific command for doing that? >> >> following tutorial from >> https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#running-octavia-in-production >> . >> >> Thank You >> >> Best Regards, >> Zufar Dhiyaulhaq >> > > > -- > M. Ranganathan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Dec 4 17:38:27 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 4 Dec 2018 11:38:27 -0600 Subject: [tc][all] Train Community Goals Message-ID: Hi all, The purpose of this thread is to have a more focused discussion about what we'd like to target for Train community goals, bootstrapped with the outcomes from the session in Berlin [0]. 
During the session, we went through each item as a group and let the person who added it share why they thought it would be a good community goal candidate for the next release. Most goals have feedback captured in etherpad describing next steps, but the following stuck out as top contenders from the session (rated by upvotes): 1. Moving legacy clients to python-openstackclient 2. Cleaning up resources when deleting a project 3. Service-side health checks I don't think I missed any goals from the session, but if I did, please let me know and I'll add it to the list so that we can discuss it here. Does anyone have strong opinions either way about the goals listed above? [0] https://etherpad.openstack.org/p/BER-t-series-goals -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Dec 4 17:43:28 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 Dec 2018 09:43:28 -0800 Subject: [all] Etcd as DLM In-Reply-To: <0c5a4a87-2ee8-f6d4-8de0-f693d70df7ee@gmail.com> References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <3ebe82c8-ea86-566b-faff-b9f2fd22009a@openstack.org> <0c5a4a87-2ee8-f6d4-8de0-f693d70df7ee@gmail.com> Message-ID: On Tue, Dec 4, 2018 at 5:53 AM Jay Pipes wrote: > On 12/04/2018 08:15 AM, Thierry Carrez wrote: > > Julia Kreger wrote: > >> Indeed it is a considered a base service, but I'm unaware of why it > >> was decided to not have any abstraction layer on top. That sort of > >> defeats the adoption of tooz as a standard in the community. Plus with > >> the rest of our code bases, we have a number of similar or identical > >> patterns and it would be ideal to have a single library providing the > >> overall interface for the purposes of consistency. Could you provide > >> some more background on that decision? > > > > Dims can probably summarize it better than I can do. > > > > When we were discussing adding a DLM as a base service, we had a lot of > > discussion at several events and on several threads weighing that option > > (a "tooz-compatible DLM" vs. "etcd"). IIRC the final decision had to do > > with leveraging specific etcd features vs. using the smallest common > > denominator, while we expect everyone to be deploying etcd. > > > >> I guess what I'd really like to see is an oslo.db interface into etcd3. > > > > Not sure that is what you're looking for, but the concept of an oslo.db > > interface to a key-value store was explored by a research team and the > > FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of > > distributing Nova data around. Their ROME oslo.db driver PoC was using > > Redis, but I think it could be adapted to use etcd quite easily. > > Note that it's not appropriate to replace *all* use of an RDBMS in > OpenStack-land with etcd. I hope I wasn't misunderstood in my statement > earlier. > > Just *some* use cases are better served by a key/value store, and > etcd3's transactions and watches are a great tool for solving *some* use > cases -- but definitely not all :) > > Anyway, just making sure nobody's going to accuse me of saying OpenStack > should abandon all RDBMS use for a KVS. :) > > Best, > -jay > Definitely not interpreted that way and not what I was thinking either. I definitely see there is value, and your thoughts do greatly confirm that at least I'm not the only crazy person thinking it could be a good idea^(TM). 
> > Some pointers: > > > > https://github.com/beyondtheclouds/rome > > > > > https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revising-openstack-internals-to-operate-massively-distributed-clouds > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: 

From cdent+os at anticdent.org Tue Dec 4 17:47:56 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Dec 2018 17:47:56 +0000 (GMT) Subject: [tc] Adapting office hours schedule to demand In-Reply-To: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> References: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> Message-ID: 

On Tue, 4 Dec 2018, Thierry Carrez wrote: > Should we: > > - Reduce office hours to one or two per week, possibly rotating times > > - Dump the whole idea and just encourage people to ask questions at any time > on #openstack-tc, and get asynchronous answers > > - Keep it as-is, it still has the side benefit of triggering spikes of TC > member activity

One more option, for completeness

- Drop to one or two per week, at fixed and well-known times, and encourage more use of email for engaging with and within the TC. We keep saying that email is the only reliable medium we have and then keep talking about ways to use IRC more.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent

From juliaashleykreger at gmail.com Tue Dec 4 17:50:13 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 Dec 2018 09:50:13 -0800 Subject: [all][FEMDC] Etcd as DLM In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C24CFD6@EX10MBOX03.pnnl.gov> References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <3ebe82c8-ea86-566b-faff-b9f2fd22009a@openstack.org> <1A3C52DFCD06494D8528644858247BF01C24CFD6@EX10MBOX03.pnnl.gov> Message-ID: 

I like where this is going! Comment in-line.

On Tue, Dec 4, 2018 at 8:53 AM Fox, Kevin M wrote: > This thread is shifting a bit... > > I'd like to throw in another related idea we're talking about it... > > There is storing data in key/value stores and there is also storing data > in document stores. > > Kubernetes uses a key/value store and builds a document store out of it. > All its api then runs through a document store, not a key/value store. > > This model has proven to be quite powerful. I wonder if an abstraction > over document stores would be useful? wrapping around k8s crds would be > interesting. A lightweight openstack without mysql would have some > interesting benifits. >

I suspect the code would largely be the same: if we support key/value, then I would hope we could leverage all of that simply by opening/connecting to the document store. Perhaps this is worth some further investigation, but ultimately what I've been thinking is that if we _could_ allow some, not all, services to operate in a completely decoupled fashion, we would better enable them to support OpenStack and neighboring technologies. Ironic is kind of the obvious starting point since everyone needs to start with some baremetal somewhere if they are building their own infrastructure up.
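To make the "document store built on a key/value store" idea a bit more concrete, here is a small hypothetical sketch using the python-etcd3 (gRPC) client: one JSON document per node, keyed by name, updated with a compare-and-swap transaction so concurrent writers do not clobber each other. The key layout and field names are invented for illustration and are not an actual Ironic schema.

    import json
    import etcd3

    client = etcd3.client(host='127.0.0.1', port=2379)

    # write the initial document (hypothetical key space)
    key = '/nodes/node-0001'
    client.put(key, json.dumps({'driver': 'ipmi', 'provision_state': 'available'}))

    # read-modify-write guarded by the key's mod revision (compare-and-swap)
    current, meta = client.get(key)
    updated = dict(json.loads(current), provision_state='deploying')
    succeeded, _ = client.transaction(
        compare=[client.transactions.mod(key) == meta.mod_revision],
        success=[client.transactions.put(key, json.dumps(updated))],
        failure=[],
    )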
> > Thanks, > Kevin > ________________________________________ > From: Bogdan Dobrelya [bdobreli at redhat.com] > Sent: Tuesday, December 04, 2018 6:29 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [all][FEMDC] Etcd as DLM > > On 12/4/18 2:15 PM, Thierry Carrez wrote: > > Not sure that is what you're looking for, but the concept of an oslo.db > > interface to a key-value store was explored by a research team and the > > FEMDC WG (Fog/Edge/Massively-distributed Clouds), in the context of > > distributing Nova data around. Their ROME oslo.db driver PoC was using > > Redis, but I think it could be adapted to use etcd quite easily. > > > > Some pointers: > > > > https://github.com/beyondtheclouds/rome > > > > > https://www.openstack.org/videos/austin-2016/a-ring-to-rule-them-all-revising-openstack-internals-to-operate-massively-distributed-clouds > > That's interesting, thank you! I'd like to remind though that Edge/Fog > cases assume high latency, which is not the best fit for strongly > consistent oslo.db data backends, like Etcd or Galera. Technically, it > had been proved in the past a few years that only causal consistency, > which is like eventual consistency but works much better for end users > [0], is a way to go for Edge clouds. Except that there is *yet* a decent > implementation exists of a causal consistent KVS! > > So my take is, if we'd ever want to redesign ORM transactions et al to > CAS operations and KVS, it should be done not for Etcd in mind, but a > future causal consistent solution. > > [0] > > https://www.usenix.org/system/files/login/articles/08_lloyd_41-43_online.pdf > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Dec 4 17:53:30 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 4 Dec 2018 11:53:30 -0600 Subject: [tc] Adapting office hours schedule to demand In-Reply-To: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> References: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> Message-ID: On Tue, Dec 4, 2018 at 9:47 AM Thierry Carrez wrote: > Hi, > > A while ago, the Technical Committee designated specific hours in the > week where members would make extra effort to be around on #openstack-tc > on IRC, so that community members looking for answers to their questions > or wanting to engage can find a time convenient for them and a critical > mass of TC members around. We currently have 3 weekly spots: > > - 09:00 UTC on Tuesdays > - 01:00 UTC on Wednesdays > - 15:00 UTC on Thursdays > > But after a few months it appears that: > > 1/ nobody really comes on channel at office hour time to ask questions. > We had a questions on the #openstack-tc IRC channel, but I wouldn't say > people take benefit of the synced time > > 2/ some office hours (most notably the 01:00 UTC on Wednesdays, but also > to a lesser extent the 09:00 UTC on Tuesdays) end up just being a couple > of TC members present > Conversely, office hours are relatively low bandwidth, IMO. Unless there is an active discussion, I'm usually working on something else and checking the channel intermittently. That said, I don't think it's a huge inconveince to monitor the channel for an hour in the event someone does swing by. > > So the schedule is definitely not reaching its objectives, and as such > may be a bit overkill. 
I was also wondering if this is not a case where > the offer is hurting the demand -- by having so many office hour spots > around, nobody considers them special. > > Should we: > > - Reduce office hours to one or two per week, possibly rotating times > > - Dump the whole idea and just encourage people to ask questions at any > time on #openstack-tc, and get asynchronous answers > I completely agree that we should encourage people to come talk to us at any time, but I think office hours hold us accountable for being present. We're doing our part by making sure we're available. > > - Keep it as-is, it still has the side benefit of triggering spikes of > TC member activity > Thoughts ? > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Dec 4 17:54:59 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 4 Dec 2018 12:54:59 -0500 Subject: [tripleo] [validations] Replacement of undercloud_conf module Message-ID: Hi folks, Context: https://bugs.launchpad.net/tripleo/+bug/1805825 Today I randomly found this module: https://github.com/openstack/tripleo-validations/blob/d21e7fa30f9be15bb980279197dc6c5206f38a38/validations/library/undercloud_conf.py And it gave me 2 ideas, as I think we don't need this module and would consider it as technical debt at this point: - it's relying on a file, which isn't super safe and flexible IMHO. - a lot of validations rely on Hieradata which isn't safe either, we saw it with the Containerized Undercloud. So I propose that: - we export require parameters via the Heat templates into Ansible variables - we consume these variables from tripleo-validations (can be in the inventory or a dedicated var file for validations). So that way we remove the dependency on having the undercloud.conf access from Mistral Executor and also stop depending on Puppet (hieradata) which we don't guarantee to be here in the future. Can someone from TripleO validations team ack this email and put this work in your backlog? If you need assistance we're happy to help but I believe this is an important effort to avoid technical debt here. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Dec 4 18:06:27 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 4 Dec 2018 10:06:27 -0800 Subject: Berlin Summit recap on Edge Message-ID: <5BCA051D-9368-4C1E-B993-FB83525F5296@gmail.com> Hi, I hope those of you who came to the Berlin Summit had a great event, a good trip home and got some rest, caught up with work and those who went on vacation had a great time. Hereby I would like to give a short summary to everyone either as a reminder or as a package to help you catch up briefly with what happened around edge in Berlin. As you most probably know we had a dedicated track for Edge Computing with numerous presentations and panel discussions at the conference which were recorded. If you would like to catch up or see some sessions again please visit the OpenStack website[1] for the videos. In parallel to the conference we were having the Forum taking place with 40-minute-long working sessions for developers, operators and users to meet and discuss new requirements, challenges and pain points to address. We had quite a few sessions around edge which you’ll find a brief recap of here. 
I would like to start with the OSF Edge Computing Group also Edge WG’s sessions, if you are new to the activities of this group you may want to read my notes[2] on the Denver PTG to catch up on the community’s and the group's work on defining reference architectures for edge use cases. During the Forum we continued to discuss the Minimum Viable Product (MVP) architecture topic[3] that we’ve started at the last PTG. As the group and attendees had limited amount of time available for the topic we concluded on some basics and agreed on action items to follow up on. The session attendees agreed that the MVP architecture is an important first step and we will keep its scope limited to the current OpenStack services listed on the wiki capturing the details[4]. While there is interest in adding further services such as Ironic or Qinling we will discuss those in this context in upcoming phases. The Edge WG is actively working on capturing edge computing use cases in order to understand better the requirements and to work together with OpenStack and StarlingX projects on design and implementation work based the input the groups has been collecting[5]. We had a session about use cases[6] to identify which are the ones the group should focus on with immediate actions where we got vRAN and edge cloud, uCPE and industrial control with most interest in the room to work on. The group is actively working on the map the MVP architecture options to the use cases identified by the group and to get more details on the ones we identified during the Forum session. If you are interested in participating in these activities please see the details[7] of the group’s weekly meetings. While the MVP architecture work is focusing on a minimalistic view to provide a reference architecture with the covered services prepared for edge use cases there is work ongoing in parallel in several OpenStack projects. You can find notes on the Forum etherpads[8][9][10] on the progress of projects such as Cinder, Ironic, Kolla-Ansible and TripleO. The general consensus of the project discussions were that the services are in a good shape when edge requirements are concerned and there is a good view on the way forward like improving availability zone functionality or remote management of bare metal nodes. With all the work ongoing in the projects as well as in the Edge WG the expectation is that we will be able to easily move to the next phases with the MVP architectures work when the working group is ready. Both the group and the projects are looking for contributors for both identifying further requirements, use cases or do the implementation and testing work. Testing is an area that will be crucial for edge and we are looking into both cross-project and cross-community collaborations for that for instance with OPNFV and Akraino. While we didn’t have a Keystone specific Forum session for edge this time a small group of people came together to discuss next steps with federation. We are converging towards some generic feature additions to Keystone based on the Athenz plugin from Oath. You can read a Keystone summary[11] for the week in Berlin from Lance Bragsad including plans related to edge. We had a couple of sessions at the Summit about StarlingX both in the conference part as well as the Forum. You can check out videos such as the project update[12] and other relevant sessions[13] among the Summit videos. 
As the StarlingX community is working closely with the Edge WG as well as the relevant OpenStack project teams at the Forum we had sessions that were focusing on some specific items for planning future work and understanding requirements better for the project. The team had a session on IoT[14] to talk about the list of devices to consider and the requirements systems need to address in this space. The session also identified a collaboration option between StarlingX, IoTronic[15] and Ironic when it comes to realizing and testing use cases. With putting more emphasis on containers at the edge the team also had a session on containerized application requirements[16] with a focus on Kubernetes clusters. During the session we talked about areas like container networking, multi-tenancy, persistent storage and a few more to see what options we have for them and what is missing today to have the particular area covered. The StarlingX community is focusing more on containerization in the upcoming releases for which the feedback and ideas during the session are very important to have. One more session to mention is the ‘Ask me anything about StarlingX’ one at the Forum where experts from the community offered help in general to people who are new and/or have questions about the project. The session was well attended and questions were focusing more on the practical angles like footprint or memory consumption and a few more specific questions that were beyond generic interest and overview of the project. These were the activities in high level around edge without going into too much detail on either of the topics as that would be a way longer e-mail. :) I hope you found interesting topics and useful pointers for more information to catch up on. If you would like to participate in these activities you can dial-in to the Edge WG weekly calls[17] or weekly Use cases calls[18] or check the StarlingX sub-project team calls[19] and further material on the website[20] about how to contribute or jump on IRC for OpenStack project team meetings[21] in the area of your interest. Please let me know if you have any questions to either of the above items. 
:) Thanks and Best Regards, Ildikó (IRC: ildikov) [1] https://www.openstack.org/videos/ [2] http://lists.openstack.org/pipermail/edge-computing/2018-September/000432.html [3] https://etherpad.openstack.org/p/BER-MVP-architecture-for-edge [4] https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures [5] https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases [6] https://etherpad.openstack.org/p/BER-edge-use-cases-and-requirements [7] https://wiki.openstack.org/wiki/Edge_Computing_Group [8] https://etherpad.openstack.org/p/BER-Cinder_at_the_Edge [9] https://etherpad.openstack.org/p/BER-ironic-edge [10] https://etherpad.openstack.org/p/BER-tripleo-undercloud-edge [11] https://www.lbragstad.com/blog/openstack-summit-berlin-recap [12] https://www.openstack.org/videos/berlin-2018/starlingx-project-update-6-months-in-the-life-of-a-new-open-source-project [13] https://www.openstack.org/videos/search?search=starlingx [14] https://etherpad.openstack.org/p/BER-integrating-iot-device-mgmt-with-edge-cloud [15] https://github.com/openstack/iotronic [16] https://etherpad.openstack.org/p/BER-containerized-app-reqmts-on-kubernetes-at-edge [17] https://www.openstack.org/assets/edge/OSF-Edge-Computing-Group-Weekly-Calls.ics [18] https://www.openstack.org/assets/edge/OSF-Edge-WG-Use-Cases-Weekly-Calls.ics [19] https://wiki.openstack.org/wiki/Starlingx/Meetings [20] https://www.starlingx.io [21] http://eavesdrop.openstack.org From juliaashleykreger at gmail.com Tue Dec 4 18:08:34 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 Dec 2018 10:08:34 -0800 Subject: [tc][all] Train Community Goals In-Reply-To: References: Message-ID: Off-hand, I think there needs to be a few more words agreed upon for each in terms of what each item practically means. In other words, does #1 mean each python-clientlibrary's OSC plugin is ready to rock and roll, or we talking about everyone rewriting all client interactions in to openstacksdk, and porting existing OSC plugins use that different python sdk. In other words, some projects could find it very easy or that they are already done, where as others could find themselves with a huge lift that is also dependent upon review bandwidth that is outside of their control or influence which puts such a goal at risk if we try and push too hard. -Julia On Tue, Dec 4, 2018 at 9:43 AM Lance Bragstad wrote: > Hi all, > > The purpose of this thread is to have a more focused discussion about what > we'd like to target for Train community goals, bootstrapped with the > outcomes from the session in Berlin [0]. > > During the session, we went through each item as a group and let the > person who added it share why they thought it would be a good community > goal candidate for the next release. Most goals have feedback captured in > etherpad describing next steps, but the following stuck out as top > contenders from the session (rated by upvotes): > > 1. Moving legacy clients to python-openstackclient > 2. Cleaning up resources when deleting a project > 3. Service-side health checks > > I don't think I missed any goals from the session, but if I did, please > let me know and I'll add it to the list so that we can discuss it here. > > Does anyone have strong opinions either way about the goals listed above? > > [0] https://etherpad.openstack.org/p/BER-t-series-goals > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juliaashleykreger at gmail.com Tue Dec 4 18:14:45 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 Dec 2018 10:14:45 -0800 Subject: [tc] Adapting office hours schedule to demand In-Reply-To: References: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> Message-ID: I am +1 to cdent's option below. The bottom line for every TC member is that most of our work days can only hit one or maybe two office hours per week. I think we should re-poll, possibly adjust times if necessary but trying to keep those times with-in the window that works. Every TC election we can re-poll for best times, and change accordingly. Three times a week just seems like it may not be useful or beneficial. Twice is much easier to schedule. One will likely conflict for some of us regardless of what we try to do. -Julia On Tue, Dec 4, 2018 at 9:51 AM Chris Dent wrote: > On Tue, 4 Dec 2018, Thierry Carrez wrote: > > > Should we: > > > > - Reduce office hours to one or two per week, possibly rotating times > > > > - Dump the whole idea and just encourage people to ask questions at any > time > > on #openstack-tc, and get asynchronous answers > > > > - Keep it as-is, it still has the side benefit of triggering spikes of > TC > > member activity > > One more option, for completeness > > - Drop to one or two per week, at fixed and well-known times, and > encourage more use of email for engaging with and within the TC. > > We keep saying that email is the only reliable medium we have and > then keep talking about ways to use IRC more. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Dec 4 18:30:56 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Dec 2018 18:30:56 +0000 (GMT) Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest In-Reply-To: References: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> <52060b98-74f5-3a3a-1b51-8cba8aa7b00c@gmail.com> Message-ID: On Tue, 4 Dec 2018, Dan Smith wrote: >> On 12/04/2018 06:13 AM, Chris Dent wrote: >>> Existing Tempests tests that do things like launching, resizing, >>> migrating servers already touch placement so may be sufficient. If >>> we wanted to make these more complete adding verification of >>> resource providers and their inventories before and after the tests >>> might be useful. [snip] > I don't disagree either. However, I do think that there are cases where > it may make sense to be _able_ to hit the placement endpoint from > tempest in order to verify that certain things are happening, even in a > scenario that involves other services. [snip] Based on conversation with Dan in IRC, we decided it might be useful to clarify that Dan and I are in agreement. It had seemed to me that he was saying something different from me, but we're both basically saying "yes, tempest needs to be able to talk to placement to confirm what it's holding because that's useful sometimes" and "no, tempest doesn't need to verify the workings of placement api itself". Which boils out to this: > I *think* that gmann's > question in the email was actually about placement endpoint support, > which is the former, and I think is probably legit. Yes. 
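For anyone following the thread who wants to see what that kind of check looks like, here is a minimal sketch of querying the placement endpoint directly. It uses plain keystoneauth1 rather than Tempest's service clients, and the auth URL and credentials are placeholders rather than anything from this thread:

# Minimal sketch only: placeholder credentials; any recent placement
# microversion is fine for these read-only calls.
from keystoneauth1.identity import v3
from keystoneauth1 import session

auth = v3.Password(auth_url='http://controller/identity/v3',
                   username='admin', password='secret', project_name='admin',
                   user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)
placement = {'service_type': 'placement'}
headers = {'OpenStack-API-Version': 'placement 1.17'}

# List resource providers and pick one compute node to inspect.
rps = sess.get('/resource_providers', endpoint_filter=placement,
               headers=headers).json()['resource_providers']
rp_uuid = rps[0]['uuid']

# Inventories and usages for that provider: the sort of data a scenario
# test could assert on before and after booting or migrating a server.
inv = sess.get('/resource_providers/%s/inventories' % rp_uuid,
               endpoint_filter=placement, headers=headers).json()['inventories']
usages = sess.get('/resource_providers/%s/usages' % rp_uuid,
                  endpoint_filter=placement, headers=headers).json()['usages']
print(inv.get('VCPU'), usages)

A Tempest-side client would wrap the same handful of GETs (resource providers, inventories, usages, allocations) rather than exercising the full placement API surface.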
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jimmy at openstack.org Tue Dec 4 18:33:48 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 04 Dec 2018 12:33:48 -0600 Subject: [ptl] [openstack-map] New tags for OpenStack Project Map Message-ID: <5C06C88C.1010303@openstack.org> Following up on this thread from ttx [1], we are continuing to enhance the content on the OpenStack Project Map [2],[3] through new tags that are managed through the openstack-map repo [4]. * video (id, title, description) - This controls the Project Update video you see on the project page. I've just pushed a review adding all of the Project Updates for Berlin [5] * depends-on - Things that are a strong dependency. This should be used ONLY if your component requires another one to work (e.g. nova -> glance) * see-also - Should list things that are a week dependency or an adjacent relevant thing * support-teams (name: link) - This is meant to give credit to adjacent projects that aren't necessary to run the software (e.g. Oslo, i18n, Docs). We are still determining how best to implement this tag, but we feel it's important to give some credit to these other teams that are so critical in helping to maintain, support, and build OpenStack If you have some time, please go to the git repo [4] and review your project and help flesh out these new tags (or update old ones) so we can display them in the Project Map [2]. Cheers, Jimmy [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000178.html [2] https://www.openstack.org/software/project-navigator/openstack-components [3] https://www.openstack.org/assets/software/projectmap/openstack-map.pdf [4] https://git.openstack.org/cgit/openstack/openstack-map/ [5] https://review.openstack.org/622485 From mdulko at redhat.com Tue Dec 4 18:39:14 2018 From: mdulko at redhat.com (=?UTF-8?Q?Micha=C5=82?= Dulko) Date: Tue, 04 Dec 2018 19:39:14 +0100 Subject: [dev] [os-vif][nova][neutron][kuryr] os-vif serialisation, persistence and version compatiablity moving to v2.0 In-Reply-To: References: Message-ID: On Fri, 2018-11-30 at 20:57 +0000, Sean Mooney wrote: > Hi > > I wanted to raise the topic of os-vif, its public api > what client are allowed to do and how that related to serialisation, > persistence of os-vif objects and version compatibility. > > sorry this is a bit of a long mail so here is a TL;DR > and I have separated the mail into a background and future section. > > - we are planning to refine our data model in os-vf for hardware offloads. > https://review.openstack.org/#/c/607610/ > - we have been talking on and off about using os-vif on for nova neutron > port binding and negotiation. to do that we will need to support > serialisation in the future. > - there are some uses of os-vif in kuryr-kubenetes that are currently > out of contract that we would like to correct as they may break in the future. > https://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/objects/vif.py > - what would we like to see in an os-vif 2.0 > > > background > ========== > Before I get to the topic of os-vif I wanted to take an aside to set > the basis of how I wanted to frame the topic. > > So in C++ land, there is an international standard that defines the language > and the standard library for the C++ language. 
one of the stipulations of > that standard is that users of the standard library are allowed to construct > types defined by the standard library and you can call functions and methods > defined by the library. C++ has different semantics then python so the implication > of the statement you can call functions may not be apparent. In C++ it is technically > undefined behaviour to take the address of a standard library function you may only call it. > This allows the library author to implement it as a macro,lambda, function, function object > or any other callable. So, in other words, there is a clear delineation between the public > API and host that API is implemented. > > Bringing this back to os-vif we also have a finite public API and seperate internal > implementation and I want to talk about internal changes > that may affect observable features from the client perspective. specifically > things that client can depend on and things they cannot. > > first of all the public api of os-vif is defined by > https://github.com/openstack/os-vif/blob/master/os_vif/plugin.py > > this module defined the base class of all plugins and is the only > class a plugin must inherit from and the only class defined in os-vif > that a plugin or client of os-vif may inherit from outside of the > exception classes defined in > https://github.com/openstack/os-vif/blob/master/os_vif/exception.py. > > the second part of os-vif public api is a collection of data structures defined in > https://github.com/openstack/os-vif/tree/master/os_vif/objects > that clients are expected to _construct_ and plugins may _read_ from. > > I highlight construct and read as that is the extent to which we currently promise > to clients or plugins. specifically today our version compatibility for objects > is partially enabled by our use of oslo versioned objects. since os-vif objects > are OVOs they have additional methods such as obj_to_primitive and obj_from_primitive > that exist but that are _not_ part of the public api. > > Because serialisation (out side of serialisation via privsep) and persistence of os-vif > longer then the lifetime of a function call, is not today part of the supported use cases > of os-vif, that has enabled us to be less strict with version compatible then we would > have to be If we supported those features. > > For example, we have in general added obj_make_compatible methods to objects when > we modify os-vif objects https://github.com/openstack/os-vif/blob/master/os_vif/objects/vif.py#L120-L142 > but we have often discussed should we add these functions as there are no current uses. > > future > ====== > That brings me to the topic of how os-vif is used in kuryr and how we would like to use it in neutron in > the future. Part of the reason we do not allow clients to extend VIF objects is to ensure that > os-vif plugins are not required on controller nodes and so we can control the set of vocabulary types that > are exchanged and version them. To help us move to passing os-vif object via the api > we need to remove the out of tree vif types that in use in kuryr > and move them to os-vif. > https://github.com/openstack/kuryr-kubernetes/blob/master/kuryr_kubernetes/objects/vif.py#L47-L82 > we had mentioned this 2 or 3 releases ago but the change that prompted this email > was https://github.com/openstack/kuryr-kubernetes/commit/7cc187806b42fee5ea660f86d33ad2f59b009754 I think we can handle that through port profiles. Anyway it's definitely a smaller issue that we can figure out. 
Moreover the change you mention actually allows us to do modifications in the data structure we save with keeping the backward compatibility, so definitely we have some options. > i would like to know how kuryr-kubernetes is currently using the serialised os-vif objects. > - are they being stored? > - to disk > - in a db > - in etcd We save it in K8s objects annotations, through K8s API, which saves it into the etcd. We assume no access to the etcd itself, so communication happens only through K8s mechanisms. > - are they bining sent to other services No, but we read the VIF's from both kuryr-kubernetes and kuryr-daemon services. > - are how are you serialising them. > - obj_to_primitive > - other It's obj_to_primitive(), see [1]. > - are you depending on obj_make_compatible There's no code for that yet, but we definitely planned that to happen to enable us to do changes in o.vo's we save in a backwards-compatible manner. > - how can we help you not use them until we actually support this use case. Well, we need to use them. :( Either we use os-vif's o.vo's or we transition to anything else we can own now. > In the future, we will support serialising os-vif object but the topic of should we use > ovo has been raised. Initially, we had planned to serialise os-vif objects by > calling obj_to_primitive and then passing that to oslo.serialsiation json utils. > > For python client, this makes it relatively trivial to serialise and deserialise the objects > when calling into neutron but it has a disadvantage in that the format of the payload crated > is specific to OpenStack, it is very verbose and it is not the most friendly format for human consumption. > > The other options that have come up in the past few months have been > removing OVOs and validating with json schema or more recently replacing OVOs with protobufs. > > now we won't be doing either in the stein cycle but we had considered flattening our data models > once https://review.openstack.org/#/c/607610/ is implemented and removing some unused filed as > part of a 2.0 release in preparation for future neutron integration in Train. To that end, > I would like to start the discussion about what we would want from a 2.0 release > if and when we create one, how we can better docuemnt and enformce our public api/ supported usage > patterens and are there any other users of os-vif outside of nova and kuryr today. I believe we need to list the changes you want to make and work out how to keep compatibility. Kuryr itself doesn't use a lot of the fields, but we need to keep compatibility with objects saved on K8s resources by older versions of Kuryr-Kubernetes. This means that we still need a way to deserialize old o.vo's with the newer version of os-vif. > regards > sean. [1] https://github.com/openstack/kuryr-kubernetes/blob/38f541604ff490fbc381f78a23655e30e6aa0bcc/kuryr_kubernetes/controller/handlers/vif.py#L176-L178 From amodi at redhat.com Tue Dec 4 18:50:09 2018 From: amodi at redhat.com (Archit Modi) Date: Tue, 4 Dec 2018 13:50:09 -0500 Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest In-Reply-To: References: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> <52060b98-74f5-3a3a-1b51-8cba8aa7b00c@gmail.com> Message-ID: Great! There is already a patch from Lajos [1]. I'd like resource_provider_aggregates_client to be added too. 
(/resource_providers/{uuid}/aggregates) [1] https://review.openstack.org/#/c/622316/ On Tue, Dec 4, 2018 at 1:32 PM Chris Dent wrote: > On Tue, 4 Dec 2018, Dan Smith wrote: > > >> On 12/04/2018 06:13 AM, Chris Dent wrote: > >>> Existing Tempests tests that do things like launching, resizing, > >>> migrating servers already touch placement so may be sufficient. If > >>> we wanted to make these more complete adding verification of > >>> resource providers and their inventories before and after the tests > >>> might be useful. > > [snip] > > > I don't disagree either. However, I do think that there are cases where > > it may make sense to be _able_ to hit the placement endpoint from > > tempest in order to verify that certain things are happening, even in a > > scenario that involves other services. > > [snip] > > Based on conversation with Dan in IRC, we decided it might be useful > to clarify that Dan and I are in agreement. It had seemed to me that > he was saying something different from me, but we're both basically > saying "yes, tempest needs to be able to talk to placement to > confirm what it's holding because that's useful sometimes" and "no, > tempest doesn't need to verify the workings of placement api itself". > > Which boils out to this: > > > I *think* that gmann's > > question in the email was actually about placement endpoint support, > > which is the former, and I think is probably legit. > > Yes. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Tue Dec 4 18:51:32 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 4 Dec 2018 19:51:32 +0100 Subject: [tc][all] Train Community Goals In-Reply-To: References: Message-ID: <496e2f28-5523-229e-04fb-9494690f1da7@redhat.com> On 12/4/18 7:08 PM, Julia Kreger wrote: > Off-hand, I think there needs to be a few more words agreed upon for each in > terms of what each item practically means. > > In other words, does #1 mean each python-clientlibrary's OSC plugin is ready to > rock and roll, or we talking about everyone rewriting all client interactions in > to openstacksdk, and porting existing OSC plugins use that different python sdk. > > In other words, some projects could find it very easy or that they are already > done, where as others could find themselves with a huge lift that is also > dependent upon review bandwidth that is outside of their control or influence > which puts such a goal at risk if we try and push too hard. If the goal is to make all client interactions use openstacksdk, we may indeed lack review throughput. It looks like we have 4 active reviewers this cycle: http://stackalytics.com/?module=openstacksdk > -Julia > > > On Tue, Dec 4, 2018 at 9:43 AM Lance Bragstad > wrote: > > Hi all, > > The purpose of this thread is to have a more focused discussion about what > we'd like to target for Train community goals, bootstrapped with the > outcomes from the session in Berlin [0]. > > During the session, we went through each item as a group and let the person > who added it share why they thought it would be a good community goal > candidate for the next release. Most goals have feedback captured in > etherpad describing next steps, but the following stuck out as top > contenders from the session (rated by upvotes): > > 1. Moving legacy clients to python-openstackclient > 2. Cleaning up resources when deleting a project > 3. 
Service-side health checks > > I don't think I missed any goals from the session, but if I did, please let > me know and I'll add it to the list so that we can discuss it here. > > Does anyone have strong opinions either way about the goals listed above? > > [0] https://etherpad.openstack.org/p/BER-t-series-goals > From emilien at redhat.com Tue Dec 4 19:41:38 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 4 Dec 2018 14:41:38 -0500 Subject: [tripleo] cleanup upgrade_tasks Message-ID: Upgrade folks, Please take a look at https://review.openstack.org/622578. We don't run the upgrade_tasks Ansible tasks that stop systemd services and remove the packages since all services are containerized. These tasks were useful in Rocky when we converted the Undercloud from baremetal to containers but in Stein this is not useful anymore. It's actually breaking upgrades for Podman, as containers are now seen by systemd, and these tasks conflicts with the way containers are managed in Paunch. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike_mp at zzzcomputing.com Tue Dec 4 19:30:02 2018 From: mike_mp at zzzcomputing.com (Mike Bayer) Date: Tue, 4 Dec 2018 14:30:02 -0500 Subject: [all] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <20181204100812.og33xegl2fxmoo6g@localhost> Message-ID: On Tue, Dec 4, 2018 at 11:42 AM Ben Nemec wrote: > > Copying Mike Bayer since he's our resident DB expert. One more comment > inline. so the level of abstraction oslo.db itself provides is fairly light - it steps in for the initial configuration of the database engine, for the job of reworking exceptions into something more locallized, and then for supplying a basic transactional begin/commit pattern that includes concepts that openstack uses a lot. it also has some helpers for things like special datatypes, test frameworks, and stuff like that. That is, oslo.db is not a full blown "abstraction" layer, it exposes the SQLAlchemy API which is then where you have the major level of abstraction. Given that, making oslo.db do for etcd3 what it does for SQLAlchemy would be an appropriate place for such a thing. It would be all new code and not really have much overlap with anything that's there right now, but still would be feasible at least at the level of, "get a handle to etcd3, here's the basic persistence / query pattern we use with it, here's a test framework that will allow test suites to use it". At the level of actually reading and writing data to etcd3 as well as querying, that's a bigger task, and certainly that is not a SQLAlchemy thing either. If etcd3's interface is a simple enough "get" / "put" / "query" and then some occasional special operations, those kinds of abstraction APIs are often not too terrible to write. Also note that we have a key/value database interface right now in oslo.cache which uses dogpile.cache against both memcached and redis right now. If you really only needed put/get with etcd3, it could do that also, but I would assume we have the need for more of a fine grained interface than that. Haven't studied etcd3 as of yet. But I'd be interested in supporting it in oslo somewhere. > > On 12/4/18 4:08 AM, Gorka Eguileor wrote: > > On 03/12, Julia Kreger wrote: > >> Indeed it is a considered a base service, but I'm unaware of why it was > >> decided to not have any abstraction layer on top. 
That sort of defeats the > >> adoption of tooz as a standard in the community. Plus with the rest of our > >> code bases, we have a number of similar or identical patterns and it would > >> be ideal to have a single library providing the overall interface for the > >> purposes of consistency. Could you provide some more background on that > >> decision? > >> > >> I guess what I'd really like to see is an oslo.db interface into etcd3. > >> > >> -Julia > > > > Hi, > > > > I think that some projects won't bother with the etcd interface since it > > would require some major rework of the whole service to get it working. > > I don't think Julia was suggesting that every project move to etcd, just > that we make it available for projects that want to use it this way. > > > > > Take Cinder for example. We do complex conditional updates that, as far > > as I know, cannot be satisfied with etcd's Compare-and-Swap > > functionality. We could modify all our code to make it support both > > relational databases and key-value stores, but I'm not convinced it > > would be worthwhile considering the huge effort it would require. > > > > I believe there are other OpenStack projects that have procedural code > > stored on the database, which would probably be hard to make compatible > > with key-value stores. > > > > Cheers, > > Gorka. > > > >> > >> On Mon, Dec 3, 2018 at 4:55 PM Fox, Kevin M wrote: > >> > >>> It is a full base service already: > >>> https://governance.openstack.org/tc/reference/base-services.html > >>> > >>> Projects have been free to use it for quite some time. I'm not sure if any > >>> actually are yet though. > >>> > >>> It was decided not to put an abstraction layer on top as its pretty simple > >>> and commonly deployed. > >>> > >>> Thanks, > >>> Kevin > >>> ------------------------------ > >>> *From:* Julia Kreger [juliaashleykreger at gmail.com] > >>> *Sent:* Monday, December 03, 2018 3:53 PM > >>> *To:* Ben Nemec > >>> *Cc:* Davanum Srinivas; geguileo at redhat.com; > >>> openstack-discuss at lists.openstack.org > >>> *Subject:* Re: [all] Etcd as DLM > >>> > >>> I would like to slightly interrupt this train of thought for an > >>> unscheduled vision of the future! > >>> > >>> What if we could allow a component to store data in etcd3's key value > >>> store like how we presently use oslo_db/sqlalchemy? > >>> > >>> While I personally hope to have etcd3 as a DLM for ironic one day, review > >>> bandwidth permitting, it occurs to me that etcd3 could be leveraged for > >>> more than just DLM. If we have a common vision to enable data storage, I > >>> suspect it might help provide overall guidance as to how we want to > >>> interact with the service moving forward. > >>> > >>> -Julia > >>> > >>> On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec wrote: > >>> > >>>> Hi, > >>>> > >>>> I wanted to revisit this topic because it has come up in some downstream > >>>> discussions around Cinder A/A HA and the last time we talked about it > >>>> upstream was a year and a half ago[1]. There have certainly been changes > >>>> since then so I think it's worth another look. For context, the > >>>> conclusion of that session was: > >>>> > >>>> "Let's use etcd 3.x in the devstack CI, projects that are eventlet based > >>>> an use the etcd v3 http experimental API and those that don't can use > >>>> the etcd v3 gRPC API. Dims will submit a patch to tooz for the new > >>>> driver with v3 http experimental API. Projects should feel free to use > >>>> the DLM based on tooz+etcd3 from now on. 
Others projects can figure out > >>>> other use cases for etcd3." > >>>> > >>>> The main question that has come up is whether this is still the best > >>>> practice or if we should revisit the preferred drivers for etcd. Gorka > >>>> has gotten the grpc-based driver working in a Cinder driver that needs > >>>> etcd[2], so there's a question as to whether we still need the HTTP > >>>> etcd-gateway or if everything should use grpc. I will admit I'm nervous > >>>> about trying to juggle eventlet and grpc, but if it works then my only > >>>> argument is general misgivings about doing anything clever that involves > >>>> eventlet. :-) > >>>> > >>>> It looks like the HTTP API for etcd has moved out of experimental > >>>> status[3] at this point, so that's no longer an issue. There was some > >>>> vague concern from a downstream packaging perspective that the grpc > >>>> library might use a funky build system, whereas the etcd3-gateway > >>>> library only depends on existing OpenStack requirements. > >>>> > >>>> On the other hand, I don't know how much of a hassle it is to deploy and > >>>> manage a grpc-gateway. I'm kind of hoping someone has already been down > >>>> this road and can advise about what they found. > >>>> > >>>> Thanks. > >>>> > >>>> -Ben > >>>> > >>>> 1: https://etherpad.openstack.org/p/BOS-etcd-base-service > >>>> 2: > >>>> > >>>> https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111 > >>>> 3: https://github.com/grpc-ecosystem/grpc-gateway > >>>> > >>>> From mike.carden at gmail.com Tue Dec 4 21:04:11 2018 From: mike.carden at gmail.com (Mike Carden) Date: Wed, 5 Dec 2018 08:04:11 +1100 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: On Tue, Dec 4, 2018 at 9:58 PM Chris Dent wrote: > > * The 'randomize_allocation_candidates' config setting is used by > the placement-api process (probably called nova-placement-api in > queens), not the nova-scheduler process, so you need to update the > config (in the placement section) for the former and restart it. > > Thanks Chris. I tried the same thing in the nova.conf of the nova_placement containers and still no joy. A check on a fresh deploy of Queens with just a couple of x86 compute nodes proves that it can work without randomize_allocation_candidates being set to True. Out of the box we get an even distribution of VMs across compute nodes. It seems that somewhere along the path of adding Ironic and some baremetal nodes and host aggregates and a PPC64LE node, the scheduling goes awry. Back to the drawing board, and the logs. -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Tue Dec 4 21:18:52 2018 From: dprince at redhat.com (Dan Prince) Date: Tue, 04 Dec 2018 16:18:52 -0500 Subject: [tripleo] [validations] Replacement of undercloud_conf module In-Reply-To: References: Message-ID: On Tue, 2018-12-04 at 12:54 -0500, Emilien Macchi wrote: > Hi folks, > > Context: https://bugs.launchpad.net/tripleo/+bug/1805825 > > Today I randomly found this module: > https://github.com/openstack/tripleo-validations/blob/d21e7fa30f9be15bb980279197dc6c5206f38a38/validations/library/undercloud_conf.py > > And it gave me 2 ideas, as I think we don't need this module and > would consider it as technical debt at this point: > - it's relying on a file, which isn't super safe and flexible IMHO. 
We still use undercloud.conf though right? Why is it not safe (the data has to be stored somewhere right)? > - a lot of validations rely on Hieradata which isn't safe either, we > saw it with the Containerized Undercloud. Why is this not safe? I commented on the LP you linked but it seems to me that a simple fix would be to set the same hiera setting we used before so that the location of the undercloud.conf is known. We still use and support hiera for the Undercloud. It would be a simple matter to set this in an undercloud service via t-h-t. If you wanted to you could even cache a copy of the used version somewhere and then consume it that way right? Dan > > So I propose that: > - we export require parameters via the Heat templates into Ansible > variables > - we consume these variables from tripleo-validations (can be in the > inventory or a dedicated var file for validations). > > So that way we remove the dependency on having the undercloud.conf > access from Mistral Executor and also stop depending on Puppet > (hieradata) which we don't guarantee to be here in the future. > > Can someone from TripleO validations team ack this email and put this > work in your backlog? If you need assistance we're happy to help but > I believe this is an important effort to avoid technical debt here. > > Thanks, > -- > Emilien Macchi From dprince at redhat.com Tue Dec 4 21:32:31 2018 From: dprince at redhat.com (Dan Prince) Date: Tue, 04 Dec 2018 16:32:31 -0500 Subject: [tripleo] cleanup upgrade_tasks In-Reply-To: References: Message-ID: <3d8f8b993d23b2b3391a9b3abd508c0c724465cb.camel@redhat.com> On Tue, 2018-12-04 at 14:41 -0500, Emilien Macchi wrote: > Upgrade folks, > > Please take a look at https://review.openstack.org/622578. > We don't run the upgrade_tasks Ansible tasks that stop systemd > services and remove the packages since all services are > containerized. > These tasks were useful in Rocky when we converted the Undercloud > from baremetal to containers but in Stein this is not useful anymore. Would some of them be useful for fast forward upgrades in the future though? I suppose it all depends on where you draw your "upgrade lines" from major version to major version. > It's actually breaking upgrades for Podman, as containers are now > seen by systemd, and these tasks conflicts with the way containers > are managed in Paunch. Many of these steps are very similar. It seems like it would be possible to detect podman in the systemd unit file (systemctl show | grep podman) or something and then set your Ansible variables accordingly to disable the block if podman is being used. And improvement might be to put this logic into a playbook and consume it from each module. That is, if we even want to keep this upgrade code for the future. > > Thanks, > -- > Emilien Macchi From anlin.kong at gmail.com Tue Dec 4 21:35:04 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 5 Dec 2018 10:35:04 +1300 Subject: Octavia Production Deployment Confused In-Reply-To: References: Message-ID: On Wed, Dec 5, 2018 at 6:27 AM Gaël THEROND wrote: > You can do it with any routed network that you’ll load as a provider > network too. > > Way more simpler, no need for ovs manipulation, just get your network team > to give you a vlan both available from computer node and controller plan. > It can be a network subnet and vlan completely unknown from you controller > as long as you get an intermediary equipment that route your traffic or > that you add the proper route on your controllers. 
> Yeah, that's also how we did for our Octavia service in production thanks to our ops team. Cheers, Lingxian Kong -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Tue Dec 4 21:35:38 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 4 Dec 2018 21:35:38 +0000 (GMT) Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: On Wed, 5 Dec 2018, Mike Carden wrote: > On Tue, Dec 4, 2018 at 9:58 PM Chris Dent wrote: > >> >> * The 'randomize_allocation_candidates' config setting is used by >> the placement-api process (probably called nova-placement-api in >> queens), not the nova-scheduler process, so you need to update the >> config (in the placement section) for the former and restart it. > > I tried the same thing in the nova.conf of the nova_placement containers > and still no joy. Darn. > A check on a fresh deploy of Queens with just a couple of x86 compute nodes > proves that it can work without randomize_allocation_candidates being set > to True. Out of the box we get an even distribution of VMs across compute > nodes. It seems that somewhere along the path of adding Ironic and some > baremetal nodes and host aggregates and a PPC64LE node, the scheduling goes > awry. Yeah, this sort of stuff is why I was hoping we could see some of your logs, to figure out which of those things was the haymaker. If you figure it out, please post about it. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From juliaashleykreger at gmail.com Tue Dec 4 21:37:11 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 Dec 2018 13:37:11 -0800 Subject: [ironic] Time to discuss clean/deploy steps Message-ID: All, I've looked at the doodle poll results and it looks like the best available time is 3:00 PM UTC on Friday December 7th. I suggest we use bluejeans[2] as that has worked fairly well for us thus far. The specification documented related to the discussion can be found in review[3]. Thanks, -Julia [1] https://doodle.com/poll/yan4wyvztf7mpq46 [2] https://bluejeans.com/u/jkreger/ [3] https://review.openstack.org/#/c/606199/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Dec 4 21:41:02 2018 From: smooney at redhat.com (Sean Mooney) Date: Tue, 04 Dec 2018 21:41:02 +0000 Subject: [ptl] [openstack-map] New tags for OpenStack Project Map In-Reply-To: <5C06C88C.1010303@openstack.org> References: <5C06C88C.1010303@openstack.org> Message-ID: <9fb1b60096c4cebe093a52b7ab933b01974fdf13.camel@redhat.com> On Tue, 2018-12-04 at 12:33 -0600, Jimmy McArthur wrote: > Following up on this thread from ttx [1], we are continuing to enhance > the content on the OpenStack Project Map [2],[3] through new tags that > are managed through the openstack-map repo [4]. > > * video (id, title, description) - This controls the Project Update > video you see on the project page. I've just pushed a review adding all > of the Project Updates for Berlin [5] > > * depends-on - Things that are a strong dependency. This should be used > ONLY if your component requires another one to work (e.g. 
nova -> glance) > > * see-also - Should list things that are a week dependency or an > adjacent relevant thing Out of interest, why see-also and not relates-to? see-also is an imperative instruction to look at something, whereas relates-to is a declarative statement about the relationship between two entities, just as depends-on is for hard dependencies. > > * support-teams (name: link) - This is meant to give credit to adjacent > projects that aren't necessary to run the software (e.g. Oslo, i18n, > Docs). We are still determining how best to implement this tag, but we > feel it's important to give some credit to these other teams that are so > critical in helping to maintain, support, and build OpenStack > > If you have some time, please go to the git repo [4] and review your > project and help flesh out these new tags (or update old ones) so we can > display them in the Project Map [2]. > > Cheers, > Jimmy > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000178.html > > [2] > https://www.openstack.org/software/project-navigator/openstack-components > [3] https://www.openstack.org/assets/software/projectmap/openstack-map.pdf > [4] https://git.openstack.org/cgit/openstack/openstack-map/ > [5] https://review.openstack.org/622485 >
From smooney at redhat.com Tue Dec 4 21:50:35 2018 From: smooney at redhat.com (Sean Mooney) Date: Tue, 04 Dec 2018 21:50:35 +0000 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> Message-ID: <4273f2ee3117b06cb52f553e2bdbb2b20a816039.camel@redhat.com>
On Tue, 2018-12-04 at 21:35 +0000, Chris Dent wrote: > On Wed, 5 Dec 2018, Mike Carden wrote: > > > On Tue, Dec 4, 2018 at 9:58 PM Chris Dent wrote: > > > > > > > > * The 'randomize_allocation_candidates' config setting is used by > > > the placement-api process (probably called nova-placement-api in > > > queens), not the nova-scheduler process, so you need to update the > > > config (in the placement section) for the former and restart it. > > > > I tried the same thing in the nova.conf of the nova_placement containers > > and still no joy. > > Darn. > > > A check on a fresh deploy of Queens with just a couple of x86 compute nodes > > proves that it can work without randomize_allocation_candidates being set > > to True. Out of the box we get an even distribution of VMs across compute > > nodes. It seems that somewhere along the path of adding Ironic and some > > baremetal nodes and host aggregates and a PPC64LE node, the scheduling goes > > awry. > > Yeah, this sort of stuff is why I was hoping we could see some of > your logs, to figure out which of those things was the haymaker.
So one thing that came up recently downstream was a discussion around the BuildFailureWeigher https://docs.openstack.org/nova/rocky/user/filter-scheduler.html#weights and the build_failure_weight_multiplier https://docs.openstack.org/nova/latest/configuration/config.html#filter_scheduler.build_failure_weight_multiplier
I wonder if failed builds should be leading to packing behavior. It would explain why initially it is fine, but over time, as hosts accumulate a significant build failure weight, the cluster transitions from even spread to packing behavior. > > If you figure it out, please post about it.
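A quick way to test that theory on a Queens deployment is to zero out that weigher's multiplier and watch whether the even spread comes back. A sketch of the relevant nova.conf bits is below; the option names come from the docs linked above, and the values are only illustrative:

# On the nova-scheduler hosts: setting the multiplier to 0 disables the
# build-failure penalty, so hosts that have accumulated build failures are
# no longer weighted down if the BuildFailureWeigher is the culprit.
[filter_scheduler]
build_failure_weight_multiplier = 0.0

# And, as noted earlier in the thread, the randomization knob is read by the
# placement API service (nova-placement-api in Queens), not by nova-scheduler:
[placement]
randomize_allocation_candidates = true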
> From johnsomor at gmail.com Tue Dec 4 22:04:00 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 4 Dec 2018 14:04:00 -0800 Subject: [Nova] Increase size limits for user data In-Reply-To: References: <7a06df3739d66083a5042ad6346f77e1b8081f65.camel@redhat.com> Message-ID: The limited size of user_data (not even a floppy disk worth of space) is why Octavia has not used it either. We have used a, now deprecated, feature of config drive and nova. So, I am in full support of solving this problem. I would also like to request that we implement this in a way that we can secure the content. We use the config drive mechanism to load per-instance certificates and keys into the instance at boot time and would like to continue to use this mechanism to "seed" the instance. Our current mechanism stores the data along with the instance image, which means it is as protected as the OS image we boot. As we design the future, it would be awesome if we can provide the same or better level of protection for that content. Michael On Tue, Dec 4, 2018 at 7:34 AM Jay Pipes wrote: > > On 12/04/2018 10:09 AM, Matt Riedemann wrote: > > On 12/4/2018 8:14 AM, Flavio Percoco wrote: > >> This is the current solution, which has allowed me to move forward > >> with the work I'm doing. Regardless, I would like us to discuss this. > >> I'd rather have the limit in Nova increased than adding a dependency > >> on another service that would, very likely, only be used for this > >> specific use case. > > > > As far as the DB limit, it's not just the actual instances.user_data > > table that matters [1] it's also the build_requests.instance column [2] > > and the latter is the bigger issue since it's an entire Instance object, > > including the user_data plus whatever else (like the flavor, metadata > > and system_metadata) serialized into that single MEDIUMTEXT field. > > That's what worries me about blowing up that field if we increase the > > API limit on user_data. > > How prescient. :) > > http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000523.html > > Best, > -jay > From mike.carden at gmail.com Tue Dec 4 22:08:36 2018 From: mike.carden at gmail.com (Mike Carden) Date: Wed, 5 Dec 2018 09:08:36 +1100 Subject: [Openstack] unexpected distribution of compute instances in queens In-Reply-To: <4273f2ee3117b06cb52f553e2bdbb2b20a816039.camel@redhat.com> References: <815799a8-c2d1-39b4-ba4a-8cc0f0d7b5cf@gmail.com> <4273f2ee3117b06cb52f553e2bdbb2b20a816039.camel@redhat.com> Message-ID: On Wed, Dec 5, 2018 at 8:54 AM Sean Mooney wrote: > > so one thing that came up recently downstream was a discusion around the > BuildFailureWeigher > Now there's an interesting idea. The compute node that's not being scheduled *hasn't* had build failures, but we had a lot of build failures in Ironic for a while due to the published RHEL7.6 qcow2 image having a wee typo in its grub conf. Those failures *shouldn't* influence non-ironic scheduling I'd have thought. Hmmm. -- MC -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Dec 4 22:12:19 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 4 Dec 2018 14:12:19 -0800 Subject: Octavia Production Deployment Confused In-Reply-To: References: Message-ID: Zufar, Hi. Before I start with your questions I want to let you know that the Octavia team will see your message sooner if you add the [octavia] prefix to your e-mail subject line. As for the questions: 1. 
Yes, that should work, but the services project is usually named "service". It gives the Octavia service more permissions that it really needs, but will work as a starting point. 2. This can be accomplished in many ways. The traffic on the "lb-mgmt-net" is IP based, so can be routed if you need in your deployment. Others will use a provider network. Devstack pops a port off the neutron OVS. It might be helpful for you to look at our devstack setup script: https://github.com/openstack/octavia/blob/master/devstack/plugin.sh and/or the OpenStack Ansible role for Octavia: https://github.com/openstack/openstack-ansible-os_octavia for examples. As always, we hang out in the #openstack-lbaas IRC channel if you would like to chat about your deployment. Michael On Tue, Dec 4, 2018 at 8:21 AM Zufar Dhiyaulhaq wrote: > > Hi, I want to implement Octavia service in OpenStack Queens. > > I am stuck on two-step : > 1. Create Octavia User > > I am trying to create Octavia user with this command, is this the right way? > > openstack user create octavia --domain default --password octavia > openstack role add --user octavia --project services admin > > openstack service create --name octavia --description "OpenStack Octavia" load-balancer > openstack endpoint create --region RegionOne octavia public http://10.60.60.10:9876 > openstack endpoint create --region RegionOne octavia internal http://10.60.60.10:9876 > openstack endpoint create --region RegionOne octavia admin http://10.60.60.10:9876 > > 2. Load Balancer Network Configuration > "Add appropriate routing to/from the ‘lb-mgmt-net’ such that egress is allowed, and the controller (to be created later) can talk to hosts on this network." > > I don't know how to route from controller host into a private network, is any specific command for doing that? > > following tutorial from https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#running-octavia-in-production. > > Thank You > Best Regards, > Zufar Dhiyaulhaq From mikal at stillhq.com Tue Dec 4 22:25:27 2018 From: mikal at stillhq.com (Michael Still) Date: Wed, 5 Dec 2018 09:25:27 +1100 Subject: [Nova] Increase size limits for user data In-Reply-To: References: <7a06df3739d66083a5042ad6346f77e1b8081f65.camel@redhat.com> Message-ID: Have you looked at vendor data? It gives you a trusted way to inject arbitrary data into a config drive (or metadata server call), where that additional data isn't stored by nova and can be of an arbitrary size. Michael On Wed, Dec 5, 2018 at 9:07 AM Michael Johnson wrote: > The limited size of user_data (not even a floppy disk worth of space) > is why Octavia has not used it either. We have used a, now deprecated, > feature of config drive and nova. > > So, I am in full support of solving this problem. > > I would also like to request that we implement this in a way that we > can secure the content. We use the config drive mechanism to load > per-instance certificates and keys into the instance at boot time and > would like to continue to use this mechanism to "seed" the instance. > Our current mechanism stores the data along with the instance image, > which means it is as protected as the OS image we boot. > > As we design the future, it would be awesome if we can provide the > same or better level of protection for that content. 
> > Michael > On Tue, Dec 4, 2018 at 7:34 AM Jay Pipes wrote: > > > > On 12/04/2018 10:09 AM, Matt Riedemann wrote: > > > On 12/4/2018 8:14 AM, Flavio Percoco wrote: > > >> This is the current solution, which has allowed me to move forward > > >> with the work I'm doing. Regardless, I would like us to discuss this. > > >> I'd rather have the limit in Nova increased than adding a dependency > > >> on another service that would, very likely, only be used for this > > >> specific use case. > > > > > > As far as the DB limit, it's not just the actual instances.user_data > > > table that matters [1] it's also the build_requests.instance column [2] > > > and the latter is the bigger issue since it's an entire Instance > object, > > > including the user_data plus whatever else (like the flavor, metadata > > > and system_metadata) serialized into that single MEDIUMTEXT field. > > > That's what worries me about blowing up that field if we increase the > > > API limit on user_data. > > > > How prescient. :) > > > > > http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000523.html > > > > Best, > > -jay > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Dec 4 22:45:45 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 Dec 2018 14:45:45 -0800 Subject: [TC] Forum TC Vision Retrospective summary Message-ID: During the Forum in Berlin, the Technical Committee along with interested community members took some time to look back at the vision that written in early 2017. We had three simple questions: * What went well? * What needs improvement? * What should the next steps be? To summarize, the group thought the vision helped guide some thoughts and decisions. Helped provide validation on what was thought to be important. We have seen adjacent communities fostered. We didn't solely focus on the vision which was viewed as a positive as things do change over time. It helped us contrast, and in the process of writing the vision we reached the use of the same words. Most importantly, we learned that we took on too much work. As with most retrospectives, the list of things that needed improvement was a bit longer. There was some perception that it fell off the map, and that not every item received work. Possibly that we even took the vision too literally and too detailed as opposed to use it as more a guiding document to help us evolve as time goes on. There was consensus that there was still room to improve and that we could have done a better at conveying context to express how, what, and why. For next steps, we feel that it is time to revise the vision, albeit in a shorter form. We also felt that there could be a vision for the TC itself, which led to the discussion of providing clarity to the role of the Technical Committee. As for action items and next steps that we reached consensus on: * To refine the technical vision document. * That it was time to compose a new vision for the community. * Consensus was reached that there should be a vision of the TC itself, and as part of this have a living document that describes the "Role of the TC". ** ttx, cdent, and TheJulia have volunteered to continue those discussions. ** mnaser would start a discussion with the community as to what the TC should and shouldn't do. For those reading this, please remember that the TC's role is defined in the foundation bylaws, so this would be more of a collection of perceptions. 
* TheJulia to propose a governance update to suggest that people proposing TC candidacy go ahead and preemptively seek to answer the question of what the candidate perceives as the role of the TC. The etherpad that followed the discussion can be found at: https://etherpad.openstack.org/p/BER-tc-vision-retrospective -Julia -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufar at onf-ambassador.org Tue Dec 4 23:54:09 2018 From: zufar at onf-ambassador.org (Zufar Dhiyaulhaq) Date: Wed, 5 Dec 2018 06:54:09 +0700 Subject: Octavia Production Deployment Confused In-Reply-To: References: Message-ID: Hi all, Thank you, So the amphora will use a provider network. but how we can access this load balancer externally? via IP assign into amphora (provider network IP)? Another question, I am facing a problem with a keypair. I am generating a keypair with `create_certificates.sh` source /tmp/octavia/bin/create_certificates.sh /etc/octavia/certs /tmp/octavia/etc/certificates/openssl.cnf but when creating the load balancer service, I got this error from /var/log/octavia/worker.log ERROR oslo_messaging.rpc.server CertificateGenerationException: Could not sign the certificate request: Failed to load CA Private Key /etc/octavia/certs/private/cakey.pem. I am using this configuration under octavia.conf [certificates] ca_certificate = /etc/octavia/certs/ca_01.pem ca_private_key = /etc/octavia/certs/private/cakey.pem ca_private_key_passphrase = foobar Anyone know this issue? I am following Mr. Lingxian Kong blog in https://lingxiankong.github.io/2016-06-07-octavia-deployment-prerequisites.html Best Regards, Zufar Dhiyaulhaq On Wed, Dec 5, 2018 at 4:35 AM Lingxian Kong wrote: > On Wed, Dec 5, 2018 at 6:27 AM Gaël THEROND > wrote: > >> You can do it with any routed network that you’ll load as a provider >> network too. >> >> Way more simpler, no need for ovs manipulation, just get your network >> team to give you a vlan both available from computer node and controller >> plan. It can be a network subnet and vlan completely unknown from you >> controller as long as you get an intermediary equipment that route your >> traffic or that you add the proper route on your controllers. >> > > Yeah, that's also how we did for our Octavia service in production thanks > to our ops team. > > Cheers, > Lingxian Kong > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zufardhiyaulhaq at gmail.com Tue Dec 4 23:57:41 2018 From: zufardhiyaulhaq at gmail.com (Zufar Dhiyaulhaq) Date: Wed, 5 Dec 2018 06:57:41 +0700 Subject: Octavia Production Deployment Confused In-Reply-To: References: Message-ID: Hi Michael, Thank you, I create with `services` because when I am trying to list the project, its give me `services` [root at zu-controller0 ~(keystone_admin)]# openstack project list | 27439cc0ba52421cad2426c980a0d0fa | admin | | 4020d4a5e7ad4d279fc3d1916d18ced1 | services | Best Regards, Zufar Dhiyaulhaq On Wed, Dec 5, 2018 at 5:12 AM Michael Johnson wrote: > Zufar, > > Hi. Before I start with your questions I want to let you know that the > Octavia team will see your message sooner if you add the [octavia] > prefix to your e-mail subject line. > > As for the questions: > 1. Yes, that should work, but the services project is usually named > "service". It gives the Octavia service more permissions that it > really needs, but will work as a starting point. > 2. This can be accomplished in many ways. 
The traffic on the > "lb-mgmt-net" is IP based, so can be routed if you need in your > deployment. Others will use a provider network. Devstack pops a port > off the neutron OVS. > > It might be helpful for you to look at our devstack setup script: > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh > and/or > > the OpenStack Ansible role for Octavia: > https://github.com/openstack/openstack-ansible-os_octavia for > examples. > > As always, we hang out in the #openstack-lbaas IRC channel if you > would like to chat about your deployment. > > Michael > > On Tue, Dec 4, 2018 at 8:21 AM Zufar Dhiyaulhaq > wrote: > > > > Hi, I want to implement Octavia service in OpenStack Queens. > > > > I am stuck on two-step : > > 1. Create Octavia User > > > > I am trying to create Octavia user with this command, is this the right > way? > > > > openstack user create octavia --domain default --password octavia > > openstack role add --user octavia --project services admin > > > > openstack service create --name octavia --description "OpenStack > Octavia" load-balancer > > openstack endpoint create --region RegionOne octavia public > http://10.60.60.10:9876 > > openstack endpoint create --region RegionOne octavia internal > http://10.60.60.10:9876 > > openstack endpoint create --region RegionOne octavia admin > http://10.60.60.10:9876 > > > > 2. Load Balancer Network Configuration > > "Add appropriate routing to/from the ‘lb-mgmt-net’ such that egress is > allowed, and the controller (to be created later) can talk to hosts on this > network." > > > > I don't know how to route from controller host into a private network, > is any specific command for doing that? > > > > following tutorial from > https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#running-octavia-in-production > . > > > > Thank You > > Best Regards, > > Zufar Dhiyaulhaq > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Dec 5 00:35:32 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 4 Dec 2018 19:35:32 -0500 Subject: [openstack-ansible] avoid approving changes hitting centos-7 jobs Message-ID: <82905CC8-29BF-4A81-ACEA-F3A3DE917401@vexxhost.com> Hi everyone: With the release of CentOS 7.6, we are unable to merge any code which runs on CentOS 7 because of the fact that our containers are still 7.5 however the host is 7.6 There is an issue open to get CentOS 7.6 images in: https://github.com/CentOS/sig-cloud-instance-images/issues/133 We’re hoping that upstream can provide this CentOS image soon to unbreak us but until then we’ll be blocked. I’ll try to reach out to anyone on the CentOS 7 team who can help us, but for the meantime, let’s avoid approving anything with voting CentOS 7 jobs. Thanks! Mohammed From emilien at redhat.com Wed Dec 5 00:37:14 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 4 Dec 2018 19:37:14 -0500 Subject: [tripleo] [validations] Replacement of undercloud_conf module In-Reply-To: References: Message-ID: On Tue, Dec 4, 2018 at 4:18 PM Dan Prince wrote: > Why is this not safe? > > I commented on the LP you linked but it seems to me that a simple fix > would be to set the same hiera setting we used before so that the > location of the undercloud.conf is known. We still use and support > hiera for the Undercloud. It would be a simple matter to set this in an > undercloud service via t-h-t. 
If you wanted to you could even cache a > copy of the used version somewhere and then consume it that way right? > I guess I wanted to use proper Ansible variables instead of Hiera, since we are moving toward more Ansible. That's all. I found it cleaner and simpler than relying on undercloud.conf file. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Dec 5 01:02:02 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Dec 2018 10:02:02 +0900 Subject: [dev] [tc][all] TC office hours is started now on #openstack-tc Message-ID: <1677be2d414.b86749a695953.736336492767392639@ghanshyammann.com> Hello everyone, TC office hour is started on #openstack-tc channel. Feel free to reach to us for anything you want discuss/input/feedback/help from TC. - gmann & TC From emilien at redhat.com Wed Dec 5 01:53:15 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 4 Dec 2018 20:53:15 -0500 Subject: [tripleo] cleanup upgrade_tasks In-Reply-To: <3d8f8b993d23b2b3391a9b3abd508c0c724465cb.camel@redhat.com> References: <3d8f8b993d23b2b3391a9b3abd508c0c724465cb.camel@redhat.com> Message-ID: On Tue, Dec 4, 2018 at 4:32 PM Dan Prince wrote: > Would some of them be useful for fast forward upgrades in the future > though? I suppose it all depends on where you draw your "upgrade lines" > from major version to major version. > AFIK these tasks aren't useful in master (Stein) for FFU, as they were used between Newton and Queens. Since Queens only have containerized undercloud, and Undercloud isn't upgraded via FFU, I think these tasks could be removed. Many of these steps are very similar. It seems like it would be > possible to detect podman in the systemd unit file (systemctl show > | grep podman) or something and then set your Ansible > variables accordingly to disable the block if podman is being used. > > And improvement might be to put this logic into a playbook and consume > it from each module. That is, if we even want to keep this upgrade code > for the future. > Indeed, if upgrade team wants to keep or refactor them [0], it's fine. [0] https://review.openstack.org/#/c/582502 -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Dec 5 05:02:10 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 5 Dec 2018 16:02:10 +1100 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches In-Reply-To: References: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> Message-ID: <20181205050207.GA19462@thor.bakeyournoodle.com> On Thu, Nov 29, 2018 at 11:38:45AM +0100, Tobias Urdin wrote: > Hello, > This got lost way down in my mailbox. > > I think we have a consensus about getting rid of the newton branches. > Does anybody in Stable release team have time to deprecate the stable/newton > branches? 
Just to be clear You're asking for the following repos to be marked EOL (current origin/stable/newton tagged as newton-eol and deleted, any open reviews abandoned) : # EOL repos belonging to Puppet OpenStack eol_branch.sh -- stable/newton newton-eol \ openstack/puppet-aodh openstack/puppet-barbican \ openstack/puppet-ceilometer openstack/puppet-cinder \ openstack/puppet-designate openstack/puppet-glance \ openstack/puppet-gnocchi openstack/puppet-heat \ openstack/puppet-horizon openstack/puppet-ironic \ openstack/puppet-keystone openstack/puppet-magnum \ openstack/puppet-manila openstack/puppet-mistral \ openstack/puppet-murano openstack/puppet-neutron \ openstack/puppet-nova \ openstack/puppet-openstack-integration \ openstack/puppet-openstack_extras \ openstack/puppet-openstack_spec_helper \ openstack/puppet-openstacklib openstack/puppet-oslo \ openstack/puppet-ovn openstack/puppet-sahara \ openstack/puppet-swift openstack/puppet-tempest \ openstack/puppet-trove openstack/puppet-vswitch \ openstack/puppet-zaqar Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From anlin.kong at gmail.com Wed Dec 5 08:07:53 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 5 Dec 2018 21:07:53 +1300 Subject: [ptl] [openstack-map] New tags for OpenStack Project Map In-Reply-To: <5C06C88C.1010303@openstack.org> References: <5C06C88C.1010303@openstack.org> Message-ID: On Wed, Dec 5, 2018 at 7:35 AM Jimmy McArthur wrote: > Following up on this thread from ttx [1], we are continuing to enhance > the content on the OpenStack Project Map [2],[3] through new tags that > are managed through the openstack-map repo [4]. > > * video (id, title, description) - This controls the Project Update > video you see on the project page. I've just pushed a review adding all > of the Project Updates for Berlin [5] > I'm wondering where does the id come from? Youtube URL suffix? Cheers, Lingxian Kong -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Wed Dec 5 08:14:27 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 5 Dec 2018 08:14:27 +0000 Subject: [scientific-sig] No IRC Meeting today Message-ID: <2CA0C7F5-8D50-4A27-A7E2-B9E5E9ABCDB8@telfer.org> Hi all - Apologies, there will be no Scientific SIG IRC meeting today. It’s too busy a week on other fronts. We’ll carry over agenda items to the next session and hopefully make up for lost time there. Best wishes, Stig From bdobreli at redhat.com Wed Dec 5 08:39:54 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 5 Dec 2018 09:39:54 +0100 Subject: [tripleo] [validations] Replacement of undercloud_conf module In-Reply-To: References: Message-ID: <093221e3-33f0-6258-5f4f-e7a139f8f083@redhat.com> On 12/5/18 1:37 AM, Emilien Macchi wrote: > > > On Tue, Dec 4, 2018 at 4:18 PM Dan Prince > wrote: > > Why is this not safe? > > I commented on the LP you linked but it seems to me that a simple fix > would be to set the same hiera setting we used before so that the > location of the undercloud.conf is known. We still use and support > hiera for the Undercloud. It would be a simple matter to set this in an > undercloud service via t-h-t. If you wanted to you could even cache a > copy of the used version somewhere and then consume it that way right? 
> > > I guess I wanted to use proper Ansible variables instead of Hiera, since > we are moving toward more Ansible. That's all. > I found it cleaner and simpler than relying on undercloud.conf file. +1 > -- > Emilien Macchi -- Best regards, Bogdan Dobrelya, Irc #bogdando From thierry at openstack.org Wed Dec 5 08:57:54 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 5 Dec 2018 09:57:54 +0100 Subject: [TC] Forum TC Vision Retrospective summary In-Reply-To: References: Message-ID: Julia Kreger wrote: > * Consensus was reached that there should be a vision of the TC itself, > and as part of this have a living document that describes the "Role of > the TC". > ** ttx, cdent, and TheJulia have volunteered to continue those discussions. > ** mnaser would start a discussion with the community as to what the TC > should and shouldn't do. For those reading this, please remember that > the TC's role is defined in the foundation bylaws, so this would be more > of a collection of perceptions. A first draft of that document was posted for early comments at: https://review.openstack.org/#/c/622400/ -- Thierry Carrez (ttx) From lajos.katona at ericsson.com Wed Dec 5 08:58:03 2018 From: lajos.katona at ericsson.com (Lajos Katona) Date: Wed, 5 Dec 2018 08:58:03 +0000 Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest In-Reply-To: References: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> <52060b98-74f5-3a3a-1b51-8cba8aa7b00c@gmail.com> Message-ID: <3d5fea81-f2f9-8522-ce88-18d393be652d@ericsson.com> Thanks, I uploaded the patch only to start the conversation early, I plan to add all the necessary methods to cover the needs now for bandwidth, and have a basic framework for adding more things later. Regards Lajos On 2018. 12. 04. 19:50, Archit Modi wrote: Great! There is already a patch from Lajos [1]. I'd like resource_provider_aggregates_client to be added too. (/resource_providers/{uuid}/aggregates) [1] https://review.openstack.org/#/c/622316/ On Tue, Dec 4, 2018 at 1:32 PM Chris Dent > wrote: On Tue, 4 Dec 2018, Dan Smith wrote: >> On 12/04/2018 06:13 AM, Chris Dent wrote: >>> Existing Tempests tests that do things like launching, resizing, >>> migrating servers already touch placement so may be sufficient. If >>> we wanted to make these more complete adding verification of >>> resource providers and their inventories before and after the tests >>> might be useful. [snip] > I don't disagree either. However, I do think that there are cases where > it may make sense to be _able_ to hit the placement endpoint from > tempest in order to verify that certain things are happening, even in a > scenario that involves other services. [snip] Based on conversation with Dan in IRC, we decided it might be useful to clarify that Dan and I are in agreement. It had seemed to me that he was saying something different from me, but we're both basically saying "yes, tempest needs to be able to talk to placement to confirm what it's holding because that's useful sometimes" and "no, tempest doesn't need to verify the workings of placement api itself". Which boils out to this: > I *think* that gmann's > question in the email was actually about placement endpoint support, > which is the former, and I think is probably legit. Yes. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent -------------- next part -------------- An HTML attachment was scrubbed... 
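For readers following the placement-clients discussion above: the service clients being proposed would follow the standard tempest.lib rest_client pattern. A rough sketch only, to show the shape of the thing; the class and method names here are illustrative and not necessarily what will land in https://review.openstack.org/#/c/622316/ :

    import json

    from tempest.lib.common import rest_client

    class ResourceProvidersClient(rest_client.RestClient):
        """Minimal placement client, used only to verify state in tests."""

        def list_resource_providers(self):
            resp, body = self.get('resource_providers')
            self.expected_success(200, resp.status)
            return rest_client.ResponseBody(resp, json.loads(body))

        def list_resource_provider_aggregates(self, rp_uuid):
            # GET /resource_providers/{uuid}/aggregates, as requested above
            resp, body = self.get('resource_providers/%s/aggregates' % rp_uuid)
            self.expected_success(200, resp.status)
            return rest_client.ResponseBody(resp, json.loads(body))

Such a client would only be used for before/after verification in scenario tests, in line with the agreement below that Tempest does not re-test the placement API itself.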
URL: From aspiers at suse.com Tue Dec 4 23:54:20 2018 From: aspiers at suse.com (Adam Spiers) Date: Tue, 4 Dec 2018 23:54:20 +0000 Subject: [tc][all][self-healing-sig] Train Community Goals In-Reply-To: References: Message-ID: <20181204235419.bi7txdhnz45bsasf@pacific.linksys.moosehall> Lance Bragstad wrote: >Hi all, > >The purpose of this thread is to have a more focused discussion about what >we'd like to target for Train community goals, bootstrapped with the >outcomes from the session in Berlin [0]. > >During the session, we went through each item as a group and let the person >who added it share why they thought it would be a good community goal >candidate for the next release. Most goals have feedback captured in >etherpad describing next steps, but the following stuck out as top >contenders from the session (rated by upvotes): > > 1. Moving legacy clients to python-openstackclient > 2. Cleaning up resources when deleting a project > 3. Service-side health checks > >I don't think I missed any goals from the session, but if I did, please let >me know and I'll add it to the list so that we can discuss it here. > >Does anyone have strong opinions either way about the goals listed above? > >[0] https://etherpad.openstack.org/p/BER-t-series-goals I'm a fan of 3. service-side health checks, since I've been having discussions about this with various parties at the last 3--6 (I lost count) Forum / PTG events, and every time there seems to have been a decent amount of interest, e.g. - Deployment solutions have a clear motive for being able to programmatically determine a more accurate picture of the health of services, e.g. k8s-based deployments like Airship where containers would benefit from more accurate liveness probes. - The Self-healing SIG is obviously a natural customer for this functionality, since before you can heal you need to know what exactly needs healing. - It could benefit specific projects such as Vitrage, Horizon, and Monasca, and also automated tests / CI. Other factors it has in its favour as a community goal is that it's not too ambitious as to prevent a significant amount of progress in one cycle, and also progress should be easily measurable. FWIW here's some history ... For a good while we got stuck bike-shedding the spec: https://review.openstack.org/#/c/531456/ but in the API SIG session in Denver, we managed to break the deadlock and agreed to do the simplest thing which could possibly work in order to move forwards: https://etherpad.openstack.org/p/api-sig-stein-ptg Yes, there are many unresolved questions about the long-term design, but we decided to avoid any further paralysis and instead forge ahead based on the following principles: - The existing oslo.middleware mechanism is a good start, so just add a /v2 endpoint to avoid breaking existing consumers. - Only worry about API services for now. - Don't worry about authentication yet. - Endpoints should only report on their own health, not on the health of dependencies / related services. In Berlin Graham (@mugsie) pushed a very rough prototype to Gerrit: https://review.openstack.org/#/c/617924/ There's a story in StoryBoard tracking all of this. 
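As a concrete reference point for the "existing oslo.middleware mechanism" mentioned above: today the healthcheck middleware is typically wired into a service's api-paste.ini along these lines (a sketch only; pipeline names and file paths differ per project, nova is assumed here purely as an example):

    [filter:healthcheck]
    paste.filter_factory = oslo_middleware:Healthcheck.factory
    backends = disable_by_file
    disable_by_file_path = /etc/nova/healthcheck_disable

with the filter then mapped to a /healthcheck path in the application's pipeline or composite section. The /v2 endpoint agreed at the API SIG session would build on this same middleware rather than introduce a new mechanism.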
I've just updated it in an attempt to capture all the relevant history: https://storyboard.openstack.org/#!/story/2001439 From aspiers at suse.com Wed Dec 5 00:07:30 2018 From: aspiers at suse.com (Adam Spiers) Date: Wed, 5 Dec 2018 00:07:30 +0000 Subject: [self-healing-sig] reminder: self-healing SIG IRC meetings tomorrow (Wed) Message-ID: <20181205000730.rgutstasf4xkhato@pacific.linksys.moosehall> Just a reminder that the second batch of the new series of IRC meetings for the self-healing SIG are happening tomorrow (well, today depending on where you live), i.e. Wednesday at 0900 UTC and 1700 UTC, in the #openstack-self-healing channel on Freenode: http://eavesdrop.openstack.org/#Self-healing_SIG_Meeting All are welcome! From geguileo at redhat.com Wed Dec 5 09:30:39 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 5 Dec 2018 10:30:39 +0100 Subject: [all] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <20181204100812.og33xegl2fxmoo6g@localhost> Message-ID: <20181205093039.a7hv4lgo55aywhd7@localhost> On 04/12, Ben Nemec wrote: > Copying Mike Bayer since he's our resident DB expert. One more comment > inline. > > On 12/4/18 4:08 AM, Gorka Eguileor wrote: > > On 03/12, Julia Kreger wrote: > > > Indeed it is a considered a base service, but I'm unaware of why it was > > > decided to not have any abstraction layer on top. That sort of defeats the > > > adoption of tooz as a standard in the community. Plus with the rest of our > > > code bases, we have a number of similar or identical patterns and it would > > > be ideal to have a single library providing the overall interface for the > > > purposes of consistency. Could you provide some more background on that > > > decision? > > > > > > I guess what I'd really like to see is an oslo.db interface into etcd3. > > > > > > -Julia > > > > Hi, > > > > I think that some projects won't bother with the etcd interface since it > > would require some major rework of the whole service to get it working. > > I don't think Julia was suggesting that every project move to etcd, just > that we make it available for projects that want to use it this way. > Hi, My bad, I assumed that was the intention, otherwise, shouldn't we first ask how many projects would start using this key-value interface if it did exist? I mean, if there's only going to be 1 project using it, then it may be better to go with the standard pattern of implementing it in that project, and only extract it once there is a need for such a common library. Cheers, Gorka. > > > > Take Cinder for example. We do complex conditional updates that, as far > > as I know, cannot be satisfied with etcd's Compare-and-Swap > > functionality. We could modify all our code to make it support both > > relational databases and key-value stores, but I'm not convinced it > > would be worthwhile considering the huge effort it would require. > > > > I believe there are other OpenStack projects that have procedural code > > stored on the database, which would probably be hard to make compatible > > with key-value stores. > > > > Cheers, > > Gorka. > > > > > > > > On Mon, Dec 3, 2018 at 4:55 PM Fox, Kevin M wrote: > > > > > > > It is a full base service already: > > > > https://governance.openstack.org/tc/reference/base-services.html > > > > > > > > Projects have been free to use it for quite some time. I'm not sure if any > > > > actually are yet though. 
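To make the "projects have been free to use it" point above concrete: the expected consumption path for etcd3 as a DLM is through tooz rather than a raw client. A minimal sketch follows; the endpoint, member id and lock name are made up for illustration, and either the gRPC 'etcd3://' driver or the HTTP-gateway 'etcd3+http://' driver can back it, which is exactly the choice being debated in this thread:

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'etcd3+http://127.0.0.1:2379', b'cinder-volume-host-1')
    coordinator.start(start_heart=True)

    lock = coordinator.get_lock(b'volume-abc123')
    with lock:
        # critical section: only one member across the cluster gets here
        pass

    coordinator.stop()

The open question in the thread is not this locking layer, which works today, but whether anything beyond locking (using etcd3 as a data store) deserves a comparable abstraction.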
> > > > > > > > It was decided not to put an abstraction layer on top as its pretty simple > > > > and commonly deployed. > > > > > > > > Thanks, > > > > Kevin > > > > ------------------------------ > > > > *From:* Julia Kreger [juliaashleykreger at gmail.com] > > > > *Sent:* Monday, December 03, 2018 3:53 PM > > > > *To:* Ben Nemec > > > > *Cc:* Davanum Srinivas; geguileo at redhat.com; > > > > openstack-discuss at lists.openstack.org > > > > *Subject:* Re: [all] Etcd as DLM > > > > > > > > I would like to slightly interrupt this train of thought for an > > > > unscheduled vision of the future! > > > > > > > > What if we could allow a component to store data in etcd3's key value > > > > store like how we presently use oslo_db/sqlalchemy? > > > > > > > > While I personally hope to have etcd3 as a DLM for ironic one day, review > > > > bandwidth permitting, it occurs to me that etcd3 could be leveraged for > > > > more than just DLM. If we have a common vision to enable data storage, I > > > > suspect it might help provide overall guidance as to how we want to > > > > interact with the service moving forward. > > > > > > > > -Julia > > > > > > > > On Mon, Dec 3, 2018 at 2:52 PM Ben Nemec wrote: > > > > > > > > > Hi, > > > > > > > > > > I wanted to revisit this topic because it has come up in some downstream > > > > > discussions around Cinder A/A HA and the last time we talked about it > > > > > upstream was a year and a half ago[1]. There have certainly been changes > > > > > since then so I think it's worth another look. For context, the > > > > > conclusion of that session was: > > > > > > > > > > "Let's use etcd 3.x in the devstack CI, projects that are eventlet based > > > > > an use the etcd v3 http experimental API and those that don't can use > > > > > the etcd v3 gRPC API. Dims will submit a patch to tooz for the new > > > > > driver with v3 http experimental API. Projects should feel free to use > > > > > the DLM based on tooz+etcd3 from now on. Others projects can figure out > > > > > other use cases for etcd3." > > > > > > > > > > The main question that has come up is whether this is still the best > > > > > practice or if we should revisit the preferred drivers for etcd. Gorka > > > > > has gotten the grpc-based driver working in a Cinder driver that needs > > > > > etcd[2], so there's a question as to whether we still need the HTTP > > > > > etcd-gateway or if everything should use grpc. I will admit I'm nervous > > > > > about trying to juggle eventlet and grpc, but if it works then my only > > > > > argument is general misgivings about doing anything clever that involves > > > > > eventlet. :-) > > > > > > > > > > It looks like the HTTP API for etcd has moved out of experimental > > > > > status[3] at this point, so that's no longer an issue. There was some > > > > > vague concern from a downstream packaging perspective that the grpc > > > > > library might use a funky build system, whereas the etcd3-gateway > > > > > library only depends on existing OpenStack requirements. > > > > > > > > > > On the other hand, I don't know how much of a hassle it is to deploy and > > > > > manage a grpc-gateway. I'm kind of hoping someone has already been down > > > > > this road and can advise about what they found. > > > > > > > > > > Thanks. 
> > > > > > > > > > -Ben > > > > > > > > > > 1: https://etherpad.openstack.org/p/BOS-etcd-base-service > > > > > 2: > > > > > > > > > > https://github.com/embercsi/ember-csi/blob/5bd4dffe9107bc906d14a45cd819d9a659c19047/ember_csi/ember_csi.py#L1106-L1111 > > > > > 3: https://github.com/grpc-ecosystem/grpc-gateway > > > > > > > > > > From flfuchs at redhat.com Wed Dec 5 09:36:44 2018 From: flfuchs at redhat.com (Florian Fuchs) Date: Wed, 5 Dec 2018 10:36:44 +0100 Subject: [tripleo] [validations] Replacement of undercloud_conf module In-Reply-To: References: Message-ID: On Tue, Dec 4, 2018 at 10:20 PM Dan Prince wrote: > > On Tue, 2018-12-04 at 12:54 -0500, Emilien Macchi wrote: > > Hi folks, > > > > Context: https://bugs.launchpad.net/tripleo/+bug/1805825 > > > > Today I randomly found this module: > > https://github.com/openstack/tripleo-validations/blob/d21e7fa30f9be15bb980279197dc6c5206f38a38/validations/library/undercloud_conf.py > > > > And it gave me 2 ideas, as I think we don't need this module and > > would consider it as technical debt at this point: > > - it's relying on a file, which isn't super safe and flexible IMHO. > > We still use undercloud.conf though right? Why is it not safe (the data > has to be stored somewhere right)? > > > - a lot of validations rely on Hieradata which isn't safe either, we > > saw it with the Containerized Undercloud. > > Why is this not safe? > > I commented on the LP you linked but it seems to me that a simple fix > would be to set the same hiera setting we used before so that the > location of the undercloud.conf is known. We still use and support > hiera for the Undercloud. It would be a simple matter to set this in an > undercloud service via t-h-t. If you wanted to you could even cache a > copy of the used version somewhere and then consume it that way right? I guess this is pretty much what's happening in this patch (at least partly): https://review.openstack.org/#/c/614470/ However, I agree with Emilien that it's a good idea to remove the dependency on puppet/hieradata in favor of ansibles vars. > > Dan > > > > > So I propose that: > > - we export require parameters via the Heat templates into Ansible > > variables > > - we consume these variables from tripleo-validations (can be in the > > inventory or a dedicated var file for validations). Since tripleo-validations consume the inventory with every validation run anyway, I'm slightly in favor of storing the variables there instead of an extra validations file. Florian > > > > So that way we remove the dependency on having the undercloud.conf > > access from Mistral Executor and also stop depending on Puppet > > (hieradata) which we don't guarantee to be here in the future. > > > > Can someone from TripleO validations team ack this email and put this > > work in your backlog? If you need assistance we're happy to help but > > I believe this is an important effort to avoid technical debt here. > > > > Thanks, > > -- > > Emilien Macchi > > From thierry at openstack.org Wed Dec 5 10:31:10 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 5 Dec 2018 11:31:10 +0100 Subject: [loci] Release management for LOCI In-Reply-To: <3855B170-6E38-4DB9-A91C-9389D16D387F@openstack.org> References: <3855B170-6E38-4DB9-A91C-9389D16D387F@openstack.org> Message-ID: <64a34fd9-d31b-5d7d-ae94-053d9bdebbad@openstack.org> Chris Hoge wrote: > [...] > To answer your question, as of right now we are at 2: "not meant to be > "released" or tagged but more like continuously published". 
This may > change after the meeting tomorrow. Looking at the meeting logs it appears the position has not changed. I proposed as a result: https://review.openstack.org/622902 Cheers, -- Thierry Carrez (ttx) From thierry at openstack.org Wed Dec 5 10:32:51 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 5 Dec 2018 11:32:51 +0100 Subject: [dev][qa][devstack] Release management for QA toold and plugins In-Reply-To: <16771aa6641.f7035e7c28699.479407897586326349@ghanshyammann.com> References: <20f06939-9590-4b93-3381-02c32570b990@openstack.org> <20181129191636.GA26514@sinanju.localdomain> <167627c0fb2.fedfd67828981.6889702971134127091@ghanshyammann.com> <167638457ad.fe9b8693739.8314446462346139058@ghanshyammann.com> <6be6bbf3-a862-df85-5120-b90f5c74c1cf@openstack.org> <16771aa6641.f7035e7c28699.479407897586326349@ghanshyammann.com> Message-ID: Ghanshyam Mann wrote: > ---- On Fri, 30 Nov 2018 19:05:36 +0900 Thierry Carrez wrote ---- > > [...] > > OK so in summary: > > > > eslint-config-openstack, karma-subunit-reporter, devstack-tools -> > > should be considered cycle-independent (with older releases history > > imported). Any future release would be done through openstack/releases > > > > devstack-vagrant -> does not need releases or release management, will > > be marked release-management:none in governance > > > > devstack-plugin-ceph -> does not need releases or cycle-related > > branching, so will be marked release-management:none in governance > > > > Other devstack-plugins maintainers should pick whether they need to be > > branched every cycle or not. Oslo-maintained plugins like > > devstack-plugin-zmq and devstack-plugin-pika will, for example. > > > > Unless someone objects, I'll push the related changes where needed. > > Thanks for the clarification ! > > +1. Those looks good. Thanks. See: https://review.openstack.org/622903 https://review.openstack.org/622904 https://review.openstack.org/#/c/622919/ Cheers, -- Thierry Carrez (ttx) From gmann at ghanshyammann.com Wed Dec 5 10:53:43 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Dec 2018 19:53:43 +0900 Subject: [dev][nova][placement][qa] opinion on adding placement tests support in Tempest In-Reply-To: References: <16778b864ea.ed28ea0d72714.6704537180908793759@ghanshyammann.com> <52060b98-74f5-3a3a-1b51-8cba8aa7b00c@gmail.com> Message-ID: <1677e0088f3.c1fe7968101627.2677144604314914637@ghanshyammann.com> ---- On Wed, 05 Dec 2018 03:30:56 +0900 Chris Dent wrote ---- > On Tue, 4 Dec 2018, Dan Smith wrote: > > >> On 12/04/2018 06:13 AM, Chris Dent wrote: > >>> Existing Tempests tests that do things like launching, resizing, > >>> migrating servers already touch placement so may be sufficient. If > >>> we wanted to make these more complete adding verification of > >>> resource providers and their inventories before and after the tests > >>> might be useful. > > [snip] > > > I don't disagree either. However, I do think that there are cases where > > it may make sense to be _able_ to hit the placement endpoint from > > tempest in order to verify that certain things are happening, even in a > > scenario that involves other services. > > [snip] > > Based on conversation with Dan in IRC, we decided it might be useful > to clarify that Dan and I are in agreement. 
It had seemed to me that > he was saying something different from me, but we're both basically > saying "yes, tempest needs to be able to talk to placement to > confirm what it's holding because that's useful sometimes" and "no, > tempest doesn't need to verify the workings of placement api itself". Yeah, that is what we wanted. I think I mentioned that in my original mail but that sentence was not so clear. There will not be any overlap/duplicate tests between what existing functional test cover and what Tempest is going to cover. Tempest will need to talk to placement for extra verification in Tempest tests. Verify only placement API working is not in scope of Tempest. Which is nothing but : - Adding placement service clients with unit test coverage only - Those service client will be added on need basis. We do not want to maintain the unused service clients. - Use those service clients to talk to placement for extra verification in tests. > > Which boils out to this: > > > I *think* that gmann's > > question in the email was actually about placement endpoint support, > > which is the former, and I think is probably legit. > > Yes. Great, we all are on same page now. -gmann > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent From bdobreli at redhat.com Wed Dec 5 11:08:56 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 5 Dec 2018 12:08:56 +0100 Subject: [all][tc][Edge][FEMDC][tripleo][akraino][starlingx] Chronicles Of A Causal Consistency Message-ID: Background. Fact 0. Edge MVP reference architectures are limited to a single control plane that uses a central/global data backend by usual for boring Cloud computing meanings. Fact 1. Edge clouds in Fog computing world are WAN-distributed. Far and middle-level tiers may be communicating to their control planes over high-latency (~50/100ms or more) connections. Fact 2. Post-MVP phases [0] of future reference architectures for Edge imply high autonomity of edge sites (aka cloudlets [1][2]), which is having multiple control planes always maintaining CRUD operations locally and replicating shared state asynchronously, only when "uplinks" are available, if available at all. Fact 3. Distributed Compute Node in the post-MVP phases represents a multi-tiered star topology with middle-layer control planes aggregating thousands of computes at far edge sites and serving CRUD operations for those locally and fully autonomous to upper aggregation edge layers [3]. Those in turn might be aggregating tens of thousands of computes via tens/hundreds of such middle layers. And finally, there may be a central site or a few that want some data and metrics from all of the aggregation edge layers under its control, or pushing deployment configuration down hill through all of the layers. Reality check. That said, the given facts 1-3 contradict to strongly consistent data backends supported in today OpenStack (oslo.db), or Kubernetes as well. That means that neither of two IaaS/PaaS solutions is ready for future post-MVP phases of Edge as of yet. That also means that both will need a new, weaker consistent, data backend to pass the future reality check. If you're interested in formal proves of that claim, please see for sources [4][5][6][7][8]. A [tl;dr] of those: a) It is known that causal consistency is the best suitable for high-latency, high-scale and highly dynamic nature of membership in clusters b) "it it is significantly harder to implement causal consistency than eventual consistency. 
This explains the fact why there is not even a single commercial database system that uses causal consistency" [6] Challenge accepted! What can we as OpenStack community, joined the Kubernetes/OSF/CNCF communities perhaps, for the bright Edge future can do to make things passing that reality check? It's time to start thinking off it early, before we are to face the post-MVP phases for Edge, IMO. That is also something being discussed in the neighbour topic [9] and that I'm also trying to position as a challenge in that very high-level draft paper [10]. As of potential steps on the way of implementing/adopting such a causal data backend in OpenStack at least, we should start looking into the papers, like [4][5][6][7][8] (or even [11], why not having a FS for that?), and probably more of it as a "theoretical background". [0] https://wiki.openstack.org/w/index.php?title=OpenStack_Edge_Discussions_Dublin_PTG#Features_2 [1] https://github.com/State-of-the-Edge/glossary/blob/master/edge-glossary.md#cloudlet [2] https://en.wikipedia.org/wiki/Cloudlet [3] https://github.com/State-of-the-Edge/glossary/blob/master/edge-glossary.md#aggregation-edge-layer [4] http://www.bailis.org/papers/bolton-sigmod2013.pdf [5] http://www.cs.princeton.edu/~wlloyd/papers/eiger-nsdi13.pdf [6] https://www.ronpub.com/OJDB_2015v2i1n02_Elbushra.pdf [7] http://www.cs.cornell.edu/lorenzo/papers/cac-tr.pdf [8] https://www.cs.cmu.edu/~dga/papers/cops-sosp2011.pdf [9] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000492.html [10] https://github.com/bogdando/papers-ieee/blob/master/ICFC-2019/LaTeX/position_paper_1570506394.pdf [11] http://rainbowfs.lip6.fr/data/RainbowFS-2016-04-12.pdf -- Best regards, Bogdan Dobrelya, Irc #bogdando From gmann at ghanshyammann.com Wed Dec 5 11:39:57 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Dec 2018 20:39:57 +0900 Subject: [tc] Adapting office hours schedule to demand In-Reply-To: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> References: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> Message-ID: <1677e2adcd5.b59b7cfb103641.8644361703617614132@ghanshyammann.com> ---- On Wed, 05 Dec 2018 00:45:20 +0900 Thierry Carrez wrote ---- > Hi, > > A while ago, the Technical Committee designated specific hours in the > week where members would make extra effort to be around on #openstack-tc > on IRC, so that community members looking for answers to their questions > or wanting to engage can find a time convenient for them and a critical > mass of TC members around. We currently have 3 weekly spots: > > - 09:00 UTC on Tuesdays > - 01:00 UTC on Wednesdays > - 15:00 UTC on Thursdays > > But after a few months it appears that: > > 1/ nobody really comes on channel at office hour time to ask questions. > We had a questions on the #openstack-tc IRC channel, but I wouldn't say > people take benefit of the synced time > > 2/ some office hours (most notably the 01:00 UTC on Wednesdays, but also > to a lesser extent the 09:00 UTC on Tuesdays) end up just being a couple > of TC members present > > So the schedule is definitely not reaching its objectives, and as such > may be a bit overkill. I was also wondering if this is not a case where > the offer is hurting the demand -- by having so many office hour spots > around, nobody considers them special. 
> > Should we: > > - Reduce office hours to one or two per week, possibly rotating times > > - Dump the whole idea and just encourage people to ask questions at any > time on #openstack-tc, and get asynchronous answers > > - Keep it as-is, it still has the side benefit of triggering spikes of > TC member activity I vote for keeping it to two in a week which can cover both Asia and USA/EU TZ which mean either dropping either Tuesday or Wednesday. If traffic is same in office hours then also it is ok as it does not take any extra effort from us. we keep doing our day activity and keep eyes on channel during that time. Obviously it does not mean we will not active in other time but it is good to save a particular slot where people can find more than one TC. -gmann > > Thoughts ? > > -- > Thierry Carrez (ttx) > > From zhaolihuisky at aliyun.com Wed Dec 5 05:50:26 2018 From: zhaolihuisky at aliyun.com (zhaolihuisky) Date: Wed, 05 Dec 2018 13:50:26 +0800 Subject: =?UTF-8?B?W29wZW5zdGFja11bZGlza2ltYWdlLWJ1aWxkZXJdIEhvdyB0byBpbnN0YWxsIHNvZnR3YXJl?= =?UTF-8?B?IHBhY2thZ2VzIGF0IGJ1aWxkaW5nIG11cmFuby1hZ2VudCBpbWFnZQ==?= Message-ID: hi, guys How to install software packages at building murano-agent image. I have download telegraf-x.x.x-x86_64.rpm file and how to install this package into murano-agent image. Is there any suggestion? Best Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From liliueecg at gmail.com Wed Dec 5 05:55:06 2018 From: liliueecg at gmail.com (Li Liu) Date: Wed, 5 Dec 2018 00:55:06 -0500 Subject: [cyborg] IRC meeting for Dec 5th Message-ID: Hi team, The IRC meeting will be held at the normal time again this week. *Starting next week*, we will move it to 0300 UTC, which 10:00 pm est(Tuesday) / 7:00 pm pst(Tuesday) / 11am beijing time (Wednesday) For this week's meeting, we will try to 1. finalize the new DB scheme design 2. finalize the question from Xinran's email on whether we should use conductor or agent to do the diff/update in DB 3. Track status on miscs -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Dec 5 12:02:08 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Dec 2018 21:02:08 +0900 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <1673a198ed5.ffd33ea931022.2184156112862411416@ghanshyammann.com> References: <20181106210549.4nv6co64qbqk5l7f@skaplons-mac> <20181106212533.v6eapwxd2ksggrlo@yuggoth.org> <4C6DAE05-6FFB-4671-89DA-5EB07229DB26@redhat.com> <166eb10f8fb.117c25ba675341.116732651371514382@ghanshyammann.com> <1673a198ed5.ffd33ea931022.2184156112862411416@ghanshyammann.com> Message-ID: <1677e3f2b18.e6941ff4108960.3659294646236910868@ghanshyammann.com> Reminder to test your project specific jobs if those are dependent on Devstack or Tempest base jobs and keep adding the results on etherpad- https://etherpad.openstack.org/p/devstack-bionic We will merge the Devstack and Tempest base job on Bionic on 10th Dec 2018. -gmann ---- On Thu, 22 Nov 2018 15:26:52 +0900 Ghanshyam Mann wrote ---- > Hi All, > > Let's go with approach 1 means migrating the Devstack and Tempest base jobs to Bionic. This will move most of the jobs to Bionic. > > We have two patches up which move all Devstack and Tempest jobs to Bionic and it's working fine. > > 1. 
All DevStack jobs to Bionic - https://review.openstack.org/#/c/610977/ > - This will move devstack-minimal, devstack, devstack-ipv6, devstack-multinode jobs to bionic only for master which means it will be stein onwards. All these jobs will use > xenial till stable/rocky. > > 2. All Tempest base jobs (except stable branches job running on master) to Bionic - https://review.openstack.org/#/c/618169/ > - This will move devstack-tempest, tempest-all, devstack-tempest-ipv6, tempest-full, tempest-full-py3, tempest-multinode-full, tempest-slow jobs to bionic. > Note- Even Tempest is branchless and these tempest jobs have been moved to Bionic, they will still use xenial for all stable branches(till stable/rocky) testing. with zuulv3 magic and devstack base jobs nodeset for stable branch (xenial) and master (stein onwards -bionic) will take care of that. Tested on [1] and working fine. Thanks corvus and clarkb for guiding to this optimized way. > > 3. Legacy jobs are not migrated to bionic. They should get migrated to Bionic while they are moved to zuulv3 native. So if your projects have many legacy jobs then, they will still run on xenial. > > > Any job inherits from those base jobs will behave the same way (running on bionic from stein onwards and xenial till stable/rocky). > > I am writing the plan and next action item to complete this migration activity: > > 1 Project teams: need to test their jobs 1. which are inherited from devstack/tempest base jobs and should pass as it is 2. Any zuulv3 jobs not using devstack/tempest base job required to migrate to use bionic (Devstack patch provide the bionic nodeset) and test it. Start writing the results on etherpad[2] > > 2 QA team will merge the above patches by 10th Dec so that we can find and fix any issues as early and to avoid the same during release time. > > Let's finish the pre-testing till 10th Dec and then merge the bionic migration patches. > > > [1] https://review.openstack.org/#/c/618181/ https://review.openstack.org/#/c/618176/ > [2] https://etherpad.openstack.org/p/devstack-bionic > > -gmann > > ---- On Wed, 07 Nov 2018 08:45:45 +0900 Doug Hellmann wrote ---- > > Ghanshyam Mann writes: > > > > > ---- On Wed, 07 Nov 2018 06:51:32 +0900 Slawomir Kaplonski wrote ---- > > > > Hi, > > > > > > > > > Wiadomość napisana przez Jeremy Stanley w dniu 06.11.2018, o godz. 22:25: > > > > > > > > > > On 2018-11-06 22:05:49 +0100 (+0100), Slawek Kaplonski wrote: > > > > > [...] > > > > >> also add jobs like "devstack-xenial" and "tempest-full-xenial" > > > > >> which projects can use still for some time if their job on Bionic > > > > >> would be broken now? > > > > > [...] > > > > > > > > > > That opens the door to piecemeal migration, which (as we similarly > > > > > saw during the Trusty to Xenial switch) will inevitably lead to > > > > > projects who no longer gate on Xenial being unable to integration > > > > > test against projects who don't yet support Bionic. At the same > > > > > time, projects which have switched to Bionic will start merging > > > > > changes which only work on Bionic without realizing it, so that > > > > > projects which test on Xenial can't use them. In short, you'll be > > > > > broken either way. On top of that, you can end up with projects that > > > > > don't get around to switching completely before release comes, and > > > > > then they're stuck having to manage a test platform transition on a > > > > > stable branch. 
> > > > > > > > I understand Your point here but will option 2) from first email lead to the same issues then? > > > > > > seems so. approach 1 is less risky for such integrated testing issues and requires less work. In approach 1, we can coordinate the base job migration with project side testing with bionic. > > > > > > -gmann > > > > I like the approach of updating the devstack jobs to move everything to > > Bionic at one time because it sounds like it presents less risk of us > > ending up with something that looks like it works together but doesn't > > actually because it's tested on a different platform, as well as being > > less likely to cause us to have to do major porting work in stable > > branches after the release. > > > > We'll need to take the same approach when updating the version of Python > > 3 used inside of devstack. > > > > Doug > > > > From dprince at redhat.com Wed Dec 5 12:31:40 2018 From: dprince at redhat.com (Dan Prince) Date: Wed, 05 Dec 2018 07:31:40 -0500 Subject: [tripleo] [validations] Replacement of undercloud_conf module In-Reply-To: References: Message-ID: <006550d1fdab4a4c6ab94c72c791687cd4e654fb.camel@redhat.com> On Tue, 2018-12-04 at 19:37 -0500, Emilien Macchi wrote: > On Tue, Dec 4, 2018 at 4:18 PM Dan Prince wrote: > > Why is this not safe? > > > > > > > > I commented on the LP you linked but it seems to me that a simple > > fix > > > > would be to set the same hiera setting we used before so that the > > > > location of the undercloud.conf is known. We still use and support > > > > hiera for the Undercloud. It would be a simple matter to set this > > in an > > > > undercloud service via t-h-t. If you wanted to you could even cache > > a > > > > copy of the used version somewhere and then consume it that way > > right? > > I guess I wanted to use proper Ansible variables instead of Hiera, > since we are moving toward more Ansible. That's all. > I found it cleaner and simpler than relying on undercloud.conf file. Agreed. After reviewing your patches it all looked fine. Dan > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Wed Dec 5 12:59:44 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 5 Dec 2018 07:59:44 -0500 Subject: [ironic] Time to discuss clean/deploy steps In-Reply-To: References: Message-ID: On Tue, Dec 4, 2018 at 4:39 PM Julia Kreger wrote: > All, > > I've looked at the doodle poll results and it looks like the best > available time is 3:00 PM UTC on Friday December 7th. > Turns out I won't be able to make that after all. :( Will drop my thoughts in the spec. > > I suggest we use bluejeans[2] as that has worked fairly well for us thus > far. The specification documented related to the discussion can be found in > review[3]. > > Thanks, > > -Julia > > [1] https://doodle.com/poll/yan4wyvztf7mpq46 > [2] https://bluejeans.com/u/jkreger/ > [3] https://review.openstack.org/#/c/606199/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Wed Dec 5 12:06:34 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Wed, 5 Dec 2018 13:06:34 +0100 Subject: [tripleo] Spec for upgrades including operating system upgrade Message-ID: Hi folks, there's work ongoing to figure out how we can include operating system upgrade into the TripleO upgrade procedure. 
As we explore the options, it becomes apparent that we need to change the operator workflow, and the OpenStack upgrade including an operating system upgrade will likely end up being our most daring, complex, risky upgrade procedure yet. There's a spec [1] which outlines the expected operator workflow and its implementation, alternatives, current gaps, risks etc. If you maintain any of our composable services, please take some time to review the spec, consider if the upgrade of the service fits the proposed upgrade workflow, and provide feedback or suggestions. The spec is fairly long (the topic is complex), but it's been structured to make the read as comfortable as possible. Thank you, Jirka [1] patch: https://review.openstack.org/#/c/622324 rendered: http://logs.openstack.org/24/622324/1/check/openstack-tox-docs/b047271/html/specs/stein/upgrades-with-operating-system.html From ltoscano at redhat.com Wed Dec 5 13:19:52 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 05 Dec 2018 14:19:52 +0100 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <1677e3f2b18.e6941ff4108960.3659294646236910868@ghanshyammann.com> References: <1673a198ed5.ffd33ea931022.2184156112862411416@ghanshyammann.com> <1677e3f2b18.e6941ff4108960.3659294646236910868@ghanshyammann.com> Message-ID: <4795573.S6joe1DXvj@whitebase.usersys.redhat.com> On Wednesday, 5 December 2018 13:02:08 CET Ghanshyam Mann wrote: > Reminder to test your project specific jobs if those are dependent on > Devstack or Tempest base jobs and keep adding the results on etherpad- > https://etherpad.openstack.org/p/devstack-bionic > > We will merge the Devstack and Tempest base job on Bionic on 10th Dec 2018. > I can't test it right now using the gates (so I can't really report this on the etherpad), but a quick local test shows that devstack-plugin-ceph shows does not seem to support bionic. I may try to prepare a test job later if no one beats me at it. Ciao -- Luigi From ltoscano at redhat.com Wed Dec 5 13:29:10 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 05 Dec 2018 14:29:10 +0100 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <4795573.S6joe1DXvj@whitebase.usersys.redhat.com> References: <1677e3f2b18.e6941ff4108960.3659294646236910868@ghanshyammann.com> <4795573.S6joe1DXvj@whitebase.usersys.redhat.com> Message-ID: <4000535.P2t9rJb4HH@whitebase.usersys.redhat.com> On Wednesday, 5 December 2018 14:19:52 CET Luigi Toscano wrote: > On Wednesday, 5 December 2018 13:02:08 CET Ghanshyam Mann wrote: > > Reminder to test your project specific jobs if those are dependent on > > Devstack or Tempest base jobs and keep adding the results on etherpad- > > https://etherpad.openstack.org/p/devstack-bionic > > > > We will merge the Devstack and Tempest base job on Bionic on 10th Dec > > 2018. > > I can't test it right now using the gates (so I can't really report this on > the etherpad), but a quick local test shows that devstack-plugin-ceph shows > does not seem to support bionic. I may try to prepare a test job later if no > one beats me at it. 
> Erp, sorry, I didn't notice https://review.openstack.org/#/c/611594/ - I confirm that it makes devstack-plugin-ceph compatible with bionic, so please merge it :) Ciao -- Luigi From jimmy at openstack.org Wed Dec 5 14:10:30 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 05 Dec 2018 08:10:30 -0600 Subject: [ptl] [openstack-map] New tags for OpenStack Project Map In-Reply-To: References: <5C06C88C.1010303@openstack.org> Message-ID: <5C07DC56.8050000@openstack.org> Apologies for not clarifying. Yes. The video ID is the YoutTubeID. > Lingxian Kong > December 5, 2018 at 2:07 AM > On Wed, Dec 5, 2018 at 7:35 AM Jimmy McArthur > wrote: > > Following up on this thread from ttx [1], we are continuing to > enhance > the content on the OpenStack Project Map [2],[3] through new tags > that > are managed through the openstack-map repo [4]. > > * video (id, title, description) - This controls the Project Update > video you see on the project page. I've just pushed a review > adding all > of the Project Updates for Berlin [5] > > > I'm wondering where does the id come from? Youtube URL suffix? > Cheers, > Lingxian Kong > Jimmy McArthur > December 4, 2018 at 12:33 PM > Following up on this thread from ttx [1], we are continuing to enhance > the content on the OpenStack Project Map [2],[3] through new tags that > are managed through the openstack-map repo [4]. > > * video (id, title, description) - This controls the Project Update > video you see on the project page. I've just pushed a review adding > all of the Project Updates for Berlin [5] > > * depends-on - Things that are a strong dependency. This should be > used ONLY if your component requires another one to work (e.g. nova -> > glance) > > * see-also - Should list things that are a week dependency or an > adjacent relevant thing > > * support-teams (name: link) - This is meant to give credit to > adjacent projects that aren't necessary to run the software (e.g. > Oslo, i18n, Docs). We are still determining how best to implement > this tag, but we feel it's important to give some credit to these > other teams that are so critical in helping to maintain, support, and > build OpenStack > > If you have some time, please go to the git repo [4] and review your > project and help flesh out these new tags (or update old ones) so we > can display them in the Project Map [2]. > > Cheers, > Jimmy > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000178.html > > [2] > https://www.openstack.org/software/project-navigator/openstack-components > [3] > https://www.openstack.org/assets/software/projectmap/openstack-map.pdf > [4] https://git.openstack.org/cgit/openstack/openstack-map/ > [5] https://review.openstack.org/622485 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luka.peschke at objectif-libre.com Wed Dec 5 14:16:01 2018 From: luka.peschke at objectif-libre.com (Luka Peschke) Date: Wed, 05 Dec 2018 15:16:01 +0100 Subject: [cloudkitty] IRC Meeting of the 7/12 will exceptionally happen at 9h AM UTC Message-ID: Hello, After discussion on IRC, it has been decided that the next CloudKitty IRC meeting (which was supposed to happen at 15h UTC on the 7/12) will exceptionally be held at 9h UTC (10h CET) on the same day. The schedule for other meetings remains unchanged (First friday of each month at 15h UTC). 
Cheers, -- Luka Peschke Développeur +33 (0) 5 82 95 65 36 5 rue du Moulin Bayard - 31000 Toulouse www.objectif-libre.com From doug at doughellmann.com Wed Dec 5 14:18:49 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 05 Dec 2018 09:18:49 -0500 Subject: [all] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <20181204100812.og33xegl2fxmoo6g@localhost> Message-ID: Mike Bayer writes: > On Tue, Dec 4, 2018 at 11:42 AM Ben Nemec wrote: >> >> Copying Mike Bayer since he's our resident DB expert. One more comment >> inline. > > so the level of abstraction oslo.db itself provides is fairly light - > it steps in for the initial configuration of the database engine, for > the job of reworking exceptions into something more locallized, and > then for supplying a basic transactional begin/commit pattern that > includes concepts that openstack uses a lot. it also has some > helpers for things like special datatypes, test frameworks, and stuff > like that. > > That is, oslo.db is not a full blown "abstraction" layer, it exposes > the SQLAlchemy API which is then where you have the major level of > abstraction. > > Given that, making oslo.db do for etcd3 what it does for SQLAlchemy > would be an appropriate place for such a thing. It would be all new > code and not really have much overlap with anything that's there right > now, but still would be feasible at least at the level of, "get a > handle to etcd3, here's the basic persistence / query pattern we use > with it, here's a test framework that will allow test suites to use > it". If there's no real overlap, it sounds like maybe a new (or at least different, see below) library would be more appropriate. That would let the authors/reviewers focus on whatever configuration abstraction we need for etcd3, and not worry about the relational database stuff in oslo.db now. > At the level of actually reading and writing data to etcd3 as well as > querying, that's a bigger task, and certainly that is not a SQLAlchemy > thing either. If etcd3's interface is a simple enough "get" / "put" > / "query" and then some occasional special operations, those kinds of > abstraction APIs are often not too terrible to write. There are a zillion client libraries for etcd already. Let's see which one has the most momentum, and use that. > Also note that we have a key/value database interface right now in > oslo.cache which uses dogpile.cache against both memcached and redis > right now. If you really only needed put/get with etcd3, it could > do that also, but I would assume we have the need for more of a fine > grained interface than that. Haven't studied etcd3 as of yet. But > I'd be interested in supporting it in oslo somewhere. Using oslo.cache might make sense, too. Doug From doug at doughellmann.com Wed Dec 5 14:40:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 05 Dec 2018 09:40:30 -0500 Subject: [TC] Forum TC Vision Retrospective summary In-Reply-To: References: Message-ID: Julia Kreger writes: > During the Forum in Berlin, the Technical Committee along with interested > community members took some time to look back at the vision that written in > early 2017. > > We had three simple questions: > * What went well? > * What needs improvement? > * What should the next steps be? > > To summarize, the group thought the vision helped guide some thoughts and > decisions. Helped provide validation on what was thought to be important. 
> We have seen adjacent communities fostered. We didn't solely focus on the > vision which was viewed as a positive as things do change over time. It > helped us contrast, and in the process of writing the vision we reached the > use of the same words. > > Most importantly, we learned that we took on too much work. > > As with most retrospectives, the list of things that needed improvement was > a bit longer. There was some perception that it fell off the map, and that > not every item received work. Possibly that we even took the vision too > literally and too detailed as opposed to use it as more a guiding document > to help us evolve as time goes on. There was consensus that there was still > room to improve and that we could have done a better at conveying context > to express how, what, and why. > > For next steps, we feel that it is time to revise the vision, albeit in a > shorter form. We also felt that there could be a vision for the TC itself, > which led to the discussion of providing clarity to the role of the > Technical Committee. > > As for action items and next steps that we reached consensus on: > > * To refine the technical vision document. > * That it was time to compose a new vision for the community. > * Consensus was reached that there should be a vision of the TC itself, and > as part of this have a living document that describes the "Role of the TC". > ** ttx, cdent, and TheJulia have volunteered to continue those discussions. > ** mnaser would start a discussion with the community as to what the TC > should and shouldn't do. For those reading this, please remember that the > TC's role is defined in the foundation bylaws, so this would be more of a > collection of perceptions. > * TheJulia to propose a governance update to suggest that people proposing > TC candidacy go ahead and preemptively seek to answer the question of what > the candidate perceives as the role of the TC. Do we need a resolution for this? Or just for someone to remember to ask the question when the time comes? > > The etherpad that followed the discussion can be found at: > https://etherpad.openstack.org/p/BER-tc-vision-retrospective > > -Julia -- Doug From opensrloo at gmail.com Wed Dec 5 14:57:05 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Wed, 5 Dec 2018 09:57:05 -0500 Subject: Proposing KaiFeng Wang for ironic-core In-Reply-To: References: Message-ID: On Sun, Dec 2, 2018 at 9:45 AM Julia Kreger wrote: > I'd like to propose adding KaiFeng to the ironic-core reviewer group. > Previously, we had granted KaiFeng rights on ironic-inspector-core and I > personally think they have done a great job there. > > Kaifeng has also been reviewing other repositories in ironic's scope[1]. > Their reviews and feedback have been insightful and meaningful. They have > also been very active[2] at reviewing which is an asset for any project. > > I believe they will be an awesome addition to the team. > > -Julia > > [1]: http://stackalytics.com/?module=ironic-group&user_id=kaifeng > [2]: http://stackalytics.com/report/contribution/ironic-group/90 > Totally agree ++, thanks for proposing Kaifeng and thank you Kaifeng for all the great work so far! --ruby -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mike_mp at zzzcomputing.com Wed Dec 5 14:58:33 2018 From: mike_mp at zzzcomputing.com (Mike Bayer) Date: Wed, 5 Dec 2018 09:58:33 -0500 Subject: [all] Etcd as DLM In-Reply-To: References: <3147d433-13b4-3582-c831-25c29a5799ca@nemebean.com> <1A3C52DFCD06494D8528644858247BF01C24C6BD@EX10MBOX03.pnnl.gov> <20181204100812.og33xegl2fxmoo6g@localhost> Message-ID: On Wed, Dec 5, 2018 at 9:18 AM Doug Hellmann wrote: > > Mike Bayer writes: > > > On Tue, Dec 4, 2018 at 11:42 AM Ben Nemec wrote: > >> > >> Copying Mike Bayer since he's our resident DB expert. One more comment > >> inline. > > > > so the level of abstraction oslo.db itself provides is fairly light - > > it steps in for the initial configuration of the database engine, for > > the job of reworking exceptions into something more locallized, and > > then for supplying a basic transactional begin/commit pattern that > > includes concepts that openstack uses a lot. it also has some > > helpers for things like special datatypes, test frameworks, and stuff > > like that. > > > > That is, oslo.db is not a full blown "abstraction" layer, it exposes > > the SQLAlchemy API which is then where you have the major level of > > abstraction. > > > > Given that, making oslo.db do for etcd3 what it does for SQLAlchemy > > would be an appropriate place for such a thing. It would be all new > > code and not really have much overlap with anything that's there right > > now, but still would be feasible at least at the level of, "get a > > handle to etcd3, here's the basic persistence / query pattern we use > > with it, here's a test framework that will allow test suites to use > > it". > > If there's no real overlap, it sounds like maybe a new (or at least > different, see below) library would be more appropriate. That would let > the authors/reviewers focus on whatever configuration abstraction we > need for etcd3, and not worry about the relational database stuff in > oslo.db now. OK, my opinion on that is informed by how oslo.db is organized; in that it has no relational database concepts in the base, which are instead local to oslo_db.sqlalchemy. It originally intended to be abstraction for "databases" in general. There may be some value sharing some concepts across relational and key/value databases, to the extent they are used as the primary data storage service for an application and not just a cache, although this may not be practical right now and we might consider oslo_db to just be slightly mis-named. > > > At the level of actually reading and writing data to etcd3 as well as > > querying, that's a bigger task, and certainly that is not a SQLAlchemy > > thing either. If etcd3's interface is a simple enough "get" / "put" > > / "query" and then some occasional special operations, those kinds of > > abstraction APIs are often not too terrible to write. > > There are a zillion client libraries for etcd already. Let's see which > one has the most momentum, and use that. Right, but I'm not talking about client libraries I'm talking about an abstraction layer. So that the openstack app that talks to etcd3 and tomorrow might want to talk to FoundationDB wouldn't have to rip all the code out entirely. or more immediately, when the library that has the "most momentum" no longer does, and we need to switch. Openstack's switch from MySQL-python to pymysql is a great example of this, as well as the switch of memcached drivers from python-memcached to pymemcached. 
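To make that concrete with something that already exists in this space: dogpile.cache, which oslo.cache builds on, picks its backend from a configuration string, so swapping the store underneath doesn't touch calling code. A rough sketch only, using dogpile's stock backend names and a made-up key, not the oslo.cache wrapper API:

from dogpile.cache import make_region

# Swapping the backing store means changing this string (plus any
# connection arguments), e.g. "dogpile.cache.redis" instead of the
# in-memory backend; the set/get calls below stay exactly the same.
region = make_region().configure(
    "dogpile.cache.memory",
    expiration_time=300,
)

region.set("instance:1234", {"state": "ACTIVE"})
value = region.get("instance:1234")

An etcd3 (or, later, FoundationDB) backend slotted in behind the same kind of interface would keep that property.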
consumers of oslo libraries should only have to change a configuration string for changes like this, not any imports or calling conventions. Googling around I'm not seeing much that does this other than dogpile.cache and a few small projects that don't look very polished. This is probably because it's sort of trivial to make a basic one and then sort of hard to expose vendor-specific features once you've done so. but still IMO worthwhile. > > > Also note that we have a key/value database interface right now in > > oslo.cache which uses dogpile.cache against both memcached and redis > > right now. If you really only needed put/get with etcd3, it could > > do that also, but I would assume we have the need for more of a fine > > grained interface than that. Haven't studied etcd3 as of yet. But > > I'd be interested in supporting it in oslo somewhere. > > Using oslo.cache might make sense, too. I think the problems of caching are different than those of primary data store. Caching assumes data is impermanent, that it expires with a given length of time, and that the thing being stored is opaque and can't be queried directly or in the aggregate at least as far as the caching API is concerned (e.g. no "fetch by 'field'", for starters). Whereas a database abstraction API would include support for querying, as well as that it would treat the data as permanent and critical rather than a transitory, stale copy of something. so while I'm 0 on not using oslo.db I'm -1 on using oslo.cache. > > Doug From sjamgade at suse.com Wed Dec 5 15:30:06 2018 From: sjamgade at suse.com (Sumit Jamgade) Date: Wed, 5 Dec 2018 16:30:06 +0100 Subject: [glance] is glance-cache deprecated Message-ID: Hello, $subject or are there any plans to migrate it to v2 ? I see Ia086230cc8c92f7b7dfd5b001923110d5bc55d4d removed the underlying entrypoint for glanace-cache-manage entrypoint -- thanks Sumit From doug at doughellmann.com Wed Dec 5 15:41:01 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 05 Dec 2018 10:41:01 -0500 Subject: [tc][all] Train Community Goals In-Reply-To: References: Message-ID: Julia Kreger writes: > Off-hand, I think there needs to be a few more words agreed upon for each > in terms of what each item practically means. > > In other words, does #1 mean each python-clientlibrary's OSC plugin is > ready to rock and roll, or we talking about everyone rewriting all client > interactions in to openstacksdk, and porting existing OSC plugins use that > different python sdk. We talked about those things as separate phases. IIRC, the first phase was to include ensuring that python-openstackclient has full feature coverage for non-admin operations for all microversions, using the existing python-${service}client library or SDK as is appropriate. The next phase was to ensure that the SDK has full feature coverage for all microversions. After that point we could update OSC to use the SDK and start deprecating the service-specific client libraries. > In other words, some projects could find it very easy or that they are > already done, where as others could find themselves with a huge lift that > is also dependent upon review bandwidth that is outside of their control or > influence which puts such a goal at risk if we try and push too hard. 
> > -Julia > > > On Tue, Dec 4, 2018 at 9:43 AM Lance Bragstad wrote: > >> Hi all, >> >> The purpose of this thread is to have a more focused discussion about what >> we'd like to target for Train community goals, bootstrapped with the >> outcomes from the session in Berlin [0]. >> >> During the session, we went through each item as a group and let the >> person who added it share why they thought it would be a good community >> goal candidate for the next release. Most goals have feedback captured in >> etherpad describing next steps, but the following stuck out as top >> contenders from the session (rated by upvotes): >> >> 1. Moving legacy clients to python-openstackclient >> 2. Cleaning up resources when deleting a project >> 3. Service-side health checks >> >> I don't think I missed any goals from the session, but if I did, please >> let me know and I'll add it to the list so that we can discuss it here. >> >> Does anyone have strong opinions either way about the goals listed above? >> >> [0] https://etherpad.openstack.org/p/BER-t-series-goals >> -- Doug From rosmaita.fossdev at gmail.com Wed Dec 5 16:36:49 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 5 Dec 2018 11:36:49 -0500 Subject: [glance] is glance-cache deprecated In-Reply-To: References: Message-ID: <56432dc9-d61a-f1e0-16bd-a51bf29b2fb6@gmail.com> On 12/5/18 10:30 AM, Sumit Jamgade wrote: > Hello, > > $subject No. It's just not easily manageable ATM. > or are there any plans to migrate it to v2 ? Yes, see this spec for Stein: https://git.openstack.org/cgit/openstack/glance-specs/commit/?id=862f2212c7a382a832456829be8bd6f2f9ee2561 > > I see Ia086230cc8c92f7b7dfd5b001923110d5bc55d4d removed the underlying > entrypoint > for glanace-cache-manage entrypoint > > -- > thanks > Sumit > From dangtrinhnt at gmail.com Wed Dec 5 16:50:14 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 6 Dec 2018 01:50:14 +0900 Subject: [Searchlight][infra] tox failed tests at zuul check only In-Reply-To: References: <20181203170339.psadnws63wfywtrs@yuggoth.org> Message-ID: Hi all, I just wonder why the CI uses "searchlight==6.0.0.0b2.dev11" [1] when the latest release I made is "6.0.0.0b1"? [1] http://logs.openstack.org/71/622871/1/check/openstack-tox-py27/aca5881/job-output.txt.gz On Tue, Dec 4, 2018 at 9:09 AM Trinh Nguyen wrote: > Thank Jeremy for the clear instructions. > > On Tue, Dec 4, 2018 at 2:05 AM Jeremy Stanley wrote: > >> On 2018-12-04 00:28:30 +0900 (+0900), Trinh Nguyen wrote: >> > Currently, [1] fails tox py27 tests on Zuul check for just updating the >> log >> > text. The tests are successful at local dev env. Just wondering there is >> > any new change at Zuul CI? >> > >> > [1] https://review.openstack.org/#/c/619162/ >> >> I don't know of any recent changes which would result in the >> indicated test failures. According to the log it looks like it's a >> functional testsuite and the tests are failing to connect to the >> search API. I don't see your job collecting any service logs >> however, so it's unclear whether the API service is failing to >> start, or spontaneously crashes, or something else is going on. My >> first guess would be that one of your dependencies has released and >> brought some sort of regression. 
>> >> According to >> >> http://zuul.openstack.org/builds?job_name=openstack-tox-py27&project=openstack%2Fsearchlight&branch=master >> the last time that job passed for your repo was 2018-11-07 with the >> installed package versions listed in the >> >> http://logs.openstack.org/56/616056/1/gate/openstack-tox-py27/e413441/tox/py27-5.log >> file, and the first failure I see matching the errors in yours ran >> with the versions in >> >> http://logs.openstack.org/62/619162/1/check/openstack-tox-py27/809a281/tox/py27-5.log >> on 2018-11-21 (it wasn't run for the intervening 2 weeks so we have >> a fairly large window of potential external breakage there). A diff >> of those suggests the following dependencies updated between them: >> >> coverage: 4.5.1 -> 4.5.2 >> cryptography: 2.3.1 -> 2.4.1 >> httplib2: 0.11.3 -> 0.12.0 >> oslo.cache: 1.31.1 -> 1.31.0 (downgraded) >> oslo.service: 1.32.0 -> 1.33.0 >> python-neutronclient: 6.10.0 -> 6.11.0 >> requests: 2.20.0 -> 2.20.1 >> WebOb: 1.8.3 -> 1.8.4 >> >> Make sure with your local attempts at reproduction you're running >> with these newer versions of dependencies, for example by clearing >> any existing tox envs with the -r flag or `git clean -dfx` so that >> stale versions aren't used instead. >> -- >> Jeremy Stanley >> > > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Dec 5 17:10:23 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 05 Dec 2018 09:10:23 -0800 Subject: [Searchlight][infra] tox failed tests at zuul check only In-Reply-To: References: <20181203170339.psadnws63wfywtrs@yuggoth.org> Message-ID: <1544029823.1906308.1599921208.06E91026@webmail.messagingengine.com> On Wed, Dec 5, 2018, at 8:50 AM, Trinh Nguyen wrote: > Hi all, > > I just wonder why the CI uses "searchlight==6.0.0.0b2.dev11" [1] when the > latest release I made is "6.0.0.0b1"? > > [1] > http://logs.openstack.org/71/622871/1/check/openstack-tox-py27/aca5881/job-output.txt.gz > It is testing the change you pushed which is 11 commits ahead of 6.0.0.0b1. PBR knows that if the most recent release was 6.0.0.0b1 then the next possible release must be at least 6.0.0.0b2. The 11 commits since 6.0.0.0b1 form the .dev11 suffix. Basically this is PBR being smart to attempt to give you monotonically increasing version numbers that are also valid should you tag a release. More details at https://docs.openstack.org/pbr/latest/user/features.html#version. Clark From fungi at yuggoth.org Wed Dec 5 17:10:53 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 Dec 2018 17:10:53 +0000 Subject: [Searchlight][infra] tox failed tests at zuul check only In-Reply-To: References: <20181203170339.psadnws63wfywtrs@yuggoth.org> Message-ID: <20181205171053.lnifnu5pqxthz3eh@yuggoth.org> On 2018-12-06 01:50:14 +0900 (+0900), Trinh Nguyen wrote: > I just wonder why the CI uses "searchlight==6.0.0.0b2.dev11" [1] when the > latest release I made is "6.0.0.0b1"? > > [1] http://logs.openstack.org/71/622871/1/check/openstack-tox-py27/aca5881/job-output.txt.gz [...] Python PEP 440 sorts .dev versions earlier than the equivalent string prior to the .dev portion, so PBR is generating a version for you which sorts after 6.0.0.0b1 (the most recent tag on that branch) but before 6.0.0.0b2 (the next possible beta tag you might use in the future).
The "11" there indicates it sees you have 11 additional commits on that branch since the 6.0.0.0b1 tag was applied. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dangtrinhnt at gmail.com Wed Dec 5 17:17:32 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 6 Dec 2018 02:17:32 +0900 Subject: [Searchlight][infra] tox failed tests at zuul check only In-Reply-To: <1544029823.1906308.1599921208.06E91026@webmail.messagingengine.com> References: <20181203170339.psadnws63wfywtrs@yuggoth.org> <1544029823.1906308.1599921208.06E91026@webmail.messagingengine.com> Message-ID: Oh Jeremy, Clark, Thank for the info. Pretty helpful. Bests, On Thu, Dec 6, 2018 at 2:12 AM Clark Boylan wrote: > On Wed, Dec 5, 2018, at 8:50 AM, Trinh Nguyen wrote: > > Hi all, > > > > I just wonder why the CI uses "searchlight==6.0.0.0b2.dev11" [1] when the > > latest release I made is "6.0.0.0b1"? > > > > [1] > > > http://logs.openstack.org/71/622871/1/check/openstack-tox-py27/aca5881/job-output.txt.gz > > > > It is testing the change you pushed which is 11 commits ahead of > 6.0.0.0b1. PBR knows that if the most recent release was 6.0.0.0b1 then the > next possible release must be at least 6.0.0.0b2. The 11 comments since > 6.0.0.0b1 form the .dev11 suffix. > > Basically this is PBR being smart to attempt to give you monotonically > increasing version numbers that are also valid should you tag a release. > More details at > https://docs.openstack.org/pbr/latest/user/features.html#version. > > Clark > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Wed Dec 5 17:18:37 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 5 Dec 2018 11:18:37 -0600 Subject: [dev] How to develop changes in a series In-Reply-To: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> References: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> Message-ID: This started off as a response to some questions Sundar asked, but I thought it might be interesting/useful for new[er] OpenStack developers at large, so broadcasting. I'm sure there's a document somewhere that covers this, but here's a quick run-down on how to develop multiple changes in a series. (Or at least, how I do it.) Start on a freshly-pull'd master branch: efried at efried-ThinkPad-W520:~/Neo/nova$ git checkout master Switched to branch 'master' Your branch is up-to-date with 'origin/master'. efried at efried-ThinkPad-W520:~/Neo/nova$ git pull --all When you're working on a blueprint, you want to name your local branch after the blueprint. So in this case, bp/nova-cyborg-interaction. efried at efried-ThinkPad-W520:~/Neo/nova$ git checkout -b bp/nova-cyborg-interaction Switched to a new branch 'bp/nova-cyborg-interaction' efried at efried-ThinkPad-W520:~/Neo/nova$ git log --oneline -1 --decorate 5bf6f63 (HEAD, origin/master, origin/HEAD, gerrit/master, master, bp/nova-cyborg-interaction) Merge "Deprecate the nova-xvpvncproxy service" When you `git commit` (without `--amend`), you're creating a new commit on top of whatever commit you started at. If you started with a clean, freshly pull'd master branch, that'll be whatever the most recently merged commit in the master branch was. In this example, that's commit 5bf6f63. 
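A quick aside before building up the stack: if you ever want to sanity-check what you're sitting on top of, `git log --oneline --graph --decorate` draws the parent relationships explicitly. Once the two commits created below exist, the output is shaped roughly like this (hashes and subjects taken from the transcript that follows):

* ebb3505 (HEAD, bp/nova-cyborg-interaction) WIP: Cyborg PCI handling
* 1b2c453 Add cyborg client to requirements
* 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge "Deprecate the nova-xvpvncproxy service"

Purely optional; the plain `git log --oneline --decorate` calls below carry the same information without the graph markers.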
So let's say I make an edit for my first patch and commit it: efried at efried-ThinkPad-W520:~/Neo/nova$ echo 'python-cyborgclient>=1.0' >> requirements.txt efried at efried-ThinkPad-W520:~/Neo/nova$ echo 'python-cyborgclient==1.1' >> lower-constraints.txt efried at efried-ThinkPad-W520:~/Neo/nova$ git commit -a -m "Add cyborg client to requirements" [bp/nova-cyborg-interaction 1b2c453] Add cyborg client to requirements  2 files changed, 2 insertions(+) efried at efried-ThinkPad-W520:~/Neo/nova$ git log --oneline -2 --decorate 1b2c453 (HEAD, bp/nova-cyborg-interaction) Add cyborg client to requirements 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge "Deprecate the nova-xvpvncproxy service" I just made commit 1b2c453 on top of 5bf6f63. You'll notice my branch name (bp/nova-cyborg-interaction) came along with me. Now I'm going to make another change, but just part of it, a work-in-progress commit: efried at efried-ThinkPad-W520:~/Neo/nova$ mkdir nova/pci/cyborg efried at efried-ThinkPad-W520:~/Neo/nova$ touch nova/pci/cyborg/__init__.py efried at efried-ThinkPad-W520:~/Neo/nova$ git add nova/pci/cyborg/__init__.py efried at efried-ThinkPad-W520:~/Neo/nova$ git commit -m "WIP: Cyborg PCI handling" [bp/nova-cyborg-interaction ebb3505] WIP: Cyborg PCI handling  1 file changed, 0 insertions(+), 0 deletions(-)  create mode 100644 nova/pci/cyborg/__init__.py efried at efried-ThinkPad-W520:~/Neo/nova$ git log --oneline -3 --decorate ebb3505 (HEAD, bp/nova-cyborg-interaction) WIP: Cyborg PCI handling 1b2c453 Add cyborg client to requirements 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge "Deprecate the nova-xvpvncproxy service" Now commit ebb3505 is on top of 1b2c453, which is still on top of 5bf6f63 (the master). Note that my branch name came with me again. At this point, I push my series up to gerrit. Note that it makes me confirm that I really want to push two commits at once. efried at efried-ThinkPad-W520:~/Neo/nova$ git review You are about to submit multiple commits. This is expected if you are submitting a commit that is dependent on one or more in-review commits, or if you are submitting multiple self-contained but dependent changes. Otherwise you should consider squashing your changes into one commit before submitting (for indivisible changes) or submitting from separate branches (for independent changes). The outstanding commits are: ebb3505 (HEAD, bp/nova-cyborg-interaction) WIP: Cyborg PCI handling 1b2c453 Add cyborg client to requirements Do you really want to submit the above commits? Type 'yes' to confirm, other to cancel: yes remote: remote: Processing changes: new: 2, refs: 2 (\)        remote: Processing changes: new: 2, refs: 2 (\)        remote: Processing changes: new: 2, refs: 2 (\)        remote: Processing changes: new: 2, refs: 2, done            remote: remote: New Changes:        remote:   https://review.openstack.org/623026 Add cyborg client to requirements        remote:   https://review.openstack.org/623027 WIP: Cyborg PCI handling        remote: To ssh://efried at review.openstack.org:29418/openstack/nova.git  * [new branch]      HEAD -> refs/publish/master/bp/nova-cyborg-interaction Now if you go to either of those links - e.g. https://review.openstack.org/#/c/623026/ - you'll see that the patches are stacked up in series on the top right. But oops, I made a mistake in my first commit. My lower constraint can't be higher than my minimum in requirements.txt. 
If I still had my branch locally, I could skip this next step, but as a matter of rigor to avoid some common pratfalls, I will pull the whole series afresh from gerrit by asking git review to grab the *top* change: efried at efried-ThinkPad-W520:~/Neo/nova$ git review -d 623027 Downloading refs/changes/27/623027/1 from gerrit Switched to branch "review/eric_fried/bp/nova-cyborg-interaction" Now I'm sitting on the top change (which you'll notice happens to be exactly the same as before I pushed it - again, meaning I could technically have just worked from where I was, but see above): efried at efried-ThinkPad-W520:~/Neo/nova$ git log --oneline -3 --decorate ebb3505 (HEAD, review/eric_fried/bp/nova-cyborg-interaction, bp/nova-cyborg-interaction) WIP: Cyborg PCI handling 1b2c453 Add cyborg client to requirements 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge "Deprecate the nova-xvpvncproxy service" But I want to edit 1b2c453, while leaving ebb3505 properly stacked on top of it. Here I use a tool called `git restack` (run `pip install git-restack` to install it). efried at efried-ThinkPad-W520:~/Neo/nova$ git restack This pops me into an editor showing me all the commits between wherever I am and the main branch (now they're in top-first order): pick 1b2c453 Add cyborg client to requirements pick ebb3505 WIP: Cyborg PCI handling I want to fix the first one, so I change to: edit 1b2c453 Add cyborg client to requirements pick ebb3505 WIP: Cyborg PCI handling Save and quit the editor, and I see: Stopped at 1b2c453453242a3fa57f2d4fdc80c837b02b804f... Add cyborg client to requirements You can amend the commit now, with         git commit --amend Once you are satisfied with your changes, run         git rebase --continue I fix lower-constraints: efried at efried-ThinkPad-W520:~/Neo/nova$ sed -i 's/cyborgclient==1.1/cyborgclient==1.0/' lower-constraints.txt ...and *amend* the current commit efried at efried-ThinkPad-W520:~/Neo/nova$ git commit -a --amend --no-edit [detached HEAD 6b3455f] Add cyborg client to requirements  Date: Wed Dec 5 09:43:15 2018 -0600  2 files changed, 2 insertions(+) ...and tell `git restack` to proceed efried at efried-ThinkPad-W520:~/Neo/nova$ git rebase --continue Successfully rebased and updated refs/heads/review/eric_fried/bp/nova-cyborg-interaction. If I had a taller series, and I had changed 'pick' to 'edit' for more than one commit, I would now be sitting on the next one I needed to edit. As it is, that was the only thing I needed to do, so I'm done and sitting on the top of my series again. 124b612 (HEAD, review/eric_fried/bp/nova-cyborg-interaction) WIP: Cyborg PCI handling 6b3455f Add cyborg client to requirements 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge "Deprecate the nova-xvpvncproxy service" Notice that the commit hashes have changed for *both* commits (but not the master). The top one changed because it got rebased onto the new version of the middle one. Now if I push the series back up to gerrit, I get the same confirmation prompt, and both changes get a new patch set. If you look at the top patch in gerrit, you'll see that PS2 shows up as just a rebase. ==================== That ought to be enough for now. There's a couple of gotchas when restacking and the automatic rebase results in merge conflicts, but we can save that for another time. To answer your specific question: > * If you have a patch sequence A followed by B, where patch B depends on patch A, >   how do you communicate that in the submission? 
If they're in the same commit series, like the example above, It Just Happens. Zuul and the CIs know to pull the whole series when they run; gerrit won't merge N+1 until N is merged; etc. If they're *not* in the same commit series, or if they're not in the same repository, you can use Depends-On in the commit message, to the same effect.  (But I'm not sure if/how Depends-On works for feature branches.) HTH, efried . From dangtrinhnt at gmail.com Wed Dec 5 17:24:45 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 6 Dec 2018 02:24:45 +0900 Subject: [Searchlight][infra] tox failed tests at zuul check only In-Reply-To: References: <20181203170339.psadnws63wfywtrs@yuggoth.org> <1544029823.1906308.1599921208.06E91026@webmail.messagingengine.com> Message-ID: I'm not sure if this matters: "failed with error code 1 in /home/zuul/src/ git.openstack.org/openstack/searchlight, falling back to uneditable format, Could not determine repository location of /home/zuul/src/ git.openstack.org/openstack/searchlight...Could not determine repository location,searchlight==6.0.0.0b2.dev11" On Thu, Dec 6, 2018 at 2:17 AM Trinh Nguyen wrote: > Oh Jeremy, Clark, > > Thank for the info. Pretty helpful. > > Bests, > > On Thu, Dec 6, 2018 at 2:12 AM Clark Boylan wrote: > >> On Wed, Dec 5, 2018, at 8:50 AM, Trinh Nguyen wrote: >> > Hi all, >> > >> > I just wonder why the CI uses "searchlight==6.0.0.0b2.dev11" [1] when >> the >> > latest release I made is "6.0.0.0b1"? >> > >> > [1] >> > >> http://logs.openstack.org/71/622871/1/check/openstack-tox-py27/aca5881/job-output.txt.gz >> > >> >> It is testing the change you pushed which is 11 commits ahead of >> 6.0.0.0b1. PBR knows that if the most recent release was 6.0.0.0b1 then the >> next possible release must be at least 6.0.0.0b2. The 11 comments since >> 6.0.0.0b1 form the .dev11 suffix. >> >> Basically this is PBR being smart to attempt to give you monotonically >> increasing version numbers that are also valid should you tag a release. >> More details at >> https://docs.openstack.org/pbr/latest/user/features.html#version. >> >> Clark >> >> > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 5 17:28:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 Dec 2018 17:28:22 +0000 Subject: [Searchlight][infra] tox failed tests at zuul check only In-Reply-To: References: <20181203170339.psadnws63wfywtrs@yuggoth.org> <1544029823.1906308.1599921208.06E91026@webmail.messagingengine.com> Message-ID: <20181205172821.crmaj3y5lzxcjxof@yuggoth.org> On 2018-12-06 02:24:45 +0900 (+0900), Trinh Nguyen wrote: > I'm not sure if this matters: > > "failed with error code 1 in /home/zuul/src/ > git.openstack.org/openstack/searchlight, falling back to uneditable format, > Could not determine repository location of /home/zuul/src/ > git.openstack.org/openstack/searchlight...Could not determine repository > location,searchlight==6.0.0.0b2.dev11" [...] It's benign. Because Zuul pushes the repository onto the test node, it has no Git remote. Clark has proposed https://github.com/pypa/pip/pull/4760 to substitute a file:// URL. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Wed Dec 5 17:35:59 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 05 Dec 2018 09:35:59 -0800 Subject: [dev] How to develop changes in a series In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> Message-ID: <1544031359.1912751.1599945120.4758C77C@webmail.messagingengine.com> On Wed, Dec 5, 2018, at 9:18 AM, Eric Fried wrote: > This started off as a response to some questions Sundar asked, but I > thought it might be interesting/useful for new[er] OpenStack developers > at large, so broadcasting. Snip. Note the content I've removed was really good and worth a read if you work with stacks of commits and Gerrit. > > * If you have a patch sequence A followed by B, where patch B depends > on patch A, > >   how do you communicate that in the submission? > > If they're in the same commit series, like the example above, It Just > Happens. Zuul and the CIs know to pull the whole series when they run; > gerrit won't merge N+1 until N is merged; etc. > > If they're *not* in the same commit series, or if they're not in the > same repository, you can use Depends-On in the commit message, to the > same effect.  (But I'm not sure if/how Depends-On works for feature > branches.) This does work with feature branches as long as the assumption that a feature branch is roughly equivalent to master holds (typically this is the case as new features go into master). On the Zuul side of things Zuul ensures that every branch for each project under test (as specified by depends on and required-projects in the job config) is up to date. This includes applying any speculative state from gating or dependencies. It is then up to tools like devstack (and in the past devstack-gate) to consume those branches as necessary. For example grenade will start with stable/old and upgrade to stable/new|master. Typically in the case of feature branches the assumption in the tooling is use the feature branch in all the projects and if they don't have that feature branch fall back to master. This is where the feature branch implies something like master assumption comes from. You can always refer back to the logs to see what exactly was installed for each project under test to confirm it was testing what you expected it to. Clark From melwittt at gmail.com Wed Dec 5 17:39:31 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 5 Dec 2018 09:39:31 -0800 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> Message-ID: On Mon, 3 Dec 2018 15:01:58 -0600, Matt Riedemann wrote: > On 12/3/2018 2:46 PM, Jimmy McArthur wrote: >> We're looking at ways to improve both formats in Denver, so I'd say >> stand by.  If there is a presentation that you feel is too difficult to >> follow, we can reach out to those presenters and encourage them again to >> upload their slides. > > Here is a good example of what I'm talking about: > > https://youtu.be/J9K-x0yVZ4U?t=425 > > There is full view of the slides, but my eyes can't read most of that > text. Compare that to YVR: > > https://youtu.be/U5V_2CUj-6A?t=576 > > And it's night and day. Thanks for mentioning this. I've just sent the slides to speakersupport@, so hopefully they'll be linked soon to the session page. 
General feedback regarding slide upload: last summit, we received an email from speakersupport@ letting us know the self-service slide upload system was ready and we were able to upload slides ourselves through openstack.org. I thought that was a nice system, FWIW. This time, I was expecting a similar message process for uploading slides and didn't realize we could have added slides already by now. Cheers, -melanie From cboylan at sapwetik.org Wed Dec 5 17:43:35 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 05 Dec 2018 09:43:35 -0800 Subject: [openstack][diskimage-builder] How to install software packages at building murano-agent image In-Reply-To: References: Message-ID: <1544031815.1914972.1599954544.6EFB578F@webmail.messagingengine.com> On Tue, Dec 4, 2018, at 9:50 PM, zhaolihuisky wrote: > hi, guys > > How to install software packages at building murano-agent image. > > I have download telegraf-x.x.x-x86_64.rpm file and how to install this > package into murano-agent image. > > Is there any suggestion? > > Best Regards. For packages in default distro repos you can pass -p $packagename to disk-image-create to specify packages to install. In this case the software is coming from an rpm you are downloading from outside the package repos so the process is a little more involved. In this case I would create an element with an install.d/89-install-telegraf.sh script. In that script you can download the rpm then install it with rpm -i. Then add the new element to your disk-image-create elements list. Clark From jimmy at openstack.org Wed Dec 5 17:53:26 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 05 Dec 2018 11:53:26 -0600 Subject: OpenStack Summit Berlin Videos Now Available In-Reply-To: References: <5C057470.8050405@openstack.org> <636ad729-570e-c673-c33d-9b6a2382ece0@gmail.com> <5C059627.9040304@openstack.org> Message-ID: <5C081096.5080003@openstack.org> Thanks for the feedback, Melanie. We did send that email out this go round, but I think there were two tracks that were excluded that weren't meant to be. We're expanding the functionality so that you can upload from the CFP/Speaker Profile w/o having to be prompted by us, so stand by for more features :) Cheers, Jimmy > melanie witt > December 5, 2018 at 11:39 AM > > > Thanks for mentioning this. I've just sent the slides to > speakersupport@, so hopefully they'll be linked soon to the session page. > > General feedback regarding slide upload: last summit, we received an > email from speakersupport@ letting us know the self-service slide > upload system was ready and we were able to upload slides ourselves > through openstack.org. I thought that was a nice system, FWIW. This > time, I was expecting a similar message process for uploading slides > and didn't realize we could have added slides already by now. > > Cheers, > -melanie > > > > > > > Matt Riedemann > December 3, 2018 at 3:01 PM > > > Here is a good example of what I'm talking about: > > https://youtu.be/J9K-x0yVZ4U?t=425 > > There is full view of the slides, but my eyes can't read most of that > text. Compare that to YVR: > > https://youtu.be/U5V_2CUj-6A?t=576 > > And it's night and day. > > Jimmy McArthur > December 3, 2018 at 2:46 PM > In Berlin, in rooms where we had a full view of the screen, we didn't > do a second screen for slides only. As mentioned, presenters can > upload their slides to help with that. 
> > For places like the Marketplace demo theater where we had a smaller > screen format, we added the view of both presenter and slide: > https://www.openstack.org/videos/berlin-2018/how-to-avoid-vendor-lock-in-in-a-multi-cloud-world-with-zenko > > We're looking at ways to improve both formats in Denver, so I'd say > stand by. If there is a presentation that you feel is too difficult > to follow, we can reach out to those presenters and encourage them > again to upload their slides. > > > > Matt Riedemann > December 3, 2018 at 2:32 PM > > > So uh, I don't really want to be that guy, but I'm sure others have > noticed the deal with the slides being different in the recordings > from years past, in that you can't view them (hopefully people are > uploading their slides). I'm mostly curious if there was a reason for > that? Budget cuts? Technical issues? > > Jimmy McArthur > December 3, 2018 at 12:22 PM > Thank you again for a wonderful Summit in Berlin. I'm pleased to > announce the Summit Videos are now up on the openstack.org website: > https://www.openstack.org/videos/summits/berlin-2018 If there was a > session you missed, now is your chance to catch up! These videos will > also be available in the Summit App as well as on the web under the > Berlin Summit Schedule > (https://www.openstack.org/summit/berlin-2018/summit-schedule/). > > If you have any questions or concerns about the videos, please write > speakersupport at openstack.org. > > Cheers, > Jimmy > > _______________________________________________ > Staff mailing list > Staff at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/staff -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Dec 5 17:54:36 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 5 Dec 2018 09:54:36 -0800 Subject: [openstack-dev] [nova] Stein forum session notes In-Reply-To: References: <9614718a-77d4-f9ca-a7ba-73bc9af28795@gmail.com> <69ad0921-852a-3eaa-9f34-293a30766fdb@gmail.com> Message-ID: <05bf0894-033c-c921-feb3-d59280dedecc@gmail.com> On Tue, 27 Nov 2018 08:32:40 -0800, Dan Smith wrote: >>>> Change of ownership of resources >>>> ================================ >>>> - Ignore the network piece for now, it's the most complicated. Being >>>> able to transfer everything else would solve 90% of City Network's use >>>> cases >>>> - Some ideas around having this be a keystone auth-based access granting >>>> instead of an update of project/user, but if keystone could hand user A >>>> a token for user B, that token would apply to all resources of user B's, >>>> not just the ones desired for transfer >>> >>> Whatever happened with the os-chown project Dan started in Denver? >>> >>> https://github.com/kk7ds/oschown >> >> What we distilled from the forum session is that at the heart of it, >> what is actually wanted is to be able to grant access to a resource >> owned by project A to project B, for example. It's not so much about >> wanting to literally change project_id/user_id from A to B. So, we >> asked the question, "what if project A could grant access to its >> resources to project B via keystone?" This could work if it is OK for >> project B to gain access to _all_ of project A's resources (since we >> currently have no way to scope access to specific resources). For a >> use case where it is OK for project A to grant access to all of >> project B's resources, the idea of accomplishing this is >> keystone-only, could work. 
Doing it auth-based through keystone-only >> would leave project_id/user_id and all dependencies intact, making the >> change only at the auth/project level. It is simpler and cleaner. >> >> However, for a use case where it is not OK for project B to gain >> access to all of project A's resources, because we lack the ability to >> scope access to specific resources, the os-chown approach is the only >> proposal we know of that can address it. >> >> So, depending on the use cases, we might be able to explore a keystone >> approach. From what I gathered in the forum session, it sounded like >> City Network might be OK with a project-wide access grant, but Oath >> might need a resource-specific scoped access grant. If those are both >> the case, we would find use in both a keystone access approach and the >> os-chown approach. > > FWIW, this is not what I gathered from the discussion, and I don't see > anything about that on the etherpad: > > https://etherpad.openstack.org/p/BER-change-ownership-of-resources > > I know the self-service project-wide grant of access was brought up, but > I don't recall any of the operators present saying that would actually > solve their use cases (including City Network). I'm not really sure how > granting another project access to all resources of another is really > anything other than a temporary solution applicable in cases where > supreme trust exists. > > I could be wrong, but I thought they specifically still wanted an API in > each project that would forcibly transfer (i.e. actually change > userid/project on) resources. Did I miss something in the hallway track > afterwards? No, you didn't miss additional discussion after the session. I realize now from your and Tobias replies that I must have misunderstood the access grant part of the discussion. What I had interpreted when I brought up the idea of a keystone-based access grant was that Adrian thought it could solve their ownership transfer use case (and it's possible I misunderstood his response as well). And I don't recall Tobias saying something in objection to the idea, so I wrongly thought it could work for his use case too. I apologize for my misunderstanding and muddying the waters for everyone on this. Correcting myself: really what is wanted is to literally change project_id and user_id for resources, and that allowing the addition of another owner for a project's resources is not sufficient. Best, -melanie From openstack at nemebean.com Wed Dec 5 17:56:19 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 5 Dec 2018 11:56:19 -0600 Subject: [dev] How to develop changes in a series In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> Message-ID: I will take this opportunity to pimp the video series I did on this exact topic: https://www.youtube.com/watch?v=mHyvP7zp4Ko&list=PLR97FKPZ-mD9XJCfwDE5c-td9lZGIPfS5&index=4 On 12/5/18 11:18 AM, Eric Fried wrote: > This started off as a response to some questions Sundar asked, but I > thought it might be interesting/useful for new[er] OpenStack developers > at large, so broadcasting. > > I'm sure there's a document somewhere that covers this, but here's a > quick run-down on how to develop multiple changes in a series. (Or at > least, how I do it.) > > Start on a freshly-pull'd master branch: > > efried at efried-ThinkPad-W520:~/Neo/nova$ git checkout master > Switched to branch 'master' > Your branch is up-to-date with 'origin/master'. 
> efried at efried-ThinkPad-W520:~/Neo/nova$ git pull --all > > > When you're working on a blueprint, you want to name your local branch > after the blueprint. So in this case, bp/nova-cyborg-interaction. > > efried at efried-ThinkPad-W520:~/Neo/nova$ git checkout -b > bp/nova-cyborg-interaction > Switched to a new branch 'bp/nova-cyborg-interaction' > efried at efried-ThinkPad-W520:~/Neo/nova$ git log --oneline -1 --decorate > 5bf6f63 (HEAD, origin/master, origin/HEAD, gerrit/master, master, > bp/nova-cyborg-interaction) Merge "Deprecate the nova-xvpvncproxy service" > > When you `git commit` (without `--amend`), you're creating a new commit > on top of whatever commit you started at. If you started with a clean, > freshly pull'd master branch, that'll be whatever the most recently > merged commit in the master branch was. In this example, that's commit > 5bf6f63. > > So let's say I make an edit for my first patch and commit it: > > efried at efried-ThinkPad-W520:~/Neo/nova$ echo 'python-cyborgclient>=1.0' >>> requirements.txt > efried at efried-ThinkPad-W520:~/Neo/nova$ echo 'python-cyborgclient==1.1' >>> lower-constraints.txt > efried at efried-ThinkPad-W520:~/Neo/nova$ git commit -a -m "Add cyborg > client to requirements" > [bp/nova-cyborg-interaction 1b2c453] Add cyborg client to requirements >  2 files changed, 2 insertions(+) > efried at efried-ThinkPad-W520:~/Neo/nova$ git log --oneline -2 --decorate > 1b2c453 (HEAD, bp/nova-cyborg-interaction) Add cyborg client to requirements > 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge > "Deprecate the nova-xvpvncproxy service" > > I just made commit 1b2c453 on top of 5bf6f63. You'll notice my branch > name (bp/nova-cyborg-interaction) came along with me. > > Now I'm going to make another change, but just part of it, a > work-in-progress commit: > > efried at efried-ThinkPad-W520:~/Neo/nova$ mkdir nova/pci/cyborg > efried at efried-ThinkPad-W520:~/Neo/nova$ touch nova/pci/cyborg/__init__.py > efried at efried-ThinkPad-W520:~/Neo/nova$ git add nova/pci/cyborg/__init__.py > efried at efried-ThinkPad-W520:~/Neo/nova$ git commit -m "WIP: Cyborg PCI > handling" > [bp/nova-cyborg-interaction ebb3505] WIP: Cyborg PCI handling >  1 file changed, 0 insertions(+), 0 deletions(-) >  create mode 100644 nova/pci/cyborg/__init__.py > efried at efried-ThinkPad-W520:~/Neo/nova$ git log --oneline -3 --decorate > ebb3505 (HEAD, bp/nova-cyborg-interaction) WIP: Cyborg PCI handling > 1b2c453 Add cyborg client to requirements > 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge > "Deprecate the nova-xvpvncproxy service" > > Now commit ebb3505 is on top of 1b2c453, which is still on top of > 5bf6f63 (the master). Note that my branch name came with me again. > > At this point, I push my series up to gerrit. Note that it makes me > confirm that I really want to push two commits at once. > > efried at efried-ThinkPad-W520:~/Neo/nova$ git review > You are about to submit multiple commits. This is expected if you are > submitting a commit that is dependent on one or more in-review > commits, or if you are submitting multiple self-contained but > dependent changes. Otherwise you should consider squashing your > changes into one commit before submitting (for indivisible changes) or > submitting from separate branches (for independent changes). 
> > The outstanding commits are: > > ebb3505 (HEAD, bp/nova-cyborg-interaction) WIP: Cyborg PCI handling > 1b2c453 Add cyborg client to requirements > > Do you really want to submit the above commits? > Type 'yes' to confirm, other to cancel: yes > remote: > remote: Processing changes: new: 2, refs: 2 (\) > remote: Processing changes: new: 2, refs: 2 (\) > remote: Processing changes: new: 2, refs: 2 (\) > remote: Processing changes: new: 2, refs: 2, done > remote: > remote: New Changes: > remote:   https://review.openstack.org/623026 Add cyborg client to > requirements > remote:   https://review.openstack.org/623027 WIP: Cyborg PCI > handling > remote: > To ssh://efried at review.openstack.org:29418/openstack/nova.git >  * [new branch]      HEAD -> refs/publish/master/bp/nova-cyborg-interaction > > Now if you go to either of those links - e.g. > https://review.openstack.org/#/c/623026/ - you'll see that the patches > are stacked up in series on the top right. > > But oops, I made a mistake in my first commit. My lower constraint can't > be higher than my minimum in requirements.txt. If I still had my branch > locally, I could skip this next step, but as a matter of rigor to avoid > some common pratfalls, I will pull the whole series afresh from gerrit > by asking git review to grab the *top* change: > > efried at efried-ThinkPad-W520:~/Neo/nova$ git review -d 623027 > Downloading refs/changes/27/623027/1 from gerrit > Switched to branch "review/eric_fried/bp/nova-cyborg-interaction" > > Now I'm sitting on the top change (which you'll notice happens to be > exactly the same as before I pushed it - again, meaning I could > technically have just worked from where I was, but see above): > > efried at efried-ThinkPad-W520:~/Neo/nova$ git log --oneline -3 --decorate > ebb3505 (HEAD, review/eric_fried/bp/nova-cyborg-interaction, > bp/nova-cyborg-interaction) WIP: Cyborg PCI handling > 1b2c453 Add cyborg client to requirements > 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge > "Deprecate the nova-xvpvncproxy service" > > But I want to edit 1b2c453, while leaving ebb3505 properly stacked on > top of it. Here I use a tool called `git restack` (run `pip install > git-restack` to install it). > > efried at efried-ThinkPad-W520:~/Neo/nova$ git restack > > This pops me into an editor showing me all the commits between wherever > I am and the main branch (now they're in top-first order): > > pick 1b2c453 Add cyborg client to requirements > pick ebb3505 WIP: Cyborg PCI handling > > > I want to fix the first one, so I change to: > > edit 1b2c453 Add cyborg client to requirements > pick ebb3505 WIP: Cyborg PCI handling > > > Save and quit the editor, and I see: > > Stopped at 1b2c453453242a3fa57f2d4fdc80c837b02b804f... 
Add cyborg client > to requirements > You can amend the commit now, with > >         git commit --amend > > Once you are satisfied with your changes, run > >         git rebase --continue > > I fix lower-constraints: > > efried at efried-ThinkPad-W520:~/Neo/nova$ sed -i > 's/cyborgclient==1.1/cyborgclient==1.0/' lower-constraints.txt > > ...and *amend* the current commit > > efried at efried-ThinkPad-W520:~/Neo/nova$ git commit -a --amend --no-edit > [detached HEAD 6b3455f] Add cyborg client to requirements >  Date: Wed Dec 5 09:43:15 2018 -0600 >  2 files changed, 2 insertions(+) > > ...and tell `git restack` to proceed > > efried at efried-ThinkPad-W520:~/Neo/nova$ git rebase --continue > Successfully rebased and updated > refs/heads/review/eric_fried/bp/nova-cyborg-interaction. > > If I had a taller series, and I had changed 'pick' to 'edit' for more > than one commit, I would now be sitting on the next one I needed to > edit. As it is, that was the only thing I needed to do, so I'm done and > sitting on the top of my series again. > > 124b612 (HEAD, review/eric_fried/bp/nova-cyborg-interaction) WIP: Cyborg > PCI handling > 6b3455f Add cyborg client to requirements > 5bf6f63 (origin/master, origin/HEAD, gerrit/master, master) Merge > "Deprecate the nova-xvpvncproxy service" > > Notice that the commit hashes have changed for *both* commits (but not > the master). The top one changed because it got rebased onto the new > version of the middle one. > > Now if I push the series back up to gerrit, I get the same confirmation > prompt, and both changes get a new patch set. If you look at the top > patch in gerrit, you'll see that PS2 shows up as just a rebase. > > > ==================== > > That ought to be enough for now. There's a couple of gotchas when > restacking and the automatic rebase results in merge conflicts, but we > can save that for another time. > > To answer your specific question: > >> * If you have a patch sequence A followed by B, where patch B depends > on patch A, >>   how do you communicate that in the submission? > > If they're in the same commit series, like the example above, It Just > Happens. Zuul and the CIs know to pull the whole series when they run; > gerrit won't merge N+1 until N is merged; etc. > > If they're *not* in the same commit series, or if they're not in the > same repository, you can use Depends-On in the commit message, to the > same effect.  (But I'm not sure if/how Depends-On works for feature > branches.) > > HTH, > efried > . > > From doug at doughellmann.com Wed Dec 5 18:03:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 05 Dec 2018 13:03:04 -0500 Subject: [dev] How to develop changes in a series In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> Message-ID: Eric Fried writes: > This started off as a response to some questions Sundar asked, but I > thought it might be interesting/useful for new[er] OpenStack developers > at large, so broadcasting. > > I'm sure there's a document somewhere that covers this, but here's a > quick run-down on how to develop multiple changes in a series. (Or at > least, how I do it.) This information would be a great addition to the contributor's guide at https://docs.openstack.org/contributors/code-and-documentation/index.html (that's the openstack/contributor-guide repo). 
-- Doug From melwittt at gmail.com Wed Dec 5 18:38:06 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 5 Dec 2018 10:38:06 -0800 Subject: [dev][nova] time for another spec review day? Message-ID: Hi All, Our spec freeze is milestone 2 January 10 and I was thinking, because of holiday time coming up, it might be a good idea to have another spec review day ahead of the freeze early next year. I was thinking maybe Tuesday next week December 11, to allow the most amount of time before holiday PTO starts. Please let me know what you think. Cheers, -melanie From juliaashleykreger at gmail.com Wed Dec 5 19:16:50 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 5 Dec 2018 11:16:50 -0800 Subject: [TC] Forum TC Vision Retrospective summary In-Reply-To: References: Message-ID: I wasn't thinking a resolution, more a suggestion in the TC election details, but I've not had a chance to really get to this. We could just try to make a commitment to explicitly asking, but I'm afraid that reminder may get lost in everything else that any given community is already having to context switch between. On Wed, Dec 5, 2018 at 6:40 AM Doug Hellmann wrote: > Julia Kreger writes: > > > During the Forum in Berlin, the Technical Committee along with interested > > community members took some time to look back at the vision that written > in > > early 2017. > > > > We had three simple questions: > > * What went well? > > * What needs improvement? > > * What should the next steps be? > > > > To summarize, the group thought the vision helped guide some thoughts and > > decisions. Helped provide validation on what was thought to be important. > > We have seen adjacent communities fostered. We didn't solely focus on the > > vision which was viewed as a positive as things do change over time. It > > helped us contrast, and in the process of writing the vision we reached > the > > use of the same words. > > > > Most importantly, we learned that we took on too much work. > > > > As with most retrospectives, the list of things that needed improvement > was > > a bit longer. There was some perception that it fell off the map, and > that > > not every item received work. Possibly that we even took the vision too > > literally and too detailed as opposed to use it as more a guiding > document > > to help us evolve as time goes on. There was consensus that there was > still > > room to improve and that we could have done a better at conveying context > > to express how, what, and why. > > > > For next steps, we feel that it is time to revise the vision, albeit in a > > shorter form. We also felt that there could be a vision for the TC > itself, > > which led to the discussion of providing clarity to the role of the > > Technical Committee. > > > > As for action items and next steps that we reached consensus on: > > > > * To refine the technical vision document. > > * That it was time to compose a new vision for the community. > > * Consensus was reached that there should be a vision of the TC itself, > and > > as part of this have a living document that describes the "Role of the > TC". > > ** ttx, cdent, and TheJulia have volunteered to continue those > discussions. > > ** mnaser would start a discussion with the community as to what the TC > > should and shouldn't do. For those reading this, please remember that the > > TC's role is defined in the foundation bylaws, so this would be more of a > > collection of perceptions. 
> > * TheJulia to propose a governance update to suggest that people > proposing > > TC candidacy go ahead and preemptively seek to answer the question of > what > > the candidate perceives as the role of the TC. > > Do we need a resolution for this? Or just for someone to remember to ask > the question when the time comes? > > > > > The etherpad that followed the discussion can be found at: > > https://etherpad.openstack.org/p/BER-tc-vision-retrospective > > > > -Julia > > -- > Doug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vdrok at mirantis.com Wed Dec 5 19:18:59 2018 From: vdrok at mirantis.com (Vladyslav Drok) Date: Wed, 5 Dec 2018 11:18:59 -0800 Subject: Proposing KaiFeng Wang for ironic-core In-Reply-To: References: Message-ID: +1 -vlad On Wed, Dec 5, 2018 at 6:59 AM Ruby Loo wrote: > > On Sun, Dec 2, 2018 at 9:45 AM Julia Kreger > wrote: > >> I'd like to propose adding KaiFeng to the ironic-core reviewer group. >> Previously, we had granted KaiFeng rights on ironic-inspector-core and I >> personally think they have done a great job there. >> >> Kaifeng has also been reviewing other repositories in ironic's scope[1]. >> Their reviews and feedback have been insightful and meaningful. They have >> also been very active[2] at reviewing which is an asset for any project. >> >> I believe they will be an awesome addition to the team. >> >> -Julia >> >> [1]: http://stackalytics.com/?module=ironic-group&user_id=kaifeng >> [2]: http://stackalytics.com/report/contribution/ironic-group/90 >> > > Totally agree ++, thanks for proposing Kaifeng and thank you Kaifeng for > all the great work so far! > > --ruby > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Dec 5 19:27:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 05 Dec 2018 14:27:08 -0500 Subject: [dev][goal][python3][qa][devstack][ptl] changing devstack's python 3 behavior Message-ID: Today devstack requires each project to explicitly indicate that it can be installed under python 3, even when devstack itself is running with python 3 enabled. As part of the python3-first goal, I have proposed a change to devstack to modify that behavior [1]. With the change in place, when devstack runs with python3 enabled all services are installed under python 3, unless explicitly listed as not supporting python 3. If your project has a devstack plugin or runs integration or functional test jobs that use devstack, please test your project with the patch (you can submit a trivial change to your project and use Depends-On to pull in the devstack change). [1] https://review.openstack.org/#/c/622415/ -- Doug From edmondsw at us.ibm.com Wed Dec 5 19:48:37 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Wed, 5 Dec 2018 14:48:37 -0500 Subject: [dev] How to develop changes in a series In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> Message-ID: Eric Fried wrote on 12/05/2018 12:18:37 PM: > But I want to edit 1b2c453, while leaving ebb3505 properly stacked on > top of it. Here I use a tool called `git restack` (run `pip install > git-restack` to install it). It's worth noting that you can just use `git rebase` [1], you don't have to use git-restack. This is why later you're using `git rebase --continue`, because git-restack is actually using rebase under the covers. 
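To make that concrete: for a two-patch stack like the one in Eric's example, the plain-rebase version of the same edit would be roughly (sketching from memory, so double-check the flags before relying on them):

    git rebase -i HEAD~2     # change 'pick' to 'edit' on the commit you want to modify
    # ...make your changes...
    git commit -a --amend --no-edit
    git rebase --continue

HEAD~2 just means "the last two commits"; adjust the count so it reaches back to the commit you want to edit.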
[1] https://stackoverflow.com/questions/1186535/how-to-modify-a-specified-commit -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Dec 5 19:52:28 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 Dec 2018 19:52:28 +0000 Subject: [dev] How to develop changes in a series In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> Message-ID: <20181205195227.4j3rkpinlgts3ujv@yuggoth.org> On 2018-12-05 14:48:37 -0500 (-0500), William M Edmonds wrote: > Eric Fried wrote on 12/05/2018 12:18:37 PM: > > > > > But I want to edit 1b2c453, while leaving ebb3505 properly stacked on > > top of it. Here I use a tool called `git restack` (run `pip install > > git-restack` to install it). > > It's worth noting that you can just use `git rebase` [1], you don't have to > use git-restack. This is why later you're using `git rebase --continue`, > because git-restack is actually using rebase under the covers. > > [1] https://stackoverflow.com/questions/1186535/how-to-modify-a-specified-commit You can, however what git-restack does for you is figure out which commit to rebase on top of so that you don't inadvertently rebase your stack of changes onto a newer branch state and then make things harder on reviewers. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From edmondsw at us.ibm.com Wed Dec 5 20:40:25 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Wed, 5 Dec 2018 15:40:25 -0500 Subject: [dev] How to develop changes in a series In-Reply-To: <20181205195227.4j3rkpinlgts3ujv@yuggoth.org> References: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> <20181205195227.4j3rkpinlgts3ujv@yuggoth.org> Message-ID: Jeremy Stanley wrote on 12/05/2018 02:52:28 PM: > On 2018-12-05 14:48:37 -0500 (-0500), William M Edmonds wrote: > > Eric Fried wrote on 12/05/2018 12:18:37 PM: > > > > > > > > > But I want to edit 1b2c453, while leaving ebb3505 properly stacked on > > > top of it. Here I use a tool called `git restack` (run `pip install > > > git-restack` to install it). > > > > It's worth noting that you can just use `git rebase` [1], you don't have to > > use git-restack. This is why later you're using `git rebase --continue`, > > because git-restack is actually using rebase under the covers. > > > > [1] https://stackoverflow.com/questions/1186535/how-to-modify-a- > specified-commit > > You can, however what git-restack does for you is figure out which > commit to rebase on top of so that you don't inadvertently rebase > your stack of changes onto a newer branch state and then make things > harder on reviewers. > -- > Jeremy Stanley Ah, that's good to know. Also, found this existing documentation [2] if someone wants to propose an update or link from another location. Note that it doesn't currently mention git-restack, just rebase. [2] https://docs.openstack.org/contributors/code-and-documentation/patch-best-practices.html#how-to-handle-chains -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From allison at openstack.org Wed Dec 5 21:06:24 2018 From: allison at openstack.org (Allison Price) Date: Wed, 5 Dec 2018 15:06:24 -0600 Subject: What's Happening in the Open Infrastructure Community Message-ID: <5B66F0D8-B789-42F7-8864-E1A68685A306@openstack.org> Hi everyone, This week, we distributed the first Open Infrastructure Community Newsletter. Our goal with the bi-weekly newsletter is to provide a digest of the latest developments and activities across open infrastructure projects, events and users. This week, we highlighted the Berlin Summit as well as brief updates from OpenStack as well as the OSF's four pilot projects: Airship, Kata Containers, StarlingX and Zuul. You can checkout the full newsletter on Superuser [1], and if you are interested in receiving the upcoming newsletters, you can subscribe here [2]. If you would like to contribute or have feedback, please reach out to community at openstack.org. Thanks! Allison Allison Price OpenStack Foundation allison at openstack.org [1] http://superuser.openstack.org/articles/inside-open-infrastructure-newsletter1 [2] https://www.openstack.org/community/email-signup -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Dec 5 21:27:16 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 5 Dec 2018 15:27:16 -0600 Subject: [nova][dev] Bug about disabled compute during scheduling Message-ID: Belmiro/Surya, I'm trying to follow up on something Belmiro mentioned at the summit before I forget about it. CERN sets this value low: https://docs.openstack.org/nova/latest/configuration/config.html#scheduler.max_placement_results And as a result, when disabling nova-computes during maintenance, you can fail during scheduling because placement only returns resource providers for disabled computes. I believe Dan and I kicked around some ideas on how we could deal with this, like either via a periodic in the compute service or when the compute service is disabled in the API, we would set the 'reserved' inventory value equal to the total to take those computes out of scheduling. I think Belmiro said this is what CERN is doing today as a workaround? For the latter solution, I don't know if we'd proxy that change directly from nova-api to placement, or make an RPC cast/call to nova-compute to do it, but that's an implementation detail. I mostly just want to make sure we get a bug reported for this so we don't lose track of it. Can one of you open a bug with your scenario and current workaround? -- Thanks, Matt From conrad.kimball at boeing.com Wed Dec 5 21:57:06 2018 From: conrad.kimball at boeing.com (Kimball (US), Conrad) Date: Wed, 5 Dec 2018 21:57:06 +0000 Subject: Anyone using ScaleIO block storage? Message-ID: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> Is anyone using ScaleIO (from Dell EMC) as a Cinder storage provider? What has been your experience with it, and at what scale? Our enterprise storage team is moving to ScaleIO and wants our OpenStack deployments to use it, so I'm looking for real life experiences to calibrate vendor stories of wonderfulness. One concern I do have is that it uses a proprietary protocol that in turn requires a proprietary "data client". For VM hosting this data client can be installed in the compute node host OS, but seems like we wouldn't be able to boot a bare-metal instance from a ScaleIO-backed Cinder volume. 
Conrad Kimball Associate Technical Fellow Enterprise Architecture Chief Architect, Enterprise Cloud Services conrad.kimball at boeing.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Dec 5 23:24:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 5 Dec 2018 17:24:41 -0600 Subject: [goals][upgrade-checkers] Week R-18 Update Message-ID: First, I'm sorry about the big gap in updates, the last one being before the summit. Things have been busy elsewhere. Second, I'm happy to report the majority of projects have merged the initial framework patch for the "$project-status upgrade check" command. There are a few outstanding patches for projects which need core review (I went through this list today and +1 on them): https://review.openstack.org/#/q/topic:upgrade-checkers+status:open There are a few projects adding real upgrade checks as well, like designate and cloudkitty, which is good to see. For projects that have already merged the framework check, please talk in your meetings about what "real" upgrade check you could add in stein to replace the placeholder/sample check that came with the framework patch. Note that checks added in stein don't necessarily need to be for stein upgrade issues, they could be for something that was an upgrade impact in rocky, queens, etc, because with fast-forward upgrades people will be rolling through and might have missed something in the release notes. If you have questions, feel free to reach out to me in #openstack-dev (mriedem) or reply to this thread. -- Thanks, Matt From juliaashleykreger at gmail.com Wed Dec 5 23:29:07 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 5 Dec 2018 15:29:07 -0800 Subject: Anyone using ScaleIO block storage? In-Reply-To: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> References: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> Message-ID: On Wed, Dec 5, 2018 at 2:02 PM Kimball (US), Conrad < conrad.kimball at boeing.com> wrote: [trim] > One concern I do have is that it uses a proprietary protocol that in turn > requires a proprietary “data client”. For VM hosting this data client can > be installed in the compute node host OS, but seems like we wouldn’t be > able to boot a bare-metal instance from a ScaleIO-backed Cinder volume. > Not supporting iSCSI would indeed be an issue for bare-metal instances. The same basic issue exists for Ceph backed storage, although I've been encouraging the cinder team to provide a capability of returning an iscsi volume mapping for Ceph. If there is a similar possibility, please let me know as it might change the overall discussion regarding providing storage for bare metal instances. -Julia > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu Dec 6 00:13:02 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 5 Dec 2018 16:13:02 -0800 Subject: Octavia Production Deployment Confused In-Reply-To: References: Message-ID: Hi Zufar, Tenant traffic into the VIP and out to member servers is isolated from the lb-mgmt-net. The VIP network is hot-plugged into the amphora network namespace for tenant traffic when a user creates a load balancer and specifies the VIP subnet or network. As for the certificate creation, please see this document awaiting patch review: https://review.openstack.org/613454 I wrote up a detailed certificate configuration guide that should help you resolve your certificate configuration issue. 
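In the meantime, one quick sanity check is to confirm by hand that the CA key referenced in your octavia.conf actually loads with the passphrase you configured, using the path and the 'foobar' passphrase from your earlier message (this assumes the key is RSA, which is what the certificate script normally generates):

    openssl rsa -noout -check -in /etc/octavia/certs/private/cakey.pem -passin pass:foobar

If openssl can't load the key with that passphrase, the Octavia worker won't be able to either, which matches the 'Failed to load CA Private Key' error in your worker log.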
Michael On Tue, Dec 4, 2018 at 3:59 PM Zufar Dhiyaulhaq wrote: > > Hi all, > > Thank you, > So the amphora will use a provider network. but how we can access this load balancer externally? via IP assign into amphora (provider network IP)? > > Another question, I am facing a problem with a keypair. I am generating a keypair with `create_certificates.sh` > source /tmp/octavia/bin/create_certificates.sh /etc/octavia/certs /tmp/octavia/etc/certificates/openssl.cnf > > but when creating the load balancer service, I got this error from /var/log/octavia/worker.log > ERROR oslo_messaging.rpc.server CertificateGenerationException: Could not sign the certificate request: Failed to load CA Private Key /etc/octavia/certs/private/cakey.pem. > > I am using this configuration under octavia.conf > [certificates] > > ca_certificate = /etc/octavia/certs/ca_01.pem > > ca_private_key = /etc/octavia/certs/private/cakey.pem > > ca_private_key_passphrase = foobar > > Anyone know this issue? > I am following Mr. Lingxian Kong blog in https://lingxiankong.github.io/2016-06-07-octavia-deployment-prerequisites.html > > Best Regards, > Zufar Dhiyaulhaq > > > On Wed, Dec 5, 2018 at 4:35 AM Lingxian Kong wrote: >> >> On Wed, Dec 5, 2018 at 6:27 AM Gaël THEROND wrote: >>> >>> You can do it with any routed network that you’ll load as a provider network too. >>> >>> Way more simpler, no need for ovs manipulation, just get your network team to give you a vlan both available from computer node and controller plan. It can be a network subnet and vlan completely unknown from you controller as long as you get an intermediary equipment that route your traffic or that you add the proper route on your controllers. >> >> >> Yeah, that's also how we did for our Octavia service in production thanks to our ops team. >> >> Cheers, >> Lingxian Kong From jaosorior at redhat.com Thu Dec 6 00:17:03 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Wed, 5 Dec 2018 19:17:03 -0500 Subject: [tripleo] PTL on vacations Message-ID: <2e53ebf1-ca96-56c8-5491-c659c88d9ce8@redhat.com> Hello folks! I'll be on vacations from December 10th to January 2nd. For this reason, I won't be hosting the weekly meetings until I'm back. Best regards From jaosorior at redhat.com Thu Dec 6 00:17:54 2018 From: jaosorior at redhat.com (Juan Antonio Osorio Robles) Date: Wed, 5 Dec 2018 19:17:54 -0500 Subject: [tripleo] No weekly meeting on December 25 Message-ID: <248391bd-6c2c-2f6d-8fdc-6dee83a8490b@redhat.com> Enjoy the holidays! From Burak.Hoban at iag.com.au Thu Dec 6 00:46:31 2018 From: Burak.Hoban at iag.com.au (Burak Hoban) Date: Thu, 6 Dec 2018 00:46:31 +0000 Subject: Anyone using ScaleIO block storage? Message-ID: We've been using ScaleIO alongside OpenStack for about 2 years now. From a stability point of view, no issues... we're still running on Mitaka so some of the integration isn't great but with the upgrade cycle coming up for us all of that should be solved. We only utilise (VM) instances on OpenStack though, not bare metal. _____________________________________________________________________ The information transmitted in this message and its attachments (if any) is intended only for the person or entity to which it is addressed. The message may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information, by persons or entities other than the intended recipient is prohibited. 
If you have received this in error, please contact the sender and delete this e-mail and associated material from any computer. The intended recipient of this e-mail may only use, reproduce, disclose or distribute the information contained in this e-mail and any attached files, with the permission of the sender. This message has been scanned for viruses. _____________________________________________________________________ From shiriul.lol at gmail.com Thu Dec 6 01:30:28 2018 From: shiriul.lol at gmail.com (SIRI KIM) Date: Thu, 6 Dec 2018 10:30:28 +0900 Subject: [loci] How to add some agent to loci images Message-ID: Hello, Jean. I tried to add lbaas-agent and fwaas-agent to official loci neutron image. To pass openstack-helm gate test, I need lbaas-agent and fwaas-agent. I found your openstack source repository is repository: 172.17.0.1:5000/loci/requirements Please let me know I can I add lbaas-agent and fwaas-agent to official loci neutron image. Thanks, Siri -------------- next part -------------- An HTML attachment was scrubbed... URL: From saphi070 at gmail.com Thu Dec 6 01:47:47 2018 From: saphi070 at gmail.com (Sa Pham) Date: Thu, 6 Dec 2018 08:47:47 +0700 Subject: Anyone using ScaleIO block storage? In-Reply-To: References: Message-ID: Do you have a problems such as block request from compute to storage? So It makes VM is switched to read-only mode. On Thu, Dec 6, 2018 at 7:51 AM Burak Hoban wrote: > We've been using ScaleIO alongside OpenStack for about 2 years now. > > From a stability point of view, no issues... we're still running on Mitaka > so some of the integration isn't great but with the upgrade cycle coming up > for us all of that should be solved. > > We only utilise (VM) instances on OpenStack though, not bare metal. > > _____________________________________________________________________ > > The information transmitted in this message and its attachments (if any) > is intended > only for the person or entity to which it is addressed. > The message may contain confidential and/or privileged material. Any > review, > retransmission, dissemination or other use of, or taking of any action in > reliance > upon this information, by persons or entities other than the intended > recipient is > prohibited. > > If you have received this in error, please contact the sender and delete > this e-mail > and associated material from any computer. > > The intended recipient of this e-mail may only use, reproduce, disclose or > distribute > the information contained in this e-mail and any attached files, with the > permission > of the sender. > > This message has been scanned for viruses. > _____________________________________________________________________ > -- Sa Pham Dang Cloud RnD Team - VCCloud Phone/Telegram: 0986.849.582 Skype: great_bn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Thu Dec 6 02:59:48 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 06 Dec 2018 11:59:48 +0900 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <4000535.P2t9rJb4HH@whitebase.usersys.redhat.com> References: <1677e3f2b18.e6941ff4108960.3659294646236910868@ghanshyammann.com> <4795573.S6joe1DXvj@whitebase.usersys.redhat.com> <4000535.P2t9rJb4HH@whitebase.usersys.redhat.com> Message-ID: <167817503c2.af0c3727123070.8090564432642728571@ghanshyammann.com> ---- On Wed, 05 Dec 2018 22:29:10 +0900 Luigi Toscano wrote ---- > On Wednesday, 5 December 2018 14:19:52 CET Luigi Toscano wrote: > > On Wednesday, 5 December 2018 13:02:08 CET Ghanshyam Mann wrote: > > > Reminder to test your project specific jobs if those are dependent on > > > Devstack or Tempest base jobs and keep adding the results on etherpad- > > > https://etherpad.openstack.org/p/devstack-bionic > > > > > > We will merge the Devstack and Tempest base job on Bionic on 10th Dec > > > 2018. > > > > I can't test it right now using the gates (so I can't really report this on > > the etherpad), but a quick local test shows that devstack-plugin-ceph shows > > does not seem to support bionic. I may try to prepare a test job later if no > > one beats me at it. > > > > Erp, sorry, I didn't notice https://review.openstack.org/#/c/611594/ - I > confirm that it makes devstack-plugin-ceph compatible with bionic, so please > merge it :) Yeah, frickler had the fix up and now it is merged. Thanks. -gmann > > Ciao > -- > Luigi > > > From sean.mcginnis at gmx.com Thu Dec 6 06:16:50 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 6 Dec 2018 00:16:50 -0600 Subject: [tc] Adapting office hours schedule to demand In-Reply-To: <1677e2adcd5.b59b7cfb103641.8644361703617614132@ghanshyammann.com> References: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> <1677e2adcd5.b59b7cfb103641.8644361703617614132@ghanshyammann.com> Message-ID: <20181206061649.GA28275@sm-workstation> > > > > Should we: > > > > - Reduce office hours to one or two per week, possibly rotating times > > > > - Dump the whole idea and just encourage people to ask questions at any > > time on #openstack-tc, and get asynchronous answers > > > > - Keep it as-is, it still has the side benefit of triggering spikes of > > TC member activity > > I vote for keeping it to two in a week which can cover both Asia and USA/EU TZ which mean either dropping either Tuesday or Wednesday. If traffic is same in office hours then also it is ok as it does not take any extra effort from us. we keep doing our day activity and keep eyes on channel during that time. Obviously it does not mean we will not active in other time but it is good to save a particular slot where people can find more than one TC. > > -gmann > This seems reasonable. The 01:00 UTC office hour on Wednesday has never had much activity. I think there are usually a few folks around in case someone does show up with some questions, but I have yet to see that actually happen. I think we could drop Wednesday with little noticeable impact, while still staying accessible via IRC or the mailing list. 
Sean From sean.mcginnis at gmx.com Thu Dec 6 06:20:36 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 6 Dec 2018 00:20:36 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: Message-ID: <20181206062035.GB28275@sm-workstation> > > > > In other words, does #1 mean each python-clientlibrary's OSC plugin is > > ready to rock and roll, or we talking about everyone rewriting all client > > interactions in to openstacksdk, and porting existing OSC plugins use that > > different python sdk. > > We talked about those things as separate phases. IIRC, the first phase > was to include ensuring that python-openstackclient has full feature > coverage for non-admin operations for all microversions, using the > existing python-${service}client library or SDK as is appropriate. The > next phase was to ensure that the SDK has full feature coverage for all > microversions. After that point we could update OSC to use the SDK and > start deprecating the service-specific client libraries. > That was my recollection as well. > > In other words, some projects could find it very easy or that they are > > already done, where as others could find themselves with a huge lift that > > is also dependent upon review bandwidth that is outside of their control or > > influence which puts such a goal at risk if we try and push too hard. > > > > -Julia > > I do think there is still a lot of foundation work that needs to be done before we can make it a cycle goal to move more completely to osc. Before we get there, I think we need to see more folks involved on the project to be ready for the increased attention. Right now, I would classify this goal as a "huge lift". Sean From sean.mcginnis at gmx.com Thu Dec 6 06:41:31 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 6 Dec 2018 00:41:31 -0600 Subject: [release] Release countdown for week R-17, Dec 10-14 Message-ID: <20181206064131.GA28816@sm-workstation> Development Focus ----------------- We are coming up on end of year holiday's, so the next few weeks will probably go by fast. Teams should be focused on development plans, but please be aware if there are any project specific milestone 2 deadlines you need to be aware of. General Information ------------------- Just a reminder about the changes this cycle with library deliverables that follow the cycle-with-milestones release model. As announced, we will be automatically proposing releases for these libraries at milestone 2 if there have been any functional changes since the last release to help ensure those changes are picked up by consumers with plenty of time to identify and correct issues. More detail can be found in the original mailing list post describing the changes: http://lists.openstack.org/pipermail/openstack-dev/2018-October/135689.html There are also a set of deliverables following the cycle-with-intermediary model that are not considered libraries. These are services and other somewhat different deliverables. We want to make sure the cycle-with-intermediary release model is not being used as a way to just perform one final release. The intent of this release model is to have multiple releases throughout a development cycle. If you own one of these deliverables, please think about performing a release if one has not already been done for Stein, or decide if the updated cycle-with-rc release model is more appropriate for your needs. 
More information can be found here: http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000465.html Upcoming Deadlines & Dates -------------------------- Stein-2 Milestone: January 10 -- Sean McGinnis (smcginnis) From Burak.Hoban at iag.com.au Thu Dec 6 07:08:33 2018 From: Burak.Hoban at iag.com.au (Burak Hoban) Date: Thu, 6 Dec 2018 07:08:33 +0000 Subject: Anyone using ScaleIO block storage? In-Reply-To: References: Message-ID: The only time you should realistically see disks going into read-only mode from ScaleIO is if you’re having _major_ underlying issues, e.g. something like major networking issues, which may cause mass drops of MDMs or bad oscillating network failures. ScaleIO (VxFlex OS) can handle MDMs disconnecting fairly well, but the only time you’d really see issues if there’s something seriously wrong with the environment; this would be similar in a Ceph or any other software defined storage platform. We actually run ScaleIO environments (MDM, SDC and SDS) on top our Compute Nodes to offer instances block storage, but there’s also no issue in connecting external SDC’s to a ScaleIO cluster (e.g. if SDS is on storage only nodes) as it’s fully supported. _____________________________________________________________________ The information transmitted in this message and its attachments (if any) is intended only for the person or entity to which it is addressed. The message may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information, by persons or entities other than the intended recipient is prohibited. If you have received this in error, please contact the sender and delete this e-mail and associated material from any computer. The intended recipient of this e-mail may only use, reproduce, disclose or distribute the information contained in this e-mail and any attached files, with the permission of the sender. This message has been scanned for viruses. _____________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From saphi070 at gmail.com Thu Dec 6 07:17:40 2018 From: saphi070 at gmail.com (Sa Pham) Date: Thu, 6 Dec 2018 14:17:40 +0700 Subject: Anyone using ScaleIO block storage? In-Reply-To: References: Message-ID: So you are running SDS Service on Compute node. On Thu, Dec 6, 2018 at 2:08 PM Burak Hoban wrote: > The only time you should realistically see disks going into read-only mode > from ScaleIO is if you’re having _*major*_ underlying issues, e.g. > something like major networking issues, which may cause mass drops of MDMs > or bad oscillating network failures. ScaleIO (VxFlex OS) can handle MDMs > disconnecting fairly well, but the only time you’d really see issues if > there’s something seriously wrong with the environment; this would be > similar in a Ceph or any other software defined storage platform. > > > > We actually run ScaleIO environments (MDM, SDC and SDS) on top our Compute > Nodes to offer instances block storage, but there’s also no issue in > connecting external SDC’s to a ScaleIO cluster (e.g. if SDS is on storage > only nodes) as it’s fully supported. > > > > > > _____________________________________________________________________ > > The information transmitted in this message and its attachments (if any) > is intended > only for the person or entity to which it is addressed. 
> The message may contain confidential and/or privileged material. Any > review, > retransmission, dissemination or other use of, or taking of any action in > reliance > upon this information, by persons or entities other than the intended > recipient is > prohibited. > > If you have received this in error, please contact the sender and delete > this e-mail > and associated material from any computer. > > The intended recipient of this e-mail may only use, reproduce, disclose or > distribute > the information contained in this e-mail and any attached files, with the > permission > of the sender. > > This message has been scanned for viruses. > _____________________________________________________________________ > -- Sa Pham Dang Cloud RnD Team - VCCloud Phone/Telegram: 0986.849.582 Skype: great_bn -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Thu Dec 6 08:40:57 2018 From: tobias.urdin at binero.se (Tobias Urdin) Date: Thu, 6 Dec 2018 09:40:57 +0100 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches In-Reply-To: <20181205050207.GA19462@thor.bakeyournoodle.com> References: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> <20181205050207.GA19462@thor.bakeyournoodle.com> Message-ID: <3e084433-c965-3881-53ac-761606b5604b@binero.se> Hello Tony, Yes that list is correct and complete, please go ahead. Thanks! On 12/05/2018 06:06 AM, Tony Breeds wrote: > On Thu, Nov 29, 2018 at 11:38:45AM +0100, Tobias Urdin wrote: >> Hello, >> This got lost way down in my mailbox. >> >> I think we have a consensus about getting rid of the newton branches. >> Does anybody in Stable release team have time to deprecate the stable/newton >> branches? > Just to be clear You're asking for the following repos to be marked > EOL (current origin/stable/newton tagged as newton-eol and deleted, any > open reviews abandoned) > : > # EOL repos belonging to Puppet OpenStack > eol_branch.sh -- stable/newton newton-eol \ > openstack/puppet-aodh openstack/puppet-barbican \ > openstack/puppet-ceilometer openstack/puppet-cinder \ > openstack/puppet-designate openstack/puppet-glance \ > openstack/puppet-gnocchi openstack/puppet-heat \ > openstack/puppet-horizon openstack/puppet-ironic \ > openstack/puppet-keystone openstack/puppet-magnum \ > openstack/puppet-manila openstack/puppet-mistral \ > openstack/puppet-murano openstack/puppet-neutron \ > openstack/puppet-nova \ > openstack/puppet-openstack-integration \ > openstack/puppet-openstack_extras \ > openstack/puppet-openstack_spec_helper \ > openstack/puppet-openstacklib openstack/puppet-oslo \ > openstack/puppet-ovn openstack/puppet-sahara \ > openstack/puppet-swift openstack/puppet-tempest \ > openstack/puppet-trove openstack/puppet-vswitch \ > openstack/puppet-zaqar > > Yours Tony. From ifatafekn at gmail.com Thu Dec 6 09:32:13 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Thu, 6 Dec 2018 11:32:13 +0200 Subject: [all][ptl][heat][senlin][magnum][vitrage] New SIG for Autoscaling? plus, Session Summary: Autoscaling Integration, improvement, and feedback In-Reply-To: References: Message-ID: +1 for that. I see a lot of similarities between this SIG and the self-healing SIG, although their scopes are slightly different. In both cases, there is a need to decide when an action should be taken (based on Ceilometer, Monasca, Vitrage etc.) what action to take (healing/scaling) and how to execute it (using Heat, Senlin, Mistral, …). 
The main differences are the specific triggers and the actions to perform. I think that as a first step we should document the use cases. Ifat On Thu, Nov 29, 2018 at 9:34 PM Joseph Davis wrote: > I agree with Duc and Witek that this communication could be really good. > > One of the first items for a new SIG would be to define the relationship > with the Self-Healing SIG. The two SIGs have a lot in common but some > important differences. They can both use some of the same tools and data > (Heat, Monasca, Senlin, Vitrage, etc) to achieve their purpose, but > Self-Healing is about recovering a cloud when something goes wrong, while > Autoscaling is about adjusting resources to avoid something going wrong. > Having a clear statement may help a new user or contributor understand > where there interests lie and how they can be part of the group. > > Writing some clear use cases will be really valuable for all the component > teams to reference. It may also be of value to identify a few reference > architectures or configurations to illustrate how the use cases could be > addressed. I'm thinking of stories like "A cloud with Monasca and Senlin > services has 20 active VMs. When Monasca recognizes the 20 VMs have hit 90% > utilization each it raises an alarm and Senlin triggers the creation of 5 > more VMs to meet expected loads." Plus lots of details I just skipped > over. :) > > > joseph > > > On Wed, Nov 28, 2018 at 4:00 AM Rico Lin > wrote: > > > > I gonna use this ML to give a summary of the forum [1] and asking for > > feedback for the idea of new SIG. > > > > So if you have any thoughts for the new SIG (good or bad) please share it > > here. > > > > [1] > https://etherpad.openstack.org/p/autoscaling-integration-and-feedback > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sjamgade at suse.com Thu Dec 6 09:50:19 2018 From: sjamgade at suse.com (Sumit Jamgade) Date: Thu, 6 Dec 2018 10:50:19 +0100 Subject: [glance] is glance-cache deprecated In-Reply-To: <56432dc9-d61a-f1e0-16bd-a51bf29b2fb6@gmail.com> References: <56432dc9-d61a-f1e0-16bd-a51bf29b2fb6@gmail.com> Message-ID: On 12/05/2018 05:36 PM, Brian Rosmaita wrote: > On 12/5/18 10:30 AM, Sumit Jamgade wrote: >> Hello, >> >> $subject > > No. It's just not easily manageable ATM. so is there an alternative to the glance-cache-manage cmd for Rocky to queue/list/delete/... images > >> or are there any plans to migrate it to v2 ? > > Yes, see this spec for Stein: > https://git.openstack.org/cgit/openstack/glance-specs/commit/?id=862f2212c7a382a832456829be8bd6f2f9ee2561 thanks for this, I did not know about lite-specs. From Burak.Hoban at iag.com.au Thu Dec 6 11:26:17 2018 From: Burak.Hoban at iag.com.au (Burak Hoban) Date: Thu, 6 Dec 2018 11:26:17 +0000 Subject: Anyone using ScaleIO block storage? In-Reply-To: References: Message-ID: In our environment, the Compute Nodes run the MDM, SDC and SDS components of ScaleIO (VxFlex OS). However external SDC’s and SDC’s connecting to multiple environments are fully supported. As long as the SDC-SDS network is available, and at least one MDM network is also available then there shouldn’t be any issues. Have you had issues before? _____________________________________________________________________ The information transmitted in this message and its attachments (if any) is intended only for the person or entity to which it is addressed. The message may contain confidential and/or privileged material. 
Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information, by persons or entities other than the intended recipient is prohibited. If you have received this in error, please contact the sender and delete this e-mail and associated material from any computer. The intended recipient of this e-mail may only use, reproduce, disclose or distribute the information contained in this e-mail and any attached files, with the permission of the sender. This message has been scanned for viruses. _____________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From surya.seetharaman9 at gmail.com Thu Dec 6 00:26:00 2018 From: surya.seetharaman9 at gmail.com (Surya Seetharaman) Date: Thu, 6 Dec 2018 01:26:00 +0100 Subject: [nova][dev] Bug about disabled compute during scheduling In-Reply-To: References: Message-ID: Hi Matt, Thanks for looking into this, On Wed, Dec 5, 2018 at 10:27 PM Matt Riedemann wrote: > Belmiro/Surya, > > I'm trying to follow up on something Belmiro mentioned at the summit > before I forget about it. > > CERN sets this value low: > > > https://docs.openstack.org/nova/latest/configuration/config.html#scheduler.max_placement_results > > And as a result, when disabling nova-computes during maintenance, you > can fail during scheduling because placement only returns resource > providers for disabled computes. > > I believe Dan and I kicked around some ideas on how we could deal with > this, like either via a periodic in the compute service or when the > compute service is disabled in the API, we would set the 'reserved' > inventory value equal to the total to take those computes out of > scheduling. Just read the discussion on the channel and saw there were a couple of approaches proposed like traits and neg-aggregates in addition to the above two. > I think Belmiro said this is what CERN is doing today as a > workaround? > > As far as I know we don't have it in PROD, I will let Belmiro confirm this anyways > For the latter solution, I don't know if we'd proxy that change directly > from nova-api to placement, or make an RPC cast/call to nova-compute to > do it, but that's an implementation detail. > > I mostly just want to make sure we get a bug reported for this so we > don't lose track of it. Can one of you open a bug with your scenario and > current workaround? > > We have already filed a bug for this: https://bugs.launchpad.net/nova/+bug/1805984. Will add the workaround we have into the description. ------------ Regards, Surya. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sairamb at taashee.com Thu Dec 6 07:33:27 2018 From: sairamb at taashee.com (sairam) Date: Thu, 6 Dec 2018 00:33:27 -0700 (MST) Subject: [TripleO] Configure SR-IOV VFs in tripleo In-Reply-To: References: Message-ID: <1544081607797-0.post@n7.nabble.com> we deployed overcloud rhosp10 through sriov but i am not able to create instance through please help me -- Sent from: http://openstack.10931.n7.nabble.com/Developer-f2.html From dougal at redhat.com Thu Dec 6 09:52:40 2018 From: dougal at redhat.com (Dougal Matthews) Date: Thu, 6 Dec 2018 09:52:40 +0000 Subject: [openstack-dev] [tripleo] Workflows Squad changes In-Reply-To: References: Message-ID: On Wed, 28 Nov 2018 at 10:15, Jiri Tomasek wrote: > Hi all, > > Recently, the workflows squad has been reorganized and people from the > squad are joining different squads. 
I would like to discuss how we are > going to adjust to this situation to make sure that tripleo-common > development is not going to be blocked in terms of feature work and reviews. > > With this change, most of the tripleo-common maintenance work goes > naturally to UI & Validations squad as CLI and GUI are the consumers of the > API provided by tripleo-common. Adriano Petrich from workflows squad has > joined UI squad to take on this work. > > As a possible solution, I would like to propose Adriano as a core reviewer > to tripleo-common and adding tripleo-ui cores right to +2 tripleo-common > patches. > +2, I support this idea. Adriano has been a valuable member of the Mistral core team for almost a year (and active in Mistral for some time before that). So he has lots of experience directly relevant to the workflow and action development in tripleo-common. Recently he has started contributing and reviewing regularly so I am confident he would make a positive impact to the tripleo core team. It would be great to hear opinions especially former members of Workflows > squad and regular contributors to tripleo-common on these changes and in > general on how to establish regular reviews and maintenance to ensure that > tripleo-common codebase is moved towards converging the CLI and GUI > deployment workflow. > > Thanks > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Dec 6 12:45:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Dec 2018 06:45:55 -0600 Subject: [dev][nova][glance] Interesting bug about deleting shelved server snapshot Message-ID: I came across this bug during triage today: https://bugs.launchpad.net/nova/+bug/1807110 They are advocating that nova/glance somehow keep a shelved server snapshot image from being inadvertently deleted by the user since it could result in data loss as they can't unshelve the server later (there is metadata in nova that links the shelved server to the snapshot image in glance which is used during unshelve). I don't see a base description field on images but I suppose nova could write a description property that explains what the snapshot is and warn against deleting it. Going a step further, nova could potentially set the protected flag to true so the image cannot be deleted, but I have two concerns about that: 1. I don't see any way to force delete a protected image in glance - does that exist or has it been discussed before? 2. Would the user be able to PATCH the image to change the protected value to false and then delete the image if they really wanted to? The other problem with nova marking the image as protected is that if the user deletes the server, the compute API tries to delete the snapshot image [1] which would fail if it's still protected, and then we could see snapshot images getting orphaned in glance. Arguably nova could detect this situation, update the protected field to false, and then delete the image. Other thoughts? Has this come up before? 
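(For concreteness on #2, the user-side flow I'm imagining is just something like the following with OSC, assuming the default glance policy lets the image owner flip the flag - I haven't verified that policy detail:

    openstack image set --unprotected <snapshot-image-uuid>
    openstack image delete <snapshot-image-uuid>

If that works, the protected flag is more of a guard rail than a hard stop.)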
[1] https://github.com/openstack/nova/blob/c9dca64fa64005e5bea327f06a7a3f4821ab72b1/nova/compute/api.py#L1950 -- Thanks, Matt From navdeep.uniyal at bristol.ac.uk Thu Dec 6 14:10:10 2018 From: navdeep.uniyal at bristol.ac.uk (Navdeep Uniyal) Date: Thu, 6 Dec 2018 14:10:10 +0000 Subject: Networking-sfc Installation on Openstack Queens Message-ID: Dear all, I would like to use the networking-sfc in my multi-node openstack (Queens) vanilla installation. All the guides I found focussed on devstack. Please let me know if there is a proper documentation/guide to install the sfc service on my controller and compute nodes including the prerequisites. Please note: as of now I am using linuxbridge agent only. Please state if openVSwitch agent is required and how to install it. Kind Regards, Navdeep -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Dec 6 14:14:16 2018 From: smooney at redhat.com (Sean Mooney) Date: Thu, 6 Dec 2018 14:14:16 +0000 Subject: [dev][nova][glance] Interesting bug about deleting shelved server snapshot In-Reply-To: References: Message-ID: On Thu, Dec 6, 2018 at 12:52 PM Matt Riedemann wrote: > > I came across this bug during triage today: > > https://bugs.launchpad.net/nova/+bug/1807110 > > They are advocating that nova/glance somehow keep a shelved server > snapshot image from being inadvertently deleted by the user since it > could result in data loss as they can't unshelve the server later (there > is metadata in nova that links the shelved server to the snapshot image > in glance which is used during unshelve). > > I don't see a base description field on images but I suppose nova could > write a description property that explains what the snapshot is and warn > against deleting it. > > Going a step further, nova could potentially set the protected flag to > true so the image cannot be deleted, but I have two concerns about that: > > 1. I don't see any way to force delete a protected image in glance - > does that exist or has it been discussed before? > > 2. Would the user be able to PATCH the image to change the protected > value to false and then delete the image if they really wanted to? Would they need to? If they wanted to delete the snapshot, could they not just delete the shelved instance? If the snapshot is gone, I assume we will not be able to unshelve it anyway by falling back to the base image or something like that, so is there a use case where deleting the snapshot leaves the shelved instance in a valid, unshelvable state? If not, I think setting the protected flag is OK to do.
> The other problem with nova marking the image as protected is that if > the user deletes the server, the compute API tries to delete the > snapshot image [1] which would fail if it's still protected, and then we > could see snapshot images getting orphaned in glance. Arguably nova > could detect this situation, update the protected field to false, and > then delete the image. That seems sane to me. If nova sets the protected field when shelving the instance, it should be able to unprotect the snapshot when unshelving. > > Other thoughts? Has this come up before? > > [1] > https://github.com/openstack/nova/blob/c9dca64fa64005e5bea327f06a7a3f4821ab72b1/nova/compute/api.py#L1950 > > -- > > Thanks, > > Matt > From smooney at redhat.com Thu Dec 6 14:29:21 2018 From: smooney at redhat.com (Sean Mooney) Date: Thu, 6 Dec 2018 14:29:21 +0000 Subject: Networking-sfc Installation on Openstack Queens In-Reply-To: References: Message-ID: On Thu, Dec 6, 2018 at 2:15 PM Navdeep Uniyal wrote: > > Dear all, > > > > I would like to use the networking-sfc in my multi-node openstack (Queens) vanilla installation. > > All the guides I found focussed on devstack. Please let me know if there is a proper documentation/guide to install the sfc service on my controller and compute nodes including the prerequisites. > > > > Please note: as of now I am using linuxbridge agent only. Please state if openVSwitch agent is required and how to install it. As far as I am aware, networking-sfc does not support the Linux bridge agent. The default backend is OVS, but it also had support for ODL and ONOS the last time I checked. Few OpenStack installers have native support for networking-sfc in production. Looking at the official install guide https://docs.openstack.org/networking-sfc/latest/install/index.html it does seem to assume devstack in places and is generally quite light on content. That said, networking-sfc runs as an extension to neutron, so the basic steps described in the guide can be adapted to any deployment. I do not work on networking-sfc so I can't really help, but just so you are aware, the openstack at lists.openstack.org list is no longer used and we have moved all OpenStack lists to openstack-discuss at list.openstack.org. I have updated the list address, but if you are not already on the new list I would suggest joining to receive the response. > > > > Kind Regards, > > Navdeep > > > > From mriedemos at gmail.com Thu Dec 6 14:50:20 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Dec 2018 14:50:20 -0600 Subject: [dev][nova][glance] Interesting bug about deleting shelved server snapshot In-Reply-To: References: Message-ID: On 12/6/2018 8:14 AM, Sean Mooney wrote: >> 2. Would the user be able to PATCH the image to change the protected >> value to false and then delete the image if they really wanted to? > Would they need to? > If they wanted to delete the snapshot, could they not just delete the > shelved instance? If the snapshot is gone, I assume we will not be able > to unshelve > it anyway by falling back to the base image or something like that, so is > there a use case where deleting the snapshot leaves the shelved instance in a > valid, unshelvable state? If not, I think setting the protected flag is OK to do. I'm having a hard time understanding what you're saying.
Are you saying, the user should delete the protected snapshot via deleting the shelved server? I don't think that's very clear. But yes you can't unshelve the instance if the image is deleted (or if the user does not have access to it, which is a separate bug [1675791]). I think you're just saying, the user shouldn't need to delete the protected shelve snapshot image and if they do, the server should be deleted as well. >> The other problem with nova marking the image as protected is that if >> the user deletes the server, the compute API tries to delete the >> snapshot image [1] which would fail if it's still protected, and then we >> could see snapshot images getting orphaned in glance. Arguably nova >> could detect this situation, update the protected field to false, and >> then delete the image. > that seams sane to me. if nova set teh protected field when shelving the > instance it shold be able to unprotect the snapshot when unshelving. It's not unshelve, it's delete. -- Thanks, Matt From lbragstad at gmail.com Thu Dec 6 15:15:04 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 6 Dec 2018 09:15:04 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: <20181206062035.GB28275@sm-workstation> References: <20181206062035.GB28275@sm-workstation> Message-ID: Today in the TC meeting, we discussed the status of the three candidate goals [0]. Ultimately, we as the TC, are wondering who would be willing to drive the goal work. Having a champion step up early on will help us get answers to questions about the feasibility of the goal, it's impact across OpenStack, among other things that will help us, as a community, make an informed decision. Remember, championing a goal doesn't need to fall on a single individual. With proper communication, work can be spread out to lighten the load. What I'd like is to open this up to the community and see who would be willing to drive the proposed goals. If you have any questions about championing a goal, please don't hesitate to swing by #openstack-tc, or you can ping me privately. [0] http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-12-06-14.00.log.html#l-104 On Thu, Dec 6, 2018 at 12:20 AM Sean McGinnis wrote: > > > > > > In other words, does #1 mean each python-clientlibrary's OSC plugin is > > > ready to rock and roll, or we talking about everyone rewriting all > client > > > interactions in to openstacksdk, and porting existing OSC plugins use > that > > > different python sdk. > > > > We talked about those things as separate phases. IIRC, the first phase > > was to include ensuring that python-openstackclient has full feature > > coverage for non-admin operations for all microversions, using the > > existing python-${service}client library or SDK as is appropriate. The > > next phase was to ensure that the SDK has full feature coverage for all > > microversions. After that point we could update OSC to use the SDK and > > start deprecating the service-specific client libraries. > > > > That was my recollection as well. > > > > In other words, some projects could find it very easy or that they are > > > already done, where as others could find themselves with a huge lift > that > > > is also dependent upon review bandwidth that is outside of their > control or > > > influence which puts such a goal at risk if we try and push too hard. > > > > > > -Julia > > > > > I do think there is still a lot of foundation work that needs to be done > before > we can make it a cycle goal to move more completely to osc. 
Before we get > there, I think we need to see more folks involved on the project to be > ready > for the increased attention. > > Right now, I would classify this goal as a "huge lift". > > Sean > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Thu Dec 6 15:24:37 2018 From: balazs.gibizer at ericsson.com (=?Windows-1252?Q?Bal=E1zs_Gibizer?=) Date: Thu, 6 Dec 2018 15:24:37 +0000 Subject: Anyone using ScaleIO block storage? In-Reply-To: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> References: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> Message-ID: <1544109874.26914.3@smtp.office365.com> On Wed, Dec 5, 2018 at 10:57 PM, Kimball (US), Conrad wrote: > Is anyone using ScaleIO (from Dell EMC) as a Cinder storage provider? > What has been your experience with it, and at what scale? My employer has multiple customers using our OpenStack based cloud solution with ScaleIO as volume backend. These customers are mostly telco operators running virtual network functions in their cloud, but there are customers using the cloud for other non telco IT purpose too. There are various types and flavors of the ScaleIO deployments at these customers, including low footprint deployment providing nx100 GiB raw capacity with small number of servers, medium capacity ultra HA systems with nx10 servers using multiple protection domains and fault sets, high capacity systems with petabyte range raw capacity, hyperconverged systems running storage and compute services on the same servers. The general feedback from the customers are positive, we did not hear about performance or stability issues. However, one common property of these customers and deployments that none of them handle bare metal instances, therefore, we do not have experience with that. In order to boot bare metal instance from ScaleIO volume, the BIOS should be able to act as ScaleIO client, which will likely never happen. ScaleIO used to have a capability to expose the volumes over standard iSCSI, but this capability has been removed long time ago. As this was a feature in the past, making Dell/EMC to re-introduce it may not be completely impossible if there is high enough interest for that. However, this would vanish the power of the proprietary protocol which let the client to balance the load towards multiple servers. Cheers, gibi > > Our enterprise storage team is moving to ScaleIO and wants our > OpenStack deployments to use it, so I’m looking for real life > experiences to calibrate vendor stories of wonderfulness. > > One concern I do have is that it uses a proprietary protocol that in > turn requires a proprietary “data client”. For VM hosting this > data client can be installed in the compute node host OS, but seems > like we wouldn’t be able to boot a bare-metal instance from a > ScaleIO-backed Cinder volume. > > Conrad Kimball > Associate Technical Fellow > Enterprise Architecture > Chief Architect, Enterprise Cloud Services > conrad.kimball at boeing.com From smooney at redhat.com Thu Dec 6 15:26:18 2018 From: smooney at redhat.com (Sean Mooney) Date: Thu, 6 Dec 2018 15:26:18 +0000 Subject: [dev][nova][glance] Interesting bug about deleting shelved server snapshot In-Reply-To: References: Message-ID: On Thu, Dec 6, 2018 at 2:50 PM Matt Riedemann wrote: > > On 12/6/2018 8:14 AM, Sean Mooney wrote: > >> 2. Would the user be able to PATCH the image to change the protected > >> value to false and then delete the image if they really wanted to? > > would they need too? 
> > if they wanted to delete the snapshot could thye not just delete the > > shelved instnace. if the snapshot is goin i assume we will not be able > > to unshvel > > it anyway by falling back to the base image or something like that so is > > there a usecase where deleteing the snap shot leave the shelved instance in a > > valid unshelvable state? if not i think setting the protected flag is ok to do. > > I'm having a hard time understanding what you're saying. Are you saying, > the user should delete the protected snapshot via deleting the shelved > server? I don't think that's very clear. But yes you can't unshelve the > instance if the image is deleted (or if the user does not have access to > it, which is a separate bug [1675791]). I think you're just saying, the > user shouldn't need to delete the protected shelve snapshot image and if > they do, the server should be deleted as well. Yes, sorry, I did not say that clearly. Basically I wanted to say that since the user would break the unshelving of an instance by deleting the snapshot nova created, we should prevent them from doing that by setting the protected flag. If they really wanted to delete the snapshot anyway, they should instead delete the shelved instance, which should cause nova to delete the snapshot. > > >> The other problem with nova marking the image as protected is that if > >> the user deletes the server, the compute API tries to delete the > >> snapshot image [1] which would fail if it's still protected, and then we > >> could see snapshot images getting orphaned in glance. Arguably nova > >> could detect this situation, update the protected field to false, and > >> then delete the image. > > that seams sane to me. if nova set teh protected field when shelving the > > instance it shold be able to unprotect the snapshot when unshelving. > > It's not unshelve, it's delete. Sorry, you are correct. On deleting the instance, I think nova should be able to unprotect the snapshot if the instance is still shelved. That said, there could be issues with this if someone manually booted another instance from the snapshot, but I'm not sure whether that would cause other problems. > > -- > > Thanks, > > Matt From jsbryant at electronicjungle.net Thu Dec 6 15:30:53 2018 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Thu, 6 Dec 2018 09:30:53 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: <20181206062035.GB28275@sm-workstation> References: <20181206062035.GB28275@sm-workstation> Message-ID: >> We talked about those things as separate phases. IIRC, the first phase >> was to include ensuring that python-openstackclient has full feature >> coverage for non-admin operations for all microversions, using the >> existing python-${service}client library or SDK as is appropriate. The >> next phase was to ensure that the SDK has full feature coverage for all >> microversions. After that point we could update OSC to use the SDK and >> start deprecating the service-specific client libraries. > >That was my recollection as well. This was my understanding as well, and I think the phased approach is important to take given that I don't know that we have as many people with SDK experience. At least that is the case in Cinder. > I do think there is still a lot of foundation work that needs to be done before > we can make it a cycle goal to move more completely to osc. Before we get > there, I think we need to see more folks involved on the project to be ready
> Right now, I would classify this goal as a "huge lift". I think that moving to OSC and away from the other client interfaces is a good goal. It will make for a better user experience and would hopefully help make documentation easier to understand. With that said, I know that there is a sizable gap between what OSC has for Cinder and what is available for python-cinderclient. If we make this a goal we are going to need good organization and documentation of those gaps and volunteers to help make this change happen. On Thu, Dec 6, 2018 at 12:21 AM Sean McGinnis wrote: > > > > > > In other words, does #1 mean each python-clientlibrary's OSC plugin is > > > ready to rock and roll, or we talking about everyone rewriting all > client > > > interactions in to openstacksdk, and porting existing OSC plugins use > that > > > different python sdk. > > > > We talked about those things as separate phases. IIRC, the first phase > > was to include ensuring that python-openstackclient has full feature > > coverage for non-admin operations for all microversions, using the > > existing python-${service}client library or SDK as is appropriate. The > > next phase was to ensure that the SDK has full feature coverage for all > > microversions. After that point we could update OSC to use the SDK and > > start deprecating the service-specific client libraries. > > > > That was my recollection as well. > > > > In other words, some projects could find it very easy or that they are > > > already done, where as others could find themselves with a huge lift > that > > > is also dependent upon review bandwidth that is outside of their > control or > > > influence which puts such a goal at risk if we try and push too hard. > > > > > > -Julia > > > > > I do think there is still a lot of foundation work that needs to be done > before > we can make it a cycle goal to move more completely to osc. Before we get > there, I think we need to see more folks involved on the project to be > ready > for the increased attention. > > Right now, I would classify this goal as a "huge lift". > > Sean > > -- jsbryant at electronicjungle.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbryant at electronicjungle.net Thu Dec 6 15:36:48 2018 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Thu, 6 Dec 2018 09:36:48 -0600 Subject: Anyone using ScaleIO block storage? In-Reply-To: References: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> Message-ID: > Not supporting iSCSI would indeed be an issue for bare-metal instances. The same basic issue exists for Ceph backed storage, although I've been encouraging the cinder team to provide a capability of returning an iscsi volume mapping for Ceph. If there > is a similar possibility, please let me know as it might change the overall discussion regarding providing storage for bare metal instances. Julia, This is an interesting idea. Depending on how the Ceph iSCSI implementation goes, I wonder if we can look at doing something more general where the volume node can act as an iSCSI gateway for any user that wants iSCSI support. I am not sure how hard creating a general solution would be or what the performance impact would be. It puts the volume node in the data path, which may cause people to hesitate on this. Something to think about though.
Jay On Wed, Dec 5, 2018 at 5:30 PM Julia Kreger wrote: > > > On Wed, Dec 5, 2018 at 2:02 PM Kimball (US), Conrad < > conrad.kimball at boeing.com> wrote: > [trim] > >> One concern I do have is that it uses a proprietary protocol that in turn >> requires a proprietary “data client”. For VM hosting this data client can >> be installed in the compute node host OS, but seems like we wouldn’t be >> able to boot a bare-metal instance from a ScaleIO-backed Cinder volume. >> > > Not supporting iSCSI would indeed be an issue for bare-metal instances. > The same basic issue exists for Ceph backed storage, although I've been > encouraging the cinder team to provide a capability of returning an iscsi > volume mapping for Ceph. If there is a similar possibility, please let me > know as it might change the overall discussion regarding providing storage > for bare metal instances. > > -Julia > >> -- jsbryant at electronicjungle.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Thu Dec 6 15:53:06 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Thu, 6 Dec 2018 15:53:06 +0000 Subject: [tc][all] Train Community Goals In-Reply-To: References: <20181206062035.GB28275@sm-workstation> Message-ID: <670372467d5245ebb7e314867c2aca0f@AUSX13MPS308.AMER.DELL.COM> Suggest we get User community involved. If a user have tools written to current client libraries it will be impacted. So getting their feedback on impact and, for sure, continues reminder that this is coming and when will be good. From: Jay Bryant [mailto:jsbryant at electronicjungle.net] Sent: Thursday, December 6, 2018 9:31 AM To: Sean McGinnis Cc: openstack-discuss at lists.openstack.org Subject: Re: [tc][all] Train Community Goals [EXTERNAL EMAIL] >> We talked about those things as separate phases. IIRC, the first phase >> was to include ensuring that python-openstackclient has full feature >> coverage for non-admin operations for all microversions, using the >> existing python-${service}client library or SDK as is appropriate. The >> next phase was to ensure that the SDK has full feature coverage for all >> microversions. After that point we could update OSC to use the SDK and >> start deprecating the service-specific client libraries. > >That was my recollection as well. This was my understanding as well and I think the phased approach is important to take given that I don't know that we have as many people with SDK experience. At least that is the case in Cinder. > I do think there is still a lot of foundation work that needs to be done before > we can make it a cycle goal to move more completely to osc. Before we get > there, I think we need to see more folks involved on the project to be ready > for the increased attention. > Right now, I would classify this goal as a "huge lift". I think that moving to OSC and away from the other client interfaces is a good goal. It will make for a better user experience and would hopefully help make documentation easier to understand. With that said, I know that there is a sizable gap between what OSC has for Cinder and what is available for python-cinderclient. If we make this a goal we are doing to need good organization and documentation of those gaps and volunteers to help make this change happen. 
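To make that client-versus-SDK gap a bit more concrete, a minimal sketch of the same trivial operation written against python-cinderclient and against openstacksdk might look something like the code below. This is only an illustration of the kind of change user tooling would see if the goal is adopted; the auth values, cloud name and volume name are made-up placeholders, and it says nothing about which Cinder features OSC or the SDK actually cover today.

from cinderclient import client as cinder_client
from keystoneauth1 import loading
from keystoneauth1 import session
import openstack

# Today: a tool built directly on python-cinderclient with hand-built auth.
# All auth values here are placeholders.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',
    username='demo', password='secret', project_name='demo',
    user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)
cinder = cinder_client.Client('3', session=sess)
cinder.volumes.create(size=1, name='example-volume')

# After a migration: the same call through openstacksdk, driven by a
# clouds.yaml entry ('example-cloud' is a placeholder) instead of
# per-client auth plumbing.
conn = openstack.connect(cloud='example-cloud')
conn.block_storage.create_volume(size=1, name='example-volume')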
On Thu, Dec 6, 2018 at 12:21 AM Sean McGinnis > wrote: > > > > In other words, does #1 mean each python-clientlibrary's OSC plugin is > > ready to rock and roll, or we talking about everyone rewriting all client > > interactions in to openstacksdk, and porting existing OSC plugins use that > > different python sdk. > > We talked about those things as separate phases. IIRC, the first phase > was to include ensuring that python-openstackclient has full feature > coverage for non-admin operations for all microversions, using the > existing python-${service}client library or SDK as is appropriate. The > next phase was to ensure that the SDK has full feature coverage for all > microversions. After that point we could update OSC to use the SDK and > start deprecating the service-specific client libraries. > That was my recollection as well. > > In other words, some projects could find it very easy or that they are > > already done, where as others could find themselves with a huge lift that > > is also dependent upon review bandwidth that is outside of their control or > > influence which puts such a goal at risk if we try and push too hard. > > > > -Julia > > I do think there is still a lot of foundation work that needs to be done before we can make it a cycle goal to move more completely to osc. Before we get there, I think we need to see more folks involved on the project to be ready for the increased attention. Right now, I would classify this goal as a "huge lift". Sean -- jsbryant at electronicjungle.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Dec 6 15:58:13 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 6 Dec 2018 15:58:13 +0000 Subject: [ops] Anyone using ScaleIO block storage? In-Reply-To: <1544109874.26914.3@smtp.office365.com> References: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> <1544109874.26914.3@smtp.office365.com> Message-ID: <20181206155812.5bmixglpv65tjp4w@yuggoth.org> On 2018-12-06 15:24:37 +0000 (+0000), Balázs Gibizer wrote: [...] > In order to boot bare metal instance from ScaleIO volume, the BIOS > should be able to act as ScaleIO client, which will likely never > happen. ScaleIO used to have a capability to expose the volumes > over standard iSCSI, but this capability has been removed long > time ago. As this was a feature in the past, making Dell/EMC to > re-introduce it may not be completely impossible if there is high > enough interest for that. However, this would vanish the power of > the proprietary protocol which let the client to balance the load > towards multiple servers. [...] You'd only need iSCSI support for bootstrapping though, right? Once you're able to boot a ramdisk with the ScaleIO (my friends at EMC would want me to remind everyone it's called "VFlexOS" now) driver it should be able to pivot to their proprietary protocol. In theory some running service on the network could simply act as an iSCSI proxy for that limited purpose. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jungleboyj at gmail.com Thu Dec 6 16:04:04 2018 From: jungleboyj at gmail.com (Jay Bryant) Date: Thu, 6 Dec 2018 10:04:04 -0600 Subject: [ops] Anyone using ScaleIO block storage? 
In-Reply-To: <20181206155812.5bmixglpv65tjp4w@yuggoth.org> References: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> <1544109874.26914.3@smtp.office365.com> <20181206155812.5bmixglpv65tjp4w@yuggoth.org> Message-ID: On 12/6/2018 9:58 AM, Jeremy Stanley wrote: > On 2018-12-06 15:24:37 +0000 (+0000), Balázs Gibizer wrote: > [...] >> In order to boot bare metal instance from ScaleIO volume, the BIOS >> should be able to act as ScaleIO client, which will likely never >> happen. ScaleIO used to have a capability to expose the volumes >> over standard iSCSI, but this capability has been removed long >> time ago. As this was a feature in the past, making Dell/EMC to >> re-introduce it may not be completely impossible if there is high >> enough interest for that. However, this would vanish the power of >> the proprietary protocol which let the client to balance the load >> towards multiple servers. > [...] > > You'd only need iSCSI support for bootstrapping though, right? Once > you're able to boot a ramdisk with the ScaleIO (my friends at EMC > would want me to remind everyone it's called "VFlexOS" now) driver > it should be able to pivot to their proprietary protocol. In theory > some running service on the network could simply act as an iSCSI > proxy for that limited purpose. Good question.  Don't know the details there.  I am going to add Helen Walsh who works on the Dell/EMC drivers to see if she could help give some insight. Jay From rosmaita.fossdev at gmail.com Thu Dec 6 16:17:57 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 6 Dec 2018 11:17:57 -0500 Subject: [dev][nova][glance] Interesting bug about deleting shelved server snapshot In-Reply-To: References: Message-ID: <66e2902d-8a43-c293-6243-b8ca908d4bf9@gmail.com> (Just addressing the specific Glance questions, not taking a position on the proposal.) On 12/6/18 7:45 AM, Matt Riedemann wrote: > I came across this bug during triage today: > > https://bugs.launchpad.net/nova/+bug/1807110 > > They are advocating that nova/glance somehow keep a shelved server > snapshot image from being inadvertently deleted by the user since it > could result in data loss as they can't unshelve the server later (there > is metadata in nova that links the shelved server to the snapshot image > in glance which is used during unshelve). > > I don't see a base description field on images but I suppose nova could > write a description property that explains what the snapshot is and warn > against deleting it. Yes, any user can add a 'description' property (unless prohibited by property protections). > Going a step further, nova could potentially set the protected flag to > true so the image cannot be deleted, but I have two concerns about that: > > 1. I don't see any way to force delete a protected image in glance - > does that exist or has it been discussed before? You cannot force delete a protected image in glance, but an admin can PATCH the image to update 'protected' to false, and then delete the image, which is functionally the same thing. > > 2. Would the user be able to PATCH the image to change the protected > value to false and then delete the image if they really wanted to? Yes, replacing the value of the 'protected' property on an image can be done by the image owner. (There is no specific policy for this other than the generic "modify_image" policy. I guess I should mention that there's also a "delete_image" policy. The default value for both policies is unrestricted ("").) 
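For anyone who wants to script the workaround Brian describes (flip 'protected' to false, then delete) rather than do it by hand, a minimal sketch with openstacksdk might look something like the following. The cloud name and image name are placeholders, and the CLI equivalent is "openstack image set --unprotected <image>" followed by "openstack image delete <image>".

import openstack

# Connect with credentials that satisfy glance's modify_image and
# delete_image policies (the image owner or an admin with the default,
# unrestricted policy settings). 'example-cloud' is a placeholder.
conn = openstack.connect(cloud='example-cloud')

# Find the shelve snapshot image and clear the protected flag; this is
# equivalent to a PATCH of the image's 'protected' field to false.
image = conn.image.find_image('example-shelved-snapshot')
conn.image.update_image(image, is_protected=False)

# Once unprotected, the delete is allowed again.
conn.image.delete_image(image)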
> > The other problem with nova marking the image as protected is that if > the user deletes the server, the compute API tries to delete the > snapshot image [1] which would fail if it's still protected, and then we > could see snapshot images getting orphaned in glance. Arguably nova > could detect this situation, update the protected field to false, and > then delete the image. > > Other thoughts? Has this come up before? > > [1] > https://github.com/openstack/nova/blob/c9dca64fa64005e5bea327f06a7a3f4821ab72b1/nova/compute/api.py#L1950 > > From balazs.gibizer at ericsson.com Thu Dec 6 16:41:49 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Thu, 6 Dec 2018 16:41:49 +0000 Subject: [nova] What to do with the legacy notification interface? Message-ID: <1544114506.15837.0@smtp.office365.com> Hi, Last decision from the Denver PTG [1] was to * finish the remaining transformations [2] * change the default value of notification_format conf option from 'both' to 'versioned' * communicate that the legacy notification interface has been deprecated, unsupported but the code will not be removed Since the PTG the last two transformation patches [2] have been approved and the deprecation patch has been prepared [3]. However, recently we got a bug report [4] where the performance impact of emitting both legacy and versioned notifications caused unnecessary load on the message bus. (Btw this performance impact was raised in the original spec [5]). The original reason we emitted both types of notifications was to keep backward compatibility with consumers only understanding legacy notifications. It worked so well that most of the consumers still depend on the legacy notifications (detailed status is in [1]). I see three options to move forward: A) Do nothing. By default, emit both types of notifications. This means that deployers seeing the performance impact of [4] need to reconfigure nova. Also this is against our decision to move away from supporting legacy notifications. B) Follow the plan from the PTG and change the default value of the config to emit only versioned notifications. This solves the immediate effect of the bug [4]. This is following our original plan. BUT this is a backward incompatible change which in the current situation means most of the deployments (e.g. those deploying notification consumer modules like ceilometer) need to change this config to 'unversioned' or 'both'. There was discussion about helping out notification consumers to convert their code using the new versioned interface, but I have no spare time at the moment and I failed to drum up resources internally for this work. Also I don't see others volunteering for such work. C) Change the default to 'legacy'. This solves the bug [4]. BUT it sends a really mixed message, as our original goal was to stop supporting the legacy notification interface of nova. Matt stated on IRC recently that there is no real bug inflow for the legacy code.
But I do understand that our original goals setting in 2015 was made in such environment that changed significantly in the past cycles. So I can accept option C) as well if we can agree. Cheers, gibi [1] https://etherpad.openstack.org/p/nova-ptg-stein L765 [2] https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-stein+status:open [3] https://review.openstack.org/#/c/603079/ [4] https://bugs.launchpad.net/nova/+bug/1805659 [5] https://review.openstack.org/#/c/224755/11/specs/mitaka/approved/versioned-notification-api.rst at 687 From juliaashleykreger at gmail.com Thu Dec 6 17:09:23 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 6 Dec 2018 09:09:23 -0800 Subject: Anyone using ScaleIO block storage? In-Reply-To: <1544109874.26914.3@smtp.office365.com> References: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> <1544109874.26914.3@smtp.office365.com> Message-ID: On Thu, Dec 6, 2018 at 7:31 AM Balázs Gibizer wrote: > [trim] > ScaleIO used to have a capability to expose the volumes over standard > iSCSI, but this capability has been removed long time ago. As this was > a feature > in the past, making Dell/EMC to re-introduce it may not be completely > impossible if there is high enough interest for that. However, this > would vanish > the power of the proprietary protocol which let the client to balance > the load towards multiple servers. > [trim] iSCSI does have the ability to communicate additional paths that a client may choose to invoke, the issue then largely becomes locking across paths, which becomes a huge issue if lun locking is being used as part of something like a clustered file system. Of course, most initial initiators may not be able to support this, and as far as I'm aware what iscsi initiators that we can control in hardware don't have or have limited iscsi multipath support. Of course, if they iBFT load that.... Well, I'll stop now because of limitations with iBFT. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Dec 6 17:17:23 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 6 Dec 2018 09:17:23 -0800 Subject: [ops] Anyone using ScaleIO block storage? In-Reply-To: <20181206155812.5bmixglpv65tjp4w@yuggoth.org> References: <4c69e844a1a14e9388b59a8b7646bc37@boeing.com> <1544109874.26914.3@smtp.office365.com> <20181206155812.5bmixglpv65tjp4w@yuggoth.org> Message-ID: On Thu, Dec 6, 2018 at 8:04 AM Jeremy Stanley wrote: > On 2018-12-06 15:24:37 +0000 (+0000), Balázs Gibizer wrote: > [...] > > In order to boot bare metal instance from ScaleIO volume, the BIOS > > should be able to act as ScaleIO client, which will likely never > > happen. ScaleIO used to have a capability to expose the volumes > > over standard iSCSI, but this capability has been removed long > > time ago. As this was a feature in the past, making Dell/EMC to > > re-introduce it may not be completely impossible if there is high > > enough interest for that. However, this would vanish the power of > > the proprietary protocol which let the client to balance the load > > towards multiple servers. [...] > > You'd only need iSCSI support for bootstrapping though, right? Once > you're able to boot a ramdisk with the ScaleIO (my friends at EMC > would want me to remind everyone it's called "VFlexOS" now) driver > it should be able to pivot to their proprietary protocol. In theory > some running service on the network could simply act as an iSCSI > proxy for that limited purpose. 
> -- > Jeremy Stanley > This is a great point. I fear the issue would be how to inform the guest of what and how to pivot. At some point it might just be easier to boot the known kernel/ramdisk and have a command line argument. That being said things like this is why ironic implemented the network booting ramdisk interface so an operator could choose something along similar lines. If some abstraction pattern could be identified, and be well unit tested at least, I feel like we might be able to pass along the necessary information if needed. Naturally the existing ironic community does not have access to this sort of hardware, and it would be a bespoke sort of integration. We investigated doing something similar for Ceph integration but largely pulled back due to a lack of initial ramdisk loader standardization and even support for the root filesystem on Ceph. -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Thu Dec 6 17:29:01 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 6 Dec 2018 18:29:01 +0100 Subject: [tc][all] Train Community Goals In-Reply-To: References: <20181206062035.GB28275@sm-workstation> Message-ID: On Thu, Dec 6, 2018, 16:19 Lance Bragstad wrote: > Today in the TC meeting, we discussed the status of the three candidate > goals [0]. Ultimately, we as the TC, are wondering who would be willing to > drive the goal work. > > Having a champion step up early on will help us get answers to questions > about the feasibility of the goal, it's impact across OpenStack, among > other things that will help us, as a community, make an informed decision. > > Remember, championing a goal doesn't need to fall on a single individual. > With proper communication, work can be spread out to lighten the load. > > What I'd like is to open this up to the community and see who would be > willing to drive the proposed goals. If you have any questions about > championing a goal, please don't hesitate to swing by #openstack-tc, or you > can ping me privately. > I was waiting for the start of switching osc "services" to SDK for quite a while now. I am definitely interested and committed to support the real coding work here. I would also like to volunteer driving the goal if noone objects. > [0] > http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-12-06-14.00.log.html#l-104 > > On Thu, Dec 6, 2018 at 12:20 AM Sean McGinnis > wrote: > >> > > >> > > In other words, does #1 mean each python-clientlibrary's OSC plugin is >> > > ready to rock and roll, or we talking about everyone rewriting all >> client >> > > interactions in to openstacksdk, and porting existing OSC plugins use >> that >> > > different python sdk. >> > >> > We talked about those things as separate phases. IIRC, the first phase >> > was to include ensuring that python-openstackclient has full feature >> > coverage for non-admin operations for all microversions, using the >> > existing python-${service}client library or SDK as is appropriate. The >> > next phase was to ensure that the SDK has full feature coverage for all >> > microversions. After that point we could update OSC to use the SDK and >> > start deprecating the service-specific client libraries. >> > >> >> That was my recollection as well. 
>> >> > > In other words, some projects could find it very easy or that they are >> > > already done, where as others could find themselves with a huge lift >> that >> > > is also dependent upon review bandwidth that is outside of their >> control or >> > > influence which puts such a goal at risk if we try and push too hard. >> > > >> > > -Julia >> > > >> >> I do think there is still a lot of foundation work that needs to be done >> before >> we can make it a cycle goal to move more completely to osc. Before we get >> there, I think we need to see more folks involved on the project to be >> ready >> for the increased attention. >> >> Right now, I would classify this goal as a "huge lift". >> >> Sean >> > Artem > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu Dec 6 17:35:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Dec 2018 11:35:03 -0600 Subject: [nova] What to do with the legacy notification interface? In-Reply-To: <1544114506.15837.0@smtp.office365.com> References: <1544114506.15837.0@smtp.office365.com> Message-ID: <1466d5db-316f-34a3-9f08-a5931ab6ce86@gmail.com> On 12/6/2018 10:41 AM, Balázs Gibizer wrote: > Hi, > > Last decision from the Denver PTG [1] was to > * finish the remaining transformations [2] > * change the default value of notification_format conf option from > 'both' to 'versioned' > * communicate that the legacy notification interface has been > deprecated, unsupported but the code will not be removed > > Since the PTG the last two transformation patches [2] has been approved > and the deprecation patch has been prepared [3]. However recently we > got a bug report [4] where the performance impact of emiting both > legacy and versioned notification caused unnecessary load on the > message bus. (Btw this performance impact was raised in the original > spec [5]). The original reason we emited both type of notifications was > to keep backward compatibility with consumers only understanding legacy > notifications. It worked so well that most of the consumers still > depend on the legacy notifications (detailed status is in [1]). > > I see three options to move forward: > > A) Do nothing. By default, emit both type of notifications. This means > that deployers seeing the performance impact of [4] needs to > reconfigure nova. Also this is agains our decision to move away from > supporting legacy notifications. > > B) Follow the plan from the PTG and change the default value of the > config to emit only versioned notifications. This solves the immediate > effect of the bug [4]. This is following our original plan. BUT this is > backward incompatible change which in the current situation means most > of the deployments (e.g. those, deploying notification consumer modules > like ceilometer) need to change this config to 'unversioned' or 'both'. > > There was discussion about helping out notification consumers to > convert their code using the new versioned interface but I have no > spare time at the moment and I failed to drum up resources internally > for this work. Also I don't see others volunteering for such work. > > C) Change the default to 'legacy'. This solves the bug [4]. BUT sends a > really mixed message. As our original goal was to stop supporting the > legacy notification interface of nova. > Matt stated on IRC recently, there is no real bug inflow for the legacy > code. 
Also looking at changes in the code I see that last cycle we had > couple of blueprints extending the new versioned notifications with > extra fields but that inflow stopped in this cycle too. So we most > probably can keep the legacy code in place and supported forever > without too much pain and too much divergence from the versioned > interface. > > -- > > Personally, I spent enough time implementing and reviewing the > versioned interface that I'm biased towards option B). But I do > understand that our original goals setting in 2015 was made in such > environment that changed significantly in the past cycles. So I can > accept option C) as well if we can agree. > > Cheers, > gibi > > [1]https://etherpad.openstack.org/p/nova-ptg-stein L765 > [2] > https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-stein+status:open > [3]https://review.openstack.org/#/c/603079/ > [4]https://bugs.launchpad.net/nova/+bug/1805659 > [5] > https://review.openstack.org/#/c/224755/11/specs/mitaka/approved/versioned-notification-api.rst at 687 In skimming the spec again: https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/versioned-notification-transformation-newton.html I get that legacy unversioned notifications in nova were kind of a mess and changing them without a version was icky even though they are internal to a deployment. Moving to using versioned notifications with well-defined objects, docs samples, all of that is very nice for nova *developers*, but what I've been struggling with is what benefit do the versioned notifications bring to nova *consumers*, besides we say you can only add new notifications if they are versioned. In other words, what motivation does a project that is currently consuming unversioned legacy notifications have to put the work in to switch to versioned notifications? The spec doesn't really get into that in much detail. I do remember an older thread with gordc where he was asking about schema payloads and such so a consumer could say implement processing a notification at version 1.0, but then if it gets a version 1.4 payload it could figure out how to backlevel the payload to the 1.0 version it understands - sort of like what we've talked about doing with os-vif between nova and neutron (again, for years). But that thread kind of died out from what I remember. But that seems like a nice benefit for upgrades, but I also figure it's probably very low priority on everyone's wishlist. Another way I've thought about the deprecation lately is that if we (nova devs) did the work to migrate ceilometer, then we'd have one major consumer and we could then deprecate the legacy notifications and change the default to 'versioned'. But like you said, no one is coming out of the woodwork to do that work. Anyway, I'm just having a hard time with the fact so much work has been put into this over the last few years that now everything is transformed but no one is using it, and I don't see many compelling reasons why a project would migrate, especially since the projects that are consuming nova's notifications are mostly working with a skeleton crew of maintainers. I'd really like some input from other project developers and operators here. 
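For a sense of what the consumer-side work might actually look like, a rough sketch of a standalone listener for nova's versioned notifications built on oslo.messaging could be something like the code below. The transport URL is a placeholder, the topic assumes nova's default 'versioned_notifications' topic, and it assumes [notifications]/notification_format on the nova side is left at 'both' or set to 'versioned' so the versioned messages are emitted at all.

from oslo_config import cfg
import oslo_messaging


class VersionedEndpoint(object):
    # Only react to a couple of interesting event types.
    filter_rule = oslo_messaging.NotificationFilter(
        event_type=r'^instance\.(create|delete)\.end$')

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # The payload is the serialized versioned object: a dict carrying
        # 'nova_object.name', 'nova_object.version' and 'nova_object.data'.
        print(event_type, payload.get('nova_object.version'))


# Placeholder transport URL; a real consumer would read this from config.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://guest:guest@127.0.0.1:5672/')
targets = [oslo_messaging.Target(topic='versioned_notifications')]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [VersionedEndpoint()], executor='threading')
listener.start()
listener.wait()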
-- Thanks, Matt From pshchelokovskyy at mirantis.com Thu Dec 6 18:02:25 2018 From: pshchelokovskyy at mirantis.com (Pavlo Shchelokovskyy) Date: Thu, 6 Dec 2018 20:02:25 +0200 Subject: [barbican][nova] booting instance from snapshot or unshelve with image signature verification enabled Message-ID: Hi all, I am looking at how Nova is integrated with Barbican and am wondering how the user workflow when booting instance from snapshot should work (including unshelving a shelved instance) when Nova is set to strictly verify Glance images' signatures. Currently Nova strips by default all signature-related image metadata of original image when creating snapshot and for good reason - as the hash of the snapshot is definitely not the same as that of the image it was booted from, the signature of the original image is no longer valid for snapshot. Effectively that means that when strict image signature validation is enabled in Nova, the user can no longer simply boot from that snapshot, and even less obvious, can not unshelve instances the same way as without signature validation enabled. So is it expected that user manually signs her instance snapshots or is there some automagic way to do it? Or is it a known issue / limitation? Unfortunately I couldn't find any existing bugs or mentions in docs on that. Best regards, -- Dr. Pavlo Shchelokovskyy Principal Software Engineer Mirantis Inc www.mirantis.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Dec 6 18:10:16 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 6 Dec 2018 12:10:16 -0600 Subject: [sdk] Establishing SDK Validation Baseline Message-ID: Hi everyone, We have spent some time working to get an idea of what official SDKs would look like. We had some sessions during the Berlin summit[0][1] and there was a lot of great feedback. Currently the following SDKs are generally considered usable for their respective language; there are others of course: openstacksdk (Python) gophercloud (Go) pkgcloud (JavaScript) openstack4j (Java) rust-openstack (Rust) fog-openstack (Ruby) php-opencloud (PHP) After many discussions it seems SDK validation essentially should be about confirming cloud state pre/post SDK interaction rather than API support. An example is that when I use pkgcloud and ask that a VM be created, does the VM exist, in the way I asked it exist, rather than are there specific API calls that are being hit along the way to creating my VM. I am putting this email out to keep the community informed of what has been discussed in this space but also and most importantly to get feedback and support for this work. It would be great to get a set of official and community SDKs, get them setup with CI testing for validation (not changing their current CI for unit/functional/acceptance testing; unless asked to help do this), and connect the results to the updated project navigator SDK section. A list of scenarios has been provided as a good starting point for cloud state checks.[2] Essentially the proposal is to deploy OpenStack from upstream (devstack or other), stand up a VM within the cloud, grab all the SDKs, run acceptance tests, report pass/fail results, update project navigator. Of course there are details to be worked out and I do have a few questions that I hope would help get everyone interested on the same page via this thread. 1. Does this make sense? 1. Would folks be interested in a SDK SIG or does it make more sense to request an item on the API SIG's agenda? 
1. Bi-weekly discussions a good cadence? 1. Who is interested in tackling this together? [0] https://etherpad.openstack.org/p/BER-better-expose-what-we-produce [1] https://etherpad.openstack.org/p/BER-sdk-certification [2] https://docs.google.com/spreadsheets/d/1cdzFeV5I4Wk9FK57yqQmp5JJdGfKzEOdB3Vtt9vnVJM -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Thu Dec 6 18:36:25 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 6 Dec 2018 13:36:25 -0500 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: References: Message-ID: thanks for bringing this up Melvin. On Thu, Dec 6, 2018 at 1:13 PM Melvin Hillsman wrote: > Essentially the proposal is to deploy OpenStack from upstream (devstack or other), stand up a VM within the cloud, grab all the SDKs, run acceptance tests, report pass/fail results, update project navigator. Of course there are details to be worked out and I do have a few questions that I hope would help get everyone interested on the same page via this thread. > > Does this make sense? > makes sense to me, and sounds like a good idea provided we have the people ready to maintain the testing infra and patches for this (which i assume we do). > Would folks be interested in a SDK SIG or does it make more sense to request an item on the API SIG's agenda? > i don't have a strong opinion either way, but i will point out that the API-SIG has migrated to office hours instead of a weekly meeting. if you expect that the proposed SDK work will have a strong cadence then it might make more practical sense to create a new SIG, or really even a working group until the objective of the testing is reached. the only reason i bring up working groups here is that there seems like a clearly stated goal for the initial part of this work. namely creating the testing and validation infrastructure described. it might make sense to form a working group until the initial work is complete and then move continued discussion under the API-SIG for organization. > Bi-weekly discussions a good cadence? > that sounds reasonable for a start, but i don't have a strong opinion here. > Who is interested in tackling this together? > if you need any help from API-SIG, please reach out. i would be willing to help with administrative/governance type stuff. peace o/ From paul.bourke at oracle.com Thu Dec 6 18:41:43 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Thu, 6 Dec 2018 10:41:43 -0800 (PST) Subject: [octavia] Routing between lb-mgmt-net and amphora Message-ID: Hi, This is mostly a follow on to the thread at[0], though due to the mailing list transition it was easier to start a new thread. I've been attempting to get Octavia setup according to the dev-quick-start guide[1], but have been struggling with the following piece: "Add appropriate routing to / from the ‘lb-mgmt-net’ such that egress is allowed, and the controller (to be created later) can talk to hosts on this network." In mranga's reply, they say: > -- Create an ovs port on br-int > -- Create a neutron port using the ovs port that you just created. > -- Assign the ip address of the neutron port to the ovs port > -- Use ip netns exec to assign a route in the router namespace of the LoadBalancer network. I have enough of an understanding of Neutron/OVS for this to mostly make sense, but not enough to actually put it into practice it seems. 
My environment: 3 x control nodes 2 x network nodes 1 x compute All nodes have two interfaces, eth0 being the management network - 192.168.5.0/24, and eth1 being used for the provider network. I then create the Octavia lb-mgmt-net on 172.18.2.0/24. I've read the devstack script[2] and have the following questions: * Should I add the OVS port to br-int on the compute, network nodes, or both? * What is the purpose of creating a neutron port in this scenario If anyone is able to explain this a bit further or can even point to some good material to flesh out the underlying concepts it would be much appreciated, I feel the 'Neutron 101' videos I've done so far are not quite getting me there :) Cheers, -Paul [0] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000544.html [1] https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html [2] https://github.com/openstack/octavia/blob/master/devstack/plugin.sh From duc.openstack at gmail.com Thu Dec 6 18:53:53 2018 From: duc.openstack at gmail.com (Duc Truong) Date: Thu, 6 Dec 2018 10:53:53 -0800 Subject: [senlin] Cancelling this week's meeting Message-ID: Everyone, The new odd week meeting time has just been approved but due to the short notice and also since there are no important updates since last week, I will cancel the Senlin meeting for this week. Next week the meeting will be on Friday at 530 UTC. Regards, Duc From chris at openstack.org Thu Dec 6 19:52:35 2018 From: chris at openstack.org (Chris Hoge) Date: Thu, 6 Dec 2018 11:52:35 -0800 Subject: [loci] How to add some agent to loci images In-Reply-To: References: Message-ID: <2ED125CF-8AA3-4D81-8C0F-FDF8ED1EF2F0@openstack.org> Except in a few instances, Loci aims to generically build OpenStack service images and has pretty robust scripts that will allow you to build the containers without modification of the project. In the instance of neutron-fwaas, you can easily build the service following the instructions in README.md[1] in the Loci repository. Just set `PROJECT=neutron-fwaas` and tag appripriately. The major caveat is the build for that project is not gate tested (although I've managed to complete a build of it in my own environment). You could do the same thing similarly for neutron-lbaas, but please be aware that for a number of reasons neutron-lbaas project has been deprecated[2], and you should instead prefer to use Octavia as the official replacement for it. Thanks, Chris [1] http://git.openstack.org/cgit/openstack/loci/tree/README.md [2] https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation > On Dec 5, 2018, at 5:30 PM, SIRI KIM wrote: > > Hello, Jean. > > I tried to add lbaas-agent and fwaas-agent to official loci neutron image. > To pass openstack-helm gate test, I need lbaas-agent and fwaas-agent. > > I found your openstack source repository is > repository: 172.17.0.1:5000/loci/requirements > > Please let me know I can I add lbaas-agent and fwaas-agent to official loci neutron image. > > Thanks, > Siri From doug at doughellmann.com Thu Dec 6 20:43:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 06 Dec 2018 15:43:09 -0500 Subject: [tc] agenda for Technical Committee Meeting 6 Dec 2018 @ 1400 UTC In-Reply-To: References: Message-ID: Doug Hellmann writes: > TC Members, > > Our next meeting will be this Thursday, 6 Dec at 1400 UTC in > #openstack-tc. This email contains the agenda for the meeting, based on > the content of the wiki [0]. 
> > If you will not be able to attend, please include your name in the > "Apologies for Absence" section of the wiki page [0]. > > [0] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee > > > > * Follow up on past action items > > ** dhellmann complete liaison assignments using the random generator > > I have updated the team liaisons in the wiki [1]. Please review the > list of projects to which you are assigned. > > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker#Project_Teams > > ** tc-members review the chair duties document > > The draft document from [2] has been merged and is now available in > the governance repo as CHAIR.rst [3]. Please come prepared to discuss > any remaining questions about the list of chair duties. > > [2] https://etherpad.openstack.org/p/tc-chair-responsibilities > [3] http://git.openstack.org/cgit/openstack/governance/tree/CHAIR.rst > > * active initiatives > > ** keeping up with python 3 releases > > We are ready to approve Zane's resolution for a process for tracking > python 3 versions [4]. There is one wording update [5] that we should > prepare for approval as well. The next step will be to approve Sean's > patch describing the runtimes supported for Stein [6]. > > Please come prepared to discuss any issues with those patches so we > can resolve them and move forward. > > [4] https://review.openstack.org/613145 > [5] https://review.openstack.org/#/c/621461/1 > [6] https://review.openstack.org/#/c/611080/ > > * follow-up from Berlin Forum > > ** Vision for OpenStack clouds > > Zane has summarized the forum session on the mailing list [7], > including listing several potential updates to the vision based on > our discussion there. Please come prepared to discuss next steps for > making those changes. > > [7] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000431.html > > ** Train cycle goals > > I posted my summary of the forum session [8]. Each of the candidate > goals have work to be done before they could be selected, so we will > need to work with the sponsors and champions to see where enough > progress is made to let us choose from among the proposals. Lance has > agreed to lead the selection process for the Train goals, and will be > looking for someone to pair up with on that. > > [8] http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000055.html > > ** Other TC outcomes from Forum > > We had several other forum sessions, and should make sure we have a > good list of any promised actions that came from those > discussions. Please come prepared to discuss any sessions you > moderated -- having summaries on the mailing list before the meeting > would be very helpful. > > -- > Doug > The log and summary of this meeting are available in the usual place: Minutes: http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-12-06-14.00.html Minutes (text): http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-12-06-14.00.txt Log: http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-12-06-14.00.log.html -- Doug From doug at doughellmann.com Thu Dec 6 21:00:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 06 Dec 2018 16:00:19 -0500 Subject: [tc] Technical Committee status update for 6 December Message-ID: This is the (alegedly) weekly summary of work being done by the Technical Committee members. 
The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == It has been 4 weeks since the last update email, in part due to the Summit, then a holiday, then my absense due to illness. Project updates: * Add tenks under Ironic: https://review.openstack.org/#/c/600411/ * Add os_placement role to OpenStack Ansible: https://review.openstack.org/#/c/615187/ * Retired openstack-ansible-os_monasca-ui https://review.openstack.org/#/c/617322/ * Chris Hoge was made the new Loci team PTL https://review.openstack.org/#/c/620370/ Other updates: * Thierry has been working on a series of changes on behalf of the release management team to record the release management style used for each deliverable listed in the governance repository. https://review.openstack.org/#/c/613268/ * Zane updated our guidelines for new projects to clarify the scoping requirements: https://review.openstack.org/#/c/614799/ * Zane also added a Vision for OpenStack Clouds document, describing what we see as the scope of OpenStack overall: https://review.openstack.org/#/c/592205/ * I copied some code we had in a couple of other places into the governance repository to make it easier for the release, goal, and election tools to consume the governance data using the new openstack-governance library from PyPI. https://review.openstack.org/#/c/614599/ * I started a document describing the responsibilities of the TC chair. https://review.openstack.org/#/c/618810/ == TC Meetings == In order to fulfill our obligations under the OpenStack Foundation bylaws, the TC needs to hold meetings at least once each quarter. We agreed to meet monthly, and to emphasize agenda items that help us move initiatives forward while leaving most of the discussion of those topics to the mailing list. The agendas for all of our meetings will be sent to the openstack-dev mailing list in advance, and links to the logs and summary will be sent as a follow up after the meeting. Our most recent meeting was held on 6 Dec 2018. * http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000696.html The next meeting will be 3 December @ 1400 UTC in #openstack-tc == Ongoing Discussions == We have several governance changes up for review related to deciding how we will manage future Python 3 upgrades (including adding 3.7 and possibly dropping 3.5 during Stein). These are close to being approved, with some final wording adjustments being made now. * Explicitly declare stein supported runtimes: https://review.openstack.org/#/c/611080/ * Resolution on keeping up with Python 3 releases: https://review.openstack.org/#/c/613145/ I proposed an update to our house rules to allow faster approval of release management metadata maintained in the governance repository. * https://review.openstack.org/#/c/622989/ Thierry and Chris have been working on a description of the role of the TC. * https://review.openstack.org/#/c/622400/ Ghanshyam has proposed an enhancement to the Vision for OpenStack Clouds to cover feature discovery. * https://review.openstack.org/#/c/621516/ Lance has proposed an update to the charter to clarify how we will handle PTL transitions in the middle of a development cycle. * https://review.openstack.org/#/c/620928/ == TC member actions/focus/discussions for the coming week(s) == We have several significant changes up for review. 
Please take time to consider all of the open patches, then comment and vote on them. It's very difficult to tell if we have reached consensus if everyone waits for the conversation to settle before commenting. == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: - 09:00 UTC on Tuesdays - 01:00 UTC on Wednesdays - 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. You can use the string "tc-members" to alert the members to your question. You will find channel logs with past conversations at http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-discuss at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. -- Doug From openstack at nemebean.com Thu Dec 6 21:06:25 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 6 Dec 2018 15:06:25 -0600 Subject: [tc] Technical Committee status update for 6 December In-Reply-To: References: Message-ID: On 12/6/18 3:00 PM, Doug Hellmann wrote: > > This is the (alegedly) weekly summary of work being done by the > Technical Committee members. The full list of active items is managed in > the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker > > We also track TC objectives for the cycle using StoryBoard at: > https://storyboard.openstack.org/#!/project/923 > > == Recent Activity == > > It has been 4 weeks since the last update email, in part due to the > Summit, then a holiday, then my absense due to illness. > > Project updates: > > * Add tenks under Ironic: https://review.openstack.org/#/c/600411/ > * Add os_placement role to OpenStack Ansible: https://review.openstack.org/#/c/615187/ > * Retired openstack-ansible-os_monasca-ui https://review.openstack.org/#/c/617322/ > * Chris Hoge was made the new Loci team PTL https://review.openstack.org/#/c/620370/ > > Other updates: > > * Thierry has been working on a series of changes on behalf of the > release management team to record the release management style used > for each deliverable listed in the governance > repository. https://review.openstack.org/#/c/613268/ > * Zane updated our guidelines for new projects to clarify the scoping > requirements: https://review.openstack.org/#/c/614799/ > * Zane also added a Vision for OpenStack Clouds document, describing > what we see as the scope of OpenStack overall: > https://review.openstack.org/#/c/592205/ > * I copied some code we had in a couple of other places into the > governance repository to make it easier for the release, goal, and > election tools to consume the governance data using the new > openstack-governance library from > PyPI. https://review.openstack.org/#/c/614599/ > * I started a document describing the responsibilities of the TC > chair. 
https://review.openstack.org/#/c/618810/ > > == TC Meetings == > > In order to fulfill our obligations under the OpenStack Foundation > bylaws, the TC needs to hold meetings at least once each quarter. We > agreed to meet monthly, and to emphasize agenda items that help us move > initiatives forward while leaving most of the discussion of those topics > to the mailing list. The agendas for all of our meetings will be sent to > the openstack-dev mailing list in advance, and links to the logs and > summary will be sent as a follow up after the meeting. > > Our most recent meeting was held on 6 Dec 2018. > > * http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000696.html > > The next meeting will be 3 December @ 1400 UTC in #openstack-tc *January? > > == Ongoing Discussions == > > We have several governance changes up for review related to deciding how > we will manage future Python 3 upgrades (including adding 3.7 and > possibly dropping 3.5 during Stein). These are close to being approved, > with some final wording adjustments being made now. > > * Explicitly declare stein supported runtimes: > https://review.openstack.org/#/c/611080/ > * Resolution on keeping up with Python 3 releases: > https://review.openstack.org/#/c/613145/ > > I proposed an update to our house rules to allow faster approval of > release management metadata maintained in the governance repository. > > * https://review.openstack.org/#/c/622989/ > > Thierry and Chris have been working on a description of the role of the > TC. > > * https://review.openstack.org/#/c/622400/ > > Ghanshyam has proposed an enhancement to the Vision for OpenStack Clouds > to cover feature discovery. > > * https://review.openstack.org/#/c/621516/ > > Lance has proposed an update to the charter to clarify how we will > handle PTL transitions in the middle of a development cycle. > > * https://review.openstack.org/#/c/620928/ > > == TC member actions/focus/discussions for the coming week(s) == > > We have several significant changes up for review. Please take time to > consider all of the open patches, then comment and vote on them. It's > very difficult to tell if we have reached consensus if everyone waits > for the conversation to settle before commenting. > > == Contacting the TC == > > The Technical Committee uses a series of weekly "office hour" time > slots for synchronous communication. We hope that by having several > such times scheduled, we will have more opportunities to engage > with members of the community from different timezones. > > Office hour times in #openstack-tc: > > - 09:00 UTC on Tuesdays > - 01:00 UTC on Wednesdays > - 15:00 UTC on Thursdays > > If you have something you would like the TC to discuss, you can add > it to our office hour conversation starter etherpad at: > https://etherpad.openstack.org/p/tc-office-hour-conversation-starters > > Many of us also run IRC bouncers which stay in #openstack-tc most > of the time, so please do not feel that you need to wait for an > office hour time to pose a question or offer a suggestion. You can > use the string "tc-members" to alert the members to your question. 
> > You will find channel logs with past conversations at > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/ > > If you expect your topic to require significant discussion or to need > input from members of the community other than the TC, please start a > mailing list discussion on openstack-discuss at lists.openstack.org and > use the subject tag "[tc]" to bring it to the attention of TC members. > From mranga at gmail.com Thu Dec 6 21:12:09 2018 From: mranga at gmail.com (M. Ranganathan) Date: Thu, 6 Dec 2018 16:12:09 -0500 Subject: [octavia] Routing between lb-mgmt-net and amphora In-Reply-To: References: Message-ID: HACK ALERT Disclaimer: My suggestion could be clumsy. On Thu, Dec 6, 2018 at 1:46 PM Paul Bourke wrote: > Hi, > > This is mostly a follow on to the thread at[0], though due to the mailing > list transition it was easier to start a new thread. > > I've been attempting to get Octavia setup according to the dev-quick-start > guide[1], but have been struggling with the following piece: > > "Add appropriate routing to / from the ‘lb-mgmt-net’ such that egress is > allowed, and the controller (to be created later) can talk to hosts on this > network." > > In mranga's reply, they say: > > > -- Create an ovs port on br-int > > -- Create a neutron port using the ovs port that you just created. > > -- Assign the ip address of the neutron port to the ovs port > > -- Use ip netns exec to assign a route in the router namespace of the > LoadBalancer network. > > I have enough of an understanding of Neutron/OVS for this to mostly make > sense, but not enough to actually put it into practice it seems. My > environment: > > 3 x control nodes > 2 x network nodes > 1 x compute > > All nodes have two interfaces, eth0 being the management network - > 192.168.5.0/24, and eth1 being used for the provider network. I then > create the Octavia lb-mgmt-net on 172.18.2.0/24. > > I've read the devstack script[2] and have the following questions: > > * Should I add the OVS port to br-int on the compute, network nodes, or > both? > I have only one controller which also functions as my network node. I added the port on the controller/network node. br-int is the place where the integration happens. You will find each network has an internal vlan tag associated with it. Use the tag assigned to your lb network when you create the ovs port. ovs-vsctl show will tell you more. * What is the purpose of creating a neutron port in this scenario > Just want to be sure Neutron knows about it and has an entry in its database so the address won't be used for something else. If you are using static addresses, for example you should not need this (I think). BTW the created port is DOWN. I am not sure why and I am not sure it matters. > If anyone is able to explain this a bit further or can even point to some > good material to flesh out the underlying concepts it would be much > appreciated, I feel the 'Neutron 101' videos I've done so far are not quite > getting me there :) > > Cheers, > -Paul > > [0] > http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000544.html > [1] > https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html > [2] https://github.com/openstack/octavia/blob/master/devstack/plugin.sh > > -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... 
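(A rough sketch of the steps described in the reply above, for readers who want something concrete to adapt. Everything in angle brackets, the port name o-hm0, the illustrative port names and the /24 prefix are placeholders rather than values from this thread; `ovs-vsctl show` and the output of the port create give the real ones, and the Octavia devstack plugin referenced as [2] does something very similar.)

    # 1) create a neutron port on lb-mgmt-net so the address and MAC are reserved
    openstack port create --network lb-mgmt-net octavia-health-mgr-listen-port
    # note the id, mac_address and fixed_ips columns of the output

    # 2) plug a matching internal port into br-int, tagged with the local VLAN
    #    that OVS assigned to lb-mgmt-net (visible in `ovs-vsctl show`)
    sudo ovs-vsctl -- --may-exist add-port br-int o-hm0 tag=<lb-mgmt-local-vlan> \
        -- set Interface o-hm0 type=internal \
        -- set Interface o-hm0 external-ids:iface-id=<neutron-port-id> \
        -- set Interface o-hm0 external-ids:attached-mac=<neutron-port-mac> \
        -- set Interface o-hm0 external-ids:iface-status=active

    # 3) give the OVS port the neutron port's IP and bring it up
    sudo ip addr add <neutron-port-ip>/24 dev o-hm0
    sudo ip link set o-hm0 up

    # 4) if traffic between the amphorae and the controller goes through a
    #    Neutron router, add the needed route inside that router's namespace
    sudo ip netns exec qrouter-<router-id> ip route add <destination-cidr> via <next-hop-ip>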
URL: From doug at doughellmann.com Thu Dec 6 21:14:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 06 Dec 2018 16:14:17 -0500 Subject: [tc] Technical Committee status update for 6 December In-Reply-To: References: Message-ID: Ben Nemec writes: > On 12/6/18 3:00 PM, Doug Hellmann wrote: >> >> >> Our most recent meeting was held on 6 Dec 2018. >> >> * http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000696.html >> >> The next meeting will be 3 December @ 1400 UTC in #openstack-tc > > *January? Hey, look, someone reads these messages! Yes, January, 2019. Thanks for spotting that. Details and ICS file at http://eavesdrop.openstack.org/#Technical_Committee_Meeting -- Doug From jp.methot at planethoster.info Thu Dec 6 21:19:24 2018 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Thu, 6 Dec 2018 16:19:24 -0500 Subject: [ops] Disk QoS in Cinder and Nova flavour Message-ID: <1D2E0C27-525D-4262-A0D5-C56E0DEAC9AE@planethoster.info> Hi, This is something that’s not exactly clear in the documentation and so I thought I’d ask here. There are currently two ways to set disk IO limits, or QoS: through Cinder QoS values or through Nova flavours. My question is, what’s the difference exactly? I assume that Cinder QoS only applies on Cinder volume, but is the same true for flavour QoS? In other words, is QoS through flavour supposed to apply to both Cinder volumes and ephemeral storage or is it only for ephemeral storage? Additionally, I'm under the assumption that once you provision a block device of a certain type with QoS associated to it, it becomes impossible to modify the values afterwards. Is that correct? This is a strange behaviour, considering that technically, you should be able to change QoS when it’s set through flavour using a simple resize. Please correct me if I’m wrong. I feel that this could be better explained in the current documentation. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Dec 6 21:19:44 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 6 Dec 2018 21:19:44 +0000 Subject: [tc] Adapting office hours schedule to demand In-Reply-To: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> References: <20cbf1ae-eb09-5193-01cd-0f14fa674a51@openstack.org> Message-ID: <20181206211943.y6w365nm746r7ula@yuggoth.org> On 2018-12-04 16:45:20 +0100 (+0100), Thierry Carrez wrote: [...] > Should we: > > - Reduce office hours to one or two per week, possibly rotating times > > - Dump the whole idea and just encourage people to ask questions at any time > on #openstack-tc, and get asynchronous answers > > - Keep it as-is, it still has the side benefit of triggering spikes of TC > member activity > > Thoughts ? I'm fine keeping the schedule as-is (it hasn't been any particular inconvenience for me personally), but if there are times we think might work better for the community then I'm all for rearranging it into a more effective lineup too. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Thu Dec 6 21:48:33 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 6 Dec 2018 13:48:33 -0800 Subject: [dev][nova] time for another spec review day? 
In-Reply-To: References: Message-ID: On Wed, 5 Dec 2018 10:38:06 -0800, Melanie Witt wrote: > Our spec freeze is milestone 2 January 10 and I was thinking, because of > holiday time coming up, it might be a good idea to have another spec > review day ahead of the freeze early next year. I was thinking maybe > Tuesday next week December 11, to allow the most amount of time before > holiday PTO starts. > > Please let me know what you think. We discussed this in the nova meeting today and the consensus was NOT to have another spec review day. We have a lot in-flight already and people are busy with other things, so based on the input from the meeting and lack of response on this email, we will pass on having another spec review day. Best, -melanie From mriedemos at gmail.com Thu Dec 6 22:12:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Dec 2018 16:12:04 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: <20181206062035.GB28275@sm-workstation> Message-ID: On 12/6/2018 9:15 AM, Lance Bragstad wrote: > Today in the TC meeting, we discussed the status of the three candidate > goals [0]. Ultimately, we as the TC, are wondering who would be willing > to drive the goal work. > > Having a champion step up early on will help us get answers to questions > about the feasibility of the goal, it's impact across OpenStack, among > other things that will help us, as a community, make an informed decision. > > Remember, championing a goal doesn't need to fall on a single > individual. With proper communication, work can be spread out to lighten > the load. > > What I'd like is to open this up to the community and see who would be > willing to drive the proposed goals. If you have any questions about > championing a goal, please don't hesitate to swing by #openstack-tc, or > you can ping me privately. Regarding the legacy/OSC one, there was a TODO from Berlin for Monty and I think dtantsur (volunteered by Monty) to do some documentation (or something) about what some of this would look like? If that isn't done, it might be good to get done first to help make an informed decision. Otherwise my recollection was the same as Doug's and that we need to be working on closing gaps in OSC regardless of what the underlying code is doing (using the SDK or e.g. python-novaclient). Once that is done we can start working on shifting the underlying stuff from legacy clients to SDK. To make that more digestible, I would also suggest that the goal could aim for some specific release of compatibility as a minimum, e.g. make sure OSC supports all compute API microversions up through Mitaka since lots of clouds are still running Mitaka*. Trying to scope this to "let's get parity up to what the server supported 2 years ago" rather than all of it and fail would be beneficial for projects with bigger gaps. *In fact, according to https://www.openstack.org/analytics Mitaka is the #1 release being used in deployments right now. -- Thanks, Matt From mriedemos at gmail.com Thu Dec 6 22:14:42 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Dec 2018 16:14:42 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: <20181206062035.GB28275@sm-workstation> Message-ID: <064cb28c-bf7c-2fe4-eec8-653ef5278979@gmail.com> On 12/6/2018 9:30 AM, Jay Bryant wrote: > With that said, I know that there is a sizable gap between what OSC has > for Cinder and what is available for > python-cinderclient.  
If we make this a goal we are doing to need good > organization and documentation of those > gaps and volunteers to help make this change happen. Get someone to start doing this: https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc It's not hard, doesn't take (too) long, and would give you an idea of what your target goal should be. That's why I keep harping on Mitaka as a goal for parity with the compute API. -- Thanks, Matt From lbragstad at gmail.com Thu Dec 6 22:49:30 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 6 Dec 2018 16:49:30 -0600 Subject: [tc][all] Train Community Goals In-Reply-To: References: <20181206062035.GB28275@sm-workstation> Message-ID: On Thu, Dec 6, 2018 at 11:29 AM Artem Goncharov wrote: > > > On Thu, Dec 6, 2018, 16:19 Lance Bragstad wrote: > >> Today in the TC meeting, we discussed the status of the three candidate >> goals [0]. Ultimately, we as the TC, are wondering who would be willing to >> drive the goal work. >> >> Having a champion step up early on will help us get answers to questions >> about the feasibility of the goal, it's impact across OpenStack, among >> other things that will help us, as a community, make an informed decision. >> >> Remember, championing a goal doesn't need to fall on a single individual. >> With proper communication, work can be spread out to lighten the load. >> >> What I'd like is to open this up to the community and see who would be >> willing to drive the proposed goals. If you have any questions about >> championing a goal, please don't hesitate to swing by #openstack-tc, or you >> can ping me privately. >> > > I was waiting for the start of switching osc "services" to SDK for quite a > while now. I am definitely interested and committed to support the real > coding work here. I would also like to volunteer driving the goal if noone > objects. > Thanks chiming in, Artem! As others in the thread have eluded to, this could very well be a multi-step effort. A big part of that is going to be figuring out where the different projects are in relation to the desired end state. Would you be open to starting some preliminary work to help collect some of that information? Matt's etherpad for the compute API gap analysis with OSC is a good example. > > >> [0] >> http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-12-06-14.00.log.html#l-104 >> >> On Thu, Dec 6, 2018 at 12:20 AM Sean McGinnis >> wrote: >> >>> > > >>> > > In other words, does #1 mean each python-clientlibrary's OSC plugin >>> is >>> > > ready to rock and roll, or we talking about everyone rewriting all >>> client >>> > > interactions in to openstacksdk, and porting existing OSC plugins >>> use that >>> > > different python sdk. >>> > >>> > We talked about those things as separate phases. IIRC, the first phase >>> > was to include ensuring that python-openstackclient has full feature >>> > coverage for non-admin operations for all microversions, using the >>> > existing python-${service}client library or SDK as is appropriate. The >>> > next phase was to ensure that the SDK has full feature coverage for all >>> > microversions. After that point we could update OSC to use the SDK and >>> > start deprecating the service-specific client libraries. >>> > >>> >>> That was my recollection as well. 
>>> >>> > > In other words, some projects could find it very easy or that they >>> are >>> > > already done, where as others could find themselves with a huge lift >>> that >>> > > is also dependent upon review bandwidth that is outside of their >>> control or >>> > > influence which puts such a goal at risk if we try and push too hard. >>> > > >>> > > -Julia >>> > > >>> >>> I do think there is still a lot of foundation work that needs to be done >>> before >>> we can make it a cycle goal to move more completely to osc. Before we get >>> there, I think we need to see more folks involved on the project to be >>> ready >>> for the increased attention. >>> >>> Right now, I would classify this goal as a "huge lift". >>> >>> Sean >>> >> > Artem > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Thu Dec 6 22:50:28 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 7 Dec 2018 09:50:28 +1100 Subject: [openstack-dev] [puppet] [stable] Deprecation of newton branches In-Reply-To: <3e084433-c965-3881-53ac-761606b5604b@binero.se> References: <2ae10ca2-8c77-ccbc-c009-a22c1d3cfd69@binero.se> <20181205050207.GA19462@thor.bakeyournoodle.com> <3e084433-c965-3881-53ac-761606b5604b@binero.se> Message-ID: <20181206225027.GA32339@thor.bakeyournoodle.com> On Thu, Dec 06, 2018 at 09:40:57AM +0100, Tobias Urdin wrote: > Hello Tony, > Yes that list is correct and complete, please go ahead. Done. I discovered I need to tweak this process which gives the PTLs more control so that's good :) Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mriedemos at gmail.com Thu Dec 6 22:55:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Dec 2018 16:55:49 -0600 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: References: Message-ID: On 12/6/2018 12:10 PM, Melvin Hillsman wrote: > After many discussions it seems SDK validation essentially should be > about confirming cloud state pre/post SDK interaction rather than API > support. An example is that when I use pkgcloud and ask that a VM be > created, does the VM exist, in the way I asked it exist, rather than are > there specific API calls that are being hit along the way to creating my VM. > > I am putting this email out to keep the community informed of what has > been discussed in this space but also and most importantly to get > feedback and support for this work. It would be great to get a set of > official and community SDKs, get them setup with CI testing for > validation (not changing their current CI for unit/functional/acceptance > testing; unless asked to help do this), and connect the results to the > updated project navigator SDK section. A list of scenarios has been > provided as a good starting point for cloud state checks.[2] > > Essentially the proposal is to deploy OpenStack from upstream (devstack > or other), stand up a VM within the cloud, grab all the SDKs, run > acceptance tests, report pass/fail results, update project navigator. Of > course there are details to be worked out and I do have a few questions > that I hope would help get everyone interested on the same page via this > thread. > > 1. Does this make sense? Makes sense to me. Validating the end result of some workflow makes sense to me. One thing I'm wondering about is what you'd start with for basic scenarios. 
It might make sense to start with some of the very basic kinds of things that openstack interoperability certification requires, like create a VM, attach a volume and port, and then delete it all. Then I guess you'd build from there? > > 2. Would folks be interested in a SDK SIG or does it make more sense to > request an item on the API SIG's agenda? No opinion. > > 3. Bi-weekly discussions a good cadence? Again no opinion but I wanted to say thanks for communicating this for others less involved in this space - it's good to know *something* is going on. > > 4. Who is interested in tackling this together? Again, no real opinion from me. :) Hopefully that other person that I just saw run away over there... -- Thanks, Matt From cboylan at sapwetik.org Thu Dec 6 23:16:01 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 06 Dec 2018 15:16:01 -0800 Subject: [infra] Update on test throughput and Zuul backlogs Message-ID: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> Hello everyone, I was asked to write another one of these in the Nova meeting today so here goes. TripleO has done a good job of reducing resource consumption and now represents about 42% of the total resource usage for the last month down from over 50% when we first started tracking this info. Generating the report requires access to Zuul's scheduler logs so I've pasted a copy at http://paste.openstack.org/show/736797/. There is a change, https://review.openstack.org/#/c/616306/, to report this data via statsd which will allow anyone to generate it off of our graphite server once deployed. Another piece of exciting (good) news is that we've changed the way the Zuul resource allocation scheme prioritizes requests. In the check pipeline a change's relative priority is based on how many changes for that project are already in check and in the gate pipeline it is relative to the number of changes in the shared gate queue. What this means is that less active projects shouldn't need to wait as long for their changes to be tested, but more active projects like tripleo-heat-templates, nova, and neutron may see other changes being tested ahead of their changes. More details on this thread, http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000482.html. One side effect of this change is that our Zuul is now running more jobs per hour than in the past (because it can more quickly churn through changes for "cheap" projects). Unfortunately, this has increased the memory demands on the zuul-executors and we've found that we are using far more swap than we'd like. We'll be applying https://review.openstack.org/#/c/623245/ to reduce the amount of memory required by each job during the course of its run which we hope helps. We've also added one new executor with plans to add a second if this change doesn't help. All that said flaky tests are still an issue. One set of problems seems related to slower than expected/before test nodes in the BHS1 region. We've been debugging these with OVH (thank you amorin!) and think we've managed to make some improvements though so far the problems persist. Current theory is that we are acting as our own noisy neighbors starving the hypervisors of disk IO throughput. In order to test that we've halved the total number of resources we'll use there. More details at https://etherpad.openstack.org/p/bhs1-test-node-slowness including a list of e-r bugs that may be tied to this issue. 
One thing to keep in mind is that while the test nodes are slower than we'd like, they have also exposed some situations where our software is less efficient than we'd like. At least one bug, https://bugs.launchpad.net/nova/+bug/1807219, has been identified through this. I would encourage people debugging these slow tests to look to see if this exposes a deficiency in our software that can be fixed. CentOS 7.6 released this last Monday. Fallout from that has included needing to update ansible playbooks that ensure the latest version of a centos distro package without setting become: yes. Previously the package was installed at the latest version on our images which ansible could verify without root privileges. Additionally golang is no longer a valid package on the base OS as it was on 7.5 (side note, this doesn't feel incredibly stable for users if anyone from rhel is listening). If your jobs depend on golang on centos and were getting that from the distro packages on 7.5 you'll need to find somewhere else to get golang now. With the distro updates come broken nested virt. Unfortunately nested virt continues to be a back and forth of working today, not working tomorrow. It seem that our test node kernels play a big impact on that then a few days later the various clouds apply new hypervisor kernel updates and things work again. If your jobs attempt to use nested virt and you've seen odd behavior from them (like reboots) recently this may be the cause. These are the big issues that affect large numbers of projects (or even all of them), but there are still many project specific problems floating around as well. Unfortunately I haven't had much time to help dig into those recently (see broader issues above), but I think it would be helpful if projects can do some of that digging themselves. Also, a friendly reminder that we try to provide in cloud region mirrors and caches for commonly used resources like distro packages, pypi packages, dockerhub images, and so on. If your jobs aren't using these and you find they fail occasionally due to the Internet being flaky we'll be happy to help you update the jobs to use the in region resources instead. We'll keep pushing to fix the broader issues and are more than happy to help debug failures you hit within your projects as well. Hopefully this was helpful despite its length. Clark From mrhillsman at gmail.com Thu Dec 6 23:39:04 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 6 Dec 2018 17:39:04 -0600 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: References: Message-ID: On Thu, Dec 6, 2018 at 4:56 PM Matt Riedemann wrote: > On 12/6/2018 12:10 PM, Melvin Hillsman wrote: > > After many discussions it seems SDK validation essentially should be > > about confirming cloud state pre/post SDK interaction rather than API > > support. An example is that when I use pkgcloud and ask that a VM be > > created, does the VM exist, in the way I asked it exist, rather than are > > there specific API calls that are being hit along the way to creating my > VM. > > > > I am putting this email out to keep the community informed of what has > > been discussed in this space but also and most importantly to get > > feedback and support for this work. 
It would be great to get a set of > > official and community SDKs, get them setup with CI testing for > > validation (not changing their current CI for unit/functional/acceptance > > testing; unless asked to help do this), and connect the results to the > > updated project navigator SDK section. A list of scenarios has been > > provided as a good starting point for cloud state checks.[2] > > > > Essentially the proposal is to deploy OpenStack from upstream (devstack > > or other), stand up a VM within the cloud, grab all the SDKs, run > > acceptance tests, report pass/fail results, update project navigator. Of > > course there are details to be worked out and I do have a few questions > > that I hope would help get everyone interested on the same page via this > > thread. > > > > 1. Does this make sense? > > Makes sense to me. Validating the end result of some workflow makes > sense to me. One thing I'm wondering about is what you'd start with for > basic scenarios. It might make sense to start with some of the very > basic kinds of things that openstack interoperability certification > requires, like create a VM, attach a volume and port, and then delete it > all. Then I guess you'd build from there? > Yes sir the scenarios you mention are exactly what we have in the spreadsheet and want to add more. I did not think about it but maybe this ethercalc is better for everyone to collaborate on - https://ethercalc.org/q4nklltf21nz > > > > > 2. Would folks be interested in a SDK SIG or does it make more sense to > > request an item on the API SIG's agenda? > > No opinion. > > > > > 3. Bi-weekly discussions a good cadence? > > Again no opinion but I wanted to say thanks for communicating this for > others less involved in this space - it's good to know *something* is > going on. > > > > > 4. Who is interested in tackling this together? > > Again, no real opinion from me. :) Hopefully that other person that I > just saw run away over there... > > -- > > Thanks, > > Matt > > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrhillsman at gmail.com Thu Dec 6 23:46:36 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 6 Dec 2018 17:46:36 -0600 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: References: Message-ID: On Thu, Dec 6, 2018 at 12:30 PM Artem Goncharov wrote: > > > On Thu, 6 Dec 2018, 19:12 Melvin Hillsman, wrote: > >> Hi everyone, >> >> We have spent some time working to get an idea of what official SDKs >> would look like. We had some sessions during the Berlin summit[0][1] and >> there was a lot of great feedback. >> >> Currently the following SDKs are generally considered usable for their >> respective language; there are others of course: >> >> openstacksdk (Python) >> gophercloud (Go) >> pkgcloud (JavaScript) >> openstack4j (Java) >> rust-openstack (Rust) >> fog-openstack (Ruby) >> php-opencloud (PHP) >> >> After many discussions it seems SDK validation essentially should be >> about confirming cloud state pre/post SDK interaction rather than API >> support. An example is that when I use pkgcloud and ask that a VM be >> created, does the VM exist, in the way I asked it exist, rather than are >> there specific API calls that are being hit along the way to creating my VM. 
>> >> I am putting this email out to keep the community informed of what has >> been discussed in this space but also and most importantly to get feedback >> and support for this work. It would be great to get a set of official and >> community SDKs, get them setup with CI testing for validation (not changing >> their current CI for unit/functional/acceptance testing; unless asked to >> help do this), and connect the results to the updated project navigator SDK >> section. A list of scenarios has been provided as a good starting point for >> cloud state checks.[2] >> >> Essentially the proposal is to deploy OpenStack from upstream (devstack >> or other), stand up a VM within the cloud, grab all the SDKs, run >> acceptance tests, report pass/fail results, update project navigator. Of >> course there are details to be worked out and I do have a few questions >> that I hope would help get everyone interested on the same page via this >> thread. >> >> >> 1. Does this make sense? >> >> > As a representative of a public cloud operator I say yes. It definitely > makes sense to let users know, which SDKs are certified to work to have > understanding what they can use. However then comes the question, that each > public operator has own "tricks" and possibly even extension services, > which make SDK behave not as expected. I do really see a need to provide a > regular way in each of those SDKs to add and (I really don't want this, but > need unfortunately) sometimes override services implementation if those are > changed in some way. > So the certification make sense, but the CI question probably goes in the > direction of OpenLab > Yes this is definitely something to consider; public cloud tricks and extensions. An idea that was floated is to have an upstream cloud to run against that passes all interop powered programs to run the SDKs against. Even though SDKs provide more coverage an SDK that works against a service that is certified should work in any cloud that is certified? > >> 1. Would folks be interested in a SDK SIG or does it make more sense >> to request an item on the API SIG's agenda? >> >> Both has pro and cons, but I am neutral here > > Same, just wanted to get it out there and hopefully get API SIG members feedback so as not to hijack any of their agendas or creep on their scope without consent. > > >> 1. Bi-weekly discussions a good cadence? >> >> Yes. The regular communication is a requirement. > > > >> 1. Who is interested in tackling this together? >> >> > I do, but not alone > Haha, agreed, but we definitely should work on it and be consistent even if the initial set of folks is small. We have a large community to lean on and should rather than simply going at it alone. > > >> >> >> [0] https://etherpad.openstack.org/p/BER-better-expose-what-we-produce >> [1] https://etherpad.openstack.org/p/BER-sdk-certification >> [2] >> https://docs.google.com/spreadsheets/d/1cdzFeV5I4Wk9FK57yqQmp5JJdGfKzEOdB3Vtt9vnVJM >> >> >> -- >> Kind regards, >> >> Melvin Hillsman >> mrhillsman at gmail.com >> mobile: (832) 264-2646 >> > > Regards, > Artem Goncharov > gtema > OpenTelekomCloud > >> -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... 
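(To make "confirming cloud state pre/post SDK interaction" a little more concrete, one possible shape for such a check is sketched below: it verifies the state an SDK call was supposed to produce using an independent tool, OSC in this sketch. The resource names and the choice of OSC are purely illustrative; nothing like this is defined yet.)

    # the SDK under test has been asked to create a server and attach a volume;
    # verify the resulting cloud state independently of that SDK
    openstack server show sdk-smoke-vm -f value -c status | grep -qx ACTIVE
    openstack server show sdk-smoke-vm -f json -c addresses    # expected network attached?
    openstack volume show sdk-smoke-vol -f value -c status | grep -qx in-use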
URL: From mriedemos at gmail.com Thu Dec 6 23:50:30 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 6 Dec 2018 17:50:30 -0600 Subject: [infra] Update on test throughput and Zuul backlogs In-Reply-To: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> References: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> Message-ID: On 12/6/2018 5:16 PM, Clark Boylan wrote: > I was asked to write another one of these in the Nova meeting today so here goes. Thanks Clark, this is really helpful. > > One thing to keep in mind is that while the test nodes are slower than we'd like, they have also exposed some situations where our software is less efficient than we'd like. At least one bug,https://bugs.launchpad.net/nova/+bug/1807219, has been identified through this. I would encourage people debugging these slow tests to look to see if this exposes a deficiency in our software that can be fixed. That was split off from this: https://bugs.launchpad.net/nova/+bug/1807044 But yeah a couple of issues Dan and I are digging into. Another thing I noticed in one of these nova-api start timeout failures in ovh-bhs1 was uwsgi seems to just stall for 26 seconds here: http://logs.openstack.org/01/619701/5/gate/tempest-slow/2bb461b/controller/logs/screen-n-api.txt.gz#_Dec_05_20_13_23_060958 I pushed a patch to enable uwsgi debug logging: https://review.openstack.org/#/c/623265/ But of course I didn't (1) get a recreate or (2) seem to see any additional debug logging from uwsgi. If someone else knows how to enable that please let me know. > > These are the big issues that affect large numbers of projects (or even all of them), but there are still many project specific problems floating around as well. Unfortunately I haven't had much time to help dig into those recently (see broader issues above), but I think it would be helpful if projects can do some of that digging themselves. Also, a friendly reminder that we try to provide in cloud region mirrors and caches for commonly used resources like distro packages, pypi packages, dockerhub images, and so on. If your jobs aren't using these and you find they fail occasionally due to the Internet being flaky we'll be happy to help you update the jobs to use the in region resources instead. I'm not sure if this query is valid anymore: http://status.openstack.org/elastic-recheck/#1783405 If it is, then we still have some tempest tests that aren't marked as slow but are contributing to job timeouts outside the tempest-slow job. I know the last time this came up, the QA team had a report of the slowest non-slow tests - can we get another one of those now? Another thing is, are there particular voting jobs that have a failure rate over 50% and are resetting the gate? If we do, we should consider making them non-voting while project teams work on fixing the issues. Because I've had approved patches for days now taking 13+ hours just to fail, which is pretty unsustainable. > > We'll keep pushing to fix the broader issues and are more than happy to help debug failures you hit within your projects as well. > > Hopefully this was helpful despite its length. Again, thank you Clark for taking the time to write up this summary - it's extremely useful. 
-- Thanks, Matt From melwittt at gmail.com Fri Dec 7 00:01:28 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 6 Dec 2018 16:01:28 -0800 Subject: [infra] Update on test throughput and Zuul backlogs In-Reply-To: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> References: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> Message-ID: On Thu, 06 Dec 2018 15:16:01 -0800, Clark Boylan wrote: [snip] > One thing to keep in mind is that while the test nodes are slower than we'd like, they have also exposed some situations where our software is less efficient than we'd like. At least one bug, https://bugs.launchpad.net/nova/+bug/1807219, has been identified through this. I would encourage people debugging these slow tests to look to see if this exposes a deficiency in our software that can be fixed. [snip] > These are the big issues that affect large numbers of projects (or even all of them), but there are still many project specific problems floating around as well. Unfortunately I haven't had much time to help dig into those recently (see broader issues above), but I think it would be helpful if projects can do some of that digging themselves. [snip] FYI for interested people, we are working on some nova-specific problems in the following patches/series: https://review.openstack.org/623282 https://review.openstack.org/623246 https://review.openstack.org/623265 > We'll keep pushing to fix the broader issues and are more than happy to help debug failures you hit within your projects as well. Thanks for the excellent write-up. It's a nice window into what's going on in the gate, the work the infra team is doing, and letting us know how we can help. Best, -melanie From melwittt at gmail.com Fri Dec 7 00:04:44 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 6 Dec 2018 16:04:44 -0800 Subject: [infra] Update on test throughput and Zuul backlogs In-Reply-To: References: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> Message-ID: On Thu, 6 Dec 2018 16:01:28 -0800, Melanie Witt wrote: > On Thu, 06 Dec 2018 15:16:01 -0800, Clark Boylan wrote: > > [snip] > >> One thing to keep in mind is that while the test nodes are slower than we'd like, they have also exposed some situations where our software is less efficient than we'd like. At least one bug, https://bugs.launchpad.net/nova/+bug/1807219, has been identified through this. I would encourage people debugging these slow tests to look to see if this exposes a deficiency in our software that can be fixed. > > [snip] > >> These are the big issues that affect large numbers of projects (or even all of them), but there are still many project specific problems floating around as well. Unfortunately I haven't had much time to help dig into those recently (see broader issues above), but I think it would be helpful if projects can do some of that digging themselves. > > [snip] > > FYI for interested people, we are working on some nova-specific problems > in the following patches/series: > > https://review.openstack.org/623282 > https://review.openstack.org/623246 > https://review.openstack.org/623265 > >> We'll keep pushing to fix the broader issues and are more than happy to help debug failures you hit within your projects as well. > > Thanks for the excellent write-up. It's a nice window into what's going > on in the gate, the work the infra team is doing, and letting us know > how we can help. Bah, didn't see Matt's reply by the time I hit send. 
Apologies for the [less detailed] replication. -melanie From mrhillsman at gmail.com Fri Dec 7 00:09:47 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 6 Dec 2018 18:09:47 -0600 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: References: Message-ID: On Thu, Dec 6, 2018 at 12:36 PM Michael McCune wrote: > thanks for bringing this up Melvin. > > On Thu, Dec 6, 2018 at 1:13 PM Melvin Hillsman > wrote: > > Essentially the proposal is to deploy OpenStack from upstream (devstack > or other), stand up a VM within the cloud, grab all the SDKs, run > acceptance tests, report pass/fail results, update project navigator. Of > course there are details to be worked out and I do have a few questions > that I hope would help get everyone interested on the same page via this > thread. > > > > Does this make sense? > > > > makes sense to me, and sounds like a good idea provided we have the > people ready to maintain the testing infra and patches for this (which > i assume we do). > I think we have a good base to build on, should get started, keep everyone informed, and we can continue to work to recruit more interested folks. I think if we work especially hard to streamline the entire process so it is as automated as possible. > > > Would folks be interested in a SDK SIG or does it make more sense to > request an item on the API SIG's agenda? > > > > i don't have a strong opinion either way, but i will point out that > the API-SIG has migrated to office hours instead of a weekly meeting. > if you expect that the proposed SDK work will have a strong cadence > then it might make more practical sense to create a new SIG, or really > even a working group until the objective of the testing is reached. > > the only reason i bring up working groups here is that there seems > like a clearly stated goal for the initial part of this work. namely > creating the testing and validation infrastructure described. it might > make sense to form a working group until the initial work is complete > and then move continued discussion under the API-SIG for organization. > Working group actually makes a lot more sense, thanks for the suggestion, I agree with you; anyone else? > > > Bi-weekly discussions a good cadence? > > > > that sounds reasonable for a start, but i don't have a strong opinion here. > > > Who is interested in tackling this together? > > > > if you need any help from API-SIG, please reach out. i would be > willing to help with administrative/governance type stuff. > > peace o/ > -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From shiriul.lol at gmail.com Fri Dec 7 01:21:45 2018 From: shiriul.lol at gmail.com (SIRI KIM) Date: Fri, 7 Dec 2018 10:21:45 +0900 Subject: [loci][openstack-helm] How to add some agent to loci images In-Reply-To: <2ED125CF-8AA3-4D81-8C0F-FDF8ED1EF2F0@openstack.org> References: <2ED125CF-8AA3-4D81-8C0F-FDF8ED1EF2F0@openstack.org> Message-ID: I have added [openstack-helm] since this might be cross-project issue. our objetive: add neutron-lbaas & neutron-fwaas chart to openstack-helm upstream. problem: we need this loci image to push lbaas and fwaas chart into upstream repo. In order to pass osh gating, we need to have neutron-lbaas & neutron-fwaas agent image available for openstack-helm project. 
My question was not about how to build a loci image locally, but what would be the best way to make these two images (lbaas and fwaas) available to the openstack-helm project. Please kindly guide me here. :) thanks PS. This link adds the neutron-lbaas chart to openstack-helm: https://review.openstack.org/#/c/609299/ On Fri, Dec 7, 2018 at 4:54 AM, Chris Hoge wrote: > Except in a few instances, Loci aims to generically build OpenStack > service images and has pretty robust scripts that will allow you > to build the containers without modification of the project. In the > instance of neutron-fwaas, you can easily build the service following the > instructions in README.md[1] in the Loci repository. Just set > `PROJECT=neutron-fwaas` and tag appripriately. The major caveat is the > build for that project is not gate tested (although I've managed to > complete a build of it in my own environment). > > You could do the same thing similarly for neutron-lbaas, but please be > aware that for a number of reasons neutron-lbaas project has been > deprecated[2], and you should instead prefer to use Octavia as the > official replacement for it. > > Thanks, > Chris > > [1] http://git.openstack.org/cgit/openstack/loci/tree/README.md > [2] https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation > > > On Dec 5, 2018, at 5:30 PM, SIRI KIM wrote: > > > > Hello, Jean. > > > > I tried to add lbaas-agent and fwaas-agent to official loci neutron > image. > > To pass openstack-helm gate test, I need lbaas-agent and fwaas-agent. > > > > I found your openstack source repository is > > repository: 172.17.0.1:5000/loci/requirements > > > > Please let me know I can I add lbaas-agent and fwaas-agent to official > loci neutron image. > > > > Thanks, > > Siri > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Fri Dec 7 03:16:55 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 7 Dec 2018 14:16:55 +1100 Subject: [tc] Technical Committee status update for 6 December In-Reply-To: References: Message-ID: <20181207031655.GB32339@thor.bakeyournoodle.com> On Thu, Dec 06, 2018 at 04:00:19PM -0500, Doug Hellmann wrote: > > This is the (alegedly) weekly summary of work being done by the > Technical Committee members. The full list of active items is managed in > the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker > > We also track TC objectives for the cycle using StoryBoard at: > https://storyboard.openstack.org/#!/project/923 > > == Recent Activity == > > It has been 4 weeks since the last update email, in part due to the > Summit, then a holiday, then my absense due to illness. > > Project updates: > > * Add tenks under Ironic: https://review.openstack.org/#/c/600411/ > * Add os_placement role to OpenStack Ansible: https://review.openstack.org/#/c/615187/ > * Retired openstack-ansible-os_monasca-ui https://review.openstack.org/#/c/617322/ > * Chris Hoge was made the new Loci team PTL https://review.openstack.org/#/c/620370/ > > Other updates: > > * Thierry has been working on a series of changes on behalf of the > release management team to record the release management style used > for each deliverable listed in the governance > repository.
https://review.openstack.org/#/c/613268/ > * Zane updated our guidelines for new projects to clarify the scoping > requirements: https://review.openstack.org/#/c/614799/ > * Zane also added a Vision for OpenStack Clouds document, describing > what we see as the scope of OpenStack overall: > https://review.openstack.org/#/c/592205/ > * I copied some code we had in a couple of other places into the > governance repository to make it easier for the release, goal, and > election tools to consume the governance data using the new > openstack-governance library from > PyPI. https://review.openstack.org/#/c/614599/ I took a quick stab at this, and I think we'll need a feature in that library to be able to use it in the election repo. So I'll start with that. Patches incoming Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dangtrinhnt at gmail.com Fri Dec 7 07:30:40 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Fri, 7 Dec 2018 16:30:40 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> Message-ID: Hi Clark, Outputting the exec logs does help. And, I found the problem for the functional tests to fail, it because ES cannot be started because of the way the functional sets up the new version of ES server (5.x). I'm working on updating it. Many thanks, On Tue, Dec 4, 2018 at 1:39 AM Clark Boylan wrote: > On Mon, Dec 3, 2018, at 7:28 AM, Trinh Nguyen wrote: > > Hello, > > > > Currently, [1] fails tox py27 tests on Zuul check for just updating the > log > > text. The tests are successful at local dev env. Just wondering there is > > any new change at Zuul CI? > > > > [1] https://review.openstack.org/#/c/619162/ > > > > Reading the exceptions [2] and the test setup code [3] it appears that > elasticsearch isn't responding on its http port and is thus treated as > having not started. With the info we currently have it is hard to say why. > Instead of redirecting exec logs to /dev/null [4] maybe we can capture that > data? Also probably worth grabbing the elasticsearch daemon log as well. > > Without that information it is hard to say why this happened. I am not > aware of any changes in the CI system that would cause this, but we do > rebuild our test node images daily. > > [2] > http://logs.openstack.org/62/619162/5/check/openstack-tox-py27/9ce318d/job-output.txt.gz#_2018-11-27_05_32_48_854289 > [3] > https://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/tests/functional/__init__.py#n868 > [4] > https://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/tests/functional/__init__.py#n851 > > Clark > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... 
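(Following on from the elasticsearch exchange above: a quick way to see why elasticsearch 5.x never answers on its HTTP port is to run the same binary the functional tests launch while keeping its output instead of discarding it. The paths, version and settings below are illustrative, not taken from the searchlight tree.)

    # start the ES binary the tests would use, keeping stdout/stderr this time
    /opt/elasticsearch-5.6.12/bin/elasticsearch \
        -Epath.data=/tmp/es-data -Epath.logs=/tmp/es-logs \
        > /tmp/es-stdout.log 2>&1 &

    # poll the HTTP port; if it never answers, the reason is usually in the
    # output (ES 5.x bootstrap checks: vm.max_map_count, file descriptor
    # limits, refusing to run as root, unrecognised settings, ...)
    ok=0
    for i in $(seq 1 30); do
        curl -sf http://127.0.0.1:9200/ >/dev/null && ok=1 && break
        sleep 1
    done
    [ "$ok" = 1 ] || tail -n 100 /tmp/es-stdout.log /tmp/es-logs/*.log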
URL: From tdecacqu at redhat.com Fri Dec 7 08:42:42 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 07 Dec 2018 08:42:42 +0000 Subject: [infra] Update on test throughput and Zuul backlogs In-Reply-To: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> References: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> Message-ID: <1544171839.tqkuyzpbd8.tristanC@fedora> On December 6, 2018 11:16 pm, Clark Boylan wrote: > Additionally golang is no longer a valid package on the base OS as it was on 7.5. > According to the release note, golang is now shipped as part of the SCL. See this how-to for the install instructions: http://www.karan.org/blog/2018/12/06/using-go-toolset-on-centos-linux-7-x86_64/ Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From balazs.gibizer at ericsson.com Fri Dec 7 08:48:44 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Fri, 7 Dec 2018 08:48:44 +0000 Subject: [nova] What to do with the legacy notification interface? In-Reply-To: <1466d5db-316f-34a3-9f08-a5931ab6ce86@gmail.com> References: <1544114506.15837.0@smtp.office365.com> <1466d5db-316f-34a3-9f08-a5931ab6ce86@gmail.com> Message-ID: <1544172519.19737.0@smtp.office365.com> On Thu, Dec 6, 2018 at 6:35 PM, Matt Riedemann wrote: > On 12/6/2018 10:41 AM, Balázs Gibizer wrote: >> Hi, >> >> Last decision from the Denver PTG [1] was to >> * finish the remaining transformations [2] >> * change the default value of notification_format conf option from >> 'both' to 'versioned' >> * communicate that the legacy notification interface has been >> deprecated, unsupported but the code will not be removed >> >> Since the PTG the last two transformation patches [2] has been >> approved >> and the deprecation patch has been prepared [3]. However recently we >> got a bug report [4] where the performance impact of emiting both >> legacy and versioned notification caused unnecessary load on the >> message bus. (Btw this performance impact was raised in the original >> spec [5]). The original reason we emited both type of notifications >> was >> to keep backward compatibility with consumers only understanding >> legacy >> notifications. It worked so well that most of the consumers still >> depend on the legacy notifications (detailed status is in [1]). >> >> I see three options to move forward: >> >> A) Do nothing. By default, emit both type of notifications. This >> means >> that deployers seeing the performance impact of [4] needs to >> reconfigure nova. Also this is agains our decision to move away from >> supporting legacy notifications. >> >> B) Follow the plan from the PTG and change the default value of the >> config to emit only versioned notifications. This solves the >> immediate >> effect of the bug [4]. This is following our original plan. BUT this >> is >> backward incompatible change which in the current situation means >> most >> of the deployments (e.g. those, deploying notification consumer >> modules >> like ceilometer) need to change this config to 'unversioned' or >> 'both'. >> >> There was discussion about helping out notification consumers to >> convert their code using the new versioned interface but I have no >> spare time at the moment and I failed to drum up resources internally >> for this work. Also I don't see others volunteering for such work. 
>> >> C) Change the default to 'legacy'. This solves the bug [4]. BUT >> sends a >> really mixed message. As our original goal was to stop supporting the >> legacy notification interface of nova. >> Matt stated on IRC recently, there is no real bug inflow for the >> legacy >> code. Also looking at changes in the code I see that last cycle we >> had >> couple of blueprints extending the new versioned notifications with >> extra fields but that inflow stopped in this cycle too. So we most >> probably can keep the legacy code in place and supported forever >> without too much pain and too much divergence from the versioned >> interface. >> >> -- >> >> Personally, I spent enough time implementing and reviewing the >> versioned interface that I'm biased towards option B). But I do >> understand that our original goals setting in 2015 was made in such >> environment that changed significantly in the past cycles. So I can >> accept option C) as well if we can agree. >> >> Cheers, >> gibi >> >> [1]https://etherpad.openstack.org/p/nova-ptg-stein L765 >> [2] >> https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-stein+status:open >> [3]https://review.openstack.org/#/c/603079/ >> [4]https://bugs.launchpad.net/nova/+bug/1805659 >> [5] >> https://review.openstack.org/#/c/224755/11/specs/mitaka/approved/versioned-notification-api.rst at 687 > > In skimming the spec again: > > https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/versioned-notification-transformation-newton.html > > I get that legacy unversioned notifications in nova were kind of a > mess and changing them without a version was icky even though they > are internal to a deployment. Moving to using versioned notifications > with well-defined objects, docs samples, all of that is very nice for > nova *developers*, but what I've been struggling with is what benefit > do the versioned notifications bring to nova *consumers*, besides we > say you can only add new notifications if they are versioned. Funnily, orignially I only wanted to add one new (legacy) notification (service.update) to nova back in 2015 and that requirement discussion led to the birth of the versioned interface. But besides that, I think, discovering what notifications exists and what data is provided in any given notifications are someting that is lot easier now with the versioned interface for (new) consumers that don't want to read nova code. They just open the docs and the samples. Sure this is not a good reason for a consumer that did the legwork already and built something top of the legacy interface. Are there new notification consumers apparing around nova? I don't think so. I guess we were naive enough three years ago to think that there will be new notification consumers to justify this work. > > In other words, what motivation does a project that is currently > consuming unversioned legacy notifications have to put the work in to > switch to versioned notifications? The spec doesn't really get into > that in much detail. > > I do remember an older thread with gordc where he was asking about > schema payloads and such so a consumer could say implement processing > a notification at version 1.0, but then if it gets a version 1.4 > payload it could figure out how to backlevel the payload to the 1.0 > version it understands - sort of like what we've talked about doing > with os-vif between nova and neutron (again, for years). But that > thread kind of died out from what I remember. 
But that seems like a > nice benefit for upgrades, but I also figure it's probably very low > priority on everyone's wishlist. I agree. I don't see any benefit for well establised legacy notification consumer besides the discoverability of change in the payload schema. But that schema also does not change too much. Cheers, gibi > > Another way I've thought about the deprecation lately is that if we > (nova devs) did the work to migrate ceilometer, then we'd have one > major consumer and we could then deprecate the legacy notifications > and change the default to 'versioned'. But like you said, no one is > coming out of the woodwork to do that work. > > Anyway, I'm just having a hard time with the fact so much work has > been put into this over the last few years that now everything is > transformed but no one is using it, and I don't see many compelling > reasons why a project would migrate, especially since the projects > that are consuming nova's notifications are mostly working with a > skeleton crew of maintainers. > > I'd really like some input from other project developers and > operators here. > > -- > > Thanks, > > Matt > From dtantsur at redhat.com Fri Dec 7 12:10:39 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 7 Dec 2018 13:10:39 +0100 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: References: Message-ID: On 12/7/18 1:09 AM, Melvin Hillsman wrote: > > > On Thu, Dec 6, 2018 at 12:36 PM Michael McCune > wrote: > > thanks for bringing this up Melvin. > > On Thu, Dec 6, 2018 at 1:13 PM Melvin Hillsman > wrote: > > Essentially the proposal is to deploy OpenStack from upstream (devstack > or other), stand up a VM within the cloud, grab all the SDKs, run acceptance > tests, report pass/fail results, update project navigator. Of course there > are details to be worked out and I do have a few questions that I hope would > help get everyone interested on the same page via this thread. > > > > Does this make sense? > > > > makes sense to me, and sounds like a good idea provided we have the > people ready to maintain the testing infra and patches for this (which > i assume we do). > > > I think we have a good base to build on, should get started, keep everyone > informed, and we can continue to work to recruit more interested folks. I think > if we work especially hard to streamline the entire process so it is as > automated as possible. > > > > Would folks be interested in a SDK SIG or does it make more sense to > request an item on the API SIG's agenda? > > > > i don't have a strong opinion either way, but i will point out that > the API-SIG has migrated to office hours instead of a weekly meeting. > if you expect that the proposed SDK work will have a strong cadence > then it might make more practical sense to create a new SIG, or really > even a working group until the objective of the testing is reached. > > the only reason i bring up working groups here is that there seems > like a clearly stated goal for the initial part of this work. namely > creating the testing and validation infrastructure described. it might > make sense to form a working group until the initial work is complete > and then move continued discussion under the API-SIG for organization. > > > Working group actually makes a lot more sense, thanks for the suggestion, I > agree with you; anyone else? I disagree :) I think back around Dublin we agreed that SDKs are in scope of API SIG, since they help people consume our API. 
And given that the API SIG is barely alive, it may be give it a chance to become active again. > > > > Bi-weekly discussions a good cadence? > > > > that sounds reasonable for a start, but i don't have a strong opinion here. > > > Who is interested in tackling this together? > > > > if you need any help from API-SIG, please reach out. i would be > willing to help with administrative/governance type stuff. As an author of a young SDK, I'm certainly in, be it within or outside of the API SIG. Dmitry > > > peace o/ > > > > -- > Kind regards, > > Melvin Hillsman > mrhillsman at gmail.com > mobile: (832) 264-2646 From cdent+os at anticdent.org Fri Dec 7 12:58:46 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 7 Dec 2018 12:58:46 +0000 (GMT) Subject: [placement] update 18-49 Message-ID: HTML: https://anticdent.org/placement-update-18-49.html This will be the last placement update of the year. I'll be travelling next Friday and after that we'll be deep in the December lull. I'll catch us up next on January 4th. # Most Important As last week, progress continues on the work in ansible, puppet/tripleo, kolla, loci to package placement up and establish upgrade processes. All of these things need review (see below). Work on GPU reshaping in virt drivers is getting close. # What's Changed * The perfload jobs which used to run in the nova-next job now has its [own job](https://review.openstack.org/#/c/619248/), running on each change. This may be of general interest because it runs placement "live" but without devstack. Results in job runs that are less than 4 minutes. * We've decided to go ahead with the simple os-resource-classes idea, so a repo is [being created](https://review.openstack.org/#/c/621666/). # Slow Reviews (Reviews which need additional attention because they have unresolved questions.) * Set root_provider_id in the database This has some indecision because it does a data migration within schema migrations. For this particular case this is safe and quick, but there's concern that it softens a potentially useful boundary between schema and data migrations. # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 17. -2. * [In progress placement bugs](https://goo.gl/vzGGDQ) 13. -1 ## Interesting Bugs (Bugs that are sneaky and interesting and need someone to pick them up.) * placement/objects/resource_provider.py missing test coverage for several methods This is likely the result of the extraction. Tests in nova's test_servers and friends probably covered some of this stuff, but now we need placement-specific tests. * maximum recursion possible while setting aggregates in placement This can only happen under very heavy load with a very low number of placement processes, but the code that fails should probably change anyway: it's a potentially infinite loop with no safety breakout. # Specs Spec freeze is milestone 2, the week of January 7th. There was going to be a spec review sprint next week but it was agreed that people are already sufficiently busy. This will certainly mean that some of these specs do not get accepted for this cycle. None of the specs listed last week have merged. 
* Account for host agg allocation ratio in placement (Still in rocky/) * Add subtree filter for GET /resource_providers * Resource provider - request group mapping in allocation candidate * VMware: place instances on resource pool (still in rocky/) * Standardize CPU resource tracking * Allow overcommit of dedicated CPU (Has an alternative which changes allocations to a float) * Modelling passthrough devices for report to placement * Nova Cyborg interaction specification. * supporting virtual NVDIMM devices * Spec: Support filtering by forbidden aggregate * Proposes NUMA topology with RPs * Count quota based on resource class * Adds spec for instance live resize * Provider config YAML file * Propose counting quota usage from placement and API database * Resource modeling in cyborg. * Support filtering of allocation_candidates by forbidden aggregates # Main Themes ## Making Nested Useful Progress continues on gpu-reshaping for libvirt and xen: * Also making use of nested is bandwidth-resource-provider: * Eric's in the process of doing lots of cleanups to how often the ProviderTree in the resource tracker is checked against placement, and a variety of other "let's make this more right" changes in the same neighborhood: * Stack at: ## Extraction The [extraction etherpad](https://etherpad.openstack.org/p/placement-extract-stein-4) is starting to contain more strikethrough text than not. Progress is being made. The main tasks are the reshaper work mentioned above and the work to get deployment tools operating with an extracted placement: * [TripleO](https://review.openstack.org/#/q/topic:tripleo-placement-extraction) * [OpenStack Ansible](https://review.openstack.org/#/q/project:openstack/openstack-ansible-os_placement) * [Kolla](https://review.openstack.org/#/c/613589/) * [Kolla Ansible](https://review.openstack.org/#/c/613629/) * [Loci](https://review.openstack.org/#/c/617273/) Documentation tuneups: * Release-notes: This is blocked until we refactor the release notes to reflect _now_ better. * The main remaining task here is participating in [openstack-manuals](https://docs.openstack.org/doc-contrib-guide/doc-index.html). The functional tests in nova that use [extracted placement](https://review.openstack.org/#/c/617941/) are working but not yet merged. A child of that patch [removes the placement code](https://review.openstack.org/#/c/618215/). Further work will be required to tune up the various pieces of documentation in nova that reference placement. # Other There are currently only 8 [open changes](https://review.openstack.org/#/q/project:openstack/placement+status:open) in placement itself. Most of the time critical work is happening elsewhere (notably the deployment tool changes listed above). Of those placement changes the [database-related](https://review.openstack.org/#/q/owner:nakamura.tetsuro%2540lab.ntt.co.jp+status:open+project:openstack/placement) ones from Tetsuro are the most important. Outside of placement: * Improve handling of default allocation ratios * Neutron minimum bandwidth implementation * Add OWNERSHIP $SERVICE traits * zun: Use placement for unified resource management * Blazar using the placement-api * Tenks doing some node management, with a bit of optional placement. * Sync placement database to the current version (in grenade) * WIP: add Placement aggregates tests (in tempest) # End In case it hasn't been clear: things being listed here is an explicit invitation (even plea) for _you_ to help out by reviewing or fixing. Thank you. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From thierry at openstack.org Fri Dec 7 13:35:56 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 7 Dec 2018 14:35:56 +0100 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: References: Message-ID: <380a044d-8185-a312-2f7e-40bd663cf49c@openstack.org> Melvin Hillsman wrote: > We have spent some time working to get an idea of what official SDKs > would look like. We had some sessions during the Berlin summit[0][1] and > there was a lot of great feedback. > > Currently the following SDKs are generally considered usable for their > respective language; there are others of course: > > openstacksdk (Python) > gophercloud (Go) > pkgcloud (JavaScript) > openstack4j (Java) > rust-openstack (Rust) > fog-openstack (Ruby) > php-opencloud (PHP) As a sidenote, we also want to show those SDKs on openstack.org/software alongside openstacksdk, so having validation that they actually do what they are supposed to do is really critical. > After many discussions it seems SDK validation essentially should be > about confirming cloud state pre/post SDK interaction rather than API > support. An example is that when I use pkgcloud and ask that a VM be > created, does the VM exist, in the way I asked it exist, rather than are > there specific API calls that are being hit along the way to creating my VM. > > I am putting this email out to keep the community informed of what has > been discussed in this space but also and most importantly to get > feedback and support for this work. It would be great to get a set of > official and community SDKs, get them setup with CI testing for > validation (not changing their current CI for unit/functional/acceptance > testing; unless asked to help do this), and connect the results to the > updated project navigator SDK section. A list of scenarios has been > provided as a good starting point for cloud state checks.[2] > > Essentially the proposal is to deploy OpenStack from upstream (devstack > or other), stand up a VM within the cloud, grab all the SDKs, run > acceptance tests, report pass/fail results, update project navigator. Of > course there are details to be worked out and I do have a few questions > that I hope would help get everyone interested on the same page via this > thread. > > 1. Does this make sense? Yes. > 2. Would folks be interested in a SDK SIG or does it make more sense to > request an item on the API SIG's agenda? As others mentioned, there is lots of overlap between API SIG and SDK work, but that would be a stretch of the API SIG mission, which they might or might not be interested in doing. If they'd rather have the groups separated, I still think the SIG group format applies better than a "workgroup" as caring about SDKs and their quality will be an ongoing effort, it's not just about setting up tests and then forgetting about them. So I'd encourage the formation of a SDK SIG if that work can't be included in the existing API SIG. > 3. Bi-weekly discussions a good cadence? No strong opinion, but bi-weekly sounds good. > 4. Who is interested in tackling this together? As mentioned above, I'll be involved in the promotion of the result of that effort, by making sure the openstack.org/software pages evolve a way to list "verified" SDKs alongside the one that is produced locally. 
-- Thierry Carrez (ttx) From msm at redhat.com Fri Dec 7 14:52:49 2018 From: msm at redhat.com (Michael McCune) Date: Fri, 7 Dec 2018 09:52:49 -0500 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: <380a044d-8185-a312-2f7e-40bd663cf49c@openstack.org> References: <380a044d-8185-a312-2f7e-40bd663cf49c@openstack.org> Message-ID: On Fri, Dec 7, 2018 at 8:40 AM Thierry Carrez wrote: > > Melvin Hillsman wrote: > > 2. Would folks be interested in a SDK SIG or does it make more sense to > > request an item on the API SIG's agenda? > > As others mentioned, there is lots of overlap between API SIG and SDK > work, but that would be a stretch of the API SIG mission, which they > might or might not be interested in doing. > i don't necessarily have an objection to modifying the API SIG mission to be more inclusive of the SDK work. i think this is something we had actually hoped for after discussions a few cycles ago about how we could be welcoming to the operators and users with SDK specific issues. i think for this expansion/inclusion to be optimally successful we will need to consider re-starting a regular meeting and also getting some new blood on the API SIG core team. we are currently at 3 cores (Ed Leafe, Dmitry Tantsur, and myself) and the reasoning for our meetings evolving into office hours was that it was just the three of us (and Chris Dent usually) coming to those meetings. ideally with new work being done around the SDKs these meetings would become more active again. peace o/ From paul.bourke at oracle.com Fri Dec 7 14:56:28 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Fri, 7 Dec 2018 14:56:28 +0000 Subject: [octavia] Routing between lb-mgmt-net and amphora In-Reply-To: References: Message-ID: <6a861285-b25b-87d6-15fe-a48cdf0f08da@oracle.com> Ok, so in my case, I've booted a test cirros VM on lb-mgmt-net, it gets assigned an IP of 172.18.2.126. The goal is to be able to ping this IP directly from the control plane. I create a port in neutron: http://paste.openstack.org/show/736821/ I log onto the network node that's hosting the dhcp namespace for this node: http://paste.openstack.org/show/736823/ I then run the following command lifted from devstack, with my port info subbed in: # ovs-vsctl -- --may-exist add-port ${OVS_BRIDGE:-br-int} o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID -- set Interface o-hm0 external-ids:skip_cleanup=true Here's the output of 'ovs-vsctl show' at this point: http://paste.openstack.org/show/736826/ Note that the tap device for the VM (tap6440048f-d2) has tag 3. However, if I try to add 'tag=3' to the above add-port command it just assigns the port the dead tag 4095. So at the point I have a new interface created, o-hm0, with a status of DOWN. It's on br-int, but I can't ping the instance at 172.18.2.126. I also assume I need to add a static route of some form on the node, though no attempts so far have resulted in being able to ping. Would be very grateful if you could revise these commands and let know if they deviate from what you're doing. -Paul On 06/12/2018 21:12, M. Ranganathan wrote: > HACK ALERT Disclaimer: My suggestion could be clumsy. > > On Thu, Dec 6, 2018 at 1:46 PM Paul Bourke > wrote: > > Hi, > > This is mostly a follow on to the thread at[0], though due to the > mailing list transition it was easier to start a new thread. 
> > I've been attempting to get Octavia setup according to the > dev-quick-start guide[1], but have been struggling with the > following piece: > > "Add appropriate routing to / from the ‘lb-mgmt-net’ such that > egress is allowed, and the controller (to be created later) can talk > to hosts on this network." > > In mranga's reply, they say: > > > -- Create an ovs port  on br-int > > -- Create a neutron port using the ovs port that you just created. > > -- Assign the ip address of the neutron port to the ovs port > > -- Use ip netns exec to assign a route in the router namespace of > the LoadBalancer network. > > I have enough of an understanding of Neutron/OVS for this to mostly > make sense, but not enough to actually put it into practice it > seems. My environment: > > 3 x control nodes > 2 x network nodes > 1 x compute > > All nodes have two interfaces, eth0 being the management network - > 192.168.5.0/24 , and eth1 being used for the > provider network. I then create the Octavia lb-mgmt-net on > 172.18.2.0/24 . > > I've read the devstack script[2] and have the following questions: > > * Should I add the OVS port to br-int on the compute, network nodes, > or both? > > > I have only one controller which also functions as my network node. I > added the port on the controller/network  node. br-int is the place > where the integration happens. You will find each network has an > internal vlan tag associated with it. Use the tag assigned to your lb > network when you create the ovs port. > > ovs-vsctl show will tell you more. > > > * What is the purpose of creating a neutron port in this scenario > > > Just want to be sure Neutron knows about it and has an entry in its > database so the address won't be used for something else. If you are > using static addresses, for example you should not need this (I think). > > BTW the created port is DOWN. I am not sure why and I am not sure it > matters. > > > If anyone is able to explain this a bit further or can even point to > some good material to flesh out the underlying concepts it would be > much appreciated, I feel the 'Neutron 101' videos I've done so far > are not quite getting me there :) > > Cheers, > -Paul > > [0] > http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000544.html > [1] > https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html > [2] https://github.com/openstack/octavia/blob/master/devstack/plugin.sh > > > > -- > M. Ranganathan > From juliaashleykreger at gmail.com Fri Dec 7 16:02:08 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 7 Dec 2018 08:02:08 -0800 Subject: [ironic] Time to discuss clean/deploy steps In-Reply-To: References: Message-ID: Following up from our discussion: Present: dtantsur rloo mgoddard jaypipes TheJulia stendukler iurygregoy Consensuses: * The "holding" state seems generally useful to operators and we should implement that, likely as an RFE since the needful actions and steps to be reached with a running ramdisk are fairly clear. * An API may make sense for some users, but the caveats and issues are fairly expansive. Largely because for good usability, we would need to cache, and that cache may no longer be valid or not valid for even the next run. * We need to revise our documentation to be a little more clear regarding cleaning steps so it is easier to find and understand, since we already document our default steps. Vendor hardware manager modules should do the same, provide documentation on the steps, and caveats to use. 
** The same essentially applies to deploy steps, we will need to ensure that is properly documented to make everyone's lives easier. * We agreed that the team likely does not need to implement this API at this time. This is largely due to all of these possible caveats and exception handling that would be needed to provide a full picture of the available clean/deploy step universe for third party hardware managers. * We agreed it was useful to finally talk via a higher bandwidth medium since we've not been able to reach consensus on this functionality via irc or review. Action Items: * TheJulia to look at the documentation and revise it over the holidays to try and be a little more clear and concise about cleaning and steps. * TheJulia, or whomever beats her to it, to update the spec to basically represent the above consensuses and change the target folder to backlog instead of approved and not-implemented. Spec will be free for anyone who wishes to implement the feature, however the team On Tue, Dec 4, 2018 at 1:37 PM Julia Kreger wrote: > All, > > I've looked at the doodle poll results and it looks like the best > available time is 3:00 PM UTC on Friday December 7th. > > I suggest we use bluejeans[2] as that has worked fairly well for us thus > far. The specification documented related to the discussion can be found in > review[3]. > > Thanks, > > -Julia > > [1] https://doodle.com/poll/yan4wyvztf7mpq46 > [2] https://bluejeans.com/u/jkreger/ > [3] https://review.openstack.org/#/c/606199/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Dec 7 16:07:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Dec 2018 10:07:49 -0600 Subject: [nova] Moving nova-cells-v1 job to the experimental queue Message-ID: <2f4fad9f-38de-f966-8ef6-5184acf0ac7d@gmail.com> I have changes up which move the nova-cells-v1 CI job to the experimental queue: https://review.openstack.org/#/q/topic:bug/1807407 We continue to see intermittent failures in that job and given the deprecated nature of cells v1, CERN is using cells v2, the job no longer runs nova-network, and its flakiness (plus the general state of the gate lately), it is time to move this further out of our daily lives and down the road to its eventual removal. Keeping it as an option to run on-demand in the experimental queue at least allows us to leverage it if needed should some major cells v1 fix be needed, but that is doubtful (we haven't had a cells v1 specific bug fix in a long time). -- Thanks, Matt From colleen at gazlene.net Fri Dec 7 16:11:17 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 07 Dec 2018 17:11:17 +0100 Subject: [dev][keystone] Keystone Team Update - Week of 3 December 2018 Message-ID: <1544199077.1398908.1602216128.4FBE9319@webmail.messagingengine.com> # Keystone Team Update - Week of 3 December 2018 ## News ### New Outreachy Interns The keystone team is lucky enough to have two Outreachy interns this round: welcome to Erus (erus) and Islam (imus)! Erus will be helping us with our federation implementation and Islam will be working on improving our API unit tests. We're happy to have you! ### Keystone as IdP Proxy (Broker) Morgan started working on a diagram to explain the proposed architecture for keystone as an identity provider proxy/broker[1][2]. It is a good starting point for understanding the general idea of what the proposed design is. Please have a look at it and bring your questions to Morgan. 
[1] https://usercontent.irccloud-cdn.com/file/Au4e3DXb/Keystone%20IDP%20(initial)%20Diagram.png [2] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-12-04-16.00.log.html#l-247 ## Open Specs Stein specs: https://bit.ly/2Pi6dGj Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 25 changes this week. ## Changes that need Attention Search query: https://bit.ly/2RLApdA There are 91 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs This week we opened 5 new bugs and closed 4. Bugs opened (5) Bug #1806713 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1806713 Bug #1806762 (keystone:Medium) opened by Lance Bragstad https://bugs.launchpad.net/keystone/+bug/1806762 Bug #1806195 (keystone:Undecided) opened by nigel cook https://bugs.launchpad.net/keystone/+bug/1806195 Bug #1806377 (keystone:Undecided) opened by kirandevraaj https://bugs.launchpad.net/keystone/+bug/1806377 Bug #1807184 (oslo.policy:Medium) opened by Brian Rosmaita https://bugs.launchpad.net/oslo.policy/+bug/1807184 Bugs fixed (4) Bug #1806109 (keystoneauth:Medium) fixed by Monty Taylor https://bugs.launchpad.net/keystoneauth/+bug/1806109 Bug #1800259 (oslo.policy:Undecided) fixed by Lance Bragstad https://bugs.launchpad.net/oslo.policy/+bug/1800259 Bug #1803722 (oslo.policy:Undecided) fixed by Corey Bryant https://bugs.launchpad.net/oslo.policy/+bug/1803722 Bug #1804073 (oslo.policy:Undecided) fixed by John Dennis https://bugs.launchpad.net/oslo.policy/+bug/1804073 ## Milestone Outlook https://releases.openstack.org/stein/schedule.html We have just a month left until our spec freeze. A month after that is our feature proposal freeze, so if you are planning significant feature work, best to get started on it soon. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67 From dangtrinhnt at gmail.com Fri Dec 7 16:17:29 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 8 Dec 2018 01:17:29 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> Message-ID: Hi again, Just wonder how the image for searchlight test was set up? Which user is used for running ElasticSearch? Is there any way to indicate the user that will run the test? Can I do it with [1]? Based on the output of [2] I can see there are some permission issue of JDK if I run the functional tests with the stack user on my dev environment. [1] https://git.openstack.org/cgit/openstack/searchlight/tree/tools/test-setup.sh [2] https://review.openstack.org/#/c/622871/3/searchlight/tests/functional/__init__.py Thanks, On Fri, Dec 7, 2018 at 4:30 PM Trinh Nguyen wrote: > Hi Clark, > > Outputting the exec logs does help. And, I found the problem for the > functional tests to fail, it because ES cannot be started because of the > way the functional sets up the new version of ES server (5.x). I'm working > on updating it. > > Many thanks, > > On Tue, Dec 4, 2018 at 1:39 AM Clark Boylan wrote: > >> On Mon, Dec 3, 2018, at 7:28 AM, Trinh Nguyen wrote: >> > Hello, >> > >> > Currently, [1] fails tox py27 tests on Zuul check for just updating the >> log >> > text. 
The tests are successful at local dev env. Just wondering there is >> > any new change at Zuul CI? >> > >> > [1] https://review.openstack.org/#/c/619162/ >> > >> >> Reading the exceptions [2] and the test setup code [3] it appears that >> elasticsearch isn't responding on its http port and is thus treated as >> having not started. With the info we currently have it is hard to say why. >> Instead of redirecting exec logs to /dev/null [4] maybe we can capture that >> data? Also probably worth grabbing the elasticsearch daemon log as well. >> >> Without that information it is hard to say why this happened. I am not >> aware of any changes in the CI system that would cause this, but we do >> rebuild our test node images daily. >> >> [2] >> http://logs.openstack.org/62/619162/5/check/openstack-tox-py27/9ce318d/job-output.txt.gz#_2018-11-27_05_32_48_854289 >> [3] >> https://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/tests/functional/__init__.py#n868 >> [4] >> https://git.openstack.org/cgit/openstack/searchlight/tree/searchlight/tests/functional/__init__.py#n851 >> >> Clark >> >> > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Dec 7 17:08:59 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Dec 2018 11:08:59 -0600 Subject: [placement] update 18-49 - aggregate allocation ratios In-Reply-To: References: Message-ID: <6cce1dff-3c26-683a-3529-ab955076a024@gmail.com> On 12/7/2018 6:58 AM, Chris Dent wrote: > * >   Account for host agg allocation ratio in placement >   (Still in rocky/) Given https://bugs.launchpad.net/nova/+bug/1804125 and the discussion with the operator in there, I've been thinking about this again lately. We will soon [1] at least have the restriction documented but the comment from the operator in that bug is painful: """ What's more distressing is that this appears to have produced a schism between the intended, documented functions of Nova scheduler and the actual operation of those functions on several consecutive releases of OpenStack. If the Aggregate* filters are no longer functional, and are no longer intended to be so, then I would think they should reasonably have been removed from the documentation and from the project so that deployers wouldn't expect to rely on them. """ With https://review.openstack.org/#/q/topic:bp/initial-allocation-ratios we at least have some sanity in nova-compute and you can control the allocation ratios per-compute (resource provider) either via nova config (the CERN use case) or the placement API using RBAC (the mgagne scenario, with placement RBAC added in Rocky). What is missing is something user-friendly for those that want to control allocation ratios in aggregate from the API. In Dublin we said we'd write an osc-placement CLI to help with this: https://etherpad.openstack.org/p/nova-ptg-rocky-placement ~L37 But that didn't happen unfortunately. It doesn't mean we couldn't still easily add that. That solution does require tooling changes from deployers though. The other alternative is Jay's spec which is to have nova-api mirror/proxy allocation ratio information from the compute host aggregates API to the placement API. Since Rocky the compute API already mirrors aggregate information to placement, so this would be building on that to also set allocation ratio information on each resource provider within said aggregate in placement. 
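(Purely as an illustrative aside, not something from the original message: the osc-placement route mentioned above can already be exercised provider by provider today, and the missing "user-friendly" piece is essentially a wrapper that repeats the same call for every resource provider in a host aggregate. The flag spellings below are written from memory and should be checked against the installed plugin's --help output; the totals and variables are placeholders.)

  # Assumed osc-placement syntax -- verify locally before relying on it.
  # Sets the VCPU allocation ratio on a single resource provider; --total
  # must match the provider's real inventory and is hard-coded here only
  # for illustration.
  openstack resource provider inventory class set $RP_UUID VCPU \
      --total 64 --allocation_ratio 16.0

  # Hypothetical stand-in for the missing aggregate-wide command: today an
  # operator has to gather the provider UUIDs in the aggregate themselves
  # and loop over them.
  for RP_UUID in $PROVIDERS_IN_AGGREGATE; do
      openstack resource provider inventory class set "$RP_UUID" VCPU \
          --total 64 --allocation_ratio 16.0
  done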
Part of me doesn't like that proxy work given our stance on no more proxies [2] but on the other hand we definitely regressed our own compute API (and scheduler) in Ocata, so it seems on us to provide the most user-friendly (no upgrade impact) way to solve that. Either way we go, at this point, doesn't it mean we can deprecate the Aggregate* filters since they are essentially useless when using the FilterScheduler and placement (remember the CachingScheduler is gone now)? [1] https://review.openstack.org/#/q/Ifaf596a8572637f843f47daf5adce394b0365676 [2] https://docs.openstack.org/nova/latest/contributor/project-scope.html#api-scope -- Thanks, Matt From melwittt at gmail.com Fri Dec 7 17:17:04 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 7 Dec 2018 09:17:04 -0800 Subject: [nova] slides now available for project update and project onboarding sessions Message-ID: <26c5fcb5-1b11-b4b5-2727-22bd9daf7118@gmail.com> Howdy all, This is just a FYI that slides are now attached to the summit events for the Nova - Project Update and the Nova - Project Onboarding sessions: https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22861/nova-project-update https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22860/nova-project-onboarding Cheers, -melanie From senrique at redhat.com Fri Dec 7 17:55:10 2018 From: senrique at redhat.com (Sofia Enriquez) Date: Fri, 7 Dec 2018 14:55:10 -0300 Subject: [cinder] [tempest] Ideas for test cases Message-ID: Hello cinder guys, I'm working on increasing the coverage of Cinder Tempest [1]. Since I'm relatively new using cinder retype+migrate functionality, I'm looking for possible ideas of test cases. Thanks, Sofi [1]: https://review.openstack.org/#/c/614022/ -- Sofia Enriquez Associate Software Engineer -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Dec 7 18:16:12 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 07 Dec 2018 10:16:12 -0800 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> Message-ID: <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> On Fri, Dec 7, 2018, at 8:17 AM, Trinh Nguyen wrote: > Hi again, > Just wonder how the image for searchlight test was set up? Which user is > used for running ElasticSearch? Is there any way to indicate the user that > will run the test? Can I do it with [1]? Based on the output of [2] I can > see there are some permission issue of JDK if I run the functional tests > with the stack user on my dev environment. > > [1] > https://git.openstack.org/cgit/openstack/searchlight/tree/tools/test-setup.sh > [2] > https://review.openstack.org/#/c/622871/3/searchlight/tests/functional/__init__.py > The unittest jobs run as the Zuul user. This user has sudo access when test-setup.sh runs, but then we remove sudo access when tox is run. This is important as we are trying to help ensure that you can run tox locally without it making system level changes. When your test setup script runs `dpkg -i` this package install may start running an elasticsearch instance. It depends on how the package is set up. This daemon would run under the user the package has configured for that service. When you run an elasticsearch process from your test suite this will run as the zuul user. Hope this helps. I also left a couple of comments on change 622871. 
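(Illustrative sketch only, not the actual searchlight script: a tools/test-setup.sh along the lines described above does its privileged work up front, while sudo is still available. The Elasticsearch version, download URL and package names here are assumptions made for the sake of the example.)

  #!/bin/bash -xe
  # Runs before tox, while the job user (zuul in CI, often 'stack' locally)
  # still has sudo; tox itself later runs without sudo.
  ES_VERSION=5.6.13   # assumed version, pick whatever the project targets
  sudo apt-get install -y openjdk-8-jre-headless wget
  wget -q https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ES_VERSION}.deb
  sudo dpkg -i elasticsearch-${ES_VERSION}.deb
  # The deb configures the daemon to run as its own 'elasticsearch' user;
  # any elasticsearch process the test suite execs afterwards runs as the
  # unprivileged job user instead, which is where JDK/permission surprises
  # like the one mentioned earlier in this thread tend to show up.
  sudo systemctl start elasticsearch.service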
Clark From fungi at yuggoth.org Fri Dec 7 19:09:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 7 Dec 2018 19:09:26 +0000 Subject: On trust and risk, Australia's Assistance and Access Bill Message-ID: <20181207190926.z6fnjnevoh66yrqf@yuggoth.org> I've seen concern expressed in OpenStack and other free/libre open source software communities over the recent passage of the "Assistance and Access Bill 2018" by the Australian Parliament, and just want to say that I appreciate the trust relationships we've all built with our colleagues in many countries, including Australia. As someone who doesn't particularly agree with many of the laws passed in his own country, while I'm not going to encourage civil disobedience, I do respect that many have shown preference for it over compelled compromise of our community's established trust. I, for one, don't wish to return to the "bad old days" of the crypto wars, when major projects like OpenBSD refused contributions from citizens and residents of the USA. It's bad for project morale, excludes valuable input from people with a variety of perspectives, and it's just downright inefficient too. The unfortunate truth is that anyone can be pressured at any time to derail, backdoor or otherwise compromise software and systems. A new law in one country doesn't change that. There are frequent news stories about government agencies installing covert interfaces in enterprise and consumer electronic devices alike through compulsion of those involved in their programming, manufacture and distribution. There's evidence of major standards bodies being sidetracked and steered into unwittingly approving flawed specifications which influential actors already know ways to circumvent. Over the course of my career I've had to make personal choices regarding installation and maintenance of legally-mandated systems for spying on customers and users. All we can ever hope for is that the relationships, systems and workflows we create are as resistant as possible to these sorts of outside influences. Sure, ejecting people from important or sensitive positions within the project based on their nationality might be a way to send a message to a particular government, but the problem is bigger than just one country and we'd really all need to be removed from our posts for pretty much the same reasons. This robust community of trust and acceptance we've fostered is not a risk, it's another line of defense against erosion of our ideals and principles. Entrenched concepts like open design and public review help to shield us from these situations, and while there is no perfect protection it seems to me that secret compromise under our many watchful eyes is a much harder task than doing so behind the closed doors of proprietary systems development. I really appreciate all the Australians who toil tirelessly to make OpenStack better, and am proud to call them friends and colleagues. I certainly don't want them to feel any need to resign from their valuable work because they're worried the rest of us can no longer trust them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From msm at redhat.com Fri Dec 7 19:12:35 2018 From: msm at redhat.com (Michael McCune) Date: Fri, 7 Dec 2018 14:12:35 -0500 Subject: [sdk] Establishing SDK Validation Baseline In-Reply-To: References: <380a044d-8185-a312-2f7e-40bd663cf49c@openstack.org> Message-ID: hey folks, i realized after some discussion with the other API SIG cores that i might not have been clear enough in my responses. for the record, i _do not_ have an objection to bringing the SDK work into the mission of the API SIG. further, i think it does make good sense to rally all these concerns in one place and will reduce the confusion that coalesces around new SIG formations. i do maintain that we will need to do some work and bring on fresh bodies into the SIG to ensure the best outcomes. for transparency sake, here is a link[0] to the chat we had in #openstack-sdk among the API SIG cores. peace o/ [0]: http://eavesdrop.openstack.org/irclogs/%23openstack-sdks/%23openstack-sdks.2018-12-07.log.html#t2018-12-07T16:06:43 From juliaashleykreger at gmail.com Fri Dec 7 19:20:29 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 7 Dec 2018 11:20:29 -0800 Subject: On trust and risk, Australia's Assistance and Access Bill In-Reply-To: <20181207190926.z6fnjnevoh66yrqf@yuggoth.org> References: <20181207190926.z6fnjnevoh66yrqf@yuggoth.org> Message-ID: Very well said! Thank you Jeremy! On Fri, Dec 7, 2018 at 11:14 AM Jeremy Stanley wrote: > I've seen concern expressed in OpenStack and other free/libre open > source software communities over the recent passage of the > "Assistance and Access Bill 2018" by the Australian Parliament, and > just want to say that I appreciate the trust relationships we've all > built with our colleagues in many countries, including Australia. As > someone who doesn't particularly agree with many of the laws passed > in his own country, while I'm not going to encourage civil > disobedience, I do respect that many have shown preference for it > over compelled compromise of our community's established trust. I, > for one, don't wish to return to the "bad old days" of the crypto > wars, when major projects like OpenBSD refused contributions from > citizens and residents of the USA. It's bad for project morale, > excludes valuable input from people with a variety of perspectives, > and it's just downright inefficient too. > > The unfortunate truth is that anyone can be pressured at any time to > derail, backdoor or otherwise compromise software and systems. A new > law in one country doesn't change that. There are frequent news > stories about government agencies installing covert interfaces in > enterprise and consumer electronic devices alike through compulsion > of those involved in their programming, manufacture and > distribution. There's evidence of major standards bodies being > sidetracked and steered into unwittingly approving flawed > specifications which influential actors already know ways to > circumvent. Over the course of my career I've had to make personal > choices regarding installation and maintenance of legally-mandated > systems for spying on customers and users. All we can ever hope for > is that the relationships, systems and workflows we create are as > resistant as possible to these sorts of outside influences. 
> > Sure, ejecting people from important or sensitive positions within > the project based on their nationality might be a way to send a > message to a particular government, but the problem is bigger than > just one country and we'd really all need to be removed from our > posts for pretty much the same reasons. This robust community of > trust and acceptance we've fostered is not a risk, it's another line > of defense against erosion of our ideals and principles. Entrenched > concepts like open design and public review help to shield us from > these situations, and while there is no perfect protection it seems > to me that secret compromise under our many watchful eyes is a much > harder task than doing so behind the closed doors of proprietary > systems development. > > I really appreciate all the Australians who toil tirelessly to make > OpenStack better, and am proud to call them friends and colleagues. > I certainly don't want them to feel any need to resign from their > valuable work because they're worried the rest of us can no longer > trust them. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Fri Dec 7 19:23:23 2018 From: msm at redhat.com (Michael McCune) Date: Fri, 7 Dec 2018 14:23:23 -0500 Subject: On trust and risk, Australia's Assistance and Access Bill In-Reply-To: <20181207190926.z6fnjnevoh66yrqf@yuggoth.org> References: <20181207190926.z6fnjnevoh66yrqf@yuggoth.org> Message-ID: On Fri, Dec 7, 2018 at 2:12 PM Jeremy Stanley wrote: > > I've seen concern expressed in OpenStack and other free/libre open > source software communities over the recent passage of the > "Assistance and Access Bill 2018" by the Australian Parliament, and > just want to say that I appreciate the trust relationships we've all > built with our colleagues in many countries, including Australia. As > someone who doesn't particularly agree with many of the laws passed > in his own country, while I'm not going to encourage civil > disobedience, I do respect that many have shown preference for it > over compelled compromise of our community's established trust. I, > for one, don't wish to return to the "bad old days" of the crypto > wars, when major projects like OpenBSD refused contributions from > citizens and residents of the USA. It's bad for project morale, > excludes valuable input from people with a variety of perspectives, > and it's just downright inefficient too. > > The unfortunate truth is that anyone can be pressured at any time to > derail, backdoor or otherwise compromise software and systems. A new > law in one country doesn't change that. There are frequent news > stories about government agencies installing covert interfaces in > enterprise and consumer electronic devices alike through compulsion > of those involved in their programming, manufacture and > distribution. There's evidence of major standards bodies being > sidetracked and steered into unwittingly approving flawed > specifications which influential actors already know ways to > circumvent. Over the course of my career I've had to make personal > choices regarding installation and maintenance of legally-mandated > systems for spying on customers and users. All we can ever hope for > is that the relationships, systems and workflows we create are as > resistant as possible to these sorts of outside influences. 
> > Sure, ejecting people from important or sensitive positions within > the project based on their nationality might be a way to send a > message to a particular government, but the problem is bigger than > just one country and we'd really all need to be removed from our > posts for pretty much the same reasons. This robust community of > trust and acceptance we've fostered is not a risk, it's another line > of defense against erosion of our ideals and principles. Entrenched > concepts like open design and public review help to shield us from > these situations, and while there is no perfect protection it seems > to me that secret compromise under our many watchful eyes is a much > harder task than doing so behind the closed doors of proprietary > systems development. > > I really appreciate all the Australians who toil tirelessly to make > OpenStack better, and am proud to call them friends and colleagues. > I certainly don't want them to feel any need to resign from their > valuable work because they're worried the rest of us can no longer > trust them. > -- > Jeremy Stanley ++ well said. thank you for stating this so eloquently. peace o/ From mranga at gmail.com Fri Dec 7 19:45:12 2018 From: mranga at gmail.com (M. Ranganathan) Date: Fri, 7 Dec 2018 14:45:12 -0500 Subject: [octavia] Routing between lb-mgmt-net and amphora In-Reply-To: <6a861285-b25b-87d6-15fe-a48cdf0f08da@oracle.com> References: <6a861285-b25b-87d6-15fe-a48cdf0f08da@oracle.com> Message-ID: Here is a gist of what I did (which again could be completely the wrong way to proceed). Basically, you create a port for your network and then run the script to emit the configuration commands you need to run *examine the output before running it please* Feel free to re-use or better yet, please come up with a way to do the equivalent thing using just openstack command line (without scripts) and share it. https://gist.github.com/ranganathanm/6fcd94ad0d568d00156cff08b055c4b0 Hope this helps On Fri, Dec 7, 2018 at 9:58 AM Paul Bourke wrote: > Ok, so in my case, I've booted a test cirros VM on lb-mgmt-net, it gets > assigned an IP of 172.18.2.126. The goal is to be able to ping this IP > directly from the control plane. > > I create a port in neutron: http://paste.openstack.org/show/736821/ > > I log onto the network node that's hosting the dhcp namespace for this > node: http://paste.openstack.org/show/736823/ > > I then run the following command lifted from devstack, with my port info > subbed in: > > # ovs-vsctl -- --may-exist add-port ${OVS_BRIDGE:-br-int} o-hm0 -- set > Interface o-hm0 type=internal -- set Interface o-hm0 > external-ids:iface-status=active -- set Interface o-hm0 > external-ids:attached-mac=$MGMT_PORT_MAC -- set Interface o-hm0 > external-ids:iface-id=$MGMT_PORT_ID -- set Interface o-hm0 > external-ids:skip_cleanup=true > > Here's the output of 'ovs-vsctl show' at this point: > http://paste.openstack.org/show/736826/ > > Note that the tap device for the VM (tap6440048f-d2) has tag 3. However, > if I try to add 'tag=3' to the above add-port command it just assigns > the port the dead tag 4095. > > So at the point I have a new interface created, o-hm0, with a status of > DOWN. It's on br-int, but I can't ping the instance at 172.18.2.126. I > also assume I need to add a static route of some form on the node, > though no attempts so far have resulted in being able to ping. > > Would be very grateful if you could revise these commands and let know > if they deviate from what you're doing. 
> > -Paul > > On 06/12/2018 21:12, M. Ranganathan wrote: > > HACK ALERT Disclaimer: My suggestion could be clumsy. > > > > On Thu, Dec 6, 2018 at 1:46 PM Paul Bourke > > wrote: > > > > Hi, > > > > This is mostly a follow on to the thread at[0], though due to the > > mailing list transition it was easier to start a new thread. > > > > I've been attempting to get Octavia setup according to the > > dev-quick-start guide[1], but have been struggling with the > > following piece: > > > > "Add appropriate routing to / from the ‘lb-mgmt-net’ such that > > egress is allowed, and the controller (to be created later) can talk > > to hosts on this network." > > > > In mranga's reply, they say: > > > > > -- Create an ovs port on br-int > > > -- Create a neutron port using the ovs port that you just created. > > > -- Assign the ip address of the neutron port to the ovs port > > > -- Use ip netns exec to assign a route in the router namespace of > > the LoadBalancer network. > > > > I have enough of an understanding of Neutron/OVS for this to mostly > > make sense, but not enough to actually put it into practice it > > seems. My environment: > > > > 3 x control nodes > > 2 x network nodes > > 1 x compute > > > > All nodes have two interfaces, eth0 being the management network - > > 192.168.5.0/24 , and eth1 being used for the > > provider network. I then create the Octavia lb-mgmt-net on > > 172.18.2.0/24 . > > > > I've read the devstack script[2] and have the following questions: > > > > * Should I add the OVS port to br-int on the compute, network nodes, > > or both? > > > > > > I have only one controller which also functions as my network node. I > > added the port on the controller/network node. br-int is the place > > where the integration happens. You will find each network has an > > internal vlan tag associated with it. Use the tag assigned to your lb > > network when you create the ovs port. > > > > ovs-vsctl show will tell you more. > > > > > > * What is the purpose of creating a neutron port in this scenario > > > > > > Just want to be sure Neutron knows about it and has an entry in its > > database so the address won't be used for something else. If you are > > using static addresses, for example you should not need this (I think). > > > > BTW the created port is DOWN. I am not sure why and I am not sure it > > matters. > > > > > > If anyone is able to explain this a bit further or can even point to > > some good material to flesh out the underlying concepts it would be > > much appreciated, I feel the 'Neutron 101' videos I've done so far > > are not quite getting me there :) > > > > Cheers, > > -Paul > > > > [0] > > > http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000544.html > > [1] > > > https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html > > [2] > https://github.com/openstack/octavia/blob/master/devstack/plugin.sh > > > > > > > > -- > > M. Ranganathan > > > -- M. Ranganathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Dec 7 21:01:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 7 Dec 2018 15:01:55 -0600 Subject: [infra] A change to Zuul's queuing behavior In-Reply-To: <87bm62z4av.fsf@meyer.lemoncheese.net> References: <87bm62z4av.fsf@meyer.lemoncheese.net> Message-ID: <43a151f1-e690-96e7-05b5-7561e34033b2@gmail.com> On 12/3/2018 3:30 PM, James E. 
Blair wrote: > Since some larger projects consume the bulk of cloud resources in our > system, this can be especially frustrating for smaller projects. To be > sure, it impacts everyone, but while larger projects receive a > continuous stream of results (even if delayed) smaller projects may wait > hours before seeing results on a single change. > > In order to help all projects maintain a minimal velocity, we've begun > dynamically prioritizing node requests based on the number of changes a > project has in a given pipeline. FWIW, and maybe this is happening across the board right now, but it's taking probably ~16 hours to get results on nova changes right now, which becomes increasingly frustrating when they finally get a node, tests run and then the job times out or something because the node is slow (or some other known race test failure). Is there any way to determine or somehow track how long a change has been queued up before and take that into consideration when it's re-enqueued? Like take this change: https://review.openstack.org/#/c/620154/ That took about 3 days to merge with constant rechecks from the time it was approved. It would be cool if there was a way to say, from within 50 queued nova changes (using the example in the original email), let's say zuul knew that 10 of those 50 have already gone through one or more times and weigh those differently so when they do get queued up, they are higher in the queue than maybe something that is just going through it's first time. -- Thanks, Matt From kennelson11 at gmail.com Fri Dec 7 21:55:32 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 7 Dec 2018 13:55:32 -0800 Subject: [dev] How to develop changes in a series In-Reply-To: References: <1CC272501B5BC543A05DB90AA509DED527475067@ORSMSX162.amr.corp.intel.com> <20181205195227.4j3rkpinlgts3ujv@yuggoth.org> Message-ID: Thanks for mentioning the contributor guide! I'll happily review any patches you have for that section. I'm sure Ildiko would be happy to as well. -Kendall (diablo_rojo) On Wed, Dec 5, 2018 at 12:41 PM William M Edmonds wrote: > Jeremy Stanley wrote on 12/05/2018 02:52:28 PM: > > On 2018-12-05 14:48:37 -0500 (-0500), William M Edmonds wrote: > > > Eric Fried wrote on 12/05/2018 12:18:37 PM: > > > > > > > > > > > > > But I want to edit 1b2c453, while leaving ebb3505 properly stacked on > > > > top of it. Here I use a tool called `git restack` (run `pip install > > > > git-restack` to install it). > > > > > > It's worth noting that you can just use `git rebase` [1], you don't > have to > > > use git-restack. This is why later you're using `git rebase > --continue`, > > > because git-restack is actually using rebase under the covers. > > > > > > [1] https://stackoverflow.com/questions/1186535/how-to-modify-a- > > specified-commit > > > > You can, however what git-restack does for you is figure out which > > commit to rebase on top of so that you don't inadvertently rebase > > your stack of changes onto a newer branch state and then make things > > harder on reviewers. > > -- > > Jeremy Stanley > > Ah, that's good to know. > > Also, found this existing documentation [2] if someone wants to propose an > update or link from another location. Note that it doesn't currently > mention git-restack, just rebase. > > [2] > https://docs.openstack.org/contributors/code-and-documentation/patch-best-practices.html#how-to-handle-chains > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Fri Dec 7 21:58:38 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 7 Dec 2018 13:58:38 -0800 Subject: [ops][docs] The Contributor Guide: Ops Feedback Session Summary In-Reply-To: References: <5938a4e0-3eb1-5448-3af5-f56bf3452e9c@suse.com> Message-ID: Thanks Doug! -Kendall (diablo_rojo) On Mon, Dec 3, 2018 at 12:01 PM Doug Hellmann wrote: > Doug Hellmann writes: > > > Andreas Jaeger writes: > > > >> On 11/27/18 4:38 PM, Kendall Nelson wrote: > >>> Hello! > >>> > >>> > >>> For the long version, feel free to look over the etherpad[1]. > >>> > >>> > >>> It should be noted that this session was in relation to the operator > >>> section of the contributor guide, not the operations guide, though they > >>> should be closely related and link to one another. > >>> > >>> > >>> Basically the changes requested can be boiled down to two types of > >>> changes: cosmetic and missing content. > >>> > >>> > >>> Cosmetic Changes: > >>> > >>> * > >>> > >>> Timestamps so people can know when the last change was made to a > >>> given doc (dhellmann volunteered to help here)[2] > >>> > >>> * > >>> > >>> Floating bug report button and some mechanism for auto populating > >>> which page a bug is on so that the reader doesn’t have to know what > >>> rst file in what repo has the issue to file a bug[3] > >> > >> This is something probably for openstackdocstheme to have it > >> everywhere. > > > > Yes, that was the idea. We already have some code that pulls the > > timestamp from git in the governance repo, so I was going to move that > > over to the theme for better reuse. > > > > -- > > Doug > > > > The patch to add this to the theme is in > https://review.openstack.org/#/c/621690/ > > -- > Doug > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Fri Dec 7 22:53:27 2018 From: corvus at inaugust.com (James E. Blair) Date: Fri, 07 Dec 2018 14:53:27 -0800 Subject: [infra] A change to Zuul's queuing behavior In-Reply-To: <43a151f1-e690-96e7-05b5-7561e34033b2@gmail.com> (Matt Riedemann's message of "Fri, 7 Dec 2018 15:01:55 -0600") References: <87bm62z4av.fsf@meyer.lemoncheese.net> <43a151f1-e690-96e7-05b5-7561e34033b2@gmail.com> Message-ID: <87tvjpj6e0.fsf@meyer.lemoncheese.net> Matt Riedemann writes: > On 12/3/2018 3:30 PM, James E. Blair wrote: >> Since some larger projects consume the bulk of cloud resources in our >> system, this can be especially frustrating for smaller projects. To be >> sure, it impacts everyone, but while larger projects receive a >> continuous stream of results (even if delayed) smaller projects may wait >> hours before seeing results on a single change. >> >> In order to help all projects maintain a minimal velocity, we've begun >> dynamically prioritizing node requests based on the number of changes a >> project has in a given pipeline. > > FWIW, and maybe this is happening across the board right now, but it's > taking probably ~16 hours to get results on nova changes right now, > which becomes increasingly frustrating when they finally get a node, > tests run and then the job times out or something because the node is > slow (or some other known race test failure). > > Is there any way to determine or somehow track how long a change has > been queued up before and take that into consideration when it's > re-enqueued? Like take this change: > > https://review.openstack.org/#/c/620154/ > > That took about 3 days to merge with constant rechecks from the time > it was approved. 
It would be cool if there was a way to say, from > within 50 queued nova changes (using the example in the original > email), let's say zuul knew that 10 of those 50 have already gone > through one or more times and weigh those differently so when they do > get queued up, they are higher in the queue than maybe something that > is just going through it's first time. This suggestion would be difficult to implement, but also, I think it runs counter to some of the ideas that have been put into place in the past. In particular, the idea of clean-check was to make it harder to merge changes with gate failures (under the assumption that they are more likely to introduce racy tests). This might make it easier to recheck-bash bad changes in (along with good). Anyway, we chatted in IRC a bit and came up with another tweak, which is to group projects together in the check pipeline when setting this priority. We already to in gate, but currently, every project in the system gets equal footing in check for their first change. The change under discussion would group all tripleo projects together, and all the integrated projects together, so that the first change for a tripleo project had the same priority as the first change for an integrated project, and a puppet project, etc. The intent is to further reduce the priority "boost" that projects with lots of repos have. The idea is still to try to find a simple and automated way of more fairly distributing our resources. If this doesn't work, we can always return to the previous strict FIFO method. However, given the extreme delays we're seeing across the board, I'm trying to avoid the necessity of actually allocating quota to projects. If we can't make this work, and we aren't able to reduce utilization by improving the reliability of tests (which, by *far* would be the most effective thing to do -- please work with Clark on that), we may have to start talking about that. -Jim From pete.vandergiessen at canonical.com Fri Dec 7 23:05:29 2018 From: pete.vandergiessen at canonical.com (Pete Vander Giessen) Date: Fri, 7 Dec 2018 18:05:29 -0500 Subject: [dev] quick report on microstack Message-ID: Hi All, On Tue, Nov 20, 2018, 8:30 AM Chris Morgan Mark Shuttleworth introduced 'microstack' at the conference last week. > It's a snap based openstack install suitable for getting a quick instance > up for playing with. > ... On Tue Nov 20 14:37:35 UTC 2018 Melvin Hillsman wrote: > I tried it as well, as a one off it seems useful, but after about an hour > and looking at some of the issues on GitHub it is definitely in need of > some TLC. Restarting and reconfiguring is a pickle to say the least. For > example, if you want to access it from other than localhost. I do however > like the simplicity of it out the gate. Thank you both for taking a look and raising bugs :-) We've spent our time after returning from the U.S. Thanksgiving Break hammering bits and pieces into shape. I just pushed a release to the candidate channel that should fix some of the hiccups on install and restart. (Instances you've spun up will still be stopped after a system reboot, but you should be able to start them up manually and continue using them without any issues.) You can install it with: sudo snap install microstack --classic --candidate And if you've already got microstack installed, you can do: sudo snap refresh microstack @Chris Morgan: I left a comment on the bug relating to theme switching that you raised (https://github.com/CanonicalLtd/microstack/issues/39). 
It looks like something is broken in the pathing for the scss around the roboto font face, but I'm not sure whether it's an issue with the way that the snap lays out the files (a missing symlink, maybe?), or if it's a more general issue when installing from a Rocky release tarball. We're troubleshooting, but if you'd weigh in with an opinion, I'd appreciate it. Thanks again! ~ PeteVG -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Sat Dec 8 00:10:23 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Sat, 8 Dec 2018 09:10:23 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> Message-ID: Thanks, Clark. On Sat, Dec 8, 2018 at 3:16 AM Clark Boylan wrote: > On Fri, Dec 7, 2018, at 8:17 AM, Trinh Nguyen wrote: > > Hi again, > > Just wonder how the image for searchlight test was set up? Which user is > > used for running ElasticSearch? Is there any way to indicate the user > that > > will run the test? Can I do it with [1]? Based on the output of [2] I can > > see there are some permission issue of JDK if I run the functional tests > > with the stack user on my dev environment. > > > > [1] > > > https://git.openstack.org/cgit/openstack/searchlight/tree/tools/test-setup.sh > > [2] > > > https://review.openstack.org/#/c/622871/3/searchlight/tests/functional/__init__.py > > > > The unittest jobs run as the Zuul user. This user has sudo access when > test-setup.sh runs, but then we remove sudo access when tox is run. This is > important as we are trying to help ensure that you can run tox locally > without it making system level changes. > > When your test setup script runs `dpkg -i` this package install may start > running an elasticsearch instance. It depends on how the package is set up. > This daemon would run under the user the package has configured for that > service. When you run an elasticsearch process from your test suite this > will run as the zuul user. > > Hope this helps. I also left a couple of comments on change 622871. > > Clark > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From bino at jogjacamp.co.id Sat Dec 8 01:42:48 2018 From: bino at jogjacamp.co.id (Bino Oetomo) Date: Sat, 8 Dec 2018 08:42:48 +0700 Subject: [Neutron] Message-ID: Dear All. I have no problem configuring network via Hosrizon-dasboard. I start playing with python for some task. I got succsess in creating network. I create a router, with one interface connected to existing 'ext-network' .. success. But I fail when I try to add a port to that router for connecting to existing internal network. Here is part of my python shell. 
-------------------- body_value = { 'port': { 'admin_state_up': True, 'device_owner': 'network:router_interface', 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-net-01-02', 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'binding:host_id': 'rocky-controller.mynet.net', 'binding:profile': {}, 'binding:vnic_type': 'normal', 'fixed_ips': [{ 'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.254' }], }} response = nt.create_port(body=body_value) response {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'updated_at': '2018-12-07T08:10:24Z', 'device_owner': 'network:router_interface', 'revision_number': 1, 'port_security_enabled': False, 'binding:profile': {}, 'fixed_ips': [{'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.254'}], 'id': 'd02eb0f0-663f-423f-af4e-c969ccb9dc25', 'security_groups': [], 'binding:vif_details': {'port_filter': True, 'datapath_type': 'system', 'ovs_hybrid_plug': True}, 'binding:vif_type': 'ovs', 'mac_address': 'fa:16:3e:e2:9d:8f', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'DOWN', 'binding:host_id': 'rocky-controller.mynet.net', 'description': '', 'tags': [], 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-net-01-02', 'admin_state_up': True, 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T08:10:24Z', 'binding:vnic_type': 'normal'}} -------------------- 'status' always 'DOWN'. Kindly please give me some clue to fix this problem Note : Actualy I post same question on stackexchange : https://stackoverflow.com/questions/53665795/openstack-python-neutronclient-creating-port-but-down Sincerely -bino- -------------- next part -------------- An HTML attachment was scrubbed... URL: From soheil.ir08 at gmail.com Sat Dec 8 08:07:51 2018 From: soheil.ir08 at gmail.com (Soheil Pourbafrani) Date: Sat, 8 Dec 2018 11:37:51 +0330 Subject: [OpenStack][Neutron] WARNING neutron.pecan_wsgi.controllers.root Message-ID: Hi, I defined an external flat network and the Instances' network functionality is OK but the neutron logs continusly on the file server.log: *2018-12-04 03:21:22.206 2218 WARNING neutron.pecan_wsgi.controllers.root [req-0ac40043-ec93-490d-8770-5f4c49a11ea6 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] No controller found for: floatingips - returning response code 404: PecanNotFound* 2018-12-04 03:21:22.208 2218 INFO neutron.pecan_wsgi.hooks.translation [req-0ac40043-ec93-490d-8770-5f4c49a11ea6 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] GET failed (client error): The resource could not be found. 
2018-12-04 03:21:22.209 2218 INFO neutron.wsgi [req-0ac40043-ec93-490d-8770-5f4c49a11ea6 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] 192.168.0.31 "GET /v2.0/floatingips?fixed_ip_address=192.168.0.205&port_id=15251089-950a-4a4b-998b-e7412350b8bc HTTP/1.1" status: 404 len: 309 time: 0.0077560 2018-12-04 03:21:22.284 2218 INFO neutron.wsgi [req-1177939c-e304-439a-a6bf-1a3d216a2336 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] 192.168.0.31 "GET /v2.0/subnets?id=9771f7f2-12f8-46bd-aca3-f1d26e0b9768 HTTP/1.1" status: 200 len: 835 time: 0.0708778 2018-12-04 03:21:22.368 2218 INFO neutron.wsgi [req-12dce42d-f6a0-4deb-902d-c46d95621013 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] 192.168.0.31 "GET /v2.0/ports?network_id=a90d7d71-1ff4-4a98-b2b9-68adaea7d1c4&device_owner=network%3Adhcp HTTP/1.1" status: 200 len: 1085 time: 0.0782819 2018-12-04 03:21:22.569 2218 INFO neutron.wsgi [req-e73d9ad4-4e70-4fb6-91a5-9bd02c606284 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] 192.168.0.31 "GET /v2.0/networks/a90d7d71-1ff4-4a98-b2b9-68adaea7d1c4?fields=segments HTTP/1.1" status: 200 len: 212 time: 0.1933858 2018-12-04 03:21:22.751 2218 INFO neutron.wsgi [req-79921bc1-7f1a-4eee-aeca-3b660aab0522 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] 192.168.0.31 "GET /v2.0/networks/a90d7d71-1ff4-4a98-b2b9-68adaea7d1c4?fields=provider%3Aphysical_network&fields=provider%3Anetwork_type HTTP/1.1" status: 200 len: 281 time: 0.1775169 2018-12-04 03:21:34.957 2218 INFO neutron.wsgi [req-50877132-2c57-4220-9689-5c739d415aa8 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] 192.168.0.32 "GET /v2.0/ports?tenant_id=8cd6e51308894e119d2ad90ed15e71d2&device_id=882901a0-e518-4b4b-bf08-e998c4c0c875 HTTP/1.1" status: 200 len: 1094 time: 0.1096959 2018-12-04 03:21:35.116 2218 INFO neutron.wsgi [req-0305018f-b085-46c2-86cd-1769432cda91 539929ca1549436cb4c4171e037e8df7 0a26c316d0d143229f7420cf7fa35bdc - default default] 192.168.0.32 "GET /v2.0/networks?id=a90d7d71-1ff4-4a98-b2b9-68adaea7d1c4 HTTP/1.1" status: 200 len: 877 time: 0.1516771 I checked the neutron.conf and there was not any web_framework attribute under the [Default] section. >From OpenStack forum, someone suggest to set *web_framework = legacy *in neutron.conf. I did but it had no effect and neutron still is logging the so-called warning. Is this a bug or I miss something in configuration? I use the OpenStack Rocky. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Sat Dec 8 14:42:35 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Sat, 8 Dec 2018 15:42:35 +0100 Subject: [Neutron] In-Reply-To: References: Message-ID: <05CDC301-26EF-4554-B595-9FB950BD8731@redhat.com> Hi, You shouldn’t create port with router as device owner. If You want to connect port or subnet to router, there is proper method for that: https://developer.openstack.org/api-ref/network/v2/?expanded=add-interface-to-router-detail#add-interface-to-router — Slawek Kaplonski Senior software engineer Red Hat > Wiadomość napisana przez Bino Oetomo w dniu 08.12.2018, o godz. 02:42: > > Dear All. > > I have no problem configuring network via Hosrizon-dasboard. > > I start playing with python for some task. > I got succsess in creating network. > I create a router, with one interface connected to existing 'ext-network' .. 
success. > > But I fail when I try to add a port to that router for connecting to existing internal network. > > Here is part of my python shell. > > -------------------- > body_value = { > > > 'port': { > > > 'admin_state_up': True, > > > 'device_owner': 'network:router_interface', > > > 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', > > > 'name': 'Bino-net-01-02', > > > 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', > > > 'binding:host_id': 'rocky-controller.mynet.net', > > > 'binding:profile': {}, > > > 'binding:vnic_type': 'normal', > > > 'fixed_ips': [{ > > > 'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', > > > 'ip_address': '192.168.202.254' > > > }], > > > } > } > > > response > = nt.create_port(body=body_value) > > response > > > {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'updated_at': '2018-12-07T08:10:24Z', 'device_owner': 'network:router_interface', 'revision_number': 1, 'port_security_enabled': False, 'binding:profile': {}, 'fixed_ips': [{'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.254'}], 'id': 'd02eb0f0-663f-423f-af4e-c969ccb9dc25', 'security_groups': [], 'binding:vif_details': {'port_filter': True, 'datapath_type': 'system', 'ovs_hybrid_plug': True}, 'binding:vif_type': 'ovs', 'mac_address': 'fa:16:3e:e2:9d:8f', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'DOWN', 'binding:host_id': 'rocky-controller.mynet.net', 'description': '', 'tags': [], 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-net-01-02', 'admin_state_up': True, 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T08:10:24Z', 'binding:vnic_type': 'normal'}} > > -------------------- > 'status' always 'DOWN'. > > Kindly please give me some clue to fix this problem > > Note : Actualy I post same question on stackexchange : https://stackoverflow.com/questions/53665795/openstack-python-neutronclient-creating-port-but-down > > Sincerely > -bino- From mriedemos at gmail.com Sat Dec 8 18:28:47 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 8 Dec 2018 12:28:47 -0600 Subject: [infra] Update on test throughput and Zuul backlogs In-Reply-To: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> References: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> Message-ID: <79ccf3b2-aded-966b-a7bd-ec98be79f177@gmail.com> On 12/6/2018 5:16 PM, Clark Boylan wrote: > All that said flaky tests are still an issue. One set of problems seems related to slower than expected/before test nodes in the BHS1 region. We've been debugging these with OVH (thank you amorin!) and think we've managed to make some improvements though so far the problems persist. Current theory is that we are acting as our own noisy neighbors starving the hypervisors of disk IO throughput. In order to test that we've halved the total number of resources we'll use there. More details athttps://etherpad.openstack.org/p/bhs1-test-node-slowness including a list of e-r bugs that may be tied to this issue. > > One thing to keep in mind is that while the test nodes are slower than we'd like, they have also exposed some situations where our software is less efficient than we'd like. At least one bug,https://bugs.launchpad.net/nova/+bug/1807219, has been identified through this. I would encourage people debugging these slow tests to look to see if this exposes a deficiency in our software that can be fixed. 
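
For anyone poking at the slow-node theory above, a quick way to sanity-check disk write/fsync latency on a held test node is a small probe along the lines of the sketch below. It is only an illustration (the 4 KiB block size and sample count are arbitrary picks), not part of the infra tooling referenced in this thread.

#!/usr/bin/env python3
"""Rough fsync latency probe -- an illustrative sketch only."""
import os
import tempfile
import time

SAMPLES = 50
BLOCK = b"\0" * 4096  # one 4 KiB block per write

def fsync_latencies(path="."):
    latencies = []
    # Write and fsync a small temp file repeatedly, timing each round trip.
    with tempfile.NamedTemporaryFile(dir=path, delete=True) as f:
        for _ in range(SAMPLES):
            start = time.monotonic()
            f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())
            latencies.append(time.monotonic() - start)
    return latencies

if __name__ == "__main__":
    lat = sorted(fsync_latencies())
    print("min/median/max fsync latency (ms): %.2f / %.2f / %.2f"
          % (lat[0] * 1000, lat[len(lat) // 2] * 1000, lat[-1] * 1000))

Comparing the median from a "good" region against a slow BHS1 node would at least tell you whether the IO theory holds for the node you are looking at.
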
Here are a couple of fixes for recently fingerprinted gate bugs: https://review.openstack.org/#/c/623669/ https://review.openstack.org/#/c/623597/ Those are in grenade and devstack respectively so we'll need some QA cores. -- Thanks, Matt From gmann at ghanshyammann.com Sun Dec 9 11:22:36 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 Dec 2018 20:22:36 +0900 Subject: [cinder] [tempest] Ideas for test cases In-Reply-To: References: Message-ID: <16792b46bae.b1b7956b187049.263161708726828786@ghanshyammann.com> Thanks Sofia, There are few existing test cases in Tempest which cover retype and migration cases. Check if that cover your cases or you can extend those where you can reuse the code. - https://github.com/openstack/tempest/blob/e6c330892fbc8ae790384d554dd6d5c2668d8d24/tempest/api/volume/admin/test_volume_retype.py - https://github.com/openstack/tempest/blob/837726a9ede64e33d0def018da24e146dd6b5af3/tempest/scenario/test_volume_migrate_attached.py -gmann ---- On Sat, 08 Dec 2018 02:55:10 +0900 Sofia Enriquez wrote ---- > Hello cinder guys, > > I'm working on increasing the coverage of Cinder Tempest [1]. > > Since I'm relatively new using cinder retype+migrate functionality, I'm looking for possible ideas of test cases. > Thanks,Sofi > > [1]: https://review.openstack.org/#/c/614022/ > > -- > Sofia Enriquez > Associate Software Engineer > > From ervikrant06 at gmail.com Sat Dec 8 11:51:46 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Sat, 8 Dec 2018 17:21:46 +0530 Subject: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed In-Reply-To: References: Message-ID: Hello Team, Any help on this issue? Thanks & Regards, Vikrant Aggarwal On Tue, Dec 4, 2018 at 6:49 PM Vikrant Aggarwal wrote: > Hello Team, > > Any help on this issue? > > Thanks & Regards, > Vikrant Aggarwal > > > On Fri, Nov 30, 2018 at 9:13 AM Vikrant Aggarwal > wrote: > >> Hi Feilong, >> >> Thanks for your reply. >> >> Kindly find the below outputs. >> >> [root at packstack1 ~]# rpm -qa | grep -i magnum >> python-magnum-7.0.1-1.el7.noarch >> openstack-magnum-conductor-7.0.1-1.el7.noarch >> openstack-magnum-ui-5.0.1-1.el7.noarch >> openstack-magnum-api-7.0.1-1.el7.noarch >> puppet-magnum-13.3.1-1.el7.noarch >> python2-magnumclient-2.10.0-1.el7.noarch >> openstack-magnum-common-7.0.1-1.el7.noarch >> >> [root at packstack1 ~]# rpm -qa | grep -i heat >> openstack-heat-ui-1.4.0-1.el7.noarch >> openstack-heat-api-cfn-11.0.0-1.el7.noarch >> openstack-heat-engine-11.0.0-1.el7.noarch >> puppet-heat-13.3.1-1.el7.noarch >> python2-heatclient-1.16.1-1.el7.noarch >> openstack-heat-api-11.0.0-1.el7.noarch >> openstack-heat-common-11.0.0-1.el7.noarch >> >> Thanks & Regards, >> Vikrant Aggarwal >> >> >> On Fri, Nov 30, 2018 at 2:44 AM Feilong Wang >> wrote: >> >>> Hi Vikrant, >>> >>> Before we dig more, it would be nice if you can let us know the version >>> of your Magnum and Heat. Cheers. >>> >>> >>> On 30/11/18 12:12 AM, Vikrant Aggarwal wrote: >>> >>> Hello Team, >>> >>> Trying to deploy on K8 on fedora atomic. >>> >>> Here is the output of cluster template: >>> ~~~ >>> [root at packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum >>> cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57 >>> WARNING: The magnum client is deprecated and will be removed in a future >>> release. >>> Use the OpenStack client to avoid seeing this message. 
>>> +-----------------------+--------------------------------------+ >>> | Property | Value | >>> +-----------------------+--------------------------------------+ >>> | insecure_registry | - | >>> | labels | {} | >>> | updated_at | - | >>> | floating_ip_enabled | True | >>> | fixed_subnet | - | >>> | master_flavor_id | - | >>> | user_id | 203617849df9490084dde1897b28eb53 | >>> | uuid | 16eb91f7-18fe-4ce3-98db-c732603f2e57 | >>> | no_proxy | - | >>> | https_proxy | - | >>> | tls_disabled | False | >>> | keypair_id | kubernetes | >>> | project_id | 45a6706c831c42d5bf2da928573382b1 | >>> | public | False | >>> | http_proxy | - | >>> | docker_volume_size | 10 | >>> | server_type | vm | >>> | external_network_id | external1 | >>> | cluster_distro | fedora-atomic | >>> | image_id | f5954340-f042-4de3-819e-a3b359591770 | >>> | volume_driver | - | >>> | registry_enabled | False | >>> | docker_storage_driver | devicemapper | >>> | apiserver_port | - | >>> | name | coe-k8s-template | >>> | created_at | 2018-11-28T12:58:21+00:00 | >>> | network_driver | flannel | >>> | fixed_network | - | >>> | coe | kubernetes | >>> | flavor_id | m1.small | >>> | master_lb_enabled | False | >>> | dns_nameserver | 8.8.8.8 | >>> +-----------------------+--------------------------------------+ >>> ~~~ >>> Found couple of issues in the logs of VM started by magnum. >>> >>> - etcd was not getting started because of incorrect permission on file >>> "/etc/etcd/certs/server.key". This file is owned by root by default have >>> 0440 as permission. Changed the permission to 0444 so that etcd can read >>> the file. After that etcd started successfully. >>> >>> - etcd DB doesn't contain anything: >>> >>> [root at kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r >>> [root at kube-cluster1-qobaagdob75g-master-0 ~]# >>> >>> - Flanneld is stuck in activating status. 
>>> ~~~ >>> [root at kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld >>> ● flanneld.service - Flanneld overlay address etcd agent >>> Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; >>> vendor preset: disabled) >>> Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago >>> Main PID: 6491 (flanneld) >>> Tasks: 6 (limit: 4915) >>> Memory: 4.7M >>> CPU: 53ms >>> CGroup: /system.slice/flanneld.service >>> └─6491 /usr/bin/flanneld -etcd-endpoints= >>> http://127.0.0.1:2379 -etcd-prefix=/atomic.io/network >>> >>> Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:44.569376 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:45.584532 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:46 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:46.646255 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:47 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:47.673062 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:48 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:48.686919 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:49 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:49.709136 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:50 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:50.729548 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:51 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:51.749425 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:52 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:52.776612 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> Nov 29 11:05:53 kube-cluster1-qobaagdob75g-master-0.novalocal >>> flanneld[6491]: E1129 11:05:53.790418 6491 network.go:102] failed to >>> retrieve network config: 100: Key not found (/atomic.io) [3] >>> ~~~ >>> >>> - Continuously in the jouralctl logs following messages are printed. 
>>> >>> ~~~ >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> kube-apiserver[6888]: F1129 11:06:39.338416 6888 server.go:269] Invalid >>> Authorization Config: Unknown authorization mode Node specified >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> systemd[1]: kube-apiserver.service: Main process exited, code=exited, >>> status=255/n/a >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> kube-scheduler[2540]: E1129 11:06:39.408272 2540 reflector.go:199] >>> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463: Failed >>> to list *api.Node: Get >>> http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp >>> 127.0.0.1:8080: getsockopt: connection refused >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> kube-scheduler[2540]: E1129 11:06:39.444737 2540 reflector.go:199] >>> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460: Failed >>> to list *api.Pod: Get >>> http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: >>> dial tcp 127.0.0.1:8080: getsockopt: connection refused >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> kube-scheduler[2540]: E1129 11:06:39.445793 2540 reflector.go:199] >>> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed >>> to list *api.PersistentVolume: Get >>> http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial >>> tcp 127.0.0.1:8080: getsockopt: connection refused >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal audit[1]: >>> SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 >>> subj=system_u:system_r:init_t:s0 msg='unit=kube-apiserver comm="systemd" >>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> systemd[1]: Failed to start Kubernetes API Server. >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> systemd[1]: kube-apiserver.service: Unit entered failed state. >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> systemd[1]: kube-apiserver.service: Failed with result 'exit-code'. >>> Nov 29 11:06:39 kube-cluster1-qobaagdob75g-master-0.novalocal >>> kube-scheduler[2540]: E1129 11:06:39.611699 2540 reflector.go:199] >>> k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed >>> to list *extensions.ReplicaSet: Get >>> http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: >>> dial tcp 127.0.0.1:8080: getsockopt: connection refused >>> ~~~ >>> >>> Any help on above issue is highly appreciated. 
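
One observation on the output quoted above: the empty etcd (etcdctl ls / -r returning nothing) together with flanneld's repeated "Key not found (/atomic.io)" messages points at the flannel network config key never having been written; flanneld only starts once <etcd-prefix>/config exists. A minimal sketch of seeding that key by hand through etcd's v2 HTTP API is below -- the endpoint and prefix come from the quoted logs, while the CIDR and backend are example values, not verified Magnum defaults.

#!/usr/bin/env python3
"""Sketch: write the flannel network config key flanneld is waiting for."""
import json
import requests

ETCD = "http://127.0.0.1:2379"
# flanneld runs with -etcd-prefix=/atomic.io/network and reads <prefix>/config
KEY = "/v2/keys/atomic.io/network/config"
CONFIG = {"Network": "10.100.0.0/16", "Backend": {"Type": "vxlan"}}  # example values

resp = requests.put(ETCD + KEY, data={"value": json.dumps(CONFIG)})
resp.raise_for_status()
print(resp.json())

This does not explain why the Heat software deployments never wrote the key in the first place, but it is a quick way to confirm flanneld recovers once the config exists.
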
>>> >>> Thanks & Regards, >>> Vikrant Aggarwal >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> -- >>> Cheers & Best regards, >>> Feilong Wang (王飞龙) >>> -------------------------------------------------------------------------- >>> Senior Cloud Software Engineer >>> Tel: +64-48032246 >>> Email: flwang at catalyst.net.nz >>> Catalyst IT Limited >>> Level 6, Catalyst House, 150 Willis Street, Wellington >>> -------------------------------------------------------------------------- >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bino at jogjacamp.co.id Sat Dec 8 23:49:07 2018 From: bino at jogjacamp.co.id (Bino Oetomo) Date: Sun, 9 Dec 2018 06:49:07 +0700 Subject: [Neutron] In-Reply-To: <05CDC301-26EF-4554-B595-9FB950BD8731@redhat.com> References: <05CDC301-26EF-4554-B595-9FB950BD8731@redhat.com> Message-ID: Dear Kaplonski sir. Thankyou for your help. Actualy, the one that I paste is not my first try. It's the result after I blindly trying add more and more parameters into body. Ok, I will try your sugestion and make a call using very minimal parameters per https://docs.openstack.org/ocata/user-guide/sdk-neutron-apis.html#create-router-and-add-port-to-subnet body_value = {'port': { 'admin_state_up': True, 'device_id': router_device_id, 'name': 'port1', 'network_id': network_id, }} I'll back to this list. Sincerely -bino- On Sat, Dec 8, 2018 at 9:42 PM Slawomir Kaplonski wrote: > Hi, > > You shouldn’t create port with router as device owner. If You want to > connect port or subnet to router, there is proper method for that: > https://developer.openstack.org/api-ref/network/v2/?expanded=add-interface-to-router-detail#add-interface-to-router > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > Wiadomość napisana przez Bino Oetomo w dniu > 08.12.2018, o godz. 02:42: > > > > Dear All. > > > > I have no problem configuring network via Hosrizon-dasboard. > > > > I start playing with python for some task. > > I got succsess in creating network. > > I create a router, with one interface connected to existing > 'ext-network' .. success. > > > > But I fail when I try to add a port to that router for connecting to > existing internal network. > > > > Here is part of my python shell. 
> > > > -------------------- > > body_value = { > > > > > > 'port': { > > > > > > 'admin_state_up': True, > > > > > > 'device_owner': 'network:router_interface', > > > > > > 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', > > > > > > 'name': 'Bino-net-01-02', > > > > > > 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', > > > > > > 'binding:host_id': 'rocky-controller.mynet.net', > > > > > > 'binding:profile': {}, > > > > > > 'binding:vnic_type': 'normal', > > > > > > 'fixed_ips': [{ > > > > > > 'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', > > > > > > 'ip_address': '192.168.202.254' > > > > > > }], > > > > > > } > > } > > > > > > response > > = nt.create_port(body=body_value) > > > > response > > > > > > {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], > 'updated_at': '2018-12-07T08:10:24Z', 'device_owner': > 'network:router_interface', 'revision_number': 1, 'port_security_enabled': > False, 'binding:profile': {}, 'fixed_ips': [{'subnet_id': > 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.254'}], > 'id': 'd02eb0f0-663f-423f-af4e-c969ccb9dc25', 'security_groups': [], > 'binding:vif_details': {'port_filter': True, 'datapath_type': 'system', > 'ovs_hybrid_plug': True}, 'binding:vif_type': 'ovs', 'mac_address': > 'fa:16:3e:e2:9d:8f', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', > 'status': 'DOWN', 'binding:host_id': 'rocky-controller.mynet.net', > 'description': '', 'tags': [], 'device_id': > 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-net-01-02', > 'admin_state_up': True, 'network_id': > 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': > 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T08:10:24Z', > 'binding:vnic_type': 'normal'}} > > > > -------------------- > > 'status' always 'DOWN'. > > > > Kindly please give me some clue to fix this problem > > > > Note : Actualy I post same question on stackexchange : > https://stackoverflow.com/questions/53665795/openstack-python-neutronclient-creating-port-but-down > > > > Sincerely > > -bino- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Dec 9 13:57:54 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 Dec 2018 22:57:54 +0900 Subject: [infra] Update on test throughput and Zuul backlogs In-Reply-To: References: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> Message-ID: <1679342979a.e9c69432187953.2468583431880373982@ghanshyammann.com> ---- On Fri, 07 Dec 2018 08:50:30 +0900 Matt Riedemann wrote ---- > On 12/6/2018 5:16 PM, Clark Boylan wrote: > > I was asked to write another one of these in the Nova meeting today so here goes. > > Thanks Clark, this is really helpful. > > > > > One thing to keep in mind is that while the test nodes are slower than we'd like, they have also exposed some situations where our software is less efficient than we'd like. At least one bug,https://bugs.launchpad.net/nova/+bug/1807219, has been identified through this. I would encourage people debugging these slow tests to look to see if this exposes a deficiency in our software that can be fixed. > > That was split off from this: > > https://bugs.launchpad.net/nova/+bug/1807044 > > But yeah a couple of issues Dan and I are digging into. 
> > Another thing I noticed in one of these nova-api start timeout failures > in ovh-bhs1 was uwsgi seems to just stall for 26 seconds here: > > http://logs.openstack.org/01/619701/5/gate/tempest-slow/2bb461b/controller/logs/screen-n-api.txt.gz#_Dec_05_20_13_23_060958 > > I pushed a patch to enable uwsgi debug logging: > > https://review.openstack.org/#/c/623265/ > > But of course I didn't (1) get a recreate or (2) seem to see any > additional debug logging from uwsgi. If someone else knows how to enable > that please let me know. > > > > > These are the big issues that affect large numbers of projects (or even all of them), but there are still many project specific problems floating around as well. Unfortunately I haven't had much time to help dig into those recently (see broader issues above), but I think it would be helpful if projects can do some of that digging themselves. Also, a friendly reminder that we try to provide in cloud region mirrors and caches for commonly used resources like distro packages, pypi packages, dockerhub images, and so on. If your jobs aren't using these and you find they fail occasionally due to the Internet being flaky we'll be happy to help you update the jobs to use the in region resources instead. > > I'm not sure if this query is valid anymore: > > http://status.openstack.org/elastic-recheck/#1783405 > > If it is, then we still have some tempest tests that aren't marked as > slow but are contributing to job timeouts outside the tempest-slow job. > I know the last time this came up, the QA team had a report of the > slowest non-slow tests - can we get another one of those now? This seems still valid query. 7 fails in 24 hrs / 302 fails in 10 days. I did some more catagorization for this query with build_name and found failure are- tempest-full or tempest-full-py3 - ~50% tempest-all - 2 % tempest-slow - 2% rest all is in all other jobs. I proposed to modify the query to exclude the tempest-all and tempest-slow job which runs all slow tests also. - https://review.openstack.org/#/c/623949/ On doing another round of marking slow tests, I will check if we can get more specific slow tests which are slow consistantly. -gmann > > Another thing is, are there particular voting jobs that have a failure > rate over 50% and are resetting the gate? If we do, we should consider > making them non-voting while project teams work on fixing the issues. > Because I've had approved patches for days now taking 13+ hours just to > fail, which is pretty unsustainable. > > > > > We'll keep pushing to fix the broader issues and are more than happy to help debug failures you hit within your projects as well. > > > > Hopefully this was helpful despite its length. > > Again, thank you Clark for taking the time to write up this summary - > it's extremely useful. > > -- > > Thanks, > > Matt > > From gmann at ghanshyammann.com Sun Dec 9 14:05:29 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 Dec 2018 23:05:29 +0900 Subject: [infra] Update on test throughput and Zuul backlogs In-Reply-To: <79ccf3b2-aded-966b-a7bd-ec98be79f177@gmail.com> References: <1544138161.3416170.1601467632.09A4549E@webmail.messagingengine.com> <79ccf3b2-aded-966b-a7bd-ec98be79f177@gmail.com> Message-ID: <16793498862.125d42988187984.1060993771189928648@ghanshyammann.com> ---- On Sun, 09 Dec 2018 03:28:47 +0900 Matt Riedemann wrote ---- > On 12/6/2018 5:16 PM, Clark Boylan wrote: > > All that said flaky tests are still an issue. 
One set of problems seems related to slower than expected/before test nodes in the BHS1 region. We've been debugging these with OVH (thank you amorin!) and think we've managed to make some improvements though so far the problems persist. Current theory is that we are acting as our own noisy neighbors starving the hypervisors of disk IO throughput. In order to test that we've halved the total number of resources we'll use there. More details athttps://etherpad.openstack.org/p/bhs1-test-node-slowness including a list of e-r bugs that may be tied to this issue. > > > > One thing to keep in mind is that while the test nodes are slower than we'd like, they have also exposed some situations where our software is less efficient than we'd like. At least one bug,https://bugs.launchpad.net/nova/+bug/1807219, has been identified through this. I would encourage people debugging these slow tests to look to see if this exposes a deficiency in our software that can be fixed. > > Here are a couple of fixes for recently fingerprinted gate bugs: > > https://review.openstack.org/#/c/623669/ > > https://review.openstack.org/#/c/623597/ > > Those are in grenade and devstack respectively so we'll need some QA cores. Done. grenade one is merged and devstack is in the queue. -gmann > > -- > > Thanks, > > Matt > > From gmann at ghanshyammann.com Sun Dec 9 14:14:37 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 Dec 2018 23:14:37 +0900 Subject: [infra] A change to Zuul's queuing behavior In-Reply-To: <87tvjpj6e0.fsf@meyer.lemoncheese.net> References: <87bm62z4av.fsf@meyer.lemoncheese.net> <43a151f1-e690-96e7-05b5-7561e34033b2@gmail.com> <87tvjpj6e0.fsf@meyer.lemoncheese.net> Message-ID: <1679351e6fd.cc0838f2188020.4216860630159968715@ghanshyammann.com> ---- On Sat, 08 Dec 2018 07:53:27 +0900 James E. Blair wrote ---- > Matt Riedemann writes: > > > On 12/3/2018 3:30 PM, James E. Blair wrote: > >> Since some larger projects consume the bulk of cloud resources in our > >> system, this can be especially frustrating for smaller projects. To be > >> sure, it impacts everyone, but while larger projects receive a > >> continuous stream of results (even if delayed) smaller projects may wait > >> hours before seeing results on a single change. > >> > >> In order to help all projects maintain a minimal velocity, we've begun > >> dynamically prioritizing node requests based on the number of changes a > >> project has in a given pipeline. > > > > FWIW, and maybe this is happening across the board right now, but it's > > taking probably ~16 hours to get results on nova changes right now, > > which becomes increasingly frustrating when they finally get a node, > > tests run and then the job times out or something because the node is > > slow (or some other known race test failure). > > > > Is there any way to determine or somehow track how long a change has > > been queued up before and take that into consideration when it's > > re-enqueued? Like take this change: > > > > https://review.openstack.org/#/c/620154/ > > > > That took about 3 days to merge with constant rechecks from the time > > it was approved. It would be cool if there was a way to say, from > > within 50 queued nova changes (using the example in the original > > email), let's say zuul knew that 10 of those 50 have already gone > > through one or more times and weigh those differently so when they do > > get queued up, they are higher in the queue than maybe something that > > is just going through it's first time. 
> > This suggestion would be difficult to implement, but also, I think it > runs counter to some of the ideas that have been put into place > in the past. In particular, the idea of clean-check was to make it > harder to merge changes with gate failures (under the assumption that > they are more likely to introduce racy tests). This might make it > easier to recheck-bash bad changes in (along with good). > > Anyway, we chatted in IRC a bit and came up with another tweak, which is > to group projects together in the check pipeline when setting this > priority. We already to in gate, but currently, every project in the > system gets equal footing in check for their first change. The change > under discussion would group all tripleo projects together, and all the > integrated projects together, so that the first change for a tripleo > project had the same priority as the first change for an integrated > project, and a puppet project, etc. > > The intent is to further reduce the priority "boost" that projects with > lots of repos have. > > The idea is still to try to find a simple and automated way of more > fairly distributing our resources. If this doesn't work, we can always > return to the previous strict FIFO method. However, given the extreme > delays we're seeing across the board, I'm trying to avoid the necessity > of actually allocating quota to projects. If we can't make this work, > and we aren't able to reduce utilization by improving the reliability of > tests (which, by *far* would be the most effective thing to do -- please > work with Clark on that), we may have to start talking about that. > > -Jim We can optimize the node by removing the job from running queue on the first failure hit instead of full run and then release the node. This is a trade-off with getting the all failure once and fix them all together but I am not sure if that is the case all time. For example- if any change has pep8 error then, no need to run integration tests jobs there. This at least can save nodes at some extent. -gmann > > From fungi at yuggoth.org Sun Dec 9 14:25:57 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 9 Dec 2018 14:25:57 +0000 Subject: [infra] A change to Zuul's queuing behavior In-Reply-To: <1679351e6fd.cc0838f2188020.4216860630159968715@ghanshyammann.com> References: <87bm62z4av.fsf@meyer.lemoncheese.net> <43a151f1-e690-96e7-05b5-7561e34033b2@gmail.com> <87tvjpj6e0.fsf@meyer.lemoncheese.net> <1679351e6fd.cc0838f2188020.4216860630159968715@ghanshyammann.com> Message-ID: <20181209142556.lg2wkpoj2xfvkdio@yuggoth.org> On 2018-12-09 23:14:37 +0900 (+0900), Ghanshyam Mann wrote: [...] > We can optimize the node by removing the job from running queue on > the first failure hit instead of full run and then release the > node. This is a trade-off with getting the all failure once and > fix them all together but I am not sure if that is the case all > time. For example- if any change has pep8 error then, no need to > run integration tests jobs there. This at least can save nodes at > some extent. I can recall plenty of times where I've pushed a change which failed pep8 on some non-semantic whitespace complaint and also had unit test or integration test failures. In those cases it's quite obvious that the pep8 failure reason couldn't have been the reason for the other failed jobs so seeing them all saved me wasting time on additional patches and waiting for more rounds of results. 
For that matter, a lot of my time as a developer (or even as a reviewer) is saved by seeing which clusters of jobs fail for a given change. For example, if I see all unit test jobs fail but integration test jobs pass I can quickly infer that there may be issues with a unit test that's being modified and spend less time fumbling around in the dark with various logs. It's possible we can save some CI resource consumption with such a trade-off, but doing so comes at the expense of developer and reviewer time so we have to make sure it's worthwhile. There was a point in the past where we did something similar (only run other jobs if a canary linter job passed), and there are good reasons why we didn't continue it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zilhazur.rahman at brilliant.com.bd Sun Dec 9 16:15:37 2018 From: zilhazur.rahman at brilliant.com.bd (Zilhazur Rahman) Date: Sun, 9 Dec 2018 22:15:37 +0600 (BDT) Subject: [Neutron][ovs-dpdk] Message-ID: <889103638.17479349.1544372137196.JavaMail.zimbra@brilliant.com.bd> Hi I am writing for the first time to the mailing list of openstack, I am trying to deploy ovs-dpdk to have better traffic throughput for NFV. Could you please share any tutorial link for standard ovs-dpdk deployment. On the other hand, openstack site says this " Expect performance degradation of services using tap devices: these devices do not support DPDK. Example services include DVR, FWaaS, or LBaaS. " but I need to have LBaaS and DVR ( to have direct external connectivity on compute) , what could be done in this case? Regards Zilhaz From bino at jogjacamp.co.id Mon Dec 10 02:04:10 2018 From: bino at jogjacamp.co.id (Bino Oetomo) Date: Mon, 10 Dec 2018 09:04:10 +0700 Subject: [Neutron] In-Reply-To: <05CDC301-26EF-4554-B595-9FB950BD8731@redhat.com> References: <05CDC301-26EF-4554-B595-9FB950BD8731@redhat.com> Message-ID: Dear All, As suggested by Slawek Kaplonski, I tried this. 
----------- >>> myrouter {'status': 'ACTIVE', 'external_gateway_info': {'network_id': 'd10dd06a-0425-49eb-a8ba-85abf55ac0f5', 'enable_snat': True, 'external_fixed_ips': [{'subnet_id': '4639e018-1cc1-49cc-89d4-4cad49bd4b89', 'ip_address': '10.10.10.2'}]}, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'description': '', 'tags': [], 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:24:41Z', 'admin_state_up': True, 'distributed': False, 'updated_at': '2018-12-07T07:58:02Z', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'flavor_id': None, 'revision_number': 10, 'routes': [], 'ha': False, 'id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-rtr-01'} ```>>> mynetworks=[n for n in netlist['networks'] if n['name'].startswith('Bino')] >>> mynetworks [{'provider:physical_network': 'viavlan', 'ipv6_address_scope': None, 'revision_number': 7, 'port_security_enabled': True, 'mtu': 1500, 'id': '942675f6-fd5e-4bb2-ba43-487be992ff4e', 'router:external': False, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'ipv4_address_scope': None, 'shared': False, 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'ACTIVE', 'subnets': ['5373bc35-a90f-4793-a912-801920e47769'], 'description': '', 'tags': [], 'updated_at': '2018-12-07T07:50:27Z', 'provider:segmentation_id': 2037, 'name': 'Bino-net-01', 'admin_state_up': True, 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:28:00Z', 'provider:network_type': 'vlan'}, {'provider:physical_network': 'viavlan', 'ipv6_address_scope': None, 'revision_number': 6, 'port_security_enabled': True, 'mtu': 1500, 'id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'router:external': False, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'ipv4_address_scope': None, 'shared': False, 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'ACTIVE', 'subnets': ['c71a86a3-f9a8-4e60-828e-5d6f87e58ac9'], 'description': '', 'tags': [], 'updated_at': '2018-12-07T07:49:59Z', 'provider:segmentation_id': 2002, 'name': 'Bino-net-02', 'admin_state_up': True, 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:29:52Z', 'provider:network_type': 'vlan'}] >>> print([n['name'] for n in mynetworks]) ['Bino-net-01', 'Bino-net-02'] >>> mynetwork=mynetworks[1] >>> mynetwork['name'] 'Bino-net-02' >>> body_value = { ... 'port': { ... 'admin_state_up': True, ... 'device_id': myrouter['id'], ... 'name': 'Bino-rtr-01-02', ... 'network_id': mynetwork['id'], ... } ... 
} >>> response = nt.create_port(body=body_value) >>> response {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'updated_at': '2018-12-10T01:44:10Z', 'device_owner': '', 'revision_number': 1, 'binding:profile': {}, 'port_security_enabled': True, 'fixed_ips': [{'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.6'}], 'id': 'a2a337af-3a37-4a9a-a4f1-ffaacdf9f881', 'security_groups': ['4bed540c-266d-4cc2-8225-3e02ccd89ff1'], 'binding:vif_details': {}, 'binding:vif_type': 'unbound', 'mac_address': 'fa:16:3e:05:aa:b5', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'DOWN', 'binding:host_id': '', 'description': '', 'tags': [], 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-rtr-01-02', 'admin_state_up': True, 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-10T01:44:10Z', 'binding:vnic_type': 'normal'}} >>> response['port']['status'] 'DOWN' ------ Looks like the port status still 'DOWN'. Sincerely -bino- On Sat, Dec 8, 2018 at 9:42 PM Slawomir Kaplonski wrote: > Hi, > > You shouldn’t create port with router as device owner. If You want to > connect port or subnet to router, there is proper method for that: > https://developer.openstack.org/api-ref/network/v2/?expanded=add-interface-to-router-detail#add-interface-to-router > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > Wiadomość napisana przez Bino Oetomo w dniu > 08.12.2018, o godz. 02:42: > > > > Dear All. > > > > I have no problem configuring network via Hosrizon-dasboard. > > > > I start playing with python for some task. > > I got succsess in creating network. > > I create a router, with one interface connected to existing > 'ext-network' .. success. > > > > But I fail when I try to add a port to that router for connecting to > existing internal network. > > > > Here is part of my python shell. 
> > > > -------------------- > > body_value = { > > > > > > 'port': { > > > > > > 'admin_state_up': True, > > > > > > 'device_owner': 'network:router_interface', > > > > > > 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', > > > > > > 'name': 'Bino-net-01-02', > > > > > > 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', > > > > > > 'binding:host_id': 'rocky-controller.mynet.net', > > > > > > 'binding:profile': {}, > > > > > > 'binding:vnic_type': 'normal', > > > > > > 'fixed_ips': [{ > > > > > > 'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', > > > > > > 'ip_address': '192.168.202.254' > > > > > > }], > > > > > > } > > } > > > > > > response > > = nt.create_port(body=body_value) > > > > response > > > > > > {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], > 'updated_at': '2018-12-07T08:10:24Z', 'device_owner': > 'network:router_interface', 'revision_number': 1, 'port_security_enabled': > False, 'binding:profile': {}, 'fixed_ips': [{'subnet_id': > 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.254'}], > 'id': 'd02eb0f0-663f-423f-af4e-c969ccb9dc25', 'security_groups': [], > 'binding:vif_details': {'port_filter': True, 'datapath_type': 'system', > 'ovs_hybrid_plug': True}, 'binding:vif_type': 'ovs', 'mac_address': > 'fa:16:3e:e2:9d:8f', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', > 'status': 'DOWN', 'binding:host_id': 'rocky-controller.mynet.net', > 'description': '', 'tags': [], 'device_id': > 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-net-01-02', > 'admin_state_up': True, 'network_id': > 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': > 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T08:10:24Z', > 'binding:vnic_type': 'normal'}} > > > > -------------------- > > 'status' always 'DOWN'. > > > > Kindly please give me some clue to fix this problem > > > > Note : Actualy I post same question on stackexchange : > https://stackoverflow.com/questions/53665795/openstack-python-neutronclient-creating-port-but-down > > > > Sincerely > > -bino- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ervikrant06 at gmail.com Mon Dec 10 02:54:27 2018 From: ervikrant06 at gmail.com (Vikrant Aggarwal) Date: Mon, 10 Dec 2018 08:24:27 +0530 Subject: [neutron] [octavia] Manual deployment step for octavia on packstack Message-ID: Hello Team, Do we have the steps documented somewhere to install octavia manually like we have for zun [1]? I have done the openstack deployment using packstack and now I want to install the octavia manually on it. I have done the following steps: # groupadd --system octavia # useradd --home-dir "/var/lib/octavia" --create-home --system --shell /bin/false -g octavia octavia # cd /var/lib/octavia/ # git clone https://github.com/openstack/octavia.git # chown -R octavia:octavia * # pip install -r requirements.txt # python setup.py install # openstack user create --domain default --password-prompt octavia # openstack role add --project service --user octavia admin # openstack service create --name octavia --description "Octavia Service" " Octavia Load Balancing Servic" # openstack endpoint create --region RegionOne "Octavia Load Balancing Servic" public http://10.121.19.50:9876/v1 # openstack endpoint create --region RegionOne "Octavia Load Balancing Servic" admin http://10.121.19.50:9876/v1 # openstack endpoint create --region RegionOne "Octavia Load Balancing Servic" internal http://10.121.19.50:9876/v1 Made the following changes in the configuration file. 
[root at packstack1 octavia(keystone_admin)]# diff etc/octavia.conf /etc/ octavia/octavia.conf 20,21c20,21 < # bind_host = 127.0.0.1 < # bind_port = 9876 --- > bind_host = 10.121.19.50 > bind_port = 9876 38c38 < # api_v2_enabled = True --- > # api_v2_enabled = False 64c64 < # connection = mysql+pymysql:// --- > connection = mysql+pymysql://octavia:octavia at 10.121.19.50/octavia 109c109 < # www_authenticate_uri = https://localhost:5000/v3 --- > www_authenticate_uri = https://10.121.19.50:5000/v3 111,114c111,114 < # auth_url = https://localhost:5000/v3 < # username = octavia < # password = password < # project_name = service --- > auth_url = https://10.121.19.50:35357/v3 > username = octavia > password = octavia > project_name = service 117,118c117,118 < # project_domain_name = Default < # user_domain_name = Default --- > project_domain_name = default > user_domain_name = default Generated the certificates using the script and copy the following certificates in octavia: [root at packstack1 octavia(keystone_admin)]# cd /etc/octavia/ [root at packstack1 octavia(keystone_admin)]# ls -lhrt total 28K -rw-r--r--. 1 octavia octavia 14K Dec 4 05:50 octavia.conf -rw-r--r--. 1 octavia octavia 1.7K Dec 4 05:55 client.key -rw-r--r--. 1 octavia octavia 989 Dec 4 05:55 client.csr -rw-r--r--. 1 octavia octavia 1.7K Dec 4 05:55 client.pem Can anyone please guide me about the further configuration? [1] https://docs.openstack.org/zun/latest/install/controller-install.html Thanks & Regards, Vikrant Aggarwal -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Mon Dec 10 05:09:22 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 10 Dec 2018 16:09:22 +1100 Subject: [Release-job-failures] Tag of openstack/freezer failed In-Reply-To: References: Message-ID: <20181210050922.GC32339@thor.bakeyournoodle.com> On Mon, Dec 10, 2018 at 04:49:47AM +0000, zuul at openstack.org wrote: > Build failed. > > - publish-openstack-releasenotes-python3 http://logs.openstack.org/31/31314e5e707bfe4933e1046dd0a18f1daa8cca6c/tag/publish-openstack-releasenotes-python3/bc14e14/ : POST_FAILURE in 2m 31s Looking at the logs I think this failed in create-afs-token[1]. If I Understand correctly this means the releasenotes weren't published but they will upon the next successful publish run. Yours Tony. [1] http://logs.openstack.org/31/31314e5e707bfe4933e1046dd0a18f1daa8cca6c/tag/publish-openstack-releasenotes-python3/bc14e14/job-output.txt.gz#_2018-12-10_04_49_23_454869 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From bino at jogjacamp.co.id Mon Dec 10 05:13:34 2018 From: bino at jogjacamp.co.id (Bino Oetomo) Date: Mon, 10 Dec 2018 12:13:34 +0700 Subject: Add port to router using python Message-ID: Dear All. My openstack installation version is 'rocky'. I tried to create a new port and add it to existing router. Got no error messages, but the port status always 'DOWN'. 
-------------------------------------- >>> myrouter {'status': 'ACTIVE', 'external_gateway_info': {'network_id': 'd10dd06a-0425-49eb-a8ba-85abf55ac0f5', 'enable_snat': True, 'external_fixed_ips': [{'subnet_id': '4639e018-1cc1-49cc-89d4-4cad49bd4b89', 'ip_address': '10.10.10.2'}]}, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'description': '', 'tags': [], 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:24:41Z', 'admin_state_up': True, 'distributed': False, 'updated_at': '2018-12-07T07:58:02Z', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'flavor_id': None, 'revision_number': 10, 'routes': [], 'ha': False, 'id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-rtr-01'} ```>>> mynetworks=[n for n in netlist['networks'] if n['name'].startswith('Bino')] >>> mynetworks [{'provider:physical_network': 'viavlan', 'ipv6_address_scope': None, 'revision_number': 7, 'port_security_enabled': True, 'mtu': 1500, 'id': '942675f6-fd5e-4bb2-ba43-487be992ff4e', 'router:external': False, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'ipv4_address_scope': None, 'shared': False, 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'ACTIVE', 'subnets': ['5373bc35-a90f-4793-a912-801920e47769'], 'description': '', 'tags': [], 'updated_at': '2018-12-07T07:50:27Z', 'provider:segmentation_id': 2037, 'name': 'Bino-net-01', 'admin_state_up': True, 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:28:00Z', 'provider:network_type': 'vlan'}, {'provider:physical_network': 'viavlan', 'ipv6_address_scope': None, 'revision_number': 6, 'port_security_enabled': True, 'mtu': 1500, 'id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'router:external': False, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'ipv4_address_scope': None, 'shared': False, 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'ACTIVE', 'subnets': ['c71a86a3-f9a8-4e60-828e-5d6f87e58ac9'], 'description': '', 'tags': [], 'updated_at': '2018-12-07T07:49:59Z', 'provider:segmentation_id': 2002, 'name': 'Bino-net-02', 'admin_state_up': True, 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:29:52Z', 'provider:network_type': 'vlan'}] >>> print([n['name'] for n in mynetworks]) ['Bino-net-01', 'Bino-net-02'] >>> mynetwork=mynetworks[1] >>> mynetwork['name'] 'Bino-net-02' >>> body_value = { ... 'port': { ... 'admin_state_up': True, ... 'device_id': myrouter['id'], ... 'name': 'Bino-rtr-01-02', ... 'network_id': mynetwork['id'], ... } ... 
} >>> response = nt.create_port(body=body_value) >>> response {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'updated_at': '2018-12-10T01:44:10Z', 'device_owner': '', 'revision_number': 1, 'binding:profile': {}, 'port_security_enabled': True, 'fixed_ips': [{'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.6'}], 'id': 'a2a337af-3a37-4a9a-a4f1-ffaacdf9f881', 'security_groups': ['4bed540c-266d-4cc2-8225-3e02ccd89ff1'], 'binding:vif_details': {}, 'binding:vif_type': 'unbound', 'mac_address': 'fa:16:3e:05:aa:b5', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'DOWN', 'binding:host_id': '', 'description': '', 'tags': [], 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-rtr-01-02', 'admin_state_up': True, 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-10T01:44:10Z', 'binding:vnic_type': 'normal'}} >>> response['port']['status'] 'DOWN' -------------------------------------- Kindly please give me some clues to fic this problem Sincerely -bino- -------------- next part -------------- An HTML attachment was scrubbed... URL: From bino at jogjacamp.co.id Mon Dec 10 05:19:00 2018 From: bino at jogjacamp.co.id (Bino Oetomo) Date: Mon, 10 Dec 2018 12:19:00 +0700 Subject: apologize Message-ID: Dear All. I apologize for my double post. I just realize after I write the second one. The first one is posted before I joint the list, it's http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000745.html And the second one is : http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000760.html Sincerely -bino- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Dec 10 07:21:34 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 10 Dec 2018 16:21:34 +0900 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? In-Reply-To: <81332221-db29-5525-6f3f-a000c93e4939@gmail.com> References: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> <1538382375.30016.0@smtp.office365.com> <1662fd96896.121bfdb2733507.842876450808135416@ghanshyammann.com> <81332221-db29-5525-6f3f-a000c93e4939@gmail.com> Message-ID: <16796fe1b6c.c3e4e009192453.8132097322747769095@ghanshyammann.com> ---- On Tue, 02 Oct 2018 00:28:51 +0900 Matt Riedemann wrote ---- > On 10/1/2018 8:37 AM, Ghanshyam Mann wrote: > > +1 on adding multiattach on integrated job. It is always good to cover more features in integrate-gate instead of separate jobs. These tests does not take much time, it should be ok to add in tempest-full [1]. We should make only really slow test as 'slow' otherwise it should be fine to run in tempest-full. > > > > I thought adding tempest-slow on cinder was merged but it is not[2] > > > > [1]http://logs.openstack.org/80/606880/2/check/nova-multiattach/7f8681e/job-output.txt.gz#_2018-10-01_10_12_55_482653 > > [2]https://review.openstack.org/#/c/591354/2 > > Actually it will be enabled in both tempest-full and tempest-slow, > because there is also a multiattach test marked as 'slow': > TestMultiAttachVolumeSwap. > > I'll push patches today. 
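
For context on what "enabling multiattach" means at the job level: the tests guard themselves on a tempest feature flag, roughly like the sketch below. This is an illustration only -- the option name follows tempest's compute-feature-enabled group, so double-check it against the tempest tree you are actually running.

# Hedged sketch of the guard pattern used by multiattach tests.
from tempest import config
from tempest.api.volume import base

CONF = config.CONF


class VolumeMultiattachSketchTest(base.BaseVolumeTest):
    """Runs only when volume multiattach is enabled in tempest.conf."""

    @classmethod
    def skip_checks(cls):
        super(VolumeMultiattachSketchTest, cls).skip_checks()
        if not CONF.compute_feature_enabled.volume_multiattach:
            raise cls.skipException("volume multiattach is not enabled")

So flipping that flag in the job's tempest.conf is what lets tempest-full and tempest-slow pick the tests up.
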
While reviewing your patch and checking multiattach slow test on stable branch as part of tempest-slow job, I found that tempest-slow (tempest-multinode-full) job does not run on nova stable branches (even nova .zuul.yaml has that job to run for stable branch) which we can say bug on Tempest side because tempest-slow job definition is to run only on master [1]. I am trying to enable that for all stable branches[2]. I am getting few failure on tempest-slow (tempest-multinode-full) for stable branches which might take time to fix and till then let's keep nova-multiattach on stable branches and remove only for master. [1] https://github.com/openstack/tempest/blob/a32467c4c515dff325e6b4b5ce7af24a0b7a9961/.zuul.yaml#L270 [2] https://review.openstack.org/#/q/topic:tempest-multinode-slow-stable+(status:open+OR+status:merged) -gmann > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From skaplons at redhat.com Mon Dec 10 08:01:10 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 10 Dec 2018 09:01:10 +0100 Subject: [Neutron] In-Reply-To: References: <05CDC301-26EF-4554-B595-9FB950BD8731@redhat.com> Message-ID: <67369879-1E4B-4B84-9744-ED094F499F21@redhat.com> Hi, If You want to attach port/subnet to router, please don’t do this with „create_port()” method only - it’s not enough. You should use add_interface_to_router() method if You are using Openstack SDK: https://github.com/openstack/openstacksdk/blob/master/openstack/network/v2/_proxy.py#L2441 or add_interface_router() method from neutron client: https://github.com/openstack/python-neutronclient/blob/d8cb1472c867d2a308e26abea0b0a01f1d6629a1/neutronclient/v2_0/client.py#L918 — Slawek Kaplonski Senior software engineer Red Hat > Wiadomość napisana przez Bino Oetomo w dniu 10.12.2018, o godz. 03:04: > > Dear All, > > As suggested by Slawek Kaplonski, I tried this. 
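In concrete terms, the approach described above boils down to a single call. A minimal sketch with python-neutronclient, reusing the router and subnet IDs from the transcript that follows (`nt` is assumed to be the same authenticated neutronclient Client instance used there):
--------------------
# Attach the existing subnet to the router via the dedicated API instead of
# creating the router port by hand; Neutron then creates and binds the
# router port itself, so it comes up ACTIVE.
# `nt` is an authenticated neutronclient.v2_0.client.Client, e.g.
#   from neutronclient.v2_0 import client
#   nt = client.Client(session=sess)   # sess: an existing keystoneauth1 session
router_id = 'a616dcc0-1f72-4424-9494-4d13b42445ee'   # Bino-rtr-01
subnet_id = 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9'   # subnet of Bino-net-02

iface = nt.add_interface_router(router_id, {'subnet_id': subnet_id})
print(iface)

# The interface can later be detached with:
#   nt.remove_interface_router(router_id, {'subnet_id': subnet_id})
--------------------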
> > ----------- > > >>> myrouter > {'status': 'ACTIVE', 'external_gateway_info': {'network_id': 'd10dd06a-0425-49eb-a8ba-85abf55ac0f5', 'enable_snat': True, 'external_fixed_ips': [{'subnet_id': '4639e018-1cc1-49cc-89d4-4cad49bd4b89', 'ip_address': '10.10.10.2'}]}, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'description': '', 'tags': [], 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:24:41Z', 'admin_state_up': True, 'distributed': False, 'updated_at': '2018-12-07T07:58:02Z', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'flavor_id': None, 'revision_number': 10, 'routes': [], 'ha': False, 'id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-rtr-01'} > > ```>>> mynetworks=[n for n in netlist['networks'] if n['name'].startswith('Bino')] > >>> mynetworks > [{'provider:physical_network': 'viavlan', 'ipv6_address_scope': None, 'revision_number': 7, 'port_security_enabled': True, 'mtu': 1500, 'id': '942675f6-fd5e-4bb2-ba43-487be992ff4e', 'router:external': False, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'ipv4_address_scope': None, 'shared': False, 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'ACTIVE', 'subnets': ['5373bc35-a90f-4793-a912-801920e47769'], 'description': '', 'tags': [], 'updated_at': '2018-12-07T07:50:27Z', 'provider:segmentation_id': 2037, 'name': 'Bino-net-01', 'admin_state_up': True, 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:28:00Z', 'provider:network_type': 'vlan'}, {'provider:physical_network': 'viavlan', 'ipv6_address_scope': None, 'revision_number': 6, 'port_security_enabled': True, 'mtu': 1500, 'id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'router:external': False, 'availability_zone_hints': [], 'availability_zones': ['nova'], 'ipv4_address_scope': None, 'shared': False, 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'ACTIVE', 'subnets': ['c71a86a3-f9a8-4e60-828e-5d6f87e58ac9'], 'description': '', 'tags': [], 'updated_at': '2018-12-07T07:49:59Z', 'provider:segmentation_id': 2002, 'name': 'Bino-net-02', 'admin_state_up': True, 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:29:52Z', 'provider:network_type': 'vlan'}] > >>> print([n['name'] for n in mynetworks]) > ['Bino-net-01', 'Bino-net-02'] > >>> mynetwork=mynetworks[1] > >>> mynetwork['name'] > 'Bino-net-02' > >>> body_value = { > ... 'port': { > ... 'admin_state_up': True, > ... 'device_id': myrouter['id'], > ... 'name': 'Bino-rtr-01-02', > ... 'network_id': mynetwork['id'], > ... } > ... 
} > >>> response = nt.create_port(body=body_value) > >>> response > {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'updated_at': '2018-12-10T01:44:10Z', 'device_owner': '', 'revision_number': 1, 'binding:profile': {}, 'port_security_enabled': True, 'fixed_ips': [{'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.6'}], 'id': 'a2a337af-3a37-4a9a-a4f1-ffaacdf9f881', 'security_groups': ['4bed540c-266d-4cc2-8225-3e02ccd89ff1'], 'binding:vif_details': {}, 'binding:vif_type': 'unbound', 'mac_address': 'fa:16:3e:05:aa:b5', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'DOWN', 'binding:host_id': '', 'description': '', 'tags': [], 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-rtr-01-02', 'admin_state_up': True, 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-10T01:44:10Z', 'binding:vnic_type': 'normal'}} > >>> response['port']['status'] > 'DOWN' > ------ > > Looks like the port status still 'DOWN'. > > Sincerely > -bino- > > > On Sat, Dec 8, 2018 at 9:42 PM Slawomir Kaplonski wrote: > Hi, > > You shouldn’t create port with router as device owner. If You want to connect port or subnet to router, there is proper method for that: https://developer.openstack.org/api-ref/network/v2/?expanded=add-interface-to-router-detail#add-interface-to-router > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > Wiadomość napisana przez Bino Oetomo w dniu 08.12.2018, o godz. 02:42: > > > > Dear All. > > > > I have no problem configuring network via Hosrizon-dasboard. > > > > I start playing with python for some task. > > I got succsess in creating network. > > I create a router, with one interface connected to existing 'ext-network' .. success. > > > > But I fail when I try to add a port to that router for connecting to existing internal network. > > > > Here is part of my python shell. 
> > > > -------------------- > > body_value = { > > > > > > 'port': { > > > > > > 'admin_state_up': True, > > > > > > 'device_owner': 'network:router_interface', > > > > > > 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', > > > > > > 'name': 'Bino-net-01-02', > > > > > > 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', > > > > > > 'binding:host_id': 'rocky-controller.mynet.net', > > > > > > 'binding:profile': {}, > > > > > > 'binding:vnic_type': 'normal', > > > > > > 'fixed_ips': [{ > > > > > > 'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', > > > > > > 'ip_address': '192.168.202.254' > > > > > > }], > > > > > > } > > } > > > > > > response > > = nt.create_port(body=body_value) > > > > response > > > > > > {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'updated_at': '2018-12-07T08:10:24Z', 'device_owner': 'network:router_interface', 'revision_number': 1, 'port_security_enabled': False, 'binding:profile': {}, 'fixed_ips': [{'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.254'}], 'id': 'd02eb0f0-663f-423f-af4e-c969ccb9dc25', 'security_groups': [], 'binding:vif_details': {'port_filter': True, 'datapath_type': 'system', 'ovs_hybrid_plug': True}, 'binding:vif_type': 'ovs', 'mac_address': 'fa:16:3e:e2:9d:8f', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'DOWN', 'binding:host_id': 'rocky-controller.mynet.net', 'description': '', 'tags': [], 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-net-01-02', 'admin_state_up': True, 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T08:10:24Z', 'binding:vnic_type': 'normal'}} > > > > -------------------- > > 'status' always 'DOWN'. > > > > Kindly please give me some clue to fix this problem > > > > Note : Actualy I post same question on stackexchange : https://stackoverflow.com/questions/53665795/openstack-python-neutronclient-creating-port-but-down > > > > Sincerely > > -bino- > From bino at jogjacamp.co.id Mon Dec 10 08:57:44 2018 From: bino at jogjacamp.co.id (Bino Oetomo) Date: Mon, 10 Dec 2018 15:57:44 +0700 Subject: [Neutron] In-Reply-To: <67369879-1E4B-4B84-9744-ED094F499F21@redhat.com> References: <05CDC301-26EF-4554-B595-9FB950BD8731@redhat.com> <67369879-1E4B-4B84-9744-ED094F499F21@redhat.com> Message-ID: Dear Sir. You are right. It works like a charm I realy appreciate your help Sincerely -bino- On Mon, Dec 10, 2018 at 3:02 PM Slawomir Kaplonski wrote: > Hi, > > If You want to attach port/subnet to router, please don’t do this with > „create_port()” method only - it’s not enough. > You should use add_interface_to_router() method if You are using Openstack > SDK: > https://github.com/openstack/openstacksdk/blob/master/openstack/network/v2/_proxy.py#L2441 > or add_interface_router() method from neutron client: > https://github.com/openstack/python-neutronclient/blob/d8cb1472c867d2a308e26abea0b0a01f1d6629a1/neutronclient/v2_0/client.py#L918 > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > Wiadomość napisana przez Bino Oetomo w dniu > 10.12.2018, o godz. 03:04: > > > > Dear All, > > > > As suggested by Slawek Kaplonski, I tried this. 
> > > > ----------- > > > > >>> myrouter > > {'status': 'ACTIVE', 'external_gateway_info': {'network_id': > 'd10dd06a-0425-49eb-a8ba-85abf55ac0f5', 'enable_snat': True, > 'external_fixed_ips': [{'subnet_id': > '4639e018-1cc1-49cc-89d4-4cad49bd4b89', 'ip_address': '10.10.10.2'}]}, > 'availability_zone_hints': [], 'availability_zones': ['nova'], > 'description': '', 'tags': [], 'tenant_id': > 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:24:41Z', > 'admin_state_up': True, 'distributed': False, 'updated_at': > '2018-12-07T07:58:02Z', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', > 'flavor_id': None, 'revision_number': 10, 'routes': [], 'ha': False, 'id': > 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-rtr-01'} > > > > ```>>> mynetworks=[n for n in netlist['networks'] if > n['name'].startswith('Bino')] > > >>> mynetworks > > [{'provider:physical_network': 'viavlan', 'ipv6_address_scope': None, > 'revision_number': 7, 'port_security_enabled': True, 'mtu': 1500, 'id': > '942675f6-fd5e-4bb2-ba43-487be992ff4e', 'router:external': False, > 'availability_zone_hints': [], 'availability_zones': ['nova'], > 'ipv4_address_scope': None, 'shared': False, 'project_id': > 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'ACTIVE', 'subnets': > ['5373bc35-a90f-4793-a912-801920e47769'], 'description': '', 'tags': [], > 'updated_at': '2018-12-07T07:50:27Z', 'provider:segmentation_id': 2037, > 'name': 'Bino-net-01', 'admin_state_up': True, 'tenant_id': > 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T07:28:00Z', > 'provider:network_type': 'vlan'}, {'provider:physical_network': 'viavlan', > 'ipv6_address_scope': None, 'revision_number': 6, 'port_security_enabled': > True, 'mtu': 1500, 'id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', > 'router:external': False, 'availability_zone_hints': [], > 'availability_zones': ['nova'], 'ipv4_address_scope': None, 'shared': > False, 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': > 'ACTIVE', 'subnets': ['c71a86a3-f9a8-4e60-828e-5d6f87e58ac9'], > 'description': '', 'tags': [], 'updated_at': '2018-12-07T07:49:59Z', > 'provider:segmentation_id': 2002, 'name': 'Bino-net-02', 'admin_state_up': > True, 'tenant_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': > '2018-12-07T07:29:52Z', 'provider:network_type': 'vlan'}] > > >>> print([n['name'] for n in mynetworks]) > > ['Bino-net-01', 'Bino-net-02'] > > >>> mynetwork=mynetworks[1] > > >>> mynetwork['name'] > > 'Bino-net-02' > > >>> body_value = { > > ... 'port': { > > ... 'admin_state_up': True, > > ... 'device_id': myrouter['id'], > > ... 'name': 'Bino-rtr-01-02', > > ... 'network_id': mynetwork['id'], > > ... } > > ... 
} > > >>> response = nt.create_port(body=body_value) > > >>> response > > {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], > 'updated_at': '2018-12-10T01:44:10Z', 'device_owner': '', > 'revision_number': 1, 'binding:profile': {}, 'port_security_enabled': True, > 'fixed_ips': [{'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', > 'ip_address': '192.168.202.6'}], 'id': > 'a2a337af-3a37-4a9a-a4f1-ffaacdf9f881', 'security_groups': > ['4bed540c-266d-4cc2-8225-3e02ccd89ff1'], 'binding:vif_details': {}, > 'binding:vif_type': 'unbound', 'mac_address': 'fa:16:3e:05:aa:b5', > 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', 'status': 'DOWN', > 'binding:host_id': '', 'description': '', 'tags': [], 'device_id': > 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-rtr-01-02', > 'admin_state_up': True, 'network_id': > 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': > 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-10T01:44:10Z', > 'binding:vnic_type': 'normal'}} > > >>> response['port']['status'] > > 'DOWN' > > ------ > > > > Looks like the port status still 'DOWN'. > > > > Sincerely > > -bino- > > > > > > On Sat, Dec 8, 2018 at 9:42 PM Slawomir Kaplonski > wrote: > > Hi, > > > > You shouldn’t create port with router as device owner. If You want to > connect port or subnet to router, there is proper method for that: > https://developer.openstack.org/api-ref/network/v2/?expanded=add-interface-to-router-detail#add-interface-to-router > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > Wiadomość napisana przez Bino Oetomo w dniu > 08.12.2018, o godz. 02:42: > > > > > > Dear All. > > > > > > I have no problem configuring network via Hosrizon-dasboard. > > > > > > I start playing with python for some task. > > > I got succsess in creating network. > > > I create a router, with one interface connected to existing > 'ext-network' .. success. > > > > > > But I fail when I try to add a port to that router for connecting to > existing internal network. > > > > > > Here is part of my python shell. 
> > > > > > -------------------- > > > body_value = { > > > > > > > > > 'port': { > > > > > > > > > 'admin_state_up': True, > > > > > > > > > 'device_owner': 'network:router_interface', > > > > > > > > > 'device_id': 'a616dcc0-1f72-4424-9494-4d13b42445ee', > > > > > > > > > 'name': 'Bino-net-01-02', > > > > > > > > > 'network_id': 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', > > > > > > > > > 'binding:host_id': 'rocky-controller.mynet.net', > > > > > > > > > 'binding:profile': {}, > > > > > > > > > 'binding:vnic_type': 'normal', > > > > > > > > > 'fixed_ips': [{ > > > > > > > > > 'subnet_id': 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', > > > > > > > > > 'ip_address': '192.168.202.254' > > > > > > > > > }], > > > > > > > > > } > > > } > > > > > > > > > response > > > = nt.create_port(body=body_value) > > > > > > response > > > > > > > > > {'port': {'allowed_address_pairs': [], 'extra_dhcp_opts': [], > 'updated_at': '2018-12-07T08:10:24Z', 'device_owner': > 'network:router_interface', 'revision_number': 1, 'port_security_enabled': > False, 'binding:profile': {}, 'fixed_ips': [{'subnet_id': > 'c71a86a3-f9a8-4e60-828e-5d6f87e58ac9', 'ip_address': '192.168.202.254'}], > 'id': 'd02eb0f0-663f-423f-af4e-c969ccb9dc25', 'security_groups': [], > 'binding:vif_details': {'port_filter': True, 'datapath_type': 'system', > 'ovs_hybrid_plug': True}, 'binding:vif_type': 'ovs', 'mac_address': > 'fa:16:3e:e2:9d:8f', 'project_id': 'c0b89f614b5a457cb5acef8fe8c2b320', > 'status': 'DOWN', 'binding:host_id': 'rocky-controller.mynet.net', > 'description': '', 'tags': [], 'device_id': > 'a616dcc0-1f72-4424-9494-4d13b42445ee', 'name': 'Bino-net-01-02', > 'admin_state_up': True, 'network_id': > 'dfc8ed54-106d-48d0-8b45-cbd3cf0fbb79', 'tenant_id': > 'c0b89f614b5a457cb5acef8fe8c2b320', 'created_at': '2018-12-07T08:10:24Z', > 'binding:vnic_type': 'normal'}} > > > > > > -------------------- > > > 'status' always 'DOWN'. > > > > > > Kindly please give me some clue to fix this problem > > > > > > Note : Actualy I post same question on stackexchange : > https://stackoverflow.com/questions/53665795/openstack-python-neutronclient-creating-port-but-down > > > > > > Sincerely > > > -bino- > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From honjo.rikimaru at po.ntt-tx.co.jp Mon Dec 10 09:23:26 2018 From: honjo.rikimaru at po.ntt-tx.co.jp (Rikimaru Honjo) Date: Mon, 10 Dec 2018 18:23:26 +0900 Subject: [horizon]Javascript files contains "conf" in their file name Message-ID: Hello, I have a question about configuration of horizon. There are some javascript files contains "conf" in their file name in horizon repository.[1] Are they configuration files like local_settings.py for panels implemented in Angular? Sometimes, should I edit these files if I'd like to customize my horizon? [1] e.g. horizon/static/framework/conf/conf.js, openstack_dashboard/static/app/core/conf/conf.module.js Best Regards, -- _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Rikimaru Honjo E-mail:honjo.rikimaru at po.ntt-tx.co.jp From ifatafekn at gmail.com Mon Dec 10 10:28:01 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Mon, 10 Dec 2018 12:28:01 +0200 Subject: [vitrage] Removing the static-physical datasource Message-ID: Hi, Just wanted to give you a heads-up that we are about to remove the static-physical datasource [1][2] that was deprecated in Queens [3]. Please use the static datasource [4] instead. 
Thanks, Ifat [1] https://review.openstack.org/#/c/624033/ [2] https://review.openstack.org/#/c/624031/ [3] https://docs.openstack.org/releasenotes/vitrage/queens.html [4] https://docs.openstack.org/vitrage/latest/contributor/static-config.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Mon Dec 10 11:05:17 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 10 Dec 2018 12:05:17 +0100 Subject: [loci][openstack-helm] How to add some agent to loci images In-Reply-To: References: <2ED125CF-8AA3-4D81-8C0F-FDF8ED1EF2F0@openstack.org> Message-ID: <824f452a87980106abca82287074083c8bb69201.camel@evrard.me> > > > On Fri, 2018-12-07 at 10:21 +0900, SIRI KIM wrote: > > our objetive: add neutron-lbaas & neutron-fwaas chart to openstack- > helm > upstream. > problem: we need this loci image to push lbaas and fwaas chart into > upstream repo. In order to pass osh gating, we need to have neutron- > lbaas & > > > neutron-fwaas agent image available for openstack-helm project. Yeah this seems more an OSH issue on how to properly leverage's LOCI feature (and publishing a series of images) than a request to LOCI directly I'd say :) I am currently refactoring how OSH is building LOCI images, but the idea of it would be pretty generic. It would be extensible by having a few extra environment variables before calling the build script (which basically do a series of docker builds). In those environment variables you could pass which projects to build, which variables to pass to LOCI (for example, "build me an octavia image with x"), what would be the registry of your choice. The result would be an updated/published image of your choice (on your dockerhub's user for example), which could then be used in your chart. Note: I don't think it's a good idea to build lbaas nowadays, as it's kinda the past :) Regards, Jean-Philippe Evrard (evrardjp) From smooney at redhat.com Mon Dec 10 12:20:03 2018 From: smooney at redhat.com (Sean Mooney) Date: Mon, 10 Dec 2018 12:20:03 +0000 Subject: [Neutron][ovs-dpdk] In-Reply-To: <889103638.17479349.1544372137196.JavaMail.zimbra@brilliant.com.bd> References: <889103638.17479349.1544372137196.JavaMail.zimbra@brilliant.com.bd> Message-ID: <4cd66cc1f4352baceb7516e432c9d872a5d40d72.camel@redhat.com> On Sun, 2018-12-09 at 22:15 +0600, Zilhazur Rahman wrote: > Hi > > I am writing for the first time to the mailing list of openstack, I am trying to deploy ovs-dpdk to have better > traffic throughput for NFV. Could you please share any tutorial link for standard ovs-dpdk deployment. On the other > hand, openstack site says this " Expect performance degradation of services using tap devices: these devices do not > support DPDK. Example services include DVR, FWaaS, or LBaaS. " but I need to have LBaaS and DVR ( to have direct > external connectivity on compute) , what could be done in this case? LBaaS perfromance will only be impacted in the legacy model where the loadblancer was network nodes using the linux kernel. if you are usign LBaas in the default mode where it creates a nova vm and run haproxy or another loadblancer on it then it will preform well provided the vm uses a hugepage backed flavor. if you do not use hugepages you will have no network connectivity. 
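As an illustration of the hugepage-backed flavor requirement mentioned above, such a flavor can be set up roughly as follows (a sketch using python-novaclient; the flavor name and sizes are arbitrary, and `sess` is assumed to be an existing keystoneauth1 session):
--------------------
from novaclient import client as nova_client

# `sess` is assumed to be an already-authenticated keystoneauth1 session.
nova = nova_client.Client('2.1', session=sess)

# Sizes are arbitrary; the key piece is the hw:mem_page_size extra spec,
# which makes guests booted from this flavor hugepage-backed so that
# vhost-user/OVS-DPDK ports can pass traffic.
flavor = nova.flavors.create('m1.dpdk', ram=4096, vcpus=2, disk=20)
flavor.set_keys({'hw:mem_page_size': 'large'})
--------------------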
DVR is tricker as ther is really no way to avoid the fact it does routing in the kernel which not only disables all dpdk accleration but causes all dpdk acllerated port to experice a performance degrdation as it is more costly to service non dpdk interface therefor consuming more cpu resouces that would have been used to service the dpdk cores. using an sdn conttoler like ovn is one way to avoid this over head as it will route using openflow. i know that some people in neutron were looking at enableing openflow routing in ml2 ovs but i dont know what the state of that is. in terms of installation juju and triplo both have support for ovs-dpdk. kolla-ansible also has some support but i have not been maintaining it for the last 14 months or so as such i dont know how stable it is. from a netron perspective the only thing you need to change in the neutron configs is the ovs datapath to netdev in the ovs section of /etc/neutron/plugins/ml2/ml2_conf.ini on the nova side you shoule create a seperate host aggreage and new flavor with hugepages. from an ovs perspecitve if you are doing this by hand there are some docs directly form ovs to compile and install it from source https://github.com/openvswitch/ovs/blob/master/Documentation/intro/install/dpdk.rst but i would generally recommend installing it from your distro it will reduce teh performace a little but it will be better tested and easier to maintian. the main things to know is that ovs-dpdk will also require hugepages, you will have to configure a pmd core mask for the dpdk threads to use and all bridge must be of datapath type netdev with patch ports interconnecting them. finally if you want to use vxlan or other tunnels you must assign the tunnel endpoint ip to the ovs bridge with the dpdk phyical interface, otherwise your tunneled traffic will not be acclerated. > > > Regards > Zilhaz > From joshua.hesketh at gmail.com Mon Dec 10 12:32:27 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Mon, 10 Dec 2018 23:32:27 +1100 Subject: On trust and risk, Australia's Assistance and Access Bill In-Reply-To: References: <20181207190926.z6fnjnevoh66yrqf@yuggoth.org> Message-ID: Thank you all for your support. It is a difficult and unfortunate state of affairs for Australia. To me this highlights the need and strength of open source and I am proud to be a part of this community. Cheers, Josh On Sat, Dec 8, 2018 at 6:26 AM Michael McCune wrote: > On Fri, Dec 7, 2018 at 2:12 PM Jeremy Stanley wrote: > > > > I've seen concern expressed in OpenStack and other free/libre open > > source software communities over the recent passage of the > > "Assistance and Access Bill 2018" by the Australian Parliament, and > > just want to say that I appreciate the trust relationships we've all > > built with our colleagues in many countries, including Australia. As > > someone who doesn't particularly agree with many of the laws passed > > in his own country, while I'm not going to encourage civil > > disobedience, I do respect that many have shown preference for it > > over compelled compromise of our community's established trust. I, > > for one, don't wish to return to the "bad old days" of the crypto > > wars, when major projects like OpenBSD refused contributions from > > citizens and residents of the USA. It's bad for project morale, > > excludes valuable input from people with a variety of perspectives, > > and it's just downright inefficient too. 
> > > > The unfortunate truth is that anyone can be pressured at any time to > > derail, backdoor or otherwise compromise software and systems. A new > > law in one country doesn't change that. There are frequent news > > stories about government agencies installing covert interfaces in > > enterprise and consumer electronic devices alike through compulsion > > of those involved in their programming, manufacture and > > distribution. There's evidence of major standards bodies being > > sidetracked and steered into unwittingly approving flawed > > specifications which influential actors already know ways to > > circumvent. Over the course of my career I've had to make personal > > choices regarding installation and maintenance of legally-mandated > > systems for spying on customers and users. All we can ever hope for > > is that the relationships, systems and workflows we create are as > > resistant as possible to these sorts of outside influences. > > > > Sure, ejecting people from important or sensitive positions within > > the project based on their nationality might be a way to send a > > message to a particular government, but the problem is bigger than > > just one country and we'd really all need to be removed from our > > posts for pretty much the same reasons. This robust community of > > trust and acceptance we've fostered is not a risk, it's another line > > of defense against erosion of our ideals and principles. Entrenched > > concepts like open design and public review help to shield us from > > these situations, and while there is no perfect protection it seems > > to me that secret compromise under our many watchful eyes is a much > > harder task than doing so behind the closed doors of proprietary > > systems development. > > > > I really appreciate all the Australians who toil tirelessly to make > > OpenStack better, and am proud to call them friends and colleagues. > > I certainly don't want them to feel any need to resign from their > > valuable work because they're worried the rest of us can no longer > > trust them. > > -- > > Jeremy Stanley > > ++ > > well said. thank you for stating this so eloquently. > > peace o/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bence.romsics at gmail.com Mon Dec 10 13:49:47 2018 From: bence.romsics at gmail.com (Bence Romsics) Date: Mon, 10 Dec 2018 14:49:47 +0100 Subject: [neutron] bug deputy report for week of December 3 Message-ID: Hi Neutrinos, As I was the bug deputy, here comes the report for last week. (undecided) https://bugs.launchpad.net/neutron/+bug/1807153 Race condition in metering agent when creating iptable managers for router namespaces. In need of attention from experts of metering and dvr. haleyb volunteered in irc. https://bugs.launchpad.net/neutron/+bug/1807157 Metering doesn't work for DVR routers on compute nodes. In need of attention from experts of metering and dvr. haleyb volunteered in irc. (critical) https://bugs.launchpad.net/neutron/+bug/1807239 Race condition with DPDK + trunk ports when instance port is deleted then quickly recreated. Fix proposed: https://review.openstack.org/623275 (high) none (medium) https://bugs.launchpad.net/neutron/+bug/1806770 DHCP Agent should not release DHCP lease when client ID is not set on port. Fix proposed: https://review.openstack.org/623066 https://bugs.launchpad.net/neutron/+bug/1807396 With many VMs on the same tenant, the L3 ip neigh add is too slow. Fix proposed: https://review.openstack.org/581360. 
Change owner is asking for help with (ie. takeover of) the change. (low) https://bugs.launchpad.net/neutron/+bug/1805991 IP Route: subnet's host_houtes attribute and router's routes accept the invalidate subnets. Fix proposed: https://review.openstack.org/623420 https://bugs.launchpad.net/neutron/+bug/1806032 neutron doesn't prevent the network update from external to internal when floatingIPs present. First fix proposal was abandoned. Bence Romsics will propose another fix. https://bugs.launchpad.net/neutron/+bug/1807128 Table name in "add_ip_rule" can be a string. Fix proposed: https://review.openstack.org/623182 https://bugs.launchpad.net/neutron/+bug/1807421 Open vSwitch hardware offloading in neutron updates. Fishing for a new contributor over doc improvement. :-) https://bugs.launchpad.net/neutron/+bug/1805132 bulk creation of security group rules fails StaleDataError. Confirmed, but not yet analyzed. (incomplete) https://bugs.launchpad.net/neutron/+bug/1807382 DNSMASQ wrong addresses allocated after changing DHCP Clients between Neutron vRouters NET https://bugs.launchpad.net/neutron/+bug/1807483 networksegments table in neutron can not be cleared automatically https://bugs.launchpad.net/neutron/+bug/1807673 Networking (neutron) concepts in neutron (rfe) https://bugs.launchpad.net/neutron/+bug/1806052 Changing segmentation_id of existing network should be allowed https://bugs.launchpad.net/neutron/+bug/1806316 Add RPC query API to l2pop for FDB resync https://bugs.launchpad.net/neutron/+bug/1806390 Distributed DHCP agent Cheers, Bence Romsics irc: rubasov From fungi at yuggoth.org Mon Dec 10 14:01:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Dec 2018 14:01:56 +0000 Subject: [Release-job-failures] Tag of openstack/freezer failed In-Reply-To: <20181210050922.GC32339@thor.bakeyournoodle.com> References: <20181210050922.GC32339@thor.bakeyournoodle.com> Message-ID: <20181210140155.3np3gfjzgbwlxpwm@yuggoth.org> On 2018-12-10 16:09:22 +1100 (+1100), Tony Breeds wrote: [...] > I think this failed in create-afs-token [...] I concur, and the most common cause for that particular error is if the afsd daemon for OpenAFS isn't running (failed to start, crashed, et cetera). Indeed, this job ran from the ze12 Zuul executor we just brought on line at the end of last week, and it looks like we missed a bootstrapping step. I've brought it back down for the time being while I make sure it's got the right kernel booted and the corresponding openafs.ko LKM built/loaded. Thanks for spotting this!!! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From bathanhtlu at gmail.com Mon Dec 10 14:20:21 2018 From: bathanhtlu at gmail.com (=?UTF-8?B?VGjDoG5oIE5ndXnhu4VuIELDoQ==?=) Date: Mon, 10 Dec 2018 21:20:21 +0700 Subject: [oslo] How to use library "oslo.messaging" Message-ID: Dear all, I have a question about "library oslo.messaging". How can i use this librabry to write simple program listen event on Openstack (like Nova, Cinder)? I had read on docs, nova support "oslo_messaging_notifications" to send message via RabbitMQ, so I’m try to use this library but it seem very hard for me. *Nguyễn Bá Thành* *Mobile*: 0128 748 0391 *Email*: bathanhtlu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
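A minimal consumer along those lines can be put together with the notification listener API. The sketch below assumes the services are configured with driver = messagingv2 in their [oslo_messaging_notifications] section and publish on the default 'notifications' topic; the transport URL is a placeholder:
--------------------
import time

from oslo_config import cfg
import oslo_messaging


class NotificationEndpoint(object):
    # Invoked for INFO-priority notifications, e.g. compute.instance.* events.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print(publisher_id, event_type, payload)


# Placeholder URL -- point this at the same RabbitMQ the services use.
transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
targets = [oslo_messaging.Target(topic='notifications')]

listener = oslo_messaging.get_notification_listener(
    transport, targets, [NotificationEndpoint()], executor='threading')
listener.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    listener.stop()
    listener.wait()
--------------------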
URL: From luisa.arches at nokia.com Mon Dec 10 14:35:03 2018 From: luisa.arches at nokia.com (Arches, Maria (Nokia - HU/Budapest)) Date: Mon, 10 Dec 2018 14:35:03 +0000 Subject: [nova] Fail to detach vhostuser vif type from VM Message-ID: Hello all, I tried to detach a vhostuser vif from VM. My network setup is neutron+ ovs dpdk and using Libvirt/KVM. I'm using Openstack Queens. Network interface was unplugged but was not removed from libvirt/VM. I also created a bug for this in Launchpad. Wrote here to find/discuss possible solutions. https://bugs.launchpad.net/nova/+bug/1807340 I got the following logs from nova-compute: 2018-12-10 12:29:49.008 3223 INFO os_vif [req-ffd43a33-b0b9-4b4d-a7ce-700b7aee2822 7f6d35b7b85042daa470658debca14c3 b8668325f4fa4b9085fd2ac43170dd42 - default default] Successfully unplugged vif VIFVHostUser(active=True,address=fa:16:3e:ca:4e:16,has_traffic_filtering=False,id=90ca01ab-9a43-4d5f-b0bd-cb334d07e22b,mode='server',network=Network(e108697b-f031-45cb-bf47-6031c69afd4b),path='/var/run/openvswitch/vhu90ca01ab-9a',plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='vhu90ca01ab-9a') 2018-12-10 12:29:49.011 3223 WARNING nova.virt.libvirt.driver [req-ffd43a33-b0b9-4b4d-a7ce-700b7aee2822 7f6d35b7b85042daa470658debca14c3 b8668325f4fa4b9085fd2ac43170dd42 - default default] [instance: a4dd7fb5-d234-4b2a-9ebd-b1e3f657ac3e] Detaching interface fa:16:3e:ca:4e:16 failed because the device is no longer found on the guest. Tried to investigate further the code and it seems that get_interface_cfg() does not find any interface that matches because target_dev is not the same. https://github.com/openstack/nova/blob/0f4cecb70c2d11593f18818523a61059c0196d88/nova/virt/libvirt/guest.py#L247 vhostuser backend does not set any value for target_dev, while LibvirtGuestInterface does. https://github.com/openstack/nova/blob/0f4cecb70c2d11593f18818523a61059c0196d88/nova/virt/libvirt/designer.py#L151 https://github.com/openstack/nova/blob/0f4cecb70c2d11593f18818523a61059c0196d88/nova/virt/libvirt/config.py#L1467 First question is, is it on purpose that vhostuser vif does not set any target_dev? Possible solution that I can think of: 1. Add setting of target_dev in set_vif_host_backend_vhostuser_config() 2. Skipping target_dev check when vif type is vhostuser. I also noticed, that when a vhostuser interface type is attached to VM during run-time there was no target dev present in the virsh dumpxml. First interface is created during creation. Second one, was attached run-time.
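To illustrate the failure mode in isolation (purely schematic, not nova's actual matching code): an equality-based lookup misses the interface as soon as only one side of the comparison carries a target dev.
--------------------
# Schematic only -- not nova code. Two otherwise identical interface configs
# compare unequal when just one of them has a target dev set, so a lookup by
# config finds nothing and the detach is reported as "device is no longer
# found on the guest".
class FakeInterfaceCfg(object):
    def __init__(self, mac, target_dev=None):
        self.mac = mac
        self.target_dev = target_dev

    def __eq__(self, other):
        return (self.mac, self.target_dev) == (other.mac, other.target_dev)


requested = FakeInterfaceCfg('fa:16:3e:ca:4e:16')                    # no target dev
on_guest = FakeInterfaceCfg('fa:16:3e:ca:4e:16', 'vhu90ca01ab-9a')   # target dev set

print(requested == on_guest)  # False
--------------------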
From doug at doughellmann.com Mon Dec 10 15:02:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 10 Dec 2018 10:02:09 -0500 Subject: [Release-job-failures] Tag of openstack/freezer failed In-Reply-To: <20181210050922.GC32339@thor.bakeyournoodle.com> References: <20181210050922.GC32339@thor.bakeyournoodle.com> Message-ID: Tony Breeds writes: > On Mon, Dec 10, 2018 at 04:49:47AM +0000, zuul at openstack.org wrote: >> Build failed. >> >> - publish-openstack-releasenotes-python3 http://logs.openstack.org/31/31314e5e707bfe4933e1046dd0a18f1daa8cca6c/tag/publish-openstack-releasenotes-python3/bc14e14/ : POST_FAILURE in 2m 31s > > Looking at the logs I think this failed in create-afs-token[1]. If I > Understand correctly this means the releasenotes weren't published but > they will upon the next successful publish run. > > Yours Tony. > > [1] http://logs.openstack.org/31/31314e5e707bfe4933e1046dd0a18f1daa8cca6c/tag/publish-openstack-releasenotes-python3/bc14e14/job-output.txt.gz#_2018-12-10_04_49_23_454869 > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures Yes, that job should run again as part of the post pipeline when another patch is merged in the openstack/freezer repository. The published docs should reflect the newly tagged version. -- Doug From doug at doughellmann.com Mon Dec 10 15:08:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 10 Dec 2018 10:08:23 -0500 Subject: [oslo] How to use library "oslo.messaging" In-Reply-To: References: Message-ID: Thành Nguyễn Bá writes: > Dear all, > I have a question about "library oslo.messaging". How can i use this > librabry to write simple program listen event on Openstack (like Nova, > Cinder)? I had read on docs, nova support "oslo_messaging_notifications" to > send message via RabbitMQ, so I’m try to use this library but it seem very > hard for me. > > *Nguyễn Bá Thành* > > *Mobile*: 0128 748 0391 > > *Email*: bathanhtlu at gmail.com There is a notification "listener" built into the library in the oslo_messaging.notify.listener module. The source file [1] includes a basic example of using it. You may also find the ceilometer source code interesting, since it consumes notifications using the "batch" listener [2]. [1] http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/notify/listener.py [2] http://git.openstack.org/cgit/openstack/ceilometer/tree/ceilometer/messaging.py -- Doug From juliaashleykreger at gmail.com Mon Dec 10 16:21:18 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 10 Dec 2018 08:21:18 -0800 Subject: [ironic] Meetings cancelled for the holdiays, Resuming January 7th Message-ID: Greetings ironic folks and all of those interested in ironic! As discussed in this week's meeting[1], we are cancelling our next three meetings and resuming our weekly meeting on January 7th. This time of year, the ironic community tends to enter a bit of a quiet period. Starting next week, contributor availability will begin to wind down for the holidays. Those of us that will be hanging around tend to shift gears to focused feature work and often pay a little less attention to IRC and the mailing list. If core reviewers are needed, the `ironic-cores` tag can be used in the #openstack-ironic channel, and we will do our best to marshal reviewers in the event of an emergency. 
During this time, our weekly priority review list[1] will go into autopilot. Contributors are welcome to add items they feel need review visibility and we will prune merged items out as needed. If there are any questions, please feel free to reach out to me or post to the mailing list. Thanks, -Julia [1] http://eavesdrop.openstack.org/meetings/ironic/2018/ironic.2018-12-10-15.00.log.html [2] https://etherpad.openstack.org/p/IronicWhiteBoard starting at line 112. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Dec 10 16:59:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 10 Dec 2018 10:59:46 -0600 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? In-Reply-To: <16796fe1b6c.c3e4e009192453.8132097322747769095@ghanshyammann.com> References: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> <1538382375.30016.0@smtp.office365.com> <1662fd96896.121bfdb2733507.842876450808135416@ghanshyammann.com> <81332221-db29-5525-6f3f-a000c93e4939@gmail.com> <16796fe1b6c.c3e4e009192453.8132097322747769095@ghanshyammann.com> Message-ID: <408eff57-be20-5a58-ef2d-f1f3d4b17bfc@gmail.com> On 12/10/2018 1:21 AM, Ghanshyam Mann wrote: > I am getting few failure on tempest-slow (tempest-multinode-full) for stable branches which might take time to fix and till > then let's keep nova-multiattach on stable branches and remove only for master. Bug https://bugs.launchpad.net/cinder/+bug/1807723/ is blocking removing the nova-multiattach job from master. Something is going on with TestMultiAttachVolumeSwap when there are two hosts. That test is marked slow but runs in nova-multiattach which also runs slow tests, and nova-multiattach is a single node job. With tempest change: https://review.openstack.org/#/c/606978/ TestMultiAttachVolumeSwap gets run in the tempest-slow job which is multi-node, and as a result I'm seeing race failures in that test. I've put my notes into the bug, but I need some help from Cinder at this point. I thought I had initially identified a very obvious problem in nova, but now I think nova is working as designed (although very confusing) and we're hitting a race during the swap where deleting the attachment record for the volume/server we swapped *from* is failing saying the target is still active. The fact we used to run this on a single-node job likely masked some race issue. As far as next steps, we could: 1. Move forward with removing nova-multiattach but skip TestMultiAttachVolumeSwap until bug 1807723 is fixed. 2. Try to workaround bug 1807723 in Tempest by creating the multiattach volume and servers on the same host (by pinning them to an AZ). 3. Add some retry logic to Cinder and hope it is just a race failure when the volume is connected to servers across different hosts. Ultimately this is the best scenario but I'm just not yet sure if that is really the issue or if something is really messed up in the volume backend when this fails where retries wouldn't help. -- Thanks, Matt From kgiusti at gmail.com Mon Dec 10 17:00:07 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 10 Dec 2018 12:00:07 -0500 Subject: [oslo] How to use library "oslo.messaging" In-Reply-To: References: Message-ID: On Mon, Dec 10, 2018 at 9:24 AM Thành Nguyễn Bá wrote: > Dear all, > I have a question about "library oslo.messaging". How can i use this > librabry to write simple program listen event on Openstack (like Nova, > Cinder)? 
I had read on docs, nova support "oslo_messaging_notifications" to > send message via RabbitMQ, so I’m try to use this library but it seem very > hard for me. > > *Nguyễn Bá Thành* > > *Mobile*: 0128 748 0391 > > *Email*: bathanhtlu at gmail.com > Nguyen - just following up to the wider group: I've got a few simple example clients/servers that do RPC and Notifications. Check them out on my git repo: https://github.com/kgiusti/oslo-messaging-clients If folks find them useful I'll be more than glad to move them to the oslo.messaging repo (an examples directory, perhaps?) -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Dec 10 17:16:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 10 Dec 2018 12:16:00 -0500 Subject: [oslo] How to use library "oslo.messaging" In-Reply-To: References: Message-ID: Ken Giusti writes: > On Mon, Dec 10, 2018 at 9:24 AM Thành Nguyễn Bá > wrote: > >> Dear all, >> I have a question about "library oslo.messaging". How can i use this >> librabry to write simple program listen event on Openstack (like Nova, >> Cinder)? I had read on docs, nova support "oslo_messaging_notifications" to >> send message via RabbitMQ, so I’m try to use this library but it seem very >> hard for me. >> >> *Nguyễn Bá Thành* >> >> *Mobile*: 0128 748 0391 >> >> *Email*: bathanhtlu at gmail.com >> > > Nguyen - just following up to the wider group: > > I've got a few simple example clients/servers that do RPC and Notifications. > Check them out on my git repo: > > https://github.com/kgiusti/oslo-messaging-clients > > If folks find them useful I'll be more than glad to move them to the > oslo.messaging repo (an examples directory, perhaps?) +1 -- maybe we can even include them in the published docs, if the code samples aren't too long > > -- > Ken Giusti (kgiusti at gmail.com) -- Doug From miguel at mlavalle.com Mon Dec 10 17:31:13 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 10 Dec 2018 11:31:13 -0600 Subject: [openstack-dev] [Neutron] Propose Nate Johnston for Neutron core In-Reply-To: References: Message-ID: Hi Stackers, It has been a week since I posted Nate's nomination to the Neutron core team and I have only received positive feedback. As a consequence, let's welcome Nate to the team. Congratulations and keep up the good work! Best regards Miguel On Tue, Dec 4, 2018 at 8:57 AM Brian Haley wrote: > Big +1 from me, keep up the great work Nate! > > -Brian > > On 12/3/18 4:38 PM, Miguel Lavalle wrote: > > Hi Stackers, > > > > I want to nominate Nate Johnston (irc:njohnston) as a member of the > > Neutron core team. Nate started contributing to Neutron back in the > > Liberty cycle. One of the highlight contributions of that early period > > is his collaboration with others to implement DSCP QoS rules > > (https://review.openstack.org/#/c/251738/). After a hiatus of a few > > cycles, we were lucky to have Nate come back to the community during the > > Rocky cycle. Since then, he has been a driving force in the adoption in > > Neutron of Oslo Versioned Objects, the "Run under Python 3 by default" > > community wide initiative and the optimization of ports creation in bulk > > to better support containerized workloads. He is a man with a wide range > > of interests, who is not afraid of expressing his opinions in any of > > them. 
The quality and number of his code reviews during the Stein cycle > > is on par with the leading members of the core team: > > http://stackalytics.com/?module=neutron-group. I especially admire his > > ability to forcefully handle disagreement in a friendly and easy going > > manner. > > > > On top of all that, he graciously endured me as his mentor over the past > > few months. For all these reasons, I think he is ready to join the team > > and we will be very lucky to have him as a fully voting core. > > > > I will keep this nomination open for a week as customary. > > > > Thank you > > > > Miguel > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Mon Dec 10 17:32:38 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 10 Dec 2018 11:32:38 -0600 Subject: [openstack-dev] [Neutron] Propose Hongbin Lu for Neutron core In-Reply-To: <726003a1-da0b-f542-00b2-a7900cc308d9@gmail.com> References: <726003a1-da0b-f542-00b2-a7900cc308d9@gmail.com> Message-ID: Hi Stackers, It has been a week since I posted Hongbin's nomination to the Neutron core team and I have only received positive feedback. As a consequence, let's welcome Nate to the team. Congratulations and keep up the good work! Best regards Miguel On Tue, Dec 4, 2018 at 8:57 AM Brian Haley wrote: > Big +1 from me as well! > > -Brian > > On 12/3/18 5:14 PM, Miguel Lavalle wrote: > > Hi Stackers, > > > > I want to nominate Hongbin Lu (irc: hongbin) as a member of the Neutron > > core team. Hongbin started contributing to the OpenStack community in > > the Liberty cycle. Over time, he made great contributions in helping the > > community to better support containers by being core team member and / > > or PTL in projects such as Zun and Magnum. An then, fortune played in > > our favor and Hongbin joined the Neutron team in the Queens cycle. Since > > then, he has made great contributions such as filters validation in the > > ReST API, PF status propagation to to VFs (ports) in SR-IOV environments > > and leading the forking of RYU into the os-ken OpenStack project, which > > provides key foundational functionality for openflow. He is not a man > > who wastes words, but when he speaks up, his opinions are full of > > insight. This is reflected in the quality of his code reviews, which in > > number are on par with the leading members of the core team: > > http://stackalytics.com/?module=neutron-group. Even though Hongbin > > leaves in Toronto, he speaks Mandarin Chinese and was born and raised in > > China. This is a big asset in helping the Neutron team to incorporate > > use cases from that part of the world. > > > > Hongbin spent the past few months being mentored by Slawek Kaplonski, > > who has reported that Hongbin is ready for the challenge of being a core > > team member. I (and other core team members) concur. > > > > I will keep this nomination open for a week as customary. > > > > Thank you > > > > Miguel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Dec 10 17:33:19 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Dec 2018 17:33:19 +0000 Subject: [infra] OpenDev feedback forum session summary Message-ID: <20181210173319.lnqn2ydshtmpwkjj@yuggoth.org> Wednesday afternoon at the OpenStack Summit we met to discuss the plan for the upcoming transition of the OpenStack Infrastructure team to an independent effort named OpenDev. 
Notes were recorded at https://etherpad.openstack.org/p/BER-opendev-feedback-and-missing-features and form the basis of the summary with follows. For those unfamiliar with this topic, the announcement at http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html provides some background and context. Much of what follows may be a reiteration of things also covered there, so please excuse any redundancy on my part. To start out, we (re)announced that we have chosen a name (OpenDev) and a domain (opendev.org), so can more effectively plan for DNS changes for most of the services we currently host under the "legacy" (for us) openstack.org domain. It was also pointed out that while we expect to maintain convenience redirects and aliases from old hostnames for all services we reasonably can so as to minimize disruption, there will still be some unavoidable discontinuities for users from time to time as we work through this. We talked for a bit about options for decentralizing GitHub repository mirroring so that the current team no longer needs to maintain it, and how to put it in control of people who want to manage those organizations there for themselves instead. Doing this with a job in Zuul's post pipeline (using encrypted secrets for authentication) was suggested as one possible means to avoid users all maintaining their own separate automation to accomplish the same thing. Interest in bare metal CI nodes in nodepool was brought up again. To reiterate, there's not really any technical reason we can't use them, more that prior offers to donate access to Nova/Ironic-managed nodes for this purpose never panned out. If you work for an organization which maintains a "bare metal cloud" we could reach over the open Internet and you'd consider carving out some of your capacity for our CI system, please do get in touch with us! We spent a bit of time covering user concerns about the transition to OpenDev and what reassurances we ought to provide. For starters, our regular contributors and root systems administrators will continue to be just as reachable and responsive as ever via IRC and mailing lists, even if the names of the channels and MLs may change as part of this transition. Similarly, our operations will remain as open and transparent as they are today... really nothing about how we maintain our systems is changing substantively as a part of the OpenDev effort, though certainly the ways in which we maintain our systems do still change and evolve over time as we seek to improve them so that will of course continue to be the case. Paul Belanger raised concerns that announcing OpenDev could result in a flood of new requests to host more projects. Well, really, I think that's what we hope for. I (apparently) pointed out that even when StackForge was first created back at the beginning of 2012, there wasn't much restriction as to what we would be willing to host. As interest in OpenDev spreads to new audiences, interest in participating in its maintenance and development should too grow. That said, we acknowledge that there are some scalability bottlenecks and manual/human steps in certain aspects of new project onboarding for now, so should be very up-front with any new projects about that fact. We're also not planning for any big marketing push to seek out additional projects at this point, but are happy to talk to any who discover us and are interested in the services we offer. 
Next, Paul Belanger brought up the possibility of "bring your own cloud" options for projects providing CI resources themselves. While we expect nodepool to have support for tenant-specific resources in the not-too-distant future, Jim Blair and Clark Boylan agreed the large pool of generic resources we operate with now is really where we see a lot of benefit and ability to drive efficiencies of scale. Then Monty Taylor talked for a while, according to the notes in the pad, and said things about earmarked resources potentially requiring a sort of "commons tax" or... something. Jim Rollenhagen asked whether we would potentially start to test and gate projects on GitHub too rather than just our Gerrit. Clark Boylan and Jim Blair noted that the current situation where we're testing pull requests for Kata's repositories is a bit of an experiment in that direction today and the challenges we've faced suggest that, while we'll likely continue to act as a third-party CI system for some projects hosted on GitHub (we're doing that with Ansible for example), we've discovered that trying to enforce gating in code review platforms we don't also control is not likely something we'll want to continue in the long term. It came up that our earlier ideas for flattening our Git namespace to reduce confusion and minimize future repository renames is now not looking as attractive. Instead we're probably going to need to embrace an explosion of new namespaces and find better ways to cope with the pain of renames in Gerrit as projects want to move between them over time. We're planning to only run one Gerrit for simplicity, so artificially creating "tenants" in it through prefixes in repository names is really the simplest solution we have to avoid different projects stepping on one another's toes with their name choices. Then we got into some policy discussions about namespace creation. Jim Blair talked about the potential to map Git/Gerrit repository namespaces to distinct Zuul tenants, and someone (might have been me? I was fairly jet-lagged and so don't really remember) asked about who decides what the requirements are for projects to create repositories in a particular namespace. In the case of OpenStack, the answer is almost certainly the OpenStack Technical Committee or at least some group to whom they delegate that responsibility. The OpenStack TC needs to discuss fairly early what its policies are for the "openstack" namespace (whether existing unofficial projects will be allowed to remain, whether addition of new unofficial projects will be allowed there) as well as whether it wants to allow creation of multiple separate namespaces for official OpenStack projects. The suggestion of nested "deep" namespaces like openstack/nova/nova came up at this point too. We also resolved that we need to look back into the project rename plugin for Gerrit. The last time we evaluated it, there wasn't much there. We've heard it's improved with newer Gerrit releases, but if it's still lacking we might want to contribute to making it more effective so we can handle the inevitable renames more easily in the future. And finally, as happens with most forum sessions, we stopped abruptly because we ran over and it was Kendall Nelson's turn to start getting ops feedback for the Contributor Guide. ;) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From miguel at mlavalle.com Mon Dec 10 17:36:59 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 10 Dec 2018 11:36:59 -0600 Subject: [openstack-dev] [Neutron] Propose Hongbin Lu for Neutron core In-Reply-To: References: <726003a1-da0b-f542-00b2-a7900cc308d9@gmail.com> Message-ID: Hi Stackers, It has been a week since I posted Hongbin's nomination to the Neutron core team and I have only received positive feedback. As a consequence, let's welcome Hongbin to the team. Congratulations and keep up the good work! Best regards On Mon, Dec 10, 2018 at 11:32 AM Miguel Lavalle wrote: > Hi Stackers, > > It has been a week since I posted Hongbin's nomination to the Neutron core > team and I have only received positive feedback. As a consequence, let's > welcome Nate to the team. Congratulations and keep up the good work! > > Best regards > > Miguel > > On Tue, Dec 4, 2018 at 8:57 AM Brian Haley wrote: > >> Big +1 from me as well! >> >> -Brian >> >> On 12/3/18 5:14 PM, Miguel Lavalle wrote: >> > Hi Stackers, >> > >> > I want to nominate Hongbin Lu (irc: hongbin) as a member of the Neutron >> > core team. Hongbin started contributing to the OpenStack community in >> > the Liberty cycle. Over time, he made great contributions in helping >> the >> > community to better support containers by being core team member and / >> > or PTL in projects such as Zun and Magnum. An then, fortune played in >> > our favor and Hongbin joined the Neutron team in the Queens cycle. >> Since >> > then, he has made great contributions such as filters validation in the >> > ReST API, PF status propagation to to VFs (ports) in SR-IOV >> environments >> > and leading the forking of RYU into the os-ken OpenStack project, which >> > provides key foundational functionality for openflow. He is not a man >> > who wastes words, but when he speaks up, his opinions are full of >> > insight. This is reflected in the quality of his code reviews, which in >> > number are on par with the leading members of the core team: >> > http://stackalytics.com/?module=neutron-group. Even though Hongbin >> > leaves in Toronto, he speaks Mandarin Chinese and was born and raised >> in >> > China. This is a big asset in helping the Neutron team to incorporate >> > use cases from that part of the world. >> > >> > Hongbin spent the past few months being mentored by Slawek Kaplonski, >> > who has reported that Hongbin is ready for the challenge of being a >> core >> > team member. I (and other core team members) concur. >> > >> > I will keep this nomination open for a week as customary. >> > >> > Thank you >> > >> > Miguel >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From frode.nordahl at canonical.com Mon Dec 10 17:56:23 2018 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Mon, 10 Dec 2018 18:56:23 +0100 Subject: [charms] No meetings until January 7th Message-ID: Dear charmers and community at large, We will be taking a break from our weekly meetings and will reconvene after the holidays. First weekly meeting will be Monday January 7th. In the meantime, do not hesitate to contact us in #openstack-charmers on Freenode or by sending e-mail to the OpenStack Discuss mailinglist using the [charms] tag in the subject line. Cheers all! -- Frode Nordahl -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ryan.beisner at canonical.com Mon Dec 10 17:59:53 2018 From: ryan.beisner at canonical.com (Ryan Beisner) Date: Mon, 10 Dec 2018 11:59:53 -0600 Subject: [charms] No meetings until January 7th In-Reply-To: References: Message-ID: Thanks, Frode. Happy festive season to everyone! :-) On Mon, Dec 10, 2018 at 11:58 AM Frode Nordahl wrote: > Dear charmers and community at large, > > We will be taking a break from our weekly meetings and will reconvene > after the holidays. > > First weekly meeting will be Monday January 7th. > > In the meantime, do not hesitate to contact us in #openstack-charmers on > Freenode or by sending e-mail to the OpenStack Discuss mailinglist using > the [charms] tag in the subject line. > > Cheers all! > > -- > Frode Nordahl > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Mon Dec 10 17:23:56 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 10 Dec 2018 18:23:56 +0100 Subject: Openstacl magnum Queens api error Message-ID: Hi All, I installed magnum in queens with Centos 7 and I am facing with the same issue descrive here: https://bugs.launchpad.net/magnum/+bug/1701381 Please, anyone solved it? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Dec 10 18:33:03 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 10 Dec 2018 12:33:03 -0600 Subject: [dev][goal][python3][qa][devstack][ptl] changing devstack's python 3 behavior In-Reply-To: References: Message-ID: On 12/5/18 1:27 PM, Doug Hellmann wrote: > Today devstack requires each project to explicitly indicate that it can > be installed under python 3, even when devstack itself is running with > python 3 enabled. > > As part of the python3-first goal, I have proposed a change to devstack > to modify that behavior [1]. With the change in place, when devstack > runs with python3 enabled all services are installed under python 3, > unless explicitly listed as not supporting python 3. > > If your project has a devstack plugin or runs integration or functional > test jobs that use devstack, please test your project with the patch > (you can submit a trivial change to your project and use Depends-On to > pull in the devstack change). > > [1] https://review.openstack.org/#/c/622415/ > For Oslo, do we need to test every project or can we just do the devstack plugins and maybe one other as a sanity check? Since we don't own any devstack services this doesn't directly affect us, right? From doug at doughellmann.com Mon Dec 10 18:43:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 10 Dec 2018 13:43:42 -0500 Subject: [dev][goal][python3][qa][devstack][ptl] changing devstack's python 3 behavior In-Reply-To: References: Message-ID: Ben Nemec writes: > On 12/5/18 1:27 PM, Doug Hellmann wrote: >> Today devstack requires each project to explicitly indicate that it can >> be installed under python 3, even when devstack itself is running with >> python 3 enabled. >> >> As part of the python3-first goal, I have proposed a change to devstack >> to modify that behavior [1]. With the change in place, when devstack >> runs with python3 enabled all services are installed under python 3, >> unless explicitly listed as not supporting python 3. 
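A rough sketch of what that means in practice, assuming the usual devstack variable names (please double-check them against the devstack source rather than taking this as gospel):

    # local.conf
    [[local|localrc]]
    USE_PYTHON3=True
    # with the change above every service defaults to python 3;
    # opting a service out would be something along the lines of
    DISABLED_PYTHON3_PACKAGES="swift"

and the easiest way to test a project against the proposed change is a do-not-merge patch whose commit message carries the usual Zuul cross-repo dependency footer:

    Depends-On: https://review.openstack.org/622415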
>> >> If your project has a devstack plugin or runs integration or functional >> test jobs that use devstack, please test your project with the patch >> (you can submit a trivial change to your project and use Depends-On to >> pull in the devstack change). >> >> [1] https://review.openstack.org/#/c/622415/ >> > > For Oslo, do we need to test every project or can we just do the > devstack plugins and maybe one other as a sanity check? Since we don't > own any devstack services this doesn't directly affect us, right? Given that we've been testing nova and cinder under python 3 for a while now I think it's probably safe to assume Oslo is working OK. This change is mostly about the fact that we were failing to install *all* of the services under python 3 by default. The existing library forward-testing jobs should handle future testing because they will install all of the services as well as the library under test using python 3. If you want to be extra careful, you could propose a patch to some of the more complicated libs (oslo.messaging and oslo.service come to mind) that depends on the patch above to ensure that those libraries don't trigger failures in the jobs. -- Doug From mriedemos at gmail.com Mon Dec 10 19:04:43 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 10 Dec 2018 13:04:43 -0600 Subject: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full? In-Reply-To: <408eff57-be20-5a58-ef2d-f1f3d4b17bfc@gmail.com> References: <4f1bf485-f663-69ae-309e-ab9286e588e1@gmail.com> <1538382375.30016.0@smtp.office365.com> <1662fd96896.121bfdb2733507.842876450808135416@ghanshyammann.com> <81332221-db29-5525-6f3f-a000c93e4939@gmail.com> <16796fe1b6c.c3e4e009192453.8132097322747769095@ghanshyammann.com> <408eff57-be20-5a58-ef2d-f1f3d4b17bfc@gmail.com> Message-ID: <422e5f8f-39e2-5ac5-95c4-c40a4184d0af@gmail.com> On 12/10/2018 10:59 AM, Matt Riedemann wrote: > TestMultiAttachVolumeSwap gets run in the tempest-slow job which is > multi-node, and as a result I'm seeing race failures in that test. I've > put my notes into the bug, but I need some help from Cinder at this > point. I thought I had initially identified a very obvious problem in > nova, but now I think nova is working as designed (although very > confusing) and we're hitting a race during the swap where deleting the > attachment record for the volume/server we swapped *from* is failing > saying the target is still active. After more debugging, it looks like when deleting the servers, the volume in question that fails to delete isn't being properly detached by nova-compute, so the connection still exists when tempest tries to delete the volume and then it fails. I'm not sure what is going on here, it's almost as if something is wrong in the DB and we're not finding the appropriate BDM in the DB during the server delete so we never detach. -- Thanks, Matt From chris at openstack.org Mon Dec 10 19:33:29 2018 From: chris at openstack.org (Chris Hoge) Date: Mon, 10 Dec 2018 11:33:29 -0800 Subject: [k8s] K8s-SIG-OpenStack Update Message-ID: Hi everyone, a quick update on SIG-OpenStack/K8s as we head into KubeCon. To start with, today David and I are at the contributors summit and will be available to catch up on any topics you're interested in personally. The only other event we have scheduled for KubeCon is the SIG Intro session[1], Tuesday at 2:35 PM in room 3A/B. Given the smaller turnout for the deep-dive session in Copenhagen, we opted to skip the deep dive session this time around. 
As such, the first half of the intro will serve as the official SIG-Update, then we will reserve the end of the time for collaboration and planning. If you're involved or interested in SIG-OpenStack at all, please be sure to make this session. Following up from the OpenStack Summit, we had a productive planning session and set a number of goals and tasks[2]. Top at the list is the rotation of the SIG leadership. We're currently accepting nominations to replace David Lyle and Robert Morse. Many thanks to both for serving this last year. Currently we have nominations for * Flavio Percoco * Melvin Hillsman If you would like to be considered for SIG leadership, or would like to nominate someone, please do so in this thread or personally to me. Another item on the agenda is the rebooting of the SIG-OpenStack/K8s meetings. We will return the previously scheduled time of Wednesdays at 4 PM PT, starting January 9. I'll be sending out agendas and reminders as we get closer to the date, and will be asking some sub-project owners to attend and contribute updates. Work on the cloud provider and associated code has been moving well, and the external provider has moved well beyond the capabilities of the now-deprecated in-tree provider. If you haven't started your migration plan off of the in-tree code, please start doing so now. We're targeting full removal in 2019. The OpenStack provider for the new Cluster API has started, and we are looking for more developers to get involved. This, and a possible Ironic provider, are important for the SIG to be involved with in the coming year. If SIG-Cluster-Lifecycle reaches its goals, the Cluster API will be one of the most valuable developments for operators and users in 2019. Finally, I continue to engage with SIG-Cloud-Provider as a co-lead to advocate for consistent provider development. One of the original goals was to wind down all provider SIGS. However, as the SIGs move beyond just provider code to larger support of a user and developer community, this plan is being reconsidered. I'll keep you updated on the developments. Thanks to everyone for your ongoing contributions, and I'm looking forward to seeing the folks who are attending KubeCon in Seattle. -Chris [1] Intro OpenStack SIG: https://sched.co/Grbk [2] SIG-K8s Berlin Summit Session: https://etherpad.openstack.org/p/sig-k8s-2018-berlin-summit From ignaziocassano at gmail.com Mon Dec 10 18:51:17 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 10 Dec 2018 19:51:17 +0100 Subject: Fwd: Openstacl magnum Queens api error In-Reply-To: References: Message-ID: ---------- Forwarded message --------- From: Ignazio Cassano Date: Lun 10 Dic 2018 18:23 Subject: Openstacl magnum Queens api error To: OpenStack Operators Hi All, I installed magnum in queens with Centos 7 and I am facing with the same issue descrive here: https://bugs.launchpad.net/magnum/+bug/1701381 Please, anyone solved it? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anteaya at anteaya.info Mon Dec 10 20:11:07 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Mon, 10 Dec 2018 15:11:07 -0500 Subject: On trust and risk, Australia's Assistance and Access Bill In-Reply-To: <20181207190926.z6fnjnevoh66yrqf@yuggoth.org> References: <20181207190926.z6fnjnevoh66yrqf@yuggoth.org> Message-ID: On 2018-12-07 2:09 p.m., Jeremy Stanley wrote: > Over the course of my career I've had to make personal > choices regarding installation and maintenance of legally-mandated > systems for spying on customers and users. All we can ever hope for > is that the relationships, systems and workflows we create are as > resistant as possible to these sorts of outside influences. To that end, I'd like to ensure that the people I know and trust are aware that they can talk to me for any reason or no reason at all. I'm not on irc as much as I used to be but my email still works. I care about you and having to face this kind of decision is really really difficult. Doing the right thing is frequently the least popular choice and requires a lot of internal fortitude to stand up for what you know is right and accept the consequences of doing so. Anyone I already know who might be facing this situation is welcome to contact me and I'll listen with all the support I can muster. Sometimes it helps if you remember you aren't alone. Thanks Jeremy, Anita From moshele at mellanox.com Mon Dec 10 20:22:58 2018 From: moshele at mellanox.com (Moshe Levi) Date: Mon, 10 Dec 2018 20:22:58 +0000 Subject: [neutron][ironic] Add Support for Smart NIC with baremetal Message-ID: Hi all, We started working on specs to support baremetal with smart-nic see [1] and [2]. There are some open issue and different approaches that require further discussion see [3]. To resolve them I would like to propose a meeting tomorrow , December 11th, at 15:00 UTC. For those of you interested in joining please use [4] to connect. [1] - https://review.openstack.org/#/c/582767/ [2] - https://review.openstack.org/#/c/619920/ [3] - https://etherpad.openstack.org/p/BER-ironic-smartnics [4] - https://bluejeans.com/u/jkreger Thanks, Moshe (moshele) -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Mon Dec 10 20:19:54 2018 From: moshele at mellanox.com (Moshe Levi) Date: Mon, 10 Dec 2018 20:19:54 +0000 Subject: [neutron][ironic] Add Support for Smart NIC with baremetal Message-ID: Hi all, We started working on specs to support baremetal with smart-nic see [1] and [2]. There are some open issue and different approaches that require further discussion see [3]. To resolve them I would like to propose a meeting tomorrow , December 11th, at 15:00 UTC. For those of you interested in joining please use [4] to connect. [1] - https://review.openstack.org/#/c/582767/ [2] - https://review.openstack.org/#/c/619920/ [3] - https://etherpad.openstack.org/p/BER-ironic-smartnics [4] - https://bluejeans.com/u/jkreger Thanks, Moshe (moshele) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Mon Dec 10 20:49:33 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 10 Dec 2018 14:49:33 -0600 Subject: [dev][goal][python3][qa][devstack][ptl] changing devstack's python 3 behavior In-Reply-To: References: Message-ID: On 12/10/18 12:43 PM, Doug Hellmann wrote: > Ben Nemec writes: > >> On 12/5/18 1:27 PM, Doug Hellmann wrote: >>> Today devstack requires each project to explicitly indicate that it can >>> be installed under python 3, even when devstack itself is running with >>> python 3 enabled. >>> >>> As part of the python3-first goal, I have proposed a change to devstack >>> to modify that behavior [1]. With the change in place, when devstack >>> runs with python3 enabled all services are installed under python 3, >>> unless explicitly listed as not supporting python 3. >>> >>> If your project has a devstack plugin or runs integration or functional >>> test jobs that use devstack, please test your project with the patch >>> (you can submit a trivial change to your project and use Depends-On to >>> pull in the devstack change). >>> >>> [1] https://review.openstack.org/#/c/622415/ >>> >> >> For Oslo, do we need to test every project or can we just do the >> devstack plugins and maybe one other as a sanity check? Since we don't >> own any devstack services this doesn't directly affect us, right? > > Given that we've been testing nova and cinder under python 3 for a while > now I think it's probably safe to assume Oslo is working OK. This change > is mostly about the fact that we were failing to install *all* of the > services under python 3 by default. The existing library forward-testing > jobs should handle future testing because they will install all of the > services as well as the library under test using python 3. > > If you want to be extra careful, you could propose a patch to some of > the more complicated libs (oslo.messaging and oslo.service come to mind) > that depends on the patch above to ensure that those libraries don't > trigger failures in the jobs. > Okay, sounds good. I have test patches proposed under https://review.openstack.org/#/q/topic:test-devstack-py3+(status:open+OR+status:merged) Some have already passed so I think we're in good shape, but we'll see what happens with the others. From rtidwell at suse.com Mon Dec 10 22:44:44 2018 From: rtidwell at suse.com (Ryan Tidwell) Date: Mon, 10 Dec 2018 16:44:44 -0600 Subject: [neutron] Subnet onboard and changing API definitions in neutron-lib Message-ID: <6043f799-3413-62f8-1f21-6e5f3dd5988a@suse.com> All, I alluded to some concerns about the current state of subnet onboard at the end of today's neutron team meeting. I felt the mailing list was probably the best starting point for a discussion, so here it goes :) As I'm dealing with the last loose ends, I'm starting to deal with the 'subnets' extension to the subnetpool resource on the API [1]. This has me scratching my head at what the intent of this is since there isn't much history to work with. When I look at the definition of the subnet onboard extension in neutron-lib, I can see two possible "meanings" of POST/PUT here: 1. Create a subnet from this subnetpool using the default prefix length of the subnet pool. 2. Onboard the given subnet into the subnet pool. In addition to this ambiguity around the usage of the API, I also have concerns that ONBOARD_SUBNET_SPECS as currently defined makes no sense for either case.  ONBOARD_SUBNET_SPECS requires that an id, network_id, and ip_version be sent on any request made. 
This seems unnecessary for both cases. Case 1 where we assume that we are using an alternate API to create a subnet is problematic in the following ways: - Specifying an ip_version is unnecessary because it can be inferred from the subnet pool - The definition as written doesn't seem to allow us to respond to the user with anything other than a UUID. The user then has to make a second call to actually go read the details like CIDR that are going to be useful for them going forward. - More importantly, we already have an API (and corresponding CLI) for creating subnets that works well and has much more flexibility than this provides. Why duplicate functionality here? Case 2 where we assume that we are onboarding a subnet is problematic in the following ways: - Specifying network_id and ip_version are unnecessary. These values can be read right off the subnet (which we would need to read for onboarding anyway), all that needs to be passed is the subnet UUID. - Again, we would be duplicating functionality here because we have already defined an API for onboarding a subnet in the ACTION_MAP [2] My intent is to figure out where to go from here to make this better and/or alleviate the confusion on my part so this feature can be wrapped up. Here is what I propose: Because subnet onboard is still incomplete and we are not claiming support for it yet, I am proposing we simply remove the 'subnets' extension to the subnetpools resource [3]. This simplifies the API and resolves the concerns I expressed above. It also allows us to quickly finish up subnet onboard without losing any of the desired functionality, namely the ability to move ("onboard") an existing subnet into a subnet pool (and by extension and address scope). The reason I am looking for input here is that it isn't clear to me whether the approach I'm suggesting is acceptable to the team given our policies around changing API definitions in neutron-lib. I'm not aware of a situation where we have had an unreleased feature and have discovered we don't like the API definition in neutron-lib (which has sat for quite a while unimplemented) and want to change it. I'm just not aware of any precedent for this situation, so I'm hoping the team has some thoughts on how best to move forward. For reference, the current review for subnet onboard is https://review.openstack.org/#/c/348080/. Any thoughts on this topic would be greatly appreciated by me, and hopefully this discussion can be useful to a broad audience going forward. -Ryan [1] https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/subnet_onboard.py#L32 [2] https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/subnet_onboard.py#L58 [3] https://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/subnet_onboard.py#L41 From juliaashleykreger at gmail.com Mon Dec 10 23:57:26 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 10 Dec 2018 15:57:26 -0800 Subject: [tc][forum] Summary for "community outreach when cultures, time zones, and languages differ" session Message-ID: Leading up to the Summit a number of us in the TC discussed how to improve outreach and thus communications outside of what we would consider our normal circles. The focus was how to better enable asynchronous communication. How do we share context? Ultimately we are talking about bridge building. 
The discussion went in many directions and we kind of started out discussing items that make our preferred communication methods a bit difficult. Thierry really focused us back to reality pointing out that "the earth is round". As the community has recognized before, time zones are absolutely a thing and we shouldn't try and force people to be up at 3 AM to attend a meeting every week. As this discussion went on, a number of noteworthy ideas began to surface.

* Teams should likely reconsider meeting times periodically. Some teams have found success with alternating, but other teams have found that few attend the second time slot.
* It is easier to form a group of people in a specific time zone as they bring their established culture and don't have to integrate into another culture, which is easier for them. The downside of this is that it leads to those people focusing on that specific project.
* IRC is "super-addictive".
** The real-time value of IRC is recognized, however on-boarding people into it can sometimes be a time-consuming effort for some projects.
** Many, the TC included, have been trying to push more deep discussions to the mailing list or having someone summarize the outcome of conversations to the mailing list.
** IRC communication is often viewed as "too fast" of a discussion if you're not a native speaker of the language being used in IRC. By the time you or software translates a statement, the discussion has moved on.
* No matter what we do, there are two styles of communication that we need to be mindful of: asynchronous and synchronous.
** Some communities, such as the Linux kernel, have adopted fully asynchronous communication to handle the fact that the world is round.
** Projects have a tendency to use any communication method available.
*** Some projects use etherpads for short-term status on items, where others focus on their use for thought-process preservation.
*** Some projects try to drive such thought processes to an IRC discussion, and that often ends up as a discussion in gerrit.

A point was eventually raised that meetings can still be useful, standing issues aside, as long as context is known in advance. The advance context does allow a team to determine in advance if a meeting is even required.

To-dos and what can be done!

* Simple reminders have apparently helped some meeting moderators keep people in attendance.
* We reached consensus to encourage moderators to provide context in advance of meetings. This is in order to assist everyone to be more informed and hopefully shorten the exercise of trying to understand the issue, as opposed to understanding others' impressions and reaching consensus.
** We also reached consensus that moderators should cancel a meeting if there is nothing on the agenda.
* We want to encourage outreach messaging. This is not just the sub-communities making themselves known to the larger community, but raising awareness of "what is going on" or "what is new?" either in a project or portion of the community. Ultimately this falls into the category of "we do not know what we do not know", and as such it is upon all of us to attempt to better communicate where we see gaps.

As for specific action items, after re-reading the notes, I think that is up to us as a community to determine our next steps. We recorded no action items, so I think further discussion may be needed. The etherpad can be found at https://etherpad.openstack.org/p/BER-tc-community-outreach.
-Julia -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Dec 11 00:01:28 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 10 Dec 2018 16:01:28 -0800 Subject: [nova][dev] Stein blueprint tracking status Message-ID: <75e08d07-f32f-7e49-b506-258bf1f619f8@gmail.com> Hey all, Similar to previous cycles, I've created an etherpad for tracking status of Approved blueprints for Stein: https://etherpad.openstack.org/p/nova-stein-blueprint-status Please feel free to use it to help with code reviews and make notes on the pad if there's info to add. I'll keep the pad updated as we progress through the rest of the cycle. Cheers, -melanie From hongbin.lu at huawei.com Tue Dec 11 00:14:13 2018 From: hongbin.lu at huawei.com (Hongbin Lu) Date: Tue, 11 Dec 2018 00:14:13 +0000 Subject: [neutron][stadium] Switch from ryu to os-ken Message-ID: <0957CD8F4B55C0418161614FEC580D6B308A805A@yyzeml704-chm.china.huawei.com> Hi all, Due to the advice that Ryu library won't be maintained at the end of 2018, the neutron team decided to fork the Ryu library and maintain the fork, which is called os-ken [1], from now on. Right now, both neutron [2] and neutron-dynamic-routing [3] is switching over to os-ken. If other projects want to do the same, please feel free to handout to the openstack-neutron channel for helps or information. You can also find more in here: https://blueprints.launchpad.net/neutron/+spec/ryu-framework-maintenace-transition . [1] https://github.com/openstack/os-ken [2] https://review.openstack.org/#/c/607008/ [3] https://review.openstack.org/#/c/608357/ Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From bluejay.ahn at gmail.com Tue Dec 11 01:22:57 2018 From: bluejay.ahn at gmail.com (Jaesuk Ahn) Date: Tue, 11 Dec 2018 10:22:57 +0900 Subject: [loci][openstack-helm] How to add some agent to loci images In-Reply-To: <824f452a87980106abca82287074083c8bb69201.camel@evrard.me> References: <2ED125CF-8AA3-4D81-8C0F-FDF8ED1EF2F0@openstack.org> <824f452a87980106abca82287074083c8bb69201.camel@evrard.me> Message-ID: On Mon, Dec 10, 2018 at 8:11 PM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > > > > On Fri, 2018-12-07 at 10:21 +0900, SIRI KIM wrote: > > > > our objetive: add neutron-lbaas & neutron-fwaas chart to openstack- > > helm > > upstream. > > problem: we need this loci image to push lbaas and fwaas chart into > > upstream repo. In order to pass osh gating, we need to have neutron- > > lbaas & > > > > neutron-fwaas agent image available for openstack-helm project. > > Yeah this seems more an OSH issue on how to properly leverage's LOCI > feature (and publishing a series of images) than a request to LOCI > directly I'd say :) > > > Hi, I do agree that this seems OSH issue, I will bring this up on osh weekly meeting. :) Thanks. -- *Jaesuk Ahn*, Ph.D. Software R&D Center, SK Telecom -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joshua.hesketh at gmail.com Tue Dec 11 01:54:45 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Tue, 11 Dec 2018 12:54:45 +1100 Subject: [OpenStack-Infra] [infra] OpenDev feedback forum session summary In-Reply-To: <20181210173319.lnqn2ydshtmpwkjj@yuggoth.org> References: <20181210173319.lnqn2ydshtmpwkjj@yuggoth.org> Message-ID: Thank you for the update, it's much appreciated for those who couldn't make it :-) On Tue, Dec 11, 2018 at 4:34 AM Jeremy Stanley wrote: > Wednesday afternoon at the OpenStack Summit we met to discuss the > plan for the upcoming transition of the OpenStack Infrastructure > team to an independent effort named OpenDev. Notes were recorded at > https://etherpad.openstack.org/p/BER-opendev-feedback-and-missing-features > and form the basis of the summary with follows. > > For those unfamiliar with this topic, the announcement at > > http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html > provides some background and context. Much of what follows may be a > reiteration of things also covered there, so please excuse any > redundancy on my part. > > To start out, we (re)announced that we have chosen a name (OpenDev) > and a domain (opendev.org), so can more effectively plan for DNS > changes for most of the services we currently host under the > "legacy" (for us) openstack.org domain. It was also pointed out that > while we expect to maintain convenience redirects and aliases from > old hostnames for all services we reasonably can so as to minimize > disruption, there will still be some unavoidable discontinuities for > users from time to time as we work through this. > > We talked for a bit about options for decentralizing GitHub > repository mirroring so that the current team no longer needs to > maintain it, and how to put it in control of people who want to > manage those organizations there for themselves instead. Doing this > with a job in Zuul's post pipeline (using encrypted secrets for > authentication) was suggested as one possible means to avoid users > all maintaining their own separate automation to accomplish the same > thing. > > Interest in bare metal CI nodes in nodepool was brought up again. To > reiterate, there's not really any technical reason we can't use > them, more that prior offers to donate access to Nova/Ironic-managed > nodes for this purpose never panned out. If you work for an > organization which maintains a "bare metal cloud" we could reach > over the open Internet and you'd consider carving out some of your > capacity for our CI system, please do get in touch with us! > > We spent a bit of time covering user concerns about the transition > to OpenDev and what reassurances we ought to provide. For starters, > our regular contributors and root systems administrators will > continue to be just as reachable and responsive as ever via IRC and > mailing lists, even if the names of the channels and MLs may change > as part of this transition. Similarly, our operations will remain as > open and transparent as they are today... really nothing about how > we maintain our systems is changing substantively as a part of the > OpenDev effort, though certainly the ways in which we maintain our > systems do still change and evolve over time as we seek to improve > them so that will of course continue to be the case. > > Paul Belanger raised concerns that announcing OpenDev could result > in a flood of new requests to host more projects. Well, really, I > think that's what we hope for. 
I (apparently) pointed out that even > when StackForge was first created back at the beginning of 2012, > there wasn't much restriction as to what we would be willing to > host. As interest in OpenDev spreads to new audiences, interest in > participating in its maintenance and development should too grow. > That said, we acknowledge that there are some scalability > bottlenecks and manual/human steps in certain aspects of new project > onboarding for now, so should be very up-front with any new projects > about that fact. We're also not planning for any big marketing push > to seek out additional projects at this point, but are happy to talk > to any who discover us and are interested in the services we offer. > > Next, Paul Belanger brought up the possibility of "bring your own > cloud" options for projects providing CI resources themselves. While > we expect nodepool to have support for tenant-specific resources in > the not-too-distant future, Jim Blair and Clark Boylan agreed the > large pool of generic resources we operate with now is really where > we see a lot of benefit and ability to drive efficiencies of scale. > Then Monty Taylor talked for a while, according to the notes in the > pad, and said things about earmarked resources potentially requiring > a sort of "commons tax" or... something. > > Jim Rollenhagen asked whether we would potentially start to test and > gate projects on GitHub too rather than just our Gerrit. Clark > Boylan and Jim Blair noted that the current situation where we're > testing pull requests for Kata's repositories is a bit of an > experiment in that direction today and the challenges we've faced > suggest that, while we'll likely continue to act as a third-party CI > system for some projects hosted on GitHub (we're doing that with > Ansible for example), we've discovered that trying to enforce gating > in code review platforms we don't also control is not likely > something we'll want to continue in the long term. > > It came up that our earlier ideas for flattening our Git namespace > to reduce confusion and minimize future repository renames is now > not looking as attractive. Instead we're probably going to need to > embrace an explosion of new namespaces and find better ways to cope > with the pain of renames in Gerrit as projects want to move between > them over time. We're planning to only run one Gerrit for > simplicity, so artificially creating "tenants" in it through > prefixes in repository names is really the simplest solution we have > to avoid different projects stepping on one another's toes with > their name choices. > > Then we got into some policy discussions about namespace creation. > Jim Blair talked about the potential to map Git/Gerrit repository > namespaces to distinct Zuul tenants, and someone (might have been > me? I was fairly jet-lagged and so don't really remember) asked > about who decides what the requirements are for projects to create > repositories in a particular namespace. In the case of OpenStack, > the answer is almost certainly the OpenStack Technical Committee or > at least some group to whom they delegate that responsibility. The > OpenStack TC needs to discuss fairly early what its policies are for > the "openstack" namespace (whether existing unofficial projects will > be allowed to remain, whether addition of new unofficial projects > will be allowed there) as well as whether it wants to allow creation > of multiple separate namespaces for official OpenStack projects. 
The > suggestion of nested "deep" namespaces like openstack/nova/nova came > up at this point too. > > We also resolved that we need to look back into the project rename > plugin for Gerrit. The last time we evaluated it, there wasn't much > there. We've heard it's improved with newer Gerrit releases, but if > it's still lacking we might want to contribute to making it more > effective so we can handle the inevitable renames more easily in the > future. > > And finally, as happens with most forum sessions, we stopped > abruptly because we ran over and it was Kendall Nelson's turn to > start getting ops feedback for the Contributor Guide. ;) > -- > Jeremy Stanley > _______________________________________________ > OpenStack-Infra mailing list > OpenStack-Infra at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Tue Dec 11 02:51:48 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 11 Dec 2018 11:51:48 +0900 Subject: The 2nd Vietnam OSUG upstream contribution mentoring webinar Message-ID: Hello, Yes, we did it, the 2nd time [1] :). In this session, the OpenStack Upstream Contribution Mentoring program of the Vietnam OSUG focus on how to debug the OpenStack projects. We discussed and shared some experiences of using testing and debugging tools (tox, logs, zuul ci, ara, etc.). It was awesome! I also would love to mention Jitsi [2], the free and open source video conferencing tool. We make it through more than 1 hour of video chatting without any limitations or interruptions. The experience was great! I recommend you guys check this tool out if you want to organize video conferences. [1] https://www.dangtrinh.com/2018/12/viet-openstack-now-renamed-viet.html [2] https://jitsi.org/ Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Tue Dec 11 06:14:18 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 11 Dec 2018 15:14:18 +0900 Subject: [Searchlight] Weekly report for the week of Stein R-19 & R-18 Message-ID: Hello team, Just so you know that we're having an issue with our functional tests [1]. Hopefully, I can fix it at the end of the week. [1] https://www.dangtrinh.com/2018/12/searchlight-weekly-report-stein-r-19-r.html Bests, -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Tue Dec 11 06:21:47 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Tue, 11 Dec 2018 15:21:47 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> Message-ID: Hi Clark, I'm trying to figure out how Zuul processes the test-setup.sh [1] file to adopt new dependencies. [1] https://github.com/openstack/searchlight/blob/master/tools/test-setup.sh Thanks, On Sat, Dec 8, 2018 at 9:10 AM Trinh Nguyen wrote: > Thanks, Clark. > > On Sat, Dec 8, 2018 at 3:16 AM Clark Boylan wrote: > >> On Fri, Dec 7, 2018, at 8:17 AM, Trinh Nguyen wrote: >> > Hi again, >> > Just wonder how the image for searchlight test was set up? Which user is >> > used for running ElasticSearch? Is there any way to indicate the user >> that >> > will run the test? Can I do it with [1]? 
Based on the output of [2] I >> can >> > see there are some permission issue of JDK if I run the functional tests >> > with the stack user on my dev environment. >> > >> > [1] >> > >> https://git.openstack.org/cgit/openstack/searchlight/tree/tools/test-setup.sh >> > [2] >> > >> https://review.openstack.org/#/c/622871/3/searchlight/tests/functional/__init__.py >> > >> >> The unittest jobs run as the Zuul user. This user has sudo access when >> test-setup.sh runs, but then we remove sudo access when tox is run. This is >> important as we are trying to help ensure that you can run tox locally >> without it making system level changes. >> >> When your test setup script runs `dpkg -i` this package install may start >> running an elasticsearch instance. It depends on how the package is set up. >> This daemon would run under the user the package has configured for that >> service. When you run an elasticsearch process from your test suite this >> will run as the zuul user. >> >> Hope this helps. I also left a couple of comments on change 622871. >> >> Clark >> > > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Dec 11 09:00:17 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 11 Dec 2018 18:00:17 +0900 Subject: [dev] [tc][all] TC office hours is started now on #openstack-tc Message-ID: <1679c7ed819.ca179a2294220.1691595509360129661@ghanshyammann.com> Hello everyone, TC office hour is started on #openstack-tc channel. Feel free to reach to us for anything you want discuss/input/feedback/help from TC. - gmann & TC From ifatafekn at gmail.com Tue Dec 11 09:45:49 2018 From: ifatafekn at gmail.com (Ifat Afek) Date: Tue, 11 Dec 2018 11:45:49 +0200 Subject: [vitrage][monasca] Monasca datasource for Vitrage Message-ID: Hi, In case you missed it, we have a POC of a Monasca datasource for Vitrage [1] J Thanks Bartosz Zurkowski for the effort! There is one issue that we need to close though – how to connect the Monasca alarms to the resources in Vitrage. In order to connect an alarm to a resource, Vitrage should be able to identify that resource in its entity graph using the dimensions coming from Monasca. I understood that these dimensions are configurable and not pre-defined. Basically, there are two cases. 1. For OpenStack resources, all Vitrage needs to know is the UUID of the resource. I assume that there is always a dimension for that resource, right? Is the dimension always called the same? If not, how can Vitrage identify that this is the interesting dimension (if there are other dimensions for this alarm)? 2. For other resources it is more complicated, as they might not have a natural unique id. For example, a NIC id in Vitrage may be generated out of the host name + NIC name. Monasca alarm may have two separate dimensions for the host and nic. How can Vitrage understand that those two dimensions identify the resource? In other cases, Vitrage may need a resource_type dimension to uniquely identify the resource. Another question: I understood that Monasca can send alarms that originate from other monitors like Ceilometer, Prometheus etc. What are the dimensions in these cases? Monasca team, I’ll be happy to hear your opinion about these questions. If you can provide several different examples it will be very helpful. 
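To make the question concrete, the kind of notification I have in mind looks roughly like this (the structure is only illustrative, and the dimension keys are exactly the part I am unsure about):

    {
      "alarm_name": "high_cpu_usage",
      "state": "ALARM",
      "metrics": [
        {
          "name": "cpu.utilization_perc",
          "dimensions": {
            "hostname": "compute-0",
            "resource_id": "<nova instance uuid?>",
            "service": "compute"
          }
        }
      ]
    }

Is a dimension like "resource_id" or "hostname" guaranteed to be present and always spelled that way, or is that entirely up to whoever defines the alarm?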
Thanks, Ifat [1] https://review.openstack.org/#/c/622899 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Dec 11 15:01:47 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 11 Dec 2018 15:01:47 +0000 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> Message-ID: <20181211150147.f3oj4gzkufpbp2fc@yuggoth.org> On 2018-12-11 15:21:47 +0900 (+0900), Trinh Nguyen wrote: > I'm trying to figure out how Zuul processes the test-setup.sh [1] > file to adopt new dependencies. [...] Looking at what you've put in there for your JRE packages, those would be better handled in a bindep.txt file in your repository. Running tools/test-setup.sh in jobs is really to catch other sorts of setup steps like precreating a database or formatting and mounting a particular filesystem your tests are going to rely on. For system package dependencies of your jobs, we have a declarative mechanism already which doesn't require complex conditional handling: we install whatever bindep says is missing. I see you're also leveraging tools/test-setup.sh to obtain packages of elasticsearch which are not available in the distributions on which your jobs run, which I guess is a suitable workaround though it seems odd to test on platforms which don't package the software on which your project relies. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sfinucan at redhat.com Tue Dec 11 14:48:29 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 11 Dec 2018 14:48:29 +0000 Subject: [os-api-ref][doc] Removing inactive cores Message-ID: <38ad42a1ef5ca6cf8f6b599811d577e8d7e8716a.camel@redhat.com> We moved the os-api-ref project under the governance of the docs team at the beginning of the year. We've just done some housecleaning of the os-api-ref core list and have removed the following individuals owing to inactivity over the past 180 days. * Anne Gentle * Sean Dague * Karen Bradshaw We would like to thank them for their contributions and welcome them to rejoin the team if their priorities change. Stephen From thierry at openstack.org Tue Dec 11 15:12:51 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 11 Dec 2018 16:12:51 +0100 Subject: [tc][all] Documenting the role of the TC Message-ID: Hi everyone, At the Berlin Summit there was a Forum session to do a retrospective on the "TC vision for 2019" (written in April 2017). As Julia mentions in her summary[1] of that session, one action out of that session was to start working on a vision for the TC itself (rather than a vision by the TC for OpenStack), as a living document that would describe the role of the TC. Content was collaboratively drafted through an etherpad, then proposed as a governance review a few weeks ago. I just incorporated the latest early feedback in a revision. Please have a look at the current strawman and comment if you feel like something we are doing is missing, or something is there that should not be: https://review.openstack.org/622400 Like the "Technical vision for OpenStack clouds" document[2], this one is expected to be a living document describing the current situation, against which changes can be proposed. 
Once the initial version is approved, it will let us propose and discuss changes in the role of the TC. Thanks in advance for your comments ! [1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000586.html [2] https://governance.openstack.org/tc/reference/technical-vision.html -- Thierry Carrez (ttx) From juliaashleykreger at gmail.com Tue Dec 11 15:52:11 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 11 Dec 2018 07:52:11 -0800 Subject: [neutron][ironic] Add Support for Smart NIC with baremetal In-Reply-To: References: Message-ID: Following up on the discussion: Attendees: - Moshe Levi - Nate Johnston - Davidsha - Miguel - Adrian Chirls - Julia Kreger Notes from meeting: - NOTE: The discussions largely focused on the "local" execution on the smartnic case. The remote case is still under discussion but is not blocking to the overall effort. - We will create a new vnic binding type, as to not break baremetal port binding logic that already exists. Smartnics are viewed a separate path of logic that needs to be taken. If hierarchical port binding is needed, the logic will need to be updated at a later point in time. - The binding profile information to be transmitted from ironic to neutron to contain information about the smartnic. No manual port/smartnic mapping file will be added. - There will be no implicit mapping of ironic port.uuid to hostname on the operating smartnic. - Consensus was that the operators should supply the sufficient information to ironic for neutron to relate the smartnic (with the data sent in binding_profile) to the vif and the baremetal port. - Neutron OVS agent will perform the actual final port-plug using os-vif as compared to the current virtualization use case or the existing baremetal vnic use case. On Mon, Dec 10, 2018 at 12:34 PM Moshe Levi wrote: > Hi all, > > > > We started working on specs to support baremetal with smart-nic see [1] > and [2]. > > There are some open issue and different approaches that require further > discussion see [3]. > > To resolve them I would like to propose a meeting tomorrow , December > 11th, at 15:00 UTC. For those of you interested in joining please use [4] > to connect. > > > > [1] - https://review.openstack.org/#/c/582767/ > > [2] - https://review.openstack.org/#/c/619920/ > > [3] - https://etherpad.openstack.org/p/BER-ironic-smartnics > > [4] - https://bluejeans.com/u/jkreger > > > > Thanks, > > Moshe (moshele) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From blair.bethwaite at gmail.com Tue Dec 11 19:56:44 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Wed, 12 Dec 2018 08:56:44 +1300 Subject: [scientific] No Scientific SIG meeting this week Message-ID: Hi all, Apologies for the late notice but we have 2 of 3 chairs travelling and one out of timezone this week. Cheers, b1airo -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Tue Dec 11 20:54:35 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 11 Dec 2018 21:54:35 +0100 Subject: [sahara][ops][tripleo][openstack-ansible][puppet-openstack][kolla][i18n] Upcoming split of the sahara repository[sahara][ops][tripleo][openstack-ansible][puppet-openstack][kolla][i18n] Upcoming split of the sahara repository Message-ID: <17553662.qPKBozYCZC@whitebase.usersys.redhat.com> Hi all, we (sahara developers) are working on splitting the content of sahara.git, moving the code of each plugin to its own respository. 
Each plugin is used for a different Hadoop/Big Data provider. The change is better described in the following spec, which didn't happen in Rocky: http://specs.openstack.org/openstack/sahara-specs/specs/rocky/plugins-outside-sahara-core.html The change is going to impact other projects, but hopefully not the users. I will try to summarize the impact on the various consumers of sahara. == Deployment tools - the devstack plugin in the test repository (see below) was modified to support the installation of all the available plugins previously available in- tree, so devstack users should not notice any change. - puppet-sahara will require changes to install the additional repositories, and we would require and welcome any help; - I hope that the puppet-sahara changes would be enough to make TripleO work, but we would need some help on this. - openstack-ansibe-os_sahara will require changes as well, but we have a bit more expertise on Ansible and we can help with the changes. - kolla will require some changes too, depending on te (apologize if I forgot some project) == Packaging It would be nice if users could get all the plugins packages by installing the same packages as before the split, but I understand that it may be tricky, because each plugin depends on the core of sahara. This means that a binary subpackage like openstack-sahara (RDO) or sahara (Debian) can't depend on the binary packages of the plugins, or that would introduce a circular dependency. I raised the issue on the RDO list to get some initial feedback and the solution that we are going to implement in RDO requires flipping the value of a variable to handle the bootstrap case and the normal case, please check: https://lists.rdoproject.org/pipermail/dev/2018-November/008972.html and the rest of the thread, like: https://lists.rdoproject.org/pipermail/dev/2018-December/008977.html Any additional idea from other packagers would be really appreciated, also because it has a direct impact on at least the puppet modules. == Preview The work-in-progress repositories are available here: Sahara: https://github.com/tellesnobrega/sahara/tree/split-plugins Plugins: * Ambari: https://github.com/tellesnobrega/sahara-plugin-ambari * CDH: https://github.com/tellesnobrega/sahara-plugin-cdh * MapR: https://github.com/tellesnobrega/sahara-plugin-mapr * Spark: https://github.com/tellesnobrega/sahara-plugin-spark * Storm: https://github.com/tellesnobrega/sahara-plugin-storm * Vanilla: https://github.com/tellesnobrega/sahara-plugin-vanilla Any comment is more than welcome! Ciao -- Luigi From aspiers at suse.com Tue Dec 11 21:42:06 2018 From: aspiers at suse.com (Adam Spiers) Date: Tue, 11 Dec 2018 21:42:06 +0000 Subject: [all][ptl][heat][senlin][magnum][vitrage][watcher] New SIG for Autoscaling? In-Reply-To: References: Message-ID: <20181211214205.v73ki5mgidiqx42j@pacific.linksys.moosehall> Agreed. I also see the similarities, but I don't think there is enough overlap to cause problems, so +1 from me. My only suggestion would be to consider whether it makes sense to slightly widen the scope, since I believe Watcher operates in the very similar area of optimisation, and to me auto-scaling just sounds like one particular use case out of many different approaches to optimisation, of which Watcher covers several. So for example you could call it the "self-optimising SIG" ;-) Feel free to copy any of the self-healing SIG's structure if it helps! Ifat Afek wrote: >+1 for that. 
> >I see a lot of similarities between this SIG and the self-healing SIG, >although their scopes are slightly different. > >In both cases, there is a need to decide when an action should be taken >(based on Ceilometer, Monasca, Vitrage etc.) what action to take >(healing/scaling) and how to execute it (using Heat, Senlin, Mistral, …). >The main differences are the specific triggers and the actions to perform. > >I think that as a first step we should document the use cases. > >Ifat > >On Thu, Nov 29, 2018 at 9:34 PM Joseph Davis wrote: > >>I agree with Duc and Witek that this communication could be really good. >> >>One of the first items for a new SIG would be to define the relationship >>with the Self-Healing SIG. The two SIGs have a lot in common but some >>important differences. They can both use some of the same tools and data >>(Heat, Monasca, Senlin, Vitrage, etc) to achieve their purpose, but >>Self-Healing is about recovering a cloud when something goes wrong, while >>Autoscaling is about adjusting resources to avoid something going wrong. >>Having a clear statement may help a new user or contributor understand >>where there interests lie and how they can be part of the group. >> >>Writing some clear use cases will be really valuable for all the component >>teams to reference. It may also be of value to identify a few reference >>architectures or configurations to illustrate how the use cases could be >>addressed. I'm thinking of stories like "A cloud with Monasca and Senlin >>services has 20 active VMs. When Monasca recognizes the 20 VMs have hit 90% >>utilization each it raises an alarm and Senlin triggers the creation of 5 >>more VMs to meet expected loads." Plus lots of details I just skipped >>over. :) >> >>joseph >> >>On Wed, Nov 28, 2018 at 4:00 AM Rico Lin >>wrote: >>>I gonna use this ML to give a summary of the forum [1] and asking for >>>feedback for the idea of new SIG. >>> >>>So if you have any thoughts for the new SIG (good or bad) please share it >>>here. >>> >>>[1] >>https://etherpad.openstack.org/p/autoscaling-integration-and-feedback From msm at redhat.com Tue Dec 11 21:56:03 2018 From: msm at redhat.com (Michael McCune) Date: Tue, 11 Dec 2018 16:56:03 -0500 Subject: [sahara][ops][tripleo][openstack-ansible][puppet-openstack][kolla][i18n] Upcoming split of the sahara repository[sahara][ops][tripleo][openstack-ansible][puppet-openstack][kolla][i18n] Upcoming split of the sahara repository In-Reply-To: <17553662.qPKBozYCZC@whitebase.usersys.redhat.com> References: <17553662.qPKBozYCZC@whitebase.usersys.redhat.com> Message-ID: On Tue, Dec 11, 2018 at 3:57 PM Luigi Toscano wrote: > > Hi all, > > we (sahara developers) are working on splitting the content of sahara.git, > moving the code of each plugin to its own respository. Each plugin is used for > a different Hadoop/Big Data provider. The change is better described in the > following spec, which didn't happen in Rocky: > > http://specs.openstack.org/openstack/sahara-specs/specs/rocky/plugins-outside-sahara-core.html this sounds really cool, thanks for the update Luigi. peace o/ From mriedemos at gmail.com Tue Dec 11 22:41:00 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 11 Dec 2018 16:41:00 -0600 Subject: [all]Forum summary: Expose SIGs and WGs In-Reply-To: References: Message-ID: On 12/3/2018 11:42 AM, Rico Lin wrote: > We also have some real story (Luzi's story) for people to get a better > understanding of why current workflow can look like for someone who > tries to help. 
I looked over the note on this in the etherpad. They did what they were asked and things have stalled. At this point, I think it comes down to priorities, and in order to prioritize something big like this that requires coordinated work across several projects, we are going to need more stakeholders coming forward and saying they also want this feature so the vendors who are paying the people to work upstream can be given specific time to give this the attention it needs. And that ties back into getting the top 1 or 2 wishlist items from each SIG and trying to sort those based on what is the highest rated most common need for the greatest number of people - sort of like what we see happening with the resource delete API community wide goal proposal. -- Thanks, Matt From zigo at debian.org Tue Dec 11 22:42:40 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 11 Dec 2018 23:42:40 +0100 Subject: [sahara][ops][tripleo][openstack-ansible][puppet-openstack][kolla][i18n] Upcoming split of the sahara repository[sahara][ops][tripleo][openstack-ansible][puppet-openstack][kolla][i18n] Upcoming split of the sahara repository In-Reply-To: <17553662.qPKBozYCZC@whitebase.usersys.redhat.com> References: <17553662.qPKBozYCZC@whitebase.usersys.redhat.com> Message-ID: Hi Luigi, First thanks for communicating about this. I'm afraid you probably will not like much what I'm going to have to write here. Sorry in advance if I'm not enthusiastic about this. On 12/11/18 9:54 PM, Luigi Toscano wrote: > == Packaging > > It would be nice if users could get all the plugins packages by installing the > same packages as before the split, but I understand that it may be tricky, > because each plugin depends on the core of sahara. This means that a binary > subpackage like openstack-sahara (RDO) or sahara (Debian) can't depend on the > binary packages of the plugins, or that would introduce a circular dependency. > > I raised the issue on the RDO list to get some initial feedback and the > solution that we are going to implement in RDO requires flipping the value of > a variable to handle the bootstrap case and the normal case, please check: > https://lists.rdoproject.org/pipermail/dev/2018-November/008972.html > and the rest of the thread, like: > https://lists.rdoproject.org/pipermail/dev/2018-December/008977.html > > Any additional idea from other packagers would be really appreciated, also > because it has a direct impact on at least the puppet modules. We aren't "packagers", we *maintain* packages, and therefore are package maintainers. And from that perspective, such decision from upstream is always disruptive, both from the package maintainer view, and for our users. While a plugin architecture looks sexy from the outside, it can bite hard in many ways. In argumentation in the spec, the one and only thing which is annoying your users (according to the argumentation) is that plugins are tight to a milestone, and that it's not possible to upgrade a plug-in separately. I seriously doubt that Sahara users are going to do that, for many reasons. Here's 2 of them. First, because if they consume packages, I don't see downstream distribution making update to packages so often. Especially if there's so many to update, and so many combination to test, it's going to be very difficult for downstream distribution to know what to do, and do regular plugin updates. Second, because I very much doubt that users will want to update their plugins that often. 
Most OpenStack users are struggling with updates of the OpenStack core already. Another thing is experience we had with Cinder and Neutron. As you know, Cinder carries the drivers in the main repository, while Neutron decided to move to a plugin system. So we do trust the Cinder team to do a good job to maintain the plugins, and have them in sync with the release, for some Neutron plugins, it has been a complete disaster, with some plugins always lagging behind the needed API changes and so on. As for downstream distribution, it is very hard to know which plugins are worth packaging or not, and even harder to know which are well maintained upstream or not. What's your plan for addressing this issue? As for myself, I'm really not sure I'm going to invest so much time on Sahara packaging plugin. Maybe I'll decide to package a few, but probably not all plugins, as it's very time consuming. Maybe I'll find spare cycles. It wont be important anyway, because Debian Buster will be frozen, so I may just skip that work for the Stein release, as I'll probably be busy on other stuff. So, not best timing for me as a Debian package maintainer. :/ So, maybe you could reconsider? Or is it already too late? > == Preview > > The work-in-progress repositories are available here: > > Sahara: https://github.com/tellesnobrega/sahara/tree/split-plugins > > Plugins: > * Ambari: https://github.com/tellesnobrega/sahara-plugin-ambari > * CDH: https://github.com/tellesnobrega/sahara-plugin-cdh > * MapR: https://github.com/tellesnobrega/sahara-plugin-mapr > * Spark: https://github.com/tellesnobrega/sahara-plugin-spark > * Storm: https://github.com/tellesnobrega/sahara-plugin-storm > * Vanilla: https://github.com/tellesnobrega/sahara-plugin-vanilla Oh gosh... This becomes very embarrassing. I have absolutely no clue what these plugins are for, and there's absolutely no clue in the repositories. Appart from "Spark", which I vaguely heard about, the above names give zero clue on what they do. There's no hyperlink to the projects these plugins are supporting. I have no clue which of the above plugins is for Hadoop, which was historically the first target of Sahara. Please, please please please, for each plugin, provide: - A sort description of 80 chars, in one line - A long description of *AT LEAST* 3 lines of 80 chars That's IMO the minimum you own to the downstream package maintainers that are doing the work for your project for free (that's really my case, I never used Sahara, and I have zero professional insentive to package it, I really do it as a courtesy for Debian and OpenStack users). Thanks for working on Sahara, Cheers, Thomas Goirand (zigo) P.S: I've disregarded the added amount of work that all of this change imposes to all of us, as I thought you knew already about it, but this is not a good news to know 6 new packages will be needed... :( From openstack at nemebean.com Tue Dec 11 23:03:12 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 11 Dec 2018 17:03:12 -0600 Subject: [oslo][tooz][requirements] Tooz etcd3 tests blocked on grpcio>=1.16.0 Message-ID: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> The tooz gate is currently blocked on https://bugs.launchpad.net/python-tooz/+bug/1808046 which seems to have been introduced with grpcio 1.16.0. I need to raise that issue with the grpc folks, but in the meantime should we block the problematic versions? grpcio isn't currently in g-r and I can't remember what our policy on transitive dependencies is, so I thought I'd solicit opinions. Thanks. 
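For concreteness, "blocking" here would mean an exclusion entry for grpcio in global-requirements.txt, so that constraints generation stops picking up the broken release. A rough sketch only -- the exact bounds, the section it belongs in, and whether later 1.16.x releases are also affected still need checking:

    grpcio!=1.16.0  # Apache-2.0  (breaks tooz etcd3 tests, see bug 1808046)
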
-Ben From ltoscano at redhat.com Tue Dec 11 23:10:39 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 12 Dec 2018 00:10:39 +0100 Subject: [sahara][ops][tripleo][openstack-ansible][puppet-openstack][kolla][i18n] Upcoming split of the sahara repository[sahara][ops][tripleo][openstack-ansible][puppet-openstack][kolla][i18n] Upcoming split of the sahara repository In-Reply-To: References: <17553662.qPKBozYCZC@whitebase.usersys.redhat.com> Message-ID: <4598759.uZ9kGMok9I@whitebase.usersys.redhat.com> On Tuesday, 11 December 2018 23:42:40 CET Thomas Goirand wrote: > Hi Luigi, > > First thanks for communicating about this. > > I'm afraid you probably will not like much what I'm going to have to > write here. Sorry in advance if I'm not enthusiastic about this. > > On 12/11/18 9:54 PM, Luigi Toscano wrote: > > == Packaging > > > > It would be nice if users could get all the plugins packages by installing > > the same packages as before the split, but I understand that it may be > > tricky, because each plugin depends on the core of sahara. This means > > that a binary subpackage like openstack-sahara (RDO) or sahara (Debian) > > can't depend on the binary packages of the plugins, or that would > > introduce a circular dependency. > > > > I raised the issue on the RDO list to get some initial feedback and the > > solution that we are going to implement in RDO requires flipping the value > > of a variable to handle the bootstrap case and the normal case, please > > check: > > https://lists.rdoproject.org/pipermail/dev/2018-November/008972.html> > > and the rest of the thread, like: > > https://lists.rdoproject.org/pipermail/dev/2018-December/008977.html > > > > Any additional idea from other packagers would be really appreciated, also > > because it has a direct impact on at least the puppet modules. > > We aren't "packagers", we *maintain* packages, and therefore are package > maintainers. And from that perspective, such decision from upstream is > always disruptive, both from the package maintainer view, and for our users. About the terminology, I'd like to point out that I've seen "packagers" used without any particular weird meaning attached in all the development communities I followed or I contributed to, even used by the people in charge of the packages, and I knew quite a lot of them. Going to the proper point: > While a plugin architecture looks sexy from the outside, it can bite > hard in many ways. > > In argumentation in the spec, the one and only thing which is annoying > your users (according to the argumentation) is that plugins are tight to > a milestone, and that it's not possible to upgrade a plug-in separately. > I seriously doubt that Sahara users are going to do that, for many > reasons. Here's 2 of them. > > First, because if they consume packages, I don't see downstream > distribution making update to packages so often. Especially if there's > so many to update, and so many combination to test, it's going to be > very difficult for downstream distribution to know what to do, and do > regular plugin updates. That's about people in charge of the packages, so me (partially for RDO), you, and other people. > > Second, because I very much doubt that users will want to update their > plugins that often. Most OpenStack users are struggling with updates of > the OpenStack core already. We didn't come out with this requirement outselves. 
We had various operators over the year who could not provide newer plugins to their users because they could not upgrade the whole of OpenStack and asked for this. This feature is there exactly to not upgrade the core of OpenStack. Upgrading leaves components like the plugin is a totally different story. > Another thing is experience we had with Cinder and Neutron. As you know, > Cinder carries the drivers in the main repository, while Neutron decided > to move to a plugin system. So we do trust the Cinder team to do a good > job to maintain the plugins, and have them in sync with the release, for > some Neutron plugins, it has been a complete disaster, with some plugins > always lagging behind the needed API changes and so on. As for > downstream distribution, it is very hard to know which plugins are worth > packaging or not, and even harder to know which are well maintained > upstream or not. What's your plan for addressing this issue? By keeping a proper set of constraints in the plugins. As I mentioned, the core of Sahara is more than stable and most of the changes are in the plugins. tl;dr we don't plan API changes in the interface between the core and the plugins. And we probably don't want them. At most we may add some of them, but not remove. > As for myself, I'm really not sure I'm going to invest so much time on > Sahara packaging plugin. Maybe I'll decide to package a few, but > probably not all plugins, as it's very time consuming. Maybe I'll find > spare cycles. It wont be important anyway, because Debian Buster will be > frozen, so I may just skip that work for the Stein release, as I'll > probably be busy on other stuff. So, not best timing for me as a Debian > package maintainer. :/ > > So, maybe you could reconsider? Or is it already too late? There is no going back. > > > == Preview > > > > The work-in-progress repositories are available here: > > > > Sahara: https://github.com/tellesnobrega/sahara/tree/split-plugins > > > > Plugins: > > * Ambari: https://github.com/tellesnobrega/sahara-plugin-ambari > > * CDH: https://github.com/tellesnobrega/sahara-plugin-cdh > > * MapR: https://github.com/tellesnobrega/sahara-plugin-mapr > > * Spark: https://github.com/tellesnobrega/sahara-plugin-spark > > * Storm: https://github.com/tellesnobrega/sahara-plugin-storm > > * Vanilla: https://github.com/tellesnobrega/sahara-plugin-vanilla > > Oh gosh... This becomes very embarrassing. I have absolutely no clue > what these plugins are for, and there's absolutely no clue in the > repositories. Appart from "Spark", which I vaguely heard about, the > above names give zero clue on what they do. There's no hyperlink to the > projects these plugins are supporting. I have no clue which of the above > plugins is for Hadoop, which was historically the first target of Sahara. That's the meaning of "testing repository"; they are not final (they will need at least one or two more rebases from the current master). The description in setup.cfg can certainly be extended, and the related content of README.rst too. This is a valuable suggestion. 
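To make that concrete: the [metadata] section of each plugin's setup.cfg (together with the opening of its README.rst) is the natural place for the short and long descriptions that package maintainers need. A rough sketch for the vanilla plugin, where the wording, the home-page URL and any additional fields are only placeholders for the team to settle:

    [metadata]
    name = sahara-plugin-vanilla
    summary = Vanilla (plain Apache Hadoop) provisioning plugin for the Sahara data processing service
    description-file = README.rst
    home-page = https://docs.openstack.org/sahara/latest/

The same one-line summary can then be reused almost verbatim as the short package description downstream.
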
Ciao -- Luigi From mthode at mthode.org Tue Dec 11 23:51:47 2018 From: mthode at mthode.org (Matthew Thode) Date: Tue, 11 Dec 2018 17:51:47 -0600 Subject: [oslo][tooz][requirements] Tooz etcd3 tests blocked on grpcio>=1.16.0 In-Reply-To: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> References: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> Message-ID: <20181211235147.76ad2uwamon2dsvq@mthode.org> On 18-12-11 17:03:12, Ben Nemec wrote: > The tooz gate is currently blocked on > https://bugs.launchpad.net/python-tooz/+bug/1808046 which seems to have been > introduced with grpcio 1.16.0. I need to raise that issue with the grpc > folks, but in the meantime should we block the problematic versions? grpcio > isn't currently in g-r and I can't remember what our policy on transitive > dependencies is, so I thought I'd solicit opinions. > > Thanks. > I'm not aware of any policy on how we block dependencies like this. At the moment I'd suggest opening a bug with us so we don't forget and add it to global-requirements (preferably with a comment). Answering all the questions as normal. https://storyboard.openstack.org/#!/project/openstack/requirements -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dangtrinhnt at gmail.com Wed Dec 12 00:47:35 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 12 Dec 2018 09:47:35 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: <20181211150147.f3oj4gzkufpbp2fc@yuggoth.org> References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> <20181211150147.f3oj4gzkufpbp2fc@yuggoth.org> Message-ID: Thank Jeremy. I will look into bindep. On Wed, Dec 12, 2018 at 12:04 AM Jeremy Stanley wrote: > On 2018-12-11 15:21:47 +0900 (+0900), Trinh Nguyen wrote: > > I'm trying to figure out how Zuul processes the test-setup.sh [1] > > file to adopt new dependencies. > [...] > > Looking at what you've put in there for your JRE packages, those > would be better handled in a bindep.txt file in your repository. > Running tools/test-setup.sh in jobs is really to catch other sorts > of setup steps like precreating a database or formatting and > mounting a particular filesystem your tests are going to rely on. > For system package dependencies of your jobs, we have a declarative > mechanism already which doesn't require complex conditional > handling: we install whatever bindep says is missing. > > I see you're also leveraging tools/test-setup.sh to obtain packages > of elasticsearch which are not available in the distributions on > which your jobs run, which I guess is a suitable workaround though > it seems odd to test on platforms which don't package the software > on which your project relies. > -- > Jeremy Stanley > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Dec 12 01:03:28 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 12 Dec 2018 10:03:28 +0900 Subject: [dev] [tc][all] TC office hours is started now on #openstack-tc Message-ID: <1679ff0a97d.cfc3717a14880.8241100762744862054@ghanshyammann.com> Hello everyone, TC office hour is started on #openstack-tc channel. Feel free to reach to us for anything you want discuss/input/feedback/help from TC. 
- gmann & TC From niraj.singh at nttdata.com Wed Dec 12 02:30:46 2018 From: niraj.singh at nttdata.com (Singh, Niraj) Date: Wed, 12 Dec 2018 02:30:46 +0000 Subject: [tosca-parser] Need new release of tosca-parser library Message-ID: Hi Team, Heat-translator patch [1] is dependent on tosca-parser changes that is already merged in patch [2]. We need new tosco-parser release so that we can add the new release version in heat-translator and proceed further with the review. And tacker patch [3] is dependent on heat-translator [1]. So we also need new release of heat-translator after merging the new changes so that it can be used by tacker changes to successfully work with new functionality. Please let me know the plan when you will release tosco-parse library. [1] https://review.openstack.org/#/c/619154/ heat-translator [2] https://review.openstack.org/#/c/612973/ tosco-parser [3] https://review.openstack.org/#/c/622888/ tacker Best Regards, Niraj Singh | System Analyst| NTT DATA Global Delivery Services Ltd.| w. +91.20.6604.1500 x 403|m.8055861633| Niraj.Singh at nttdata.com Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Wed Dec 12 02:35:15 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 12 Dec 2018 13:35:15 +1100 Subject: [oslo][tooz][requirements] Tooz etcd3 tests blocked on grpcio>=1.16.0 In-Reply-To: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> References: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> Message-ID: <20181212023515.GB6373@thor.bakeyournoodle.com> On Tue, Dec 11, 2018 at 05:03:12PM -0600, Ben Nemec wrote: > The tooz gate is currently blocked on > https://bugs.launchpad.net/python-tooz/+bug/1808046 which seems to have been > introduced with grpcio 1.16.0. I need to raise that issue with the grpc > folks, but in the meantime should we block the problematic versions? grpcio > isn't currently in g-r and I can't remember what our policy on transitive > dependencies is, so I thought I'd solicit opinions. Yup just add it in this section: http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n479 Bonus points if you have a second patch that removes daiquiri from that section as we're clearly not blocking version any more. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dangtrinhnt at gmail.com Wed Dec 12 02:37:10 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 12 Dec 2018 11:37:10 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> <20181211150147.f3oj4gzkufpbp2fc@yuggoth.org> Message-ID: Trying to look through the logs of the last merged patch set [1] but nothing there. Is there any other places that I can look into? [1] https://review.openstack.org/#/c/616056/ Thanks, On Wed, Dec 12, 2018 at 9:47 AM Trinh Nguyen wrote: > Thank Jeremy. 
I will look into bindep. > > On Wed, Dec 12, 2018 at 12:04 AM Jeremy Stanley wrote: > >> On 2018-12-11 15:21:47 +0900 (+0900), Trinh Nguyen wrote: >> > I'm trying to figure out how Zuul processes the test-setup.sh [1] >> > file to adopt new dependencies. >> [...] >> >> Looking at what you've put in there for your JRE packages, those >> would be better handled in a bindep.txt file in your repository. >> Running tools/test-setup.sh in jobs is really to catch other sorts >> of setup steps like precreating a database or formatting and >> mounting a particular filesystem your tests are going to rely on. >> For system package dependencies of your jobs, we have a declarative >> mechanism already which doesn't require complex conditional >> handling: we install whatever bindep says is missing. >> >> I see you're also leveraging tools/test-setup.sh to obtain packages >> of elasticsearch which are not available in the distributions on >> which your jobs run, which I guess is a suitable workaround though >> it seems odd to test on platforms which don't package the software >> on which your project relies. >> -- >> Jeremy Stanley >> > > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Dec 12 02:37:45 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 11 Dec 2018 18:37:45 -0800 Subject: [oslo][tooz][requirements] Tooz etcd3 tests blocked on grpcio>=1.16.0 In-Reply-To: <20181212023515.GB6373@thor.bakeyournoodle.com> References: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> <20181212023515.GB6373@thor.bakeyournoodle.com> Message-ID: <1544582265.3129550.1606583752.09D0F005@webmail.messagingengine.com> On Tue, Dec 11, 2018, at 6:35 PM, Tony Breeds wrote: > On Tue, Dec 11, 2018 at 05:03:12PM -0600, Ben Nemec wrote: > > The tooz gate is currently blocked on > > https://bugs.launchpad.net/python-tooz/+bug/1808046 which seems to have been > > introduced with grpcio 1.16.0. I need to raise that issue with the grpc > > folks, but in the meantime should we block the problematic versions? grpcio > > isn't currently in g-r and I can't remember what our policy on transitive > > dependencies is, so I thought I'd solicit opinions. > > Yup just add it in this section: > http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n479 Do we need to have the constraints generation tooling install tooz[etcd3] so that the transitive dep for grpcio is installed too? Or will that get pulled in because it is in global-requirements? Enough has changed in how this all works I'm not totally sure what is necessary anymore but tooz lists etcd3 as an extra requires so you have to ask for it to get those deps. Clark From tony at bakeyournoodle.com Wed Dec 12 02:39:51 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 12 Dec 2018 13:39:51 +1100 Subject: [tosca-parser] Need new release of tosca-parser library In-Reply-To: References: Message-ID: <20181212023950.GC6373@thor.bakeyournoodle.com> On Wed, Dec 12, 2018 at 02:30:46AM +0000, Singh, Niraj wrote: > Hi Team, > > > Heat-translator patch [1] is dependent on tosca-parser changes that is already merged in patch [2]. > > > We need new tosco-parser release so that we can add the new release version in heat-translator and proceed further with the review. This can be done via the openstack/releases repo. 
We'll need either the heat PTL or release team liaison to approve the patch if neither of them can propose the change. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Wed Dec 12 02:47:21 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Wed, 12 Dec 2018 13:47:21 +1100 Subject: [oslo][tooz][requirements] Tooz etcd3 tests blocked on grpcio>=1.16.0 In-Reply-To: <1544582265.3129550.1606583752.09D0F005@webmail.messagingengine.com> References: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> <20181212023515.GB6373@thor.bakeyournoodle.com> <1544582265.3129550.1606583752.09D0F005@webmail.messagingengine.com> Message-ID: <20181212024720.GD6373@thor.bakeyournoodle.com> On Tue, Dec 11, 2018 at 06:37:45PM -0800, Clark Boylan wrote: > Do we need to have the constraints generation tooling install > tooz[etcd3] so that the transitive dep for grpcio is installed too? We do not. > Or > will that get pulled in because it is in global-requirements? Yes that one :D Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mthode at mthode.org Wed Dec 12 02:54:03 2018 From: mthode at mthode.org (Matthew Thode) Date: Tue, 11 Dec 2018 20:54:03 -0600 Subject: [oslo][tooz][requirements] Tooz etcd3 tests blocked on grpcio>=1.16.0 In-Reply-To: <1544582265.3129550.1606583752.09D0F005@webmail.messagingengine.com> References: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> <20181212023515.GB6373@thor.bakeyournoodle.com> <1544582265.3129550.1606583752.09D0F005@webmail.messagingengine.com> Message-ID: <20181212025403.nnj2txxbot2dha66@mthode.org> On 18-12-11 18:37:45, Clark Boylan wrote: > On Tue, Dec 11, 2018, at 6:35 PM, Tony Breeds wrote: > > On Tue, Dec 11, 2018 at 05:03:12PM -0600, Ben Nemec wrote: > > > The tooz gate is currently blocked on > > > https://bugs.launchpad.net/python-tooz/+bug/1808046 which seems to have been > > > introduced with grpcio 1.16.0. I need to raise that issue with the grpc > > > folks, but in the meantime should we block the problematic versions? grpcio > > > isn't currently in g-r and I can't remember what our policy on transitive > > > dependencies is, so I thought I'd solicit opinions. > > > > Yup just add it in this section: > > http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n479 > > Do we need to have the constraints generation tooling install tooz[etcd3] so that the transitive dep for grpcio is installed too? Or will that get pulled in because it is in global-requirements? > > Enough has changed in how this all works I'm not totally sure what is necessary anymore but tooz lists etcd3 as an extra requires so you have to ask for it to get those deps. > ya, it should be in global-requirements if it's an extra dep, the reason why gating didn't catch it is because it's not checking. see http://logs.openstack.org/47/624247/3/check/requirements-check/f8b35d2/job-output.txt.gz#_2018-12-11_22_58_07_751203 for example. going to see if I can add that... -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Wed Dec 12 03:07:26 2018 From: mthode at mthode.org (Matthew Thode) Date: Tue, 11 Dec 2018 21:07:26 -0600 Subject: [oslo][tooz][requirements] Tooz etcd3 tests blocked on grpcio>=1.16.0 In-Reply-To: <20181212025403.nnj2txxbot2dha66@mthode.org> References: <23246b05-d96f-f981-5ab9-a9d102fcb480@nemebean.com> <20181212023515.GB6373@thor.bakeyournoodle.com> <1544582265.3129550.1606583752.09D0F005@webmail.messagingengine.com> <20181212025403.nnj2txxbot2dha66@mthode.org> Message-ID: <20181212030726.gifmmnkwc5mskyyx@mthode.org> On 18-12-11 20:54:03, Matthew Thode wrote: > On 18-12-11 18:37:45, Clark Boylan wrote: > > On Tue, Dec 11, 2018, at 6:35 PM, Tony Breeds wrote: > > > On Tue, Dec 11, 2018 at 05:03:12PM -0600, Ben Nemec wrote: > > > > The tooz gate is currently blocked on > > > > https://bugs.launchpad.net/python-tooz/+bug/1808046 which seems to have been > > > > introduced with grpcio 1.16.0. I need to raise that issue with the grpc > > > > folks, but in the meantime should we block the problematic versions? grpcio > > > > isn't currently in g-r and I can't remember what our policy on transitive > > > > dependencies is, so I thought I'd solicit opinions. > > > > > > Yup just add it in this section: > > > http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n479 > > > > Do we need to have the constraints generation tooling install tooz[etcd3] so that the transitive dep for grpcio is installed too? Or will that get pulled in because it is in global-requirements? > > > > Enough has changed in how this all works I'm not totally sure what is necessary anymore but tooz lists etcd3 as an extra requires so you have to ask for it to get those deps. > > > > ya, it should be in global-requirements if it's an extra dep, the reason > why gating didn't catch it is because it's not checking. > > see > http://logs.openstack.org/47/624247/3/check/requirements-check/f8b35d2/job-output.txt.gz#_2018-12-11_22_58_07_751203 > for example. > > going to see if I can add that... > Ignore this, I thought we were talking about grpcio being in listed as a direct (optional) dependency. Time to go to sleep. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From bobh at haddleton.net Wed Dec 12 03:41:34 2018 From: bobh at haddleton.net (Bob Haddleton) Date: Tue, 11 Dec 2018 21:41:34 -0600 Subject: [tosca-parser] Need new release of tosca-parser library In-Reply-To: <20181212023950.GC6373@thor.bakeyournoodle.com> References: <20181212023950.GC6373@thor.bakeyournoodle.com> Message-ID: <6F5CE673-171B-436B-8D08-414494A9A60C@haddleton.net> I will submit this patch tomorrow morning (US/Central) Bob > On Dec 11, 2018, at 20:39, Tony Breeds wrote: > >> On Wed, Dec 12, 2018 at 02:30:46AM +0000, Singh, Niraj wrote: >> Hi Team, >> >> >> Heat-translator patch [1] is dependent on tosca-parser changes that is already merged in patch [2]. >> >> >> We need new tosco-parser release so that we can add the new release version in heat-translator and proceed further with the review. > > This can be done via the openstack/releases repo. We'll need either the > heat PTL or release team liaison to approve the patch if neither of them > can propose the change. > > Yours Tony. 
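For anyone new to that workflow: the release request is a small addition to the existing tosca-parser deliverable file in the openstack/releases repository, reviewed like any other change. Roughly as below, where the file path, the version number and the commit hash are placeholders to be taken from the current deliverable file and the merged tosca-parser change:

    # deliverables/<series or _independent>/tosca-parser.yaml
    releases:
      - version: <next version>
        projects:
          - repo: openstack/tosca-parser
            hash: <sha of the commit to release>

Once that merges and the library is published, the heat-translator requirement bump and its own release can follow the same path.
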
From chkumar246 at gmail.com Wed Dec 12 04:06:45 2018 From: chkumar246 at gmail.com (Chandan kumar) Date: Wed, 12 Dec 2018 09:36:45 +0530 Subject: [tripleo][openstack-ansible] collaboration on os_tempest role update II - Dec 12, 2018 Message-ID: Hello, Below is the another Iteration of updates on what happened in os_tempest[1.] side. Things got improved till Dec 4 - 12, 2018: * Use tempest run for generating subunit results - https://review.openstack.org/621584 Things still in progress: * Better blacklist and whitelist tests management - https://review.openstack.org/621605 * Use --profile feature for generating tempest.conf by passing all tempestconf named arguments in yaml file: - https://review.openstack.org/623187 - https://review.openstack.org/623413 * Use tempestconf rpm for generating tempest.conf in CentOS/SUSE distro Jobs: https://review.openstack.org/622999 Improvements in python-tempestconf: * Introducing profile feature to generate sample profile in tempestconf as well as user can create the same profile of named arguments and pass it to discover-tempest-config binary. It will help to improve automation across tooling. Next Week: * We will continue working on above in progress tasks. Here is the first update [2] and second update [3] Have queries, Feel free to ping us on #tripleo or #openstack-ansible channel. Links: [1.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest [2.] http://lists.openstack.org/pipermail/openstack-dev/2018-November/136452.html [3.] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000537.html Thanks, Chandan Kumar From rico.lin.guanyu at gmail.com Wed Dec 12 05:31:41 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 12 Dec 2018 13:31:41 +0800 Subject: [tosca-parser] Need new release of tosca-parser library In-Reply-To: <6F5CE673-171B-436B-8D08-414494A9A60C@haddleton.net> References: <20181212023950.GC6373@thor.bakeyournoodle.com> <6F5CE673-171B-436B-8D08-414494A9A60C@haddleton.net> Message-ID: On Wed, Dec 12, 2018 at 11:47 AM Bob Haddleton wrote: > I will submit this patch tomorrow morning (US/Central) > I will help to give my aproval just to honor the project release workflow > > Bob > > > On Dec 11, 2018, at 20:39, Tony Breeds wrote: > > > >> On Wed, Dec 12, 2018 at 02:30:46AM +0000, Singh, Niraj wrote: > >> Hi Team, > >> > >> > >> Heat-translator patch [1] is dependent on tosca-parser changes that is > already merged in patch [2]. > >> > >> > >> We need new tosco-parser release so that we can add the new release > version in heat-translator and proceed further with the review. > > > > This can be done via the openstack/releases repo. We'll need either the > > heat PTL or release team liaison to approve the patch if neither of them > > can propose the change. > > > > Yours Tony. > > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Dec 12 07:54:36 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 12 Dec 2018 08:54:36 +0100 Subject: [neutron] gate issue Message-ID: Hi all, Recently we have issue with neutron-tempest-iptables_hybrid job which is failing 100% times. It is related to [1] and there is no need to recheck Your patches if CI failed on this job. To unblock our gate we proposed patch to make this job non-voting for some time [2]. When it will be merged, please rebase Your patches and it should be good then. 
[1] https://bugs.launchpad.net/os-vif/+bug/1807949 [2] https://review.openstack.org/#/c/624489/ — Slawek Kaplonski Senior software engineer Red Hat From rico.lin.guanyu at gmail.com Wed Dec 12 08:10:44 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 12 Dec 2018 16:10:44 +0800 Subject: [heat] changing our meeting time, and no meeting this week Message-ID: Dear team As we have some change recently, I will move Heat team meeting time to Wednesday 08:00 UTC, I will send the patch later. We will not hold a meeting today, but we definitely needs a meeting next week so we can spend some time to clear our debt. -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Wed Dec 12 08:35:05 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 12 Dec 2018 17:35:05 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> <20181211150147.f3oj4gzkufpbp2fc@yuggoth.org> Message-ID: Hi all, Finally, found the problem. For some reason, py27 has to wait a little bit longer for the elasticsearch instance to be up. I just let the functional setup code wait a little while. Problem solved. Amazing! :) Many thanks :) On Wed, Dec 12, 2018 at 11:37 AM Trinh Nguyen wrote: > Trying to look through the logs of the last merged patch set [1] but > nothing there. Is there any other places that I can look into? > > [1] https://review.openstack.org/#/c/616056/ > > Thanks, > > On Wed, Dec 12, 2018 at 9:47 AM Trinh Nguyen > wrote: > >> Thank Jeremy. I will look into bindep. >> >> On Wed, Dec 12, 2018 at 12:04 AM Jeremy Stanley >> wrote: >> >>> On 2018-12-11 15:21:47 +0900 (+0900), Trinh Nguyen wrote: >>> > I'm trying to figure out how Zuul processes the test-setup.sh [1] >>> > file to adopt new dependencies. >>> [...] >>> >>> Looking at what you've put in there for your JRE packages, those >>> would be better handled in a bindep.txt file in your repository. >>> Running tools/test-setup.sh in jobs is really to catch other sorts >>> of setup steps like precreating a database or formatting and >>> mounting a particular filesystem your tests are going to rely on. >>> For system package dependencies of your jobs, we have a declarative >>> mechanism already which doesn't require complex conditional >>> handling: we install whatever bindep says is missing. >>> >>> I see you're also leveraging tools/test-setup.sh to obtain packages >>> of elasticsearch which are not available in the distributions on >>> which your jobs run, which I guess is a suitable workaround though >>> it seems odd to test on platforms which don't package the software >>> on which your project relies. >>> -- >>> Jeremy Stanley >>> >> >> >> -- >> *Trinh Nguyen* >> *www.edlab.xyz * >> >> > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Dec 12 09:25:45 2018 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 12 Dec 2018 10:25:45 +0100 Subject: openstack magnum queens swarm error Message-ID: ello Everyone, I installed queens on centos with magnum and I am trying to create a swarm cluster with one muster and one node. 
The image I used is fedora-atomic 27 update 04.
The stack creation ended with an error and magnum-conductor reports:

Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.304 17964 WARNING magnum.drivers.heat.template_def [req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] stack does not have output_key api_address
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.305 17964 WARNING magnum.drivers.heat.template_def [req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] stack does not have output_key swarm_masters
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.306 17964 WARNING magnum.drivers.heat.template_def [req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] stack does not have output_key swarm_nodes
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.306 17964 WARNING magnum.drivers.heat.template_def [req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] stack does not have output_key discovery_url
Dec 12 09:51:17 tst2-osctrl01 magnum-conductor: 2018-12-12 09:51:17.317 17964 ERROR magnum.drivers.heat.driver [req-bfa19294-5671-47a0-b0ac-9e544f0e5e38 - - - - -] Cluster error, stack status: CREATE_FAILED, stack_id: 306bd83a-7878-4d94-8ed0-1d297eec9768, reason: Resource CREATE failed: WaitConditionFailure: resources.swarm_nodes.resources[0].resources.node_wait_condition: swarm-agent service failed to start.

I connected to the master node to verify whether the swarm agent is running. In the cloud-init log I found:

requests.exceptions.ConnectionError: HTTPConnectionPool(host='10.102.184.190', port=5000): Max retries exceeded with url: /v3//auth/tokens (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 110] Connection timed out',))
Cloud-init v. 0.7.9 running 'modules:final' at Wed, 12 Dec 2018 08:45:31 +0000. Up 55.54 seconds.
2018-12-12 08:47:45,858 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-005 [1]
/var/lib/cloud/instance/scripts/part-006: line 13: /etc/etcd/etcd.conf: No such file or directory
/var/lib/cloud/instance/scripts/part-006: line 26: /etc/etcd/etcd.conf: No such file or directory
/var/lib/cloud/instance/scripts/part-006: line 38: /etc/etcd/etcd.conf: No such file or directory
2018-12-12 08:47:45,870 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-006 [1]
Configuring docker network ...
Configuring docker network service ...
Removed /etc/systemd/system/multi-user.target.wants/docker-storage-setup.service.
New size given (1280 extents) not larger than existing size (4863 extents)
ERROR: There is not enough free space in volume group atomicos to create data volume of size MIN_DATA_SIZE=2G.
2018-12-12 08:47:46,206 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-010 [1]
+ systemctl stop docker
+ echo 'starting services'
starting services
+ systemctl daemon-reload
+ for service in etcd docker.socket docker swarm-manager
+ echo 'activating service etcd'
activating service etcd
+ systemctl enable etcd
Failed to enable unit: Unit file etcd.service does not exist.
+ systemctl --no-block start etcd
Failed to start etcd.service: Unit etcd.service not found.
+ for service in etcd docker.socket docker swarm-manager
+ echo 'activating service docker.socket'
activating service docker.socket
+ systemctl enable docker.socket

So: 1) it seems the etcd service is not installed, and 2) the instance is required to contact the controller on port 5000 (is that correct?).
Please help me.
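Both points can be checked directly from the master node with standard tools; a generic sketch, where 10.102.184.190 is simply the auth host taken from the traceback above:

    curl -v http://10.102.184.190:5000/v3/              # is Keystone reachable from the cluster network?
    systemctl list-unit-files | grep -E 'etcd|swarm|docker'
    sudo journalctl -u swarm-manager --no-pager | tail -n 50

If Keystone on port 5000 is not reachable from the cluster network, the nodes cannot obtain tokens to signal their Heat wait conditions, so connectivity from the VMs to the controller APIs is worth confirming first (security groups, routing or a missing proxy setting are the usual suspects).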
Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Dec 12 12:21:15 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 12 Dec 2018 21:21:15 +0900 Subject: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04) In-Reply-To: <1677e3f2b18.e6941ff4108960.3659294646236910868@ghanshyammann.com> References: <20181106210549.4nv6co64qbqk5l7f@skaplons-mac> <20181106212533.v6eapwxd2ksggrlo@yuggoth.org> <4C6DAE05-6FFB-4671-89DA-5EB07229DB26@redhat.com> <166eb10f8fb.117c25ba675341.116732651371514382@ghanshyammann.com> <1673a198ed5.ffd33ea931022.2184156112862411416@ghanshyammann.com> <1677e3f2b18.e6941ff4108960.3659294646236910868@ghanshyammann.com> Message-ID: <167a25d310b.e84195b728207.4908393628290283653@ghanshyammann.com> ---- On Wed, 05 Dec 2018 21:02:08 +0900 Ghanshyam Mann wrote ---- Devstack and Tempest patch to move base job to Bionic are merged now. tempest-full, tempest-slow, etc + all jobs derived from tempest or devstack base jobs are running on bionic now. devstack provide bionic nodeset now, any zuulv3 native job running on xenail (zuulv3 jobs not derived from devstack/tempest base jobs) can be moved to bionic using those nodeset. Note- all the legacy jobs are still on xenial and they will be move to Bionic during migrating them to zullv3 native. -gmann > Reminder to test your project specific jobs if those are dependent on Devstack or Tempest base jobs and keep adding the results on etherpad- https://etherpad.openstack.org/p/devstack-bionic > > We will merge the Devstack and Tempest base job on Bionic on 10th Dec 2018. > > -gmann > > > ---- On Thu, 22 Nov 2018 15:26:52 +0900 Ghanshyam Mann wrote ---- > > Hi All, > > > > Let's go with approach 1 means migrating the Devstack and Tempest base jobs to Bionic. This will move most of the jobs to Bionic. > > > > We have two patches up which move all Devstack and Tempest jobs to Bionic and it's working fine. > > > > 1. All DevStack jobs to Bionic - https://review.openstack.org/#/c/610977/ > > - This will move devstack-minimal, devstack, devstack-ipv6, devstack-multinode jobs to bionic only for master which means it will be stein onwards. All these jobs will use > > xenial till stable/rocky. > > > > 2. All Tempest base jobs (except stable branches job running on master) to Bionic - https://review.openstack.org/#/c/618169/ > > - This will move devstack-tempest, tempest-all, devstack-tempest-ipv6, tempest-full, tempest-full-py3, tempest-multinode-full, tempest-slow jobs to bionic. > > Note- Even Tempest is branchless and these tempest jobs have been moved to Bionic, they will still use xenial for all stable branches(till stable/rocky) testing. with zuulv3 magic and devstack base jobs nodeset for stable branch (xenial) and master (stein onwards -bionic) will take care of that. Tested on [1] and working fine. Thanks corvus and clarkb for guiding to this optimized way. > > > > 3. Legacy jobs are not migrated to bionic. They should get migrated to Bionic while they are moved to zuulv3 native. So if your projects have many legacy jobs then, they will still run on xenial. > > > > > > Any job inherits from those base jobs will behave the same way (running on bionic from stein onwards and xenial till stable/rocky). > > > > I am writing the plan and next action item to complete this migration activity: > > > > 1 Project teams: need to test their jobs 1. which are inherited from devstack/tempest base jobs and should pass as it is 2. 
Any zuulv3 jobs not using devstack/tempest base job required to migrate to use bionic (Devstack patch provide the bionic nodeset) and test it. Start writing the results on etherpad[2] > > > > 2 QA team will merge the above patches by 10th Dec so that we can find and fix any issues as early and to avoid the same during release time. > > > > Let's finish the pre-testing till 10th Dec and then merge the bionic migration patches. > > > > > > [1] https://review.openstack.org/#/c/618181/ https://review.openstack.org/#/c/618176/ > > [2] https://etherpad.openstack.org/p/devstack-bionic > > > > -gmann > > > > ---- On Wed, 07 Nov 2018 08:45:45 +0900 Doug Hellmann wrote ---- > > > Ghanshyam Mann writes: > > > > > > > ---- On Wed, 07 Nov 2018 06:51:32 +0900 Slawomir Kaplonski wrote ---- > > > > > Hi, > > > > > > > > > > > Wiadomość napisana przez Jeremy Stanley w dniu 06.11.2018, o godz. 22:25: > > > > > > > > > > > > On 2018-11-06 22:05:49 +0100 (+0100), Slawek Kaplonski wrote: > > > > > > [...] > > > > > >> also add jobs like "devstack-xenial" and "tempest-full-xenial" > > > > > >> which projects can use still for some time if their job on Bionic > > > > > >> would be broken now? > > > > > > [...] > > > > > > > > > > > > That opens the door to piecemeal migration, which (as we similarly > > > > > > saw during the Trusty to Xenial switch) will inevitably lead to > > > > > > projects who no longer gate on Xenial being unable to integration > > > > > > test against projects who don't yet support Bionic. At the same > > > > > > time, projects which have switched to Bionic will start merging > > > > > > changes which only work on Bionic without realizing it, so that > > > > > > projects which test on Xenial can't use them. In short, you'll be > > > > > > broken either way. On top of that, you can end up with projects that > > > > > > don't get around to switching completely before release comes, and > > > > > > then they're stuck having to manage a test platform transition on a > > > > > > stable branch. > > > > > > > > > > I understand Your point here but will option 2) from first email lead to the same issues then? > > > > > > > > seems so. approach 1 is less risky for such integrated testing issues and requires less work. In approach 1, we can coordinate the base job migration with project side testing with bionic. > > > > > > > > -gmann > > > > > > I like the approach of updating the devstack jobs to move everything to > > > Bionic at one time because it sounds like it presents less risk of us > > > ending up with something that looks like it works together but doesn't > > > actually because it's tested on a different platform, as well as being > > > less likely to cause us to have to do major porting work in stable > > > branches after the release. > > > > > > We'll need to take the same approach when updating the version of Python > > > 3 used inside of devstack. > > > > > > Doug > > > > > > > > > From aspiers at suse.com Wed Dec 12 12:28:06 2018 From: aspiers at suse.com (Adam Spiers) Date: Wed, 12 Dec 2018 12:28:06 +0000 Subject: [all]Forum summary: Expose SIGs and WGs In-Reply-To: References: Message-ID: <20181212122806.kll3gw6w65wz3js3@pacific.linksys.moosehall> Rico Lin wrote: >Dear all > >Here is the summary for forum `Expose SIGs and WGs` (etherpad [1] ). >This concept still under development, so this is an open discussion and >we need more feedbacks. >Here are some general agreements on actions or ideas that we think it's >worth to find the answer. Thanks for driving this, Rico! 
And sorry for the slow reply. >*Set up guidelines for SIGs/WGs/Teams for interaction specific to this >around tracking cross-project work* >We tend to agree that we kind of lack for a guideline or a sample for >SIGs/WGs, since all SIGs/WGs formed for different interest, we won't try to >unify tools (unless that's what everyone agrees on) or rules for all >groups. What we can do is to give more help to groups and provide a clear >way for how they can set up cross-project works if they want to. Also, we >can provide information on how to reach to users, ops, and developers and >bridge them up. And we can even do a more general guideline or sample on >how other SIGs/WGs are doing with their own workflow. Like self-healing SIG >working on getting user story and feedback and use them to general better >document/guideline for other users. Also, public cloud WG working on >collect issues from public cloud providers and bring those issues to >projects. Those IMO are great examples that we should put them down >somewhere for cross SIGs/WGs consideration. We already have some basic guidelines for communication here: https://governance.openstack.org/sigs/ Maybe we should just extend that with some suggested best practices? For example: - Set up an openstack/$signame-sig git repository (we could even create a cookiecutter repo, or is that overkill?) - Set up a StoryBoard project linked with that git repository - For cross-project collaboration, recommend the following: - submit cross-project stories to StoryBoard - submit cross-project specs to the SIG's git repo (and the SIG lead could set up a template for these, e.g. http://git.openstack.org/cgit/openstack/self-healing-sig/tree/specs/template.rst ) - post cross-project discussion to openstack-discuss@ with [$signame-sig] and all the appropriate [$project] tags in the Subject header >As a further idea, we can even >discuss if it's a common interest to have a SIG to help on SIGs. We already have that ;-) It's the "meta" SIG, mentioned here: https://governance.openstack.org/sigs/ >*A workflow for tracking:* >This kind of follow above item. If we're going to set up a workflow, what >we can add in to help people live an easier life? This is also an idea that >no one in the room thinks it's a bad one, so it means in long term, it >might worth our time to provide more clear information on what exactly >workflow that we suggest everyone use. Doesn't StoryBoard handle tracking nicely? >*Discuss SIG spec repo*: >The discussion here is how can we monitoring SIGs/WGs health and track >tasks. When talking about tasks we not just talking about bugs, but also >features that's been considered as essential tasks for SIGs/WGs. We need a >place to put them down in a trackable way (from a user story to a patch for >implementation). Again I think StoryBoard works nicely for this. >*Ask foundation about have project update for SIGs/WGs*: >One action we can start right now is to let SIGs/WGs present a project >update (or even like a session but give each group 5 mins to present). >This should help group getting more attention. And even capable to send >out messages like what's the most important features or bug fixes they >need from project teams, or what's the most important tasks that are under >planning or working on. >Fortunately, we got Melvin Hillsman (UC) volunteer on this task. Yes, I think this is really important. 
And from personal experience as a SIG chair, I know it would help me to have someone pestering me to provide project updates ;-) >The thing that we also wish to do is to clear the message here. We think >most of the tools are already there, so we shouldn't need to ask project >teams to do any huge change. But still, we found there are definitely some >improvements that we can do to better bridge users, ops, and developers. >You might find some information here didn't give you a clear answer. And >that's because of we still under open discussion for this. And I assume we >gonna keep finding actions from discussions that we can do right away. We >will try to avoid that we have to do the exact same session with the same >argument over and over again. >So please give your feedback, any idea, or give us your help if you also >care about this. Yes, I think you are right. My simple suggestion is above: just add some best practices to the governance-sigs page. From dangtrinhnt at gmail.com Wed Dec 12 12:32:23 2018 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Wed, 12 Dec 2018 21:32:23 +0900 Subject: [Searchlight][Zuul] tox failed tests at zuul check only In-Reply-To: References: <1543855087.1459125.1597252016.5DBF506E@webmail.messagingengine.com> <1544206572.507632.1602338888.7C5CDA48@webmail.messagingengine.com> <20181211150147.f3oj4gzkufpbp2fc@yuggoth.org> Message-ID: BTW, I switched to use the bindep.txt and make the test script much simpler. Thanks :) On Wed, Dec 12, 2018 at 5:35 PM Trinh Nguyen wrote: > Hi all, > > Finally, found the problem. For some reason, py27 has to wait a little bit > longer for the elasticsearch instance to be up. I just let the functional > setup code wait a little while. > > Problem solved. Amazing! :) > > Many thanks :) > > On Wed, Dec 12, 2018 at 11:37 AM Trinh Nguyen > wrote: > >> Trying to look through the logs of the last merged patch set [1] but >> nothing there. Is there any other places that I can look into? >> >> [1] https://review.openstack.org/#/c/616056/ >> >> Thanks, >> >> On Wed, Dec 12, 2018 at 9:47 AM Trinh Nguyen >> wrote: >> >>> Thank Jeremy. I will look into bindep. >>> >>> On Wed, Dec 12, 2018 at 12:04 AM Jeremy Stanley >>> wrote: >>> >>>> On 2018-12-11 15:21:47 +0900 (+0900), Trinh Nguyen wrote: >>>> > I'm trying to figure out how Zuul processes the test-setup.sh [1] >>>> > file to adopt new dependencies. >>>> [...] >>>> >>>> Looking at what you've put in there for your JRE packages, those >>>> would be better handled in a bindep.txt file in your repository. >>>> Running tools/test-setup.sh in jobs is really to catch other sorts >>>> of setup steps like precreating a database or formatting and >>>> mounting a particular filesystem your tests are going to rely on. >>>> For system package dependencies of your jobs, we have a declarative >>>> mechanism already which doesn't require complex conditional >>>> handling: we install whatever bindep says is missing. >>>> >>>> I see you're also leveraging tools/test-setup.sh to obtain packages >>>> of elasticsearch which are not available in the distributions on >>>> which your jobs run, which I guess is a suitable workaround though >>>> it seems odd to test on platforms which don't package the software >>>> on which your project relies. 
>>>> -- >>>> Jeremy Stanley >>>> >>> >>> >>> -- >>> *Trinh Nguyen* >>> *www.edlab.xyz * >>> >>> >> >> -- >> *Trinh Nguyen* >> *www.edlab.xyz * >> >> > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Wed Dec 12 13:20:44 2018 From: aspiers at suse.com (Adam Spiers) Date: Wed, 12 Dec 2018 13:20:44 +0000 Subject: [all][security-sig][meta-sig] Forum summary: Expose SIGs and WGs In-Reply-To: References: Message-ID: <20181212132044.ibvr6dd54zrbqkj3@pacific.linksys.moosehall> Matt Riedemann wrote: >On 12/3/2018 11:42 AM, Rico Lin wrote: >>We also have some real story (Luzi's story) for people to get a >>better understanding of why current workflow can look like for >>someone who tries to help. > >I looked over the note on this in the etherpad. Me too - in case anyone missed the link to this initiative around image encryption, it's near the bottom of: https://etherpad.openstack.org/p/expose-sigs-and-wgs And BTW it sounds like a really cool initiative to me! In fact I think it could nicely complement the work I am doing on adding AMD SEV support to nova: https://review.openstack.org/#/c/609779/ >They did what they >were asked and things have stalled. At this point, I think it comes >down to priorities, and in order to prioritize something big like this >that requires coordinated work across several projects, we are going >to need more stakeholders coming forward and saying they also want >this feature so the vendors who are paying the people to work upstream >can be given specific time to give this the attention it needs. And >that ties back into getting the top 1 or 2 wishlist items from each >SIG and trying to sort those based on what is the highest rated most >common need for the greatest number of people - sort of like what we >see happening with the resource delete API community wide goal >proposal. Agreed. The Security SIG sounds like a natural home for it. I'm going to wildly speculate that maybe part of the reason it stalled is that it was perceived as coming from a couple of individuals rather than a SIG. If the initiative had been backed by the Security SIG as something worth prioritising, then maybe it could have received wider attention. Also maybe copying a couple of tricks from the Self-healing SIG might (or might not) help. Firstly, try to find one or two security-minded people from each involved project who are willing to act as liasons with the Security SIG: https://wiki.openstack.org/wiki/Self-healing_SIG#Project_liasons Those people won't necessarily need to commit any time to development themselves, but hopefully they could volunteer to review specs specific to their project, and later patches too. Secondly, track all work on StoryBoard so that the current status is always clearly visible. A couple of other things struck me about this initiative: - They were requested to propose separate specs for each involved project (Nova, Cinder and Glance in this case). This resulted in quite a bit of duplication between the specs, but maybe that was unavoidable. - The question where to put the shared encryption and decryption code remained unresolved, even though of the three options proposed, only the oslo option had no cons listed: https://etherpad.openstack.org/p/library-for-image-encryption-and-decryption oslo seems like a natural place to put it, so maybe the solution is to submit this spec to oslo? 
Although if the initiative was hosted by the Security SIG, then as a last resort the SIG could set up a git repository to host the code, at least as a temporary measure. From josephine.seifert at secustack.com Wed Dec 12 13:57:31 2018 From: josephine.seifert at secustack.com (Josephine Seifert) Date: Wed, 12 Dec 2018 14:57:31 +0100 Subject: [all][security-sig][meta-sig] Forum summary: Expose SIGs and WGs In-Reply-To: <20181212132044.ibvr6dd54zrbqkj3@pacific.linksys.moosehall> Referen